I'm writing about the video specs in cameras all the time now and I have to admit, a lot of the time I'm in over my head. I'm cool with 1080p and 720p. But when I see specs like 1920 x 1080 60i and 1920 x 1080 60p, I start to get a little lost. The resolution is simple enough but the 60i and 60p lose me a little. So please, correct me if I'm wrong here - 60i means 60 frames per second *interlaced* - basically half frames. And 60p means 60 complete frames per second? So 60i is essentially a strategy to arrive at 30 frames per second playback. But with 60p you could actually play back at 30 FPS with no loss of quality?
Like I said, I'm running into this stuff more and more when I write camera intros and reviews. Right now I'm writing about the new Panasonic Lumix ZS20 point-and-shoot. It has 1920 x 1080 60p video, the previous model recorded full HD at 60i, and I need to sort out the practical differences between the two.
But if that's how they say it in your neck of the woods, then yeah, 60p is 60 full frames per second, and 60i is 60 "fields," or 30 whole frames.
That's part of the problem - I read it in camera specs and press releases and there doesn't seem to be a standard way to express things. The 60i and 60p thing is something that I just started seeing. It used to just be 720p or 1080p or the full resolution dimensions. I think these two specs are mostly in reference to the AVCHD format, too.
Anyway, thanks for the answer. I believe you confirmed my (attempted) understanding.
Hopefully I'll get at least one more answer to give me full confidence in what I've written...
Interlacing isn't exactly half the frame rate or resolution.
Interlacing is a holdover from cathode-ray tube (CRT) televisions. Back in the early days of television, an electron beam would excite phosphors on a screen to produce an image. The problem was that the phosphors would start to fade before the beam could finish drawing the whole image, which resulted in a slow black band moving up the screen. To solve this, engineers split the image into an odd and an even field. Each field was faster to scan, so the cathode ray could keep up.
Most people assume that because an interlaced image displays half the resolution at a time, it is equal to either half the frame rate or half the resolution, but this isn't strictly correct. It still records the same number of lines, just in batches of odd or even lines, and it still records at the same frame rate (in this case 60fps); it just does one half and then the other half, which is 120 half-frames a second. It also doesn't use the same lines for each half-frame, but does in fact use the full area of the sensor.
This is where it gets confusing, but basically a 1080i sensor still has 1080 lines of resolution and it scans them 60 times a second; it just splits this information up into 120 half-chunks.
For a digital device this is more about processor speed and bandwidth. An interlaced signal is easier for the device to process than a progressive signal, as it can deal with the data in smaller chunks. Also, whilst it is theoretically possible to use a 30p sensor to produce a 60i signal, the image would appear distorted: there is a line shift between the odd and even lines on the sensor that would make the image look unnatural and movement appear a little stuttered.
So in short a higher fps interlaced signal is still better than a lower fps progressive signal, but obviously a progressive signal is better than its equivalent interlaced signal.
An interlaced picture is not half the resolution, but it is half the frame rate.
It draws half the lines of the picture at a time, but it doesn't draw each half twice as fast. It still takes 1/60th of a second to draw the first 540 lines, then another 1/60th to draw the next 540 lines that fall in between those. There's no 120 times per second in there.
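The timing arithmetic here is simple enough to sanity-check in a few lines. This is just a sketch of the numbers, not tied to any real video library:

```python
# Illustrative timing arithmetic for a 1080-60i signal.
FIELDS_PER_SECOND = 60        # 60i: sixty half-pictures (fields) per second
LINES_PER_FRAME = 1080        # total lines in a full frame

field_duration = 1 / FIELDS_PER_SECOND   # each 540-line field takes 1/60 s
frame_duration = 2 * field_duration      # a full frame needs two fields
frames_per_second = FIELDS_PER_SECOND / 2

print(f"lines per field: {LINES_PER_FRAME // 2}")      # 540
print(f"frame duration: {frame_duration:.4f} s")       # 0.0333 s
print(f"full frames per second: {frames_per_second}")  # 30.0
```

So two 1/60th-second fields assemble into one full frame every 1/30th of a second, which is where the "30 whole frames" figure comes from.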
(If you're thinking of the 120Hz refresh rate advertised for LCD TVs, that has nothing to do with the frame rate, but with how often each pixel on the screen is refreshed with the data it is supposed to be displaying. Each pixel is "touched" 120 times per second to make sure it's doing what it's supposed to be doing. The source material is still supplied at 60 Hz.)
Almost any TV these days capable of 1080p will actually deinterlace the 1080i signal it receives. It will buffer the two fields into an actual 1080-line frame and display that frame twice to fill the two 1/60th-second periods it took to receive that data from the broadcast stream.
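That buffering step is often called a "weave" deinterlace: the two fields are interleaved line by line back into one full frame. A minimal sketch of the idea, with plain Python lists standing in for lines of pixels (the names here are made up for illustration):

```python
# Weave deinterlace: interleave an odd field and an even field back into
# one full-height frame. Lists of strings stand in for lines of pixels.

def weave(top_field, bottom_field):
    """Interleave two half-height fields into one full frame."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)     # lines 0, 2, 4, ... of the frame
        frame.append(bottom_line)  # lines 1, 3, 5, ... of the frame
    return frame

# A toy 4-line "frame" split into two 2-line fields:
top = ["line0", "line2"]
bottom = ["line1", "line3"]
print(weave(top, bottom))  # ['line0', 'line1', 'line2', 'line3']
```

Real TVs do this (plus motion compensation) in hardware, but the basic interleaving is exactly this.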
You are correct in that it is a bandwidth issue. A 1920x1080 picture is just about 2 megapixels. However, by interlacing, you're only asking the set to draw a 1920x540 picture every 60th of a second, just about one megapixel. By comparison, the other HD standard, 720p, is a picture of 1280x720. Guess what? That's just shy of one megapixel! Not a coincidence! The digital broadcast standard was built to carry about a megapixel 60 times per second: you can build a complete lower-resolution picture, or half of a higher-resolution picture. Six of one, half a dozen of the other, as far as the broadcast bandwidth allotment is concerned.
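The megapixel figures above are easy to verify; here's the arithmetic spelled out (numbers only, nothing camera-specific):

```python
# Pixels delivered in each 1/60th-second chunk under each HD broadcast mode.
full_1080 = 1920 * 1080    # a whole 1080 frame
field_1080i = 1920 * 540   # one interlaced 1080i field (half the lines)
frame_720p = 1280 * 720    # a whole 720p frame

print(f"full 1080 frame: {full_1080 / 1e6:.2f} MP")   # ~2.07 megapixels
print(f"1080i field:     {field_1080i / 1e6:.2f} MP") # ~1.04 megapixels
print(f"720p frame:      {frame_720p / 1e6:.2f} MP")  # ~0.92 megapixels
```

Both the 1080i field and the 720p frame land right around one megapixel per 60th of a second, which is the point.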
The bandwidth limitation applies to broadcast, not to connected devices, so your game console or Blu-ray player is perfectly happy to actually draw a full 1080 lines every 60th of a second, for a 1080p (or 1080-60p) picture. Blu-ray also adds the ability to drop the frame rate to 24 Hz, to match the cinematic frame rate without having to resort to 3:2 pull-down tricks to get films onto video systems.
1080i (or 1080-60i) feeds data to the screen every 60th of a second, but only fills half the screen lines, every other line. The next 60th it gets the other half, getting the lines it skipped before. 1080p (or 1080-60p) feeds data to the screen every 60th of a second as well, but is capable of carrying all 1080 lines in that single 60th of a second.
To the original poster's question: interlaced is essentially half the effective frame rate of progressive at the same resolution, but it is not a reduction in resolution. The two interlaced half-pictures assemble into a full-resolution image, just half as many times per second as the progressive picture. An interlaced recording will take about half the storage of a progressive recording, as well.
Bottom line is, if it's interlaced, it's useless for fast movement, as motion reveals the odd/even fields.
60i has a 30fps frame rate split up into odd and even fields.
60p has a 60fps frame rate with every line in each frame.
Interlacing is usually OK for studio work, but poor for OB (outside broadcast).
There have been a few reality programs using GoPro cameras where they seem to have put them in i mode rather than p mode. It's particularly bad with fast-moving items like rotor blades, which end up with alternate-line jagged edges.
SOOOO help me out here: can you or can't you make smooth slow motion with 60i? I want to be able to slow the 60i into 24p to make something like a 40% slow motion. Can you or can't you do that and still have it smooth?
I'm no expert so take my advice here with a grain of salt. But I think I spent enough time writing about video in 2012 to understand this now.
Basically, just think of 60i as 30p because that's how it will be rendered/displayed. So the frame rate difference between the two speeds isn't really very significant. I think your best bet is to work with the video in 60i (which is basically 30p with more detail) and then slow your speed down with software. It's not the same thing as shooting at a faster frame rate but since there is more information in the 60i video I think you'll get better results than if you shot in 30p.
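For what it's worth, the retiming arithmetic works out like this. This is just the numbers behind the two approaches; whether your editing software can actually separate the fields cleanly (often called "bob" deinterlacing) varies by tool, so treat the second figure as conditional on that:

```python
# Retiming arithmetic for the "60i to 24p" question (numbers only).
PLAYBACK_FPS = 24

# Treating 60i as 30 whole frames per second (the advice above):
speed_from_30 = PLAYBACK_FPS / 30   # 0.8 -> 80% speed, a mild slow-down

# If the editor can split the 60 fields into 60 separate images
# ("bob" deinterlacing), each at half vertical resolution:
speed_from_60 = PLAYBACK_FPS / 60   # 0.4 -> the 40% slow motion asked about

print(f"from 30 whole frames: {speed_from_30:.0%}")  # 80%
print(f"from 60 fields:       {speed_from_60:.0%}")  # 40%
```

So the 40% figure in the question only falls out if the software uses all 60 fields as separate images; conforming 30 whole frames to 24p only gets you to 80% speed.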