We have had a lot of questions on our forums about interlaced video, so I thought I would write a series of articles to clear everything up once and for all! 🙂
Where did interlacing come from?
Back in the day, when me and my buddies were developing broadcast television, we were going to model it on film, which took a sequence of photos of reality, so, if a blue circle was moving from left to right, two consecutive frames might look like this:
Instead of the 24-frame-per-second rate of film, for technical reasons we decided to match the frame rate to the rate of U.S. alternating current (60 cycles per second).
But, we had a big problem… we could not fit 60 full frames every second over the old, flimsy airwaves we had back then.
Just then, the new guy, Randy Interlacing, said “Hey, instead of full frames, let’s just broadcast every other line of the picture! And then, 1/60th of a second later, broadcast the in-between lines! No one will ever notice, with the slow phosphor fade of their RCA cathode ray tube TVs. This will cut our bandwidth needs in half!”
By that time, we were all too tired and hopped up on root beer to think of anything better, so we agreed. We even agreed to call this method of chopping a picture up into every other line after its inventor: “Interlacing.”
And nowadays, digital cable and satellite companies can cram twice the number of channels into the same bandwidth thanks to interlacing.
What is the difference between a “field” and a “frame”?
Randy said that a picture that only had every other line should not be called a “frame”. He wanted to call it a “Randy.” We thought that was weird, so we decided to call it a “field.”
Two consecutive fields, recorded a 60th of a second apart, together are called “one frame.” You can see why each field is really a kind of “half frame,” since it only contains every other line of the picture, but it CAN be confusing that two different motion samples are considered one frame.
So, in interlaced video, the first picture that is stored is missing every other line, which is “field 1.” And a 60th of a second later, another picture is stored with the in-between (missing) lines.
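To make the field/frame relationship concrete, here’s a minimal sketch in Python (my own illustration, not anything from a real video toolkit) where a “frame” is just a list of scan lines, numbered from 1 at the top:

```python
def split_into_fields(frame):
    """Split a full frame into its two fields: every other scan line.

    Returns (odd_field, even_field), where the odd field holds
    lines 1, 3, 5, ... and the even field holds lines 2, 4, 6, ...
    (counting lines from 1 at the top of the picture).
    """
    odd_field = frame[0::2]   # list index 0 is "line 1"
    even_field = frame[1::2]
    return odd_field, even_field

# A toy 6-line "frame" where each line is labeled with its number:
frame = ["line 1", "line 2", "line 3", "line 4", "line 5", "line 6"]
odd, even = split_into_fields(frame)
print(odd)   # ['line 1', 'line 3', 'line 5']
print(even)  # ['line 2', 'line 4', 'line 6']
```

Each of those two half-pictures is one field; broadcast them a 60th of a second apart and together they make one frame.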
So each second contains exactly 60 interlaced fields, which equals 30 frames.
I said, “Let’s call this ’60i’ format, because the picture changes 60 times a second, and it’s interlaced. And they said “Umm, we’ll call it NTSC.”
What about over the pond?
In Europe, where the electricity ran at 50 cycles per second, they settled on 50 interlaced fields. I called their format “50i”, but they insisted on calling it “PAL” which they thought sounded more friendly. (France called their 50i format “SECAM”, because they always had to be different. Their format ended up being superior, like their bread and cheese, but that’s another story.)
Did NTSC video stay at exactly 60 fields/30 frames per second?
Nope. That would have made the math too easy. In the 1950s, people demanded color TV. “What for?” I asked, cause I thought black & white looked cooler. But I was outvoted.
To keep broadcast signals compatible with older black & white TVs, they squeezed the chroma information in alongside the luma, dropping the field rate slightly to 59.94 fields per second to make room for the color signal. That works out to 29.97 frames per second.
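If you like your math exact: the NTSC color field rate isn’t a rounded 59.94 but the fraction 60000/1001 (the original 60 divided by 1.001). A quick Python check, using the standard library’s exact-fraction type:

```python
from fractions import Fraction

# NTSC color dropped the field rate by a factor of 1.001:
field_rate = Fraction(60000, 1001)   # exactly 60 / 1.001 fields per second
frame_rate = field_rate / 2          # two fields make one frame

print(float(field_rate))  # about 59.94006
print(float(frame_rate))  # about 29.97003
```

That 1001 in the denominator is why NTSC math never comes out even, and why editors still argue about “drop-frame” timecode.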
I refused to call it 59.94i, which took too long to say, so I kept calling it 60i, and that stuck, even today. So interlaced formats are named with the field rate and a lower-case “i”, like “50i”, “60i”, or “1080i”.
What is field “dominance”?
As we know, interlaced video is a stream of pictures, each picture missing half the lines, alternating between the even and odd numbered lines. But which comes first? Even or odd?
For each pair of fields, if a video format records the field containing the odd-numbered lines first, it’s called “upper field first.” HDV is an upper field first format. If the first field contains the even-numbered lines, it’s a “lower field first” format. DV is lower field first.
If you’re working with interlaced footage and somehow get the fields reversed, it’s extremely jarring and unpleasant to watch… so don’t do that. To help you prevent this, if you drag an HDV clip into a DV timeline, or vice versa, Final Cut Pro will automatically add a “Shift Fields” filter, to put the fields in the correct order.
What is “combing”?
Some things, like cars, people, birds, or blue circles, move so quickly that they visibly change position between the time a video camera captures one field and the next, a 60th of a second later.
If we look at both fields in the blue circle example superimposed over each other (meaning we’re looking at the whole frame), we can see the interlacing artifact called “combing” on the blue circle, because it moved between fields. Combing is also referred to as “interlacing artifacts,” “serrated edges,” “the jaggies,” “weird horizontal lines,” or “mice teeth.”
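You can fake up the effect in a few lines of Python (a toy ASCII illustration of my own, not real image processing): a one-character “object” moves two columns to the right between fields, so when the fields are woven into one frame, alternating lines disagree about where it is.

```python
def render_field(x, lines=3, width=8):
    """A toy field: `lines` scan lines with an 'X' at column `x`."""
    return ["." * x + "X" + "." * (width - x - 1) for _ in range(lines)]

field1 = render_field(2)  # object at column 2 in the first field
field2 = render_field(4)  # object at column 4, a 60th of a second later

# Weave: odd lines from field 1, even lines from field 2.
frame = []
for a, b in zip(field1, field2):
    frame += [a, b]

print("\n".join(frame))
# ..X.....
# ....X...
# ..X.....
# ....X...
# ..X.....
# ....X...
```

The zigzag edge in the printed frame is combing: each field is a clean snapshot, but the whole frame contains two moments in time at once.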
Here are visible interlacing artifacts on a dolphin nodding her head “yes” to the question “do you want a fish?”
As you can see, interlacing is ugly on still frames. (That’s why DV Kitchen’s TimeFreezer™ has two kinds of deinterlacing for still frame exports. The movie covers some interlacing considerations; watch it here: http://dvcreators.net/dv-kitchen/features/timefreezer)
When will you see combing artifacts?
While working with interlaced footage, there are times when you will see combing artifacts and times when you won’t. For example, you won’t see interlacing on your camcorder’s LCD screen or an attached video monitor. When the Final Cut Canvas window is sized to 100%, you’ll see combing when playback is paused on a moving subject, but not when the window is shrunk smaller. In QuickTime Player, there is a checkbox that lets you view the video interlaced or not.
You won’t see any combing on an object that doesn’t move in relation to the camera… because it’s in the same position in both fields.
If you are working on your project and seeing interlacing, and it is distracting, your editing software probably allows you to turn it off. If you can’t find out how, ask on our forums.
When do I never have to worry about my viewers seeing visible combing artifacts?
Combing will never be visible on a television set if you’re feeding it with a signal from:
- any kind of disc player (standard-def DVD, Blu-ray)
- Apple TV
- satellite, cable or over-the-air broadcast
Even the latest LCDs and plasma TVs are designed to deal with interlaced source material in one way or another. So, if you’re delivering a project on DVD or broadcast, you don’t have to worry about visible interlacing artifacts.
(If a TV is being fed from a computer, the TV is essentially a computer monitor, and will show combing artifacts.)
When should I deinterlace my movie?
For screens other than televisions, you will almost always want to deinterlace your movie prior to delivery if it contains interlaced footage, including animations and effects, not just the original video footage.
All computer screens will show combing in interlaced movies, which is distracting and weird-looking. iPods, iPhones, and some other portable viewing devices will also show interlacing in different, almost always bad, ways.
If you’re delivering your movie
- on the web
- on a disc designed to be played on a computer
- in a video podcast
- to portable devices
- on a projector, either fullscreen or embedded in a presentation
you’ll want to deinterlace it.
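For the curious, here’s the crudest possible deinterlace, sketched in toy Python (my own illustration; real deinterlacers interpolate between lines or adapt to motion rather than doing anything this blunt): throw away one field and repeat the other field’s lines to fill the gaps.

```python
def deinterlace_discard(frame):
    """Crude "discard" deinterlace: keep one field, line-double it.

    Halves the vertical detail, but combing cannot survive, because
    only one moment in time remains in the output frame.
    """
    kept = frame[0::2]            # keep field 1 (the odd-numbered lines)
    doubled = []
    for line in kept:
        doubled += [line, line]   # repeat each line to restore frame height
    return doubled

# A combed 4-line frame: alternating lines disagree about position.
combed = ["..X..", "...X.", "..X..", "...X."]
print(deinterlace_discard(combed))
# ['..X..', '..X..', '..X..', '..X..']  (combing gone)
```

Good deinterlacing tools do something much smarter, but every method makes the same basic trade: give up the second motion sample (or blend it away) in exchange for a clean progressive frame.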
Our recommendation for deinterlacing (and encoding) your footage, if you are using a Mac, is DV Kitchen.
What if I shot my movie deinterlaced?
Many “24p”, “30p”, “24f”, “30f”, “Frame Mode”, and other similarly named settings actually store an interlaced signal, using buffer tricks to repeat fields. There is much more to be said about interlacing with different field cadences, but that will have to wait for a future article, since everyone in my office is yelling at me to finish this one so they can send an e-newsletter.
Here’s our friend D. Eric Franks’s cool movie about interlacing: