Darklord

De-interlacing and Scaling Explained

Note: This article is deliberately very basic in its explanations and is designed as a simple primer to the complicated world of de-interlacing and scaling. Many related sub-topics have been left out for the sake of simplicity, but if you feel there are additions that should be made, or notice any glaring mistakes, please let me know.

Scaling and De-interlacing explained

When people go out and buy a shiny new high definition display, most of them naturally assume that with the simple addition of an HD set top box, they’ll be getting the full high resolution picture that their display is capable of. In other words, they think every pixel of their display will be utilised to deliver as much detail as possible. Unfortunately it’s not quite that simple…

At the time of writing, most HD digital displays are made up of the following fixed pixel structures:

1024x768 (Common on 42” HD plasmas)

1024x1024 (Less common, but found on "ALIS" 42” HD plasmas)

1280x720 (Common on DLP, LCD projectors and digital RPTVs)

1280x768 (Found on Pioneer HD 50” plasmas, DLP projectors and some LCDs)

1366x768 (Common on many high definition plasmas and LCDs)

1920x1080 (Full high definition. Very rare, but availability is increasing, and this will eventually become the standard high definition format for most displays).

Unfortunately the confusion doesn’t end there. All high definition broadcasts in Australia are transmitted in the 1440x1080i or 1920x1080i format (I’ll leave 576p out of the equation, as no one with any audio/visual knowledge classes this broadcast format as true high definition). The “i” after the 1080 figure stands for interlaced scan. Bear with me, as this is where things get tricky!

Interlaced Scan

Since the beginning of television broadcasting in the 1930s, all the pictures we are used to seeing on our conventional CRT televisions have been made up of two interlaced “fields”. The odd lines (1,3,5,7 etc) are painted on the screen in the first pass (field one) followed by the even lines (2,4,6,8 etc) in the following pass (field two).
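The odd/even field split described above can be sketched in a few lines of NumPy. This is purely illustrative (note that code arrays are 0-indexed, so the broadcast’s lines 1,3,5… are rows 0,2,4… here):

```python
# A minimal sketch of splitting a frame into its two interlaced fields.
import numpy as np

frame = np.arange(8 * 4).reshape(8, 4)   # a toy 8-line "frame"

field_one = frame[0::2]   # broadcast lines 1, 3, 5, 7 (rows 0, 2, 4, 6)
field_two = frame[1::2]   # broadcast lines 2, 4, 6, 8 (rows 1, 3, 5, 7)

print(field_one.shape)    # each field carries only half the lines
```

Each field has half the vertical resolution of the full frame, which is exactly why the de-interlacing methods below matter.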

Interlacing was introduced as a clever way of cutting down on the bandwidth of a broadcast, reducing fullscreen flicker (by allowing 50 or 60 screen updates a second rather than 25 or 30) and giving the illusion of there being more lines on screen than there actually are. This interlacing method works fine on older CRT tube televisions, due to a combination of the persistence of the onscreen phosphors (they take time to fade, giving the illusion of the fields blending together) and the perception of our brains, which, due to the rapid refreshes of the screen, perceive the fields to be onscreen at the same time.

Since the introduction of new digital displays, we’ve run into a problem. The problem being that all digital displays are inherently "progressive". In other words they generally work by illuminating all pixels on screen at once, and therefore can’t operate in interlaced mode like older CRT technology.

This creates a problem with all interlaced formats that must be addressed. The interlaced material must be “de-interlaced” (converted to progressive scan) in order for it to be displayed properly on the digital display. Confused yet? Well hang on as things are about to get a lot messier…..

De-interlacing

De-interlacing is the process of taking interlaced fields and converting them to progressive frames for display. Sounds easy right? Well, yes and no. Where things get complicated is the huge variety of content sources available. Whether something was originally shot on film or video, and at what frame rate, has a huge bearing upon the type of de-interlacing that can be performed to deliver acceptable image quality.

Bob De-interlacing

The most simple and common method of de-interlacing is known as field interpolation, or “bob de-interlacing”. Bob works by taking a single interlaced field (for example, a 540-line interlaced field of the 1080i format) and filling in the gaps via interpolation, to convert that field into a complete progressive frame. Interpolation simply looks at surrounding lines from the same field and “guesses” how to fill in the gaps between the interlaced lines. This works fine, but the big problem is that it throws away much of the available resolution. I.e. 1080i effectively becomes 540p, as only half of the available information (one interlaced field) is taken into account in the de-interlacing process. Surely there must be a better way? Well there is…
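A rough sketch of the bob idea, assuming a simple two-line average for the interpolation (real hardware uses fancier filters, but the principle is the same):

```python
# Bob / field interpolation: stretch one field into a full frame,
# "guessing" each missing line from its neighbours in the same field.
import numpy as np

def bob_deinterlace(field):
    """Interpolate one interlaced field up to a full progressive frame."""
    lines, width = field.shape
    frame = np.zeros((lines * 2, width))
    frame[0::2] = field                       # the real field lines
    # Each missing line is the average of the field lines above and below.
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2
    frame[-1] = field[-1]                     # bottom edge: just repeat
    return frame

# A toy 3-line field becomes a 6-line frame; half the lines are guesses.
field = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
print(bob_deinterlace(field)[:, 0])   # [10. 15. 20. 25. 30. 30.]
```

Note that half the output lines (15, 25, 30) never existed in the source, which is exactly the “throwing away resolution” problem described above.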

Weave De-interlacing

As the name implies, weave de-interlacing looks at successive odd- and even-line interlaced fields and weaves them back together into a single frame. This works very well for any material that was originally shot on film at 24 frames a second (the international standard for film or drama production). The reason is that in the case of 24 frames a second film sources, the odd and even line fields are from the same original film frame. All you have to do is put them back together and hey presto, you have full resolution! If done correctly, material that was originally transmitted as 1080i can be properly de-interlaced to 1080p with no loss of vertical resolution.
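A minimal sketch of the weave idea: because both fields of film-sourced material came from the same original frame, simply interleaving them recovers that frame exactly.

```python
# Weave: interleave two matching fields back into one progressive frame.
import numpy as np

def weave_deinterlace(odd_field, even_field):
    lines, width = odd_field.shape
    frame = np.zeros((lines * 2, width), dtype=odd_field.dtype)
    frame[0::2] = odd_field    # field one fills the odd broadcast lines
    frame[1::2] = even_field   # field two fills the even broadcast lines
    return frame

# Round trip: split a "film frame" into its two fields, then weave them
# back together - the original frame is recovered exactly.
film_frame = np.arange(8 * 4).reshape(8, 4)
rebuilt = weave_deinterlace(film_frame[0::2], film_frame[1::2])
print((rebuilt == film_frame).all())   # True - full vertical resolution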

However (and this is a big however!) this only works for film based material, or anything originally shot at 24, 25 or 30 frames a second. This method of de-interlacing cannot be applied to anything that was shot on interlaced video. Why? Interlaced video is shot at 50 or 60 individual fields a second that don’t form complete frames, which means each field is from a “different moment in time”. If you attempt to use weave de-interlacing on 50i or 60i video, it will combine fields that simply don’t belong together, resulting in an effect called combing or mouse teeth. To combat this, most displays or video processors will simply fall back on bob de-interlacing for interlaced video sources, and, if clever enough, will use better weave de-interlacing for film sources. This is done through a process called “cadence detection”, which in simple terms means the video processor is clever enough to determine the origin of the various source footage.

Note that in the USA and other 60Hz markets, there is also a process called 3:2 pull-down which must be applied as part of the weave de-interlacing process for good results. This process converts 24 frames a second to 60 frames a second to match their 60Hz refresh rate. In Australia, with our 50Hz system, we don’t have to worry about this, so I have chosen to leave it out of this article. You can read about 3:2 pull down here and here.

So is there a better method than bob de-interlacing for 50i and 60i video sources? Yes, but it’s tricky…

Motion adaptive based de-interlacing

This is the most sophisticated type of de-interlacing, and as a result is very processor intensive, requiring expensive dedicated chipsets. As such it is currently rare in HD displays (and even in external processors, for that matter). Some displays and processors can apply this type of de-interlacing to standard definition sources (480i or 576i), but very few can use it with high definition 1080i. This is expected to change over the next few years.

So how does motion adaptive de-interlacing work? Motion adaptive de-interlacing analyses several successive fields (the more fields it analyses the better it generally is) and looks carefully at differences between those fields. For parts of the image that are motionless or relatively still, it will weave together sections of those successive fields, and for parts of the fields that are under motion, it will use interpolation to fill in the gaps. Think of it as a combination of weave and bob de-interlacing.

If done properly this can result in better resolution (as you can get close to the full vertical resolution when there is little movement) but the results vary wildly depending on how sophisticated the motion adaptive de-interlacing is. The best motion adaptive de-interlacing chipsets are generally ones that can de-interlace on a “per pixel” level which means they are powerful enough to look at differences between fields on an individual pixel basis.
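The weave-where-still, bob-where-moving idea can be sketched per pixel. This is a deliberately crude toy (it compares just two same-parity fields, and the motion threshold is an invented number), nothing like what a real chipset does internally:

```python
# Per-pixel motion adaptive de-interlacing sketch: weave still areas,
# interpolate (bob) moving areas.
import numpy as np

def motion_adaptive(prev_even, odd_field, even_field, threshold=8.0):
    lines, width = odd_field.shape
    frame = np.empty((lines * 2, width))
    frame[0::2] = odd_field
    # Estimate per-pixel motion from the change between two even fields.
    motion = np.abs(even_field - prev_even) > threshold
    # Bob candidate: interpolate the missing even lines from the odd field.
    interpolated = np.vstack([(odd_field[:-1] + odd_field[1:]) / 2,
                              odd_field[-1:]])
    # Weave the real even-field pixels back in wherever the image is still.
    frame[1::2] = np.where(motion, interpolated, even_field)
    return frame

odd = np.array([[10.0], [30.0]])
even = np.array([[20.0], [40.0]])
still = motion_adaptive(even, odd, even)   # no inter-field motion
print((still[1::2] == even).all())         # True - weave used everywhere
```

With no motion the real even-field pixels survive (full vertical resolution); with motion they are replaced by interpolated guesses, exactly the bob/weave trade-off described above.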

So that about covers the basics of de-interlacing. There are other less common methods of de-interlacing available such as vector based de-interlacing, but for the sake of simplicity I’ve chosen to keep them out of this article.

The problem with de-interlacing

As you can see de-interlacing is tricky business, and as a result it’s often not done properly by manufacturers. By far the most common and cheapest implementation of high definition de-interlacing is the bob method, and hence the majority of HD displays on the market will use this method for all high definition 1080i sources, regardless of whether they are film or video based. In other words the display is unable to differentiate between sources, and simply applies basic bob de-interlacing to all HD content. The result is that you may often only be getting a maximum of 540 original lines of resolution (from one interlaced 1080i field) on your 720p, 768p or even 1080p display. To combat this some people invest in sophisticated external video processors which will often do a much better job than the de-interlacers and scalers built into their display.

So how can you tell what your display uses?

In short, unless you know what to look for, it’s very hard. Bob de-interlacing does soften the picture, and can leave ugly defects (aliasing, shimmer, moiré and line flicker to name a few), but unless you’re familiar with these picture artifacts you’re unlikely to notice.

The simplest way of knowing whether your display incorporates high quality high definition processing is to look for specific mention of it in marketing material or on the manufacturer’s website. The rare HD displays that feature good de-interlacing and processing will generally make a big deal of it. If you see no mention of it anywhere on the website or in other related marketing material, then it’s generally safe to assume it isn’t there.

I don’t at all recommend ringing the manufacturer to ask about processing capabilities, as you’re almost guaranteed to get someone who has no idea what the hell you’re talking about, and even if you do reach someone knowledgeable, they will more than likely not be forthcoming with this type of information.

It is worth noting that video processing is improving all the time, and the next few years are likely to see big improvements to high definition de-interlacing and scaling in consumer displays.

Scaling

So now that we’ve covered de-interlacing, what about scaling? Scaling is a little less tricky than de-interlacing, as the same method of scaling can often be applied to all sources (after de-interlacing) regardless of their origin. However there are still many different methods, which can have a big impact on image quality. Put simply scaling is the process of taking a given video format (anything from 480i to 1080p) and converting it to match the resolution of the display. For example when a 720x576p signal (like a progressive PAL DVD) is input into a 1280x720p display it must “up-convert” that material to 720p, so that it matches the physical resolution of the device. Or when a 1920x1080i signal is input into a 1024x768 display it must "down-convert" it to match the display.

Like bob de-interlacing, scaling is done through a process of interpolation (in the case of up-converting) or by discarding resolution (in the case of down-converting). Sometimes a display must both down-convert and up-convert to match its native resolution (i.e. up-convert horizontal resolution while down-converting vertical resolution), which can result in a very bad picture if not done well.

Basic scaling analyses anywhere between 4 and 16 pixels from a single frame to average one pixel in the final scaled frame. The number of pixels taken into account gives the scaler its “xx-tap” rating. For example, a scaler that looks at only 4 pixels to create a final pixel is called a “4-tap scaler”. Top end scaling can go all the way up to 1024-tap, as seen with Silicon Optix’s HQV processing.
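The tap idea can be illustrated with the simplest possible case: a 2-tap (linear) scaler, where each output pixel blends the two nearest input pixels. Real 4-tap-and-up scalers use wider filter kernels (cubic, Lanczos and so on) but follow the same principle:

```python
# A toy 2-tap scaler for a single 1-D line of pixels.
import numpy as np

def scale_line_2tap(line, out_width):
    """Linearly interpolate a 1-D line of pixels to a new width."""
    in_width = len(line)
    out = np.empty(out_width)
    for x in range(out_width):
        # Map the output pixel back to a fractional input position.
        pos = x * (in_width - 1) / (out_width - 1)
        left = int(pos)
        right = min(left + 1, in_width - 1)
        frac = pos - left
        # Two taps: blend the two nearest input pixels.
        out[x] = (1 - frac) * line[left] + frac * line[right]
    return out

print(scale_line_2tap(np.array([0.0, 100.0]), 5))
# [  0.  25.  50.  75. 100.]
```

More taps means each output pixel is informed by more of the source image, which is why high tap counts generally give smoother, sharper scaling.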

There are many other different methods of scaling available, depending on the display in question, all of which are beyond the scope of this article.

Just like with de-interlacing, a lot of home theatre enthusiasts now choose to use external scalers, as they give far greater flexibility with output formats, typically incorporate higher quality scaling algorithms, and will therefore nearly always result in a better picture than using the display’s own internal scaler.

Other forms of scaling

Also common these days are “up-converting DVD players”, which can not only de-interlace a standard definition interlaced PAL DVD (from 576i to 576p) using weave de-interlacing, but also then up-convert the material to 720p, 1080i or even 1080p before outputting it to the display. The result can be a smoother, more detailed picture with fewer artifacts, depending on the quality of the de-interlacing and scaling built into the player.

All high definition digital set top boxes are also capable of up-converting material to 720p or 1080i, however almost all of them use very basic bob de-interlacing and basic scaling algorithms, and you’ll nearly always get better results by outputting the material in its native format (576i or 1080i) and letting the display or an external scaler do the de-interlacing and scaling.

Some people also use computers (known as “Home Theatre PCs” when built for home theatre use) which can also perform decent quality de-interlacing and scaling at a very affordable price. An example of free software used for this task is the popular Dscaler.

So does your head hurt yet? It should! This is just a tiny article on an incredibly complex area. If you’re brave enough to want to find out more, I suggest having a look at some of these links:

De-interlacing video basics – A good basic guide to de-interlacing.

Progressive Scan/De-interlacing Explained – Secrets of Home Theater & Hi-Fi. This is probably the most comprehensive and accurate explanation on the internet of all the various de-interlacing methods (complete with animated pictures). Some of it relates to 60Hz, but most of it is still very relevant to our 50Hz system. Highly recommended reading.

HQV Processing FAQ – Silicon Optix HQV is arguably the most sophisticated consumer range video processing on the planet. This technology guide is a great explanation of what sets its de-interlacing and scaling hardware apart from the competition.

Understanding Interlace - A very good primer to interlaced scan from a broadcast perspective.


Brilliant article DL. I'm going print this out and read it on the train, a great refresher.


Vector-based Motion Adaptive Deinterlacing

Much like Motion Adaptive Deinterlacing, it will examine successive fields and use Bob or Weave deinterlacing when it deems necessary, but the special thing about it is that it will interpolate along several vectors (mostly diagonal lines)


Thanks for the feedback guys.

Vector-based Motion Adaptive Deinterlacing

Much like Motion Adaptive Deinterlacing, it will examine successive fields and use Bob or Weave deinterlacing when it deems necessary, but the special thing about it is that it will interpolate along several vectors (mostly diagonal lines)

I’m aware of vector based de-interlacing Davo (briefly mentioned it in the article) but it’s very rare to see any in any consumer range products, so I left it out in the sake of simplifying things a little.

I think ATi might be using Vector based de-interlacing with AVIVO? Have you seen it in action?

I think ATi might be using Vector based de-interlacing with AVIVO? Have you seen it in action?

Yup, that's why I thought it would be good to write something short about it, since ATi are the only ones making it widely available to just about every consumer with an ATi video card (from the Radeon 9500 and up, I believe). Their deinterlacing algorithms, much like wine, have gotten better with age :blink:

Yup, that's why I thought it would be good to write something short about it, since ATi are the only ones making it widely available to just about every consumer with an ATi video card (from the Radeon 9500 and up, I believe). Their deinterlacing algorithms, much like wine, have gotten better with age :P

Given it's still very rare I think I'll leave it out of the article for now, as the topic is probably overwhelming enough for newbies :blink:

Given it's still very rare I think I'll leave it out of the article for now, as the topic is probably overwhelming enough for newbies :blink:

Great article Darklord. I have been asked about this by a few friends and workmates, and whilst I have an idea of how it all works, turning it into an intelligent reply has been beyond me. I will now direct them here.

Thanks Brendon


Darklord, I think there is a bit of confusion about the description of "bob de-interlacing". I've come to know bob de-interlacing to be line doubling, where each alternate line becomes the same as the one above/below it. Doing this wouldn't require any "guesses" to decide how to fill in the blank lines, just a straight "copy&paste" (if you will) of a line into the blank line above/below it. In other words, take field 1 and copy&paste it into where field 2 would normally go and the result is a progressive frame. You also mention it "throws away" much of the available resolution, but it doesn't discard anything.

Am I missing something here?

Darklord, I think there is a bit of confusion about the description of "bob de-interlacing".

No confusion here mate :blink:

I've come to know bob de-interlacing to be line doubling, where each alternate line becomes the same as the one above/below it. Doing this wouldn't require any "guesses" to decide how to fill in the blank lines, just a straight "copy&paste" (if you will) of a line into the one above/below it.

If you’ve read that bob is just line doubling, then that information is out of date. In the early days of video processing and CRTs, bob could simply mean line doubling when taking a 480i or 576i field and converting it directly to 480p or 576p. However, bob is now generally accepted as the process of field interpolation. Rather than simply doubling lines, it looks at the surrounding lines and creates new intermediate lines. Bob is more correctly referred to nowadays as “field interpolation”, but the bob name has hung around.

Since the move to digital displays this bob process has become part of the basic scaling process itself. Because processing is no longer just doubling 576i to 576p (as it used to on a CRT), it also has to up-convert to a higher resolution at the same time. In other words a 288 line interlaced field has to be bobbed to something like 768p. This is typically done through an interpolation process.

Have a read of how Secrets describes bob. (Secrets of Home Theatre/Hi-Fi is one of the most respected sources of video processing info on the net)

Single-Field Interpolation (or “Bob”)

This just involves taking each field and scaling it to a full frame. The missing lines between each of the scan lines in the field are filled in with interpolated data from the lines above and below. Done badly, the screen looks blocky and pixellated. Even done well, the image looks very soft, as image resolution is unavoidably lost. In addition, thin horizontal lines will tend to “twitter” as the camera moves. These thin lines will fall on just one field of the frame, so they will appear and disappear as the player alternates between the odd fields and even fields. This is the most basic deinterlacing algorithm, and the one that almost every deinterlacer falls back on when nothing else will work.

You also mention it "throws away" much of the available resolution, but it doesn't discard anything.

Ah, well that depends on how you look at it, doesn’t it!:P Bob is effectively throwing away half the available vertical resolution. Unlike weave, which combines successive fields to achieve full 576p resolution, bob simply interpolates a single field to 576p, which means you’re effectively looking at only 288 lines of resolution. The rest is “made up”. So yes, in comparison to native interlaced on a CRT (where we get the perception of woven fields) and weave de-interlacing for film sources on a progressive display, bob definitely discards half of the available vertical resolution.


Fair enough, but if comparisons are being made between successive fields to determine the best way to fill in the empty lines, isn't this essentially scaling? I guess the definition of the word has changed since I last checked!

btw the reason I said there was confusion about bob de-interlacing is 'cause people seem to think that their plasmas are "throwing away 50% of the video signal" (see http://www.dtvforum.info/index.php?s=&show...dpost&p=379591 , http://www.dtvforum.info/index.php?s=&show...ndpost&p=379694) after reading your article.

I just don't agree that information is being thrown away, there were 540 lines per frame to begin with, the 540 lines remain after bobbing, where's the loss?

eg. for 1080i50 material that has come from a film source (25fps material), when using bob to de-interlace, field 1 is doubled to 1080p, field 2 is doubled to 1080p, we have two unique progressive frames for the one point in time, all the information is kept. It looks ugly but all the information is there.

I don't see how using adaptive weave de-interlacing results in any more resolution being obtained. There is no extra information, it's just being presented in a way that hides the loss of information in motion areas.

What do you think?

Fair enough, but if comparisons are being made between successive fields to determine the best way to fill in the empty lines, isn't this essentially scaling?

De-interlacing is a form of scaling! Even old-school stand-alone line doublers are referred to as scalers. Bob or field interpolation is certainly a form of scaling, but any scaling method that converts interlaced to progressive is also called de-interlacing.

the reason I said there was confusion about bob de-interacing is 'cause people seem to think that their plasmas are throwing away resolution (see http://www.dtvforum.info/index.php?showtop...st=20&p=386224) after reading your article.

I just don't agree that information is being thrown away, there were 540 lines per frame to begin with, the 540 lines remain after bobbing, where's the loss?

eg. for 1080i50 material that has come from a film source (25fps material), when using bob to de-interlace, field 1 is doubled to 1080p, field 2 is doubled to 1080p, we have two unique progressive frames for one point in time, all the information is kept. It looks ugly but all the information is there.

I can see why you’d think that Ikari, but you’re looking at it the wrong way. Interlaced scan formats only work properly on native interlaced CRT displays (which is what the format was invented for in the first place, early last century). On a CRT the fields are correctly interleaved, i.e. lines 1,3,5,7 and so on appear in the first field, and lines 2,4,6,8 and so on appear in the following field, in-between the positions of the lines from the previous field. That last part of the sentence is the relevant part to think about: all the odd and even information from one field to the next lines up. Combine this proper interleaving with the ‘persistence of phosphors’ effect of a CRT display (which means the odd lines are still visible as the even lines appear on screen) and our brain perceives that both fields are onscreen at the same time. In other words, even though the broadcast format is only comprised of 540 lines per motion update (1080i), you are actually seeing closer to 1080 onscreen when there isn’t a lot of movement. When there is movement vertical resolution drops, but you’re always seeing somewhere between 540 and 1080 lines on screen.

It’s a very different situation with bob de-interlacing on a digital display. Because bob interpolates new intermediate data that fills in the gaps, and has to scale to match the display’s native resolution, the correct placement of these original interleaved fields is no longer present. The original resolution and interpolated lines are essentially placed over the top of each other and no interleaving takes place. So one field becomes a complete interpolated progressive frame, followed by the next complete interpolated progressive frame and so on. In addition to this there isn’t the phosphor persistence effect that is seen on CRTs, so there is no longer any perception of the successive fields/frames being joined together. The result is only the perception of 540 lines of original resolution at all times (with interpolated data smoothing out the frames).

To top it off you now also have interpolation jaggies, moiré effect, and line twitter as there can be detail bobbing up and down due to being in different positions from one frame to another.

Put simply, bob does NOT preserve any of the original interleaving effect that makes the interlaced scan format appear acceptable on a CRT. This is the reason good weave de-interlacing (for film sources) and motion adaptive de-interlacing (for video sources) is a must for good image quality with interlaced HD sources on digital displays. Unfortunately very few current displays feature weave de-interlacing for 1080i, let alone motion adaptive de-interlacing for 1080i yet (so yes, people are often only getting 540 unique lines of resolution per motion update on their HD screens) but thankfully this situation is starting to change with next generation displays.

Keep in mind that bob de-interlacing still retains the full horizontal resolution of a 1080i source, so it will still look very good. On a 1366x768 display you would be seeing 1440x540 converted to 1366x768, and some people wouldn’t notice the difference between this and proper weave de-interlaced/scaled 1080i presented at full 1366x768p resolution.

I don't see how using adaptive weave de-interlacing results in any more resolution being obtained. There is no extra information, it's just being presented in a way that hides the loss of information in motion areas.

Ok, there are a couple of things being confused here. You have to remember that with film sources there are only 24 or 25 original film frames per second, so the interlaced fields of 50i are from the same moment in time (both the odd and even line fields are taken from the same original film frame). Hence even though the broadcast has arrived as 1440x540/50i (1080i), weave de-interlacing can easily recombine these interlaced fields back into full 1080/25p with no loss of vertical resolution.

Bob de-interlacing of film sourced 1080i material = 1440x540/50p (25 unique frames a second with each frame shown twice) - with some detail bobbing up and down and interpolation artefacts such as aliasing.

Weave de-interlacing of film sourced 1080i material = 1440x1080/50p (25 unique frames a second with each frame shown twice) – perfectly woven fields with twice the vertical resolution, smooth edges and much better detail.

Motion adaptive de-interlacing is only useful for native 50i or 60i video sources where there is movement between two fields (each field is from a different moment in time). In these cases good motion adaptive de-interlacing will weave together parts of the fields that aren’t under motion, and interpolate parts that are under motion. This helps retain some of the vertical resolution, but there is no comparison in image quality to native progressive scan video formats at 50p or 60p.


lol

Ok, I'm not disagreeing with any of that :blink:

But don't you think it's a bit harsh to say that when given a 1080i signal, a HD plasma will throw away half the picture information? I mean, drop field throws away half the picture information but it's still widely used and nobody complains about it for some reason. Kinda pisses me off! Anyway it doesn't matter, we know the facts of what happens to the video when it's bob de-interlaced but everyone's interpretation will be different. I suppose the finished result of the image on the screen is the main thing. If weave or adaptive motion deinterlacing has the effect of containing more image detail when actually looking at it on the screen then I suppose it would be correct to say that they are in fact more detailed images (even though it technically doesn't contain any more picture information).

But don't you think it's a bit harsh to say that when given a 1080i signal, a HD plasma will throw away half the picture information?

Why is it “harsh” if that’s exactly what it does? If used for film sources, bob de-interlacing throws away exactly half the resolution: 540 lines per screen update vs 1080 lines per screen update. Remember those 540-line fields are designed to be shown together at the same time (they come from the same original progressive film frame!), not one after the other with the gaps artificially filled in!

The important thing to remember is that with non-interlaced progressive scan frames, detail is not taken into account if it’s not onscreen at the same time. If it were, then you could argue that 720p is actually 1440p (if you pretend that the next frame is delivering half the information!)

If weave or adaptive motion deinterlacing has the effect of containing more image detail when actually looking at it on the screen then I suppose it would be correct to say that they are in fact more detailed images (even though it technically doesn't contain any more picture information).

?? Well that’s a pretty strange contradiction! As an example weaved 1080p contains exactly twice the original picture information (2,073,600 original pixels) than bob de-interlaced 1080p (1,036,800 original pixels). How exactly does that “not contain any more picture information”? Interpolating to a higher resolution (whether it be 768p or 1080p) doesn’t mean it has any more detail. Weaving however combines fields that belong together from the same original frame so you are getting twice the picture information over bob at all times.

Put it this way: if you took some 1280x720p progressive video footage, converted it to 1280x360p, then up-converted it back to 1280x720p, would it still be 720p? Would you not argue that the original source footage contained twice as much original detail as the up-converted footage?

If you still doubt this, take any high resolution picture (anything over 1920x1080) on your computer and resize it into two different pictures. Resize the first one to 1920x540, and the second one to 1920x1080. Now take the first 1920x540 pic and resize it back to 1920x1080. I think you’ll find the original 1920x1080 picture has far better detail and sharpness than the one that has been re-interpolated back to 1080p. While only a still pic, it’s exactly the same principle.
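The same experiment can be sketched in NumPy instead of an image editor (using a synthetic random-texture "picture" and simple row averaging/repetition rather than a proper resampling filter):

```python
# Halve the vertical resolution of a detailed image, interpolate it back
# up, and compare against the untouched original.
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((1080, 1920))           # synthetic "detailed" picture

# Down-convert: average each pair of rows -> 540 lines.
halved = (original[0::2] + original[1::2]) / 2

# Up-convert: back to 1080 lines by line doubling (bob-style).
restored = np.repeat(halved, 2, axis=0)

# Same pixel count as the original, but the fine detail is gone for good.
print(restored.shape)                          # (1080, 1920)
print(np.abs(original - restored).mean() > 0)  # True - detail was lost
```

The round trip ends at the same 1920x1080 pixel count, yet the discarded vertical detail can never be recovered by interpolation.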

--------------------------------------

By the way, don’t get motion adaptive de-interlacing (only useful for native video sources shot at 50i or 60i) confused with weave de-interlacing (used to recombine interlaced fields that were taken from 24p or 25p film footage). Unlike my explanations of weave above, motion adaptive de-interlacing of native interlaced video footage does not deliver twice the picture information over bob de-interlacing. All it does is restore some of the vertical resolution lost through interlacing. This only works for parts of the image that are still, where it can weave those parts of the successive fields together and display them as a complete frame. For parts that are under motion it uses a similar interpolation process to bob. The result with de-interlacing 1080i is that you’re always seeing somewhere between 540p and 1080p, depending on how much motion is taking place (rather than straight bob, which means you are only seeing a maximum of 540p at all times, regardless of any movement).

Weave is very different as there is no “interfield motion” so full vertical resolution can be restored at all times (proper 1080p).

If used for film sources, bob de-interlacing throws away exactly half the resolution. 540 lines per screen update vs 1080 lines per screen update.

It's not 1080 lines per screen update with weave 'cause the frame is repeated twice on the screen. Bob de-interlaced 1080i always updates 50 new unique images per second, regardless of the source format.

Remember those 540 lines fields are designed to be shown together at the same time (they come from the same original progressive film frame!) not one after the other with the gaps artificially filled in!

I disagree that the 540 line fields are designed to be shown together at the same time because when you display a 1080i video signal natively, on a HD CRT for example (let's say for argument's sake the Sony KVHR), it will always display field 1, followed by field 2, with equal time in between all field scans. If they were ever meant to be shown together at the same time then any 50fps 1080i video would have extremely poor motion. Remember it's all stored & transmitted 1080i50 (isn't it?).

For native interlaced display modes, as 1080i was designed for, the screen is always scanned field after field after field, never 2 fields at the same time.

The important thing to remember is that with non-interlaced progressive scan frames, detail is not taken into account if it’s not onscreen at the same time.

But it is! Field 2 of bob de-interlaced 1080i reveals more of the same progressive image that wasn't shown in bob de-interlaced field 1. It's positioned wrongly on the screen with respect to the previous frame, hence the flicker, but you are seeing more unique picture information in that second field, that originated from the progressive film source.

Put it this way, if you took some 1280x720p progressive video footage, and converted it to 1280x360p, then up-converted it back to 1280x720p, would it still be 720p? Would you not argue that the original source footage contained twice as much original detail as the up-converted footage?

Of course it's not the same as the 720p that was there to begin with because it was down-scaled first, i.e. picture information was lost.

By the way, don’t get motion adaptive de-interlacing (only useful for native video sources shot at 50i or 60i) confused with weave de-interlacing (used to recombine interlaced fields that were taken from 24p or 25p film footage). Unlike my explanations of weave above, motion adaptive de-interlacing of native interlaced video footage does not deliver twice the picture information over bob de-interlacing. All it does is restore some of the vertical resolution lost through interlacing. This only works for parts of the image that are still, where it can weave those parts of the successive fields together and display them as a complete frame. For parts that are under motion it uses a similar interpolation process to bob. The result with de-interlacing 1080i is that you’re always seeing somewhere between 540p and 1080p, depending on how much motion is taking place (rather than straight bob, which means you are only seeing a maximum of 540p at all times, regardless of any movement).

Sure, but wouldn't motion adaptive de-interlacing work just the same as weave for film based sources since the algorithm would detect that the 2 fields correspond to the same frame and therefore contain no moving parts, and then weave them anyway?

It's not 1080 lines per screen update with weave 'cause the frame is repeated twice.

Sorry Ikari but you’re still missing the point. The point is that with weave there are 1080 lines visible on screen at the one time. With bob there are only 540 + interpolated lines.

Sure, the frame is repeated for both bob and weave, the difference being that with bob the frames are placed over the top of each other. They are not interleaved like they are on a CRT! It’s important to remember this! In addition, some lines jump up and down due to differing placement from one field to another, creating aliasing and line twitter. You are definitely not perceiving any more detail than what is presented in any one frame (540 original lines of resolution).
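
To make the bob half of this concrete, here’s a minimal Python sketch. Simple line doubling is assumed for the interpolation (real de-interlacers blend neighbouring lines, but those made-up lines are still not original detail either way):

```python
def bob(field):
    """Bob de-interlace one field: fill each missing line by repeating
    the field line above it. The resulting frame has the full line
    count, but only half of its lines are original picture information."""
    frame = []
    for row in field:
        frame.append(row)        # original line from the field
        frame.append(list(row))  # interpolated (made-up) line
    return frame
```

Note that every second line of the output is a copy, which is why a bobbed 1080i frame carries only 540 lines of real detail.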

I disagree that the 540 line fields are designed to be shown together at the same time because when you display a 1080i video signal natively, on a HD CRT for example (let's say for argument's sake the Sony KVHR), it will display field 1, followed by field 2. If they were designed to be shown together at the same time then any 50fps 1080i video signal would have extremely poor motion. Remember it's all stored 1080i50 (isn't it?).

It sounds like you’re getting native 50i thoroughly confused with 25p presented as 50i.

With native 50i video (shot in 50i format) the fields are from different moments in time and this format is designed to be shown on a native interlaced monitor with one field shown after the other. It was invented for use on CRTs back in the 1930s and it was known that it would work at the time because of the nature of CRT technology. No one could ever have imagined it would still be in use for display on progressive digital monitors 70 years later! :blink: That’s what makes native interlaced so hard to convert properly to progressive scan (requiring advanced motion adaptive de-interlacing for it to look any good at all).

With 50i derived from 24/25p sources, it’s an entirely different situation, and this is the part you seem to be having a bit of trouble getting your head around. With 25p sources, the fields are from the same moment in time, and come from the same original film frame. If displayed this way on a CRT, you can get the illusion of there being more than 540 lines on screen due to proper interleaving of the fields, and the “persistence of phosphors” effect as explained above. However you have to get your head around the fact that bob de-interlacing does not keep the interlacing effect intact and you are no longer getting the perception of woven together fields.

Weave rectifies this situation entirely as it places the fields back together (back to the way it was actually shot and shown at the cinema!) and each frame is repeated (just as each frame is repeated at 48p at the cinema!). This means you get full vertical detail on screen at all times, and you also do away with ugly interpolation artefacts that are always present with bob.
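
A minimal Python sketch of weave for comparison (nothing here is from any particular chip; it’s just the field interleaving described above):

```python
def weave(field_odd, field_even):
    """Weave de-interlace: interleave the two fields that came from the
    same film frame, restoring the full-resolution progressive frame.
    Every output line is original picture information."""
    frame = []
    for odd_row, even_row in zip(field_odd, field_even):
        frame.append(odd_row)   # odd line from field 1
        frame.append(even_row)  # even line from field 2
    return frame
```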

But it is! Field 2 of bob upconverted 1080i reveals more of the same image that wasn't shown in bob upconverted field 1. It's positioned wrongly on the screen, hence the bob flicker, but you are seeing more unique picture information in that second field, that originated from the progressive film source.

Yes, but for the umpteenth time you’re only seeing 540 lines at once. That’s why bob is more accurately referred to as 1920x540/50p (while weaved 1080i is more accurately referred to as 1920x1080/25p).

Given the source only has 24 or 25 frames to begin with, there is nothing to be gained by having 50 unique frames, and bob is no smoother in motion (as the frames are still from the same moment in time).

Weave gives you twice as much detail on screen at all times. There is no getting around that point.

If you still don’t believe me, have a look at this animated gif (taken from Secret’s guide to progressive scan) which is alternating between bob (single field interpolation) and weave. Spot the difference?

Here’s a quote from the same article, which was just above that gif.

Single-Field Interpolation

Below, on the left, is a full film frame, with both fields combined together (weave) to make a complete progressive frame. The center image is just the first field of the frame, with the missing scan lines interpolated. Note the loss of resolution and the increased stair stepping on the diagonal lines. Again, the images are zoomed 200%. The image below, on the right, is an animation that demonstrates what it looks like when the deinterlacer switches between weave and single-field interpolation. If motion-adaptive deinterlacing were employed, the image would retain the full vertical resolution. There are two larger versions that you can select from to view (below, right).

Please read the full article here. This is the most comprehensive and accurate article on de-interlacing I’ve come across, from one of the most respected A/V sites on the net. If you still have doubts after reading this, I give up :P

Of course it's not the same as the 720p that was there to begin with because it was down converted first, i.e picture information was thrown away.

Well, funny you should say that as it’s exactly the same situation with bob! Think of it this way. Nowadays studios start with a 1080/24p HD master. Agreed? Now this is converted to 1920x1080i for broadcast distribution. Because it’s film sourced (24p), no spatial or temporal resolution has to be discarded for the conversion to interlaced scan. Think of it as “lossless compression” (but only when film sourced!). The information is simply “split into two fields”. Now this can be shown in interlaced form, but only on an interlaced CRT monitor (where, as explained, you can get some of the perception of the full 1080 lines). However if it is bobbed on a digital display you end up with 1920x540p. Full stop. Just because the line structure is slightly different from the first bobbed frame to the next does not in any way give you the benefit of increased resolution. It’s still only 540 lines per frame with all the gaps “filled in”.
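
That “lossless compression” point can be demonstrated directly: split a progressive frame into two fields and weave them back together, and you get the original frame back exactly. A minimal Python sketch with made-up pixel values:

```python
# One progressive "film frame": 8 lines of sample pixel values.
progressive = [[y * 10 + x for x in range(4)] for y in range(8)]

# Convert to interlaced: split the frame into its two fields.
field_odd = progressive[0::2]   # lines 1, 3, 5, 7
field_even = progressive[1::2]  # lines 2, 4, 6, 8

# Weave de-interlace: interleave the fields back together.
rebuilt = []
for odd_row, even_row in zip(field_odd, field_even):
    rebuilt.append(odd_row)
    rebuilt.append(even_row)

# Film -> interlaced -> weave is lossless: the frame comes back intact.
assert rebuilt == progressive
```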

To try and explain this better, I whipped up a basic picture illustrating the differences in presentation of film sources shown as interlaced scan, bob and weave. Check it out here.

Sure, but wouldn't motion adaptive de-interlacing work just the same as weave would for film based sources as the algorithm would detect that the 2 fields correspond to the same frame and therefore contain no moving parts, and then weave them anyway?

Absolutely, but only where there is no motion whatsoever. Whenever there is motion it must then use a combination of weaving and bobbing for different parts of the picture. This is why it is so processor intensive and up until recently was reserved only for SD de-interlacing (and even then it’s not always used).

One very important thing to point out here Ikari is that all of that above is very well accepted information in the A/V world. None of it is under debate (with the exception of this thread :P). No one with any experience with video processing suggests bob delivers anything other than the amount of resolution it receives in one field (540 lines in the case of 1080i).

I guess the big question is have you seen it for yourself? I’ve seen the difference between bob and weave on numerous displays with a variety of SD and HD sources. The difference in image quality between the two methods for film sourced material is nothing short of dramatic.

It’s no secret that bob sucks.

To sum all this up, you only have to remember one thing:

Interlaced scan only delivers more information than what is present in one single field, IF presented in native interlaced scan. As soon as it’s interpolated to progressive scan via bob, the line structure (interleaving) is destroyed, and filled in with made-up data.

If bobbed 1080i actually presented more data than 540p, then you could argue that 720p is actually 1440p (taking two frames into account) or that native 1920x1080/60p is actually 1920x2160. Obviously that isn’t the case.

It's not 1080 lines per screen update with weave 'cause the frame is repeated twice on the screen. Bob de-interlaced 1080i always updates 50 new unique images per second, regardless of the source format.

On a progressive display, Weave deinterlacing will combine two successive fields into one frame. On an Interlaced display, each 540 line field in 1080i will be shown during each screen update, like you said. But (!!!), due to the way our eyes work (you really must read up on this to fully understand how Interlacing works), the two 540 line fields will merge into what appears as a single 1080 line frame (though it won’t appear quite that high due to the Kell factor, and our crappy eyes).

There are two other things you really need to learn about and that is Video and Film sources. Typically, Video sources will have 50 unique images per second, and are true Interlaced sources. Film will have only 25 unique images per second. When you transfer it to an Interlaced system like 1080i, you take each Progressive frame and make two fields out of it. When you Weave deinterlace it, you're basically recombining the two fields to recreate the original frame. This cannot be done on Video sources, and can only be done on Film sources that have been converted to an Interlaced signal properly. An example of a bad conversion would be Star Wars Episode II on Ten, which for some reason went through an odd Interlacing process which turned a Progressive film into Interlaced video.
