De-interlacing and Scaling Explained
Posted 24 March 2006 - 12:14 PM
When people go out and buy a shiny new High Definition display, most of them naturally assume that with the simple addition of an HD set top box, they’ll be getting the full high resolution picture their display is capable of. In other words, they think every pixel of their display will be utilised to deliver as much detail as possible. Unfortunately it’s not quite that simple…
At the time of writing, most HD digital displays are made up of the following fixed pixel structures:
1024x768 (Common on 42” HD plasmas)
1024x1024 (Less common, but found on "ALIS" 42” HD plasmas)
1280x720 (Common on DLP, LCD projectors and digital RPTVs)
1280x768 (Found on Pioneer HD 50” plasmas, DLP projectors and some LCDs)
1366x768 (Common on many high definition plasmas and LCDs)
1920x1080 (Full high definition. Very rare, but availability is increasing, and this will eventually become the standard high definition format for most displays).
Unfortunately the confusion doesn’t end there. All high definition broadcasts in Australia are transmitted in the 1440x1080i or 1920x1080i format (I’ll leave 576p out of the equation, as no one with any audio/visual knowledge classes this broadcast format as true high definition). The “i” after the 1080 figure stands for interlaced scan. Bear with me as this is where things get tricky!
Since the beginning of television broadcasting in the 1930s, all the pictures we are used to seeing on our conventional CRT televisions have been made up of two interlaced “fields”. The odd lines (1,3,5,7 etc) are painted on the screen in the first pass (field one) followed by the even lines (2,4,6,8 etc) in the following pass (field two).
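To picture how interlacing splits things up, here’s a toy Python snippet (purely my own illustration of the line split, nothing more):

```python
# A progressive frame split into its two interlaced fields:
# the odd lines (1,3,5,...) form field one, the even lines (2,4,6,...) field two.
frame = ["line1", "line2", "line3", "line4"]

field_one = frame[0::2]   # lines 1 and 3
field_two = frame[1::2]   # lines 2 and 4

print(field_one)  # ['line1', 'line3']
print(field_two)  # ['line2', 'line4']
```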
Interlacing was introduced as a clever way of cutting down on the bandwidth of a broadcast, reducing fullscreen flicker (by allowing 50 or 60 screen updates a second rather than 25 or 30) and giving the illusion of there being more lines on screen than there actually are. This interlacing method works fine on older CRT tube televisions, due to a combination of the persistence of the phosphors onscreen (they take time to fade, giving the illusion of the fields blending together) and the perception of our brains, which due to the rapid refreshes of the screen perceive the fields to be onscreen at the same time.
Since the introduction of new digital displays, we’ve run into a problem. The problem being that all digital displays are inherently "progressive". In other words they generally work by illuminating all pixels on screen at once, and therefore can’t operate in interlaced mode like older CRT technology.
This creates a problem with all interlaced formats that must be addressed. The interlaced material must be “de-interlaced” (converted to progressive scan) in order for it to be displayed properly on the digital display. Confused yet? Well hang on as things are about to get a lot messier…..
De-interlacing is the process of taking interlaced fields and converting them to progressive frames for display. Sounds easy right? Well yes and no. Where things get complicated is the huge variety of content sources out there. Whether something was originally shot on film or video, and at what frame rate, has a huge bearing upon the type of de-interlacing that can be performed to deliver acceptable image quality.
The most simple and common method of de-interlacing is a method known as field interpolation, or “bob de-interlacing”. Bob works by taking a single interlaced field (for example, a 540-line interlaced field of the 1080i format) and filling in the gaps via interpolation, to convert that field to a complete progressive frame. Interpolation simply looks at surrounding lines from the same field and “guesses” how to fill in the gaps between the interlaced lines. This works fine, but the big problem is that it throws away much of the available resolution. I.e. 1080i effectively becomes 540p, as only half of the available information (one interlaced field) is taken into account with the de-interlacing process. Surely there must be a better way? Well there is…
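To make bob a bit more concrete, here’s a toy Python sketch of single-field interpolation. The crude averaging filter is my own simplification; real chipsets use far fancier interpolation:

```python
# "Bob" (single-field) de-interlacing sketch: a field is a list of rows of
# pixel values, and the missing lines are interpolated by averaging the
# field lines above and below.

def bob_deinterlace(field):
    """Expand one interlaced field into a full progressive frame."""
    frame = []
    for i, row in enumerate(field):
        frame.append(row)  # original field line
        if i + 1 < len(field):
            nxt = field[i + 1]
            # interpolated line: average of the neighbouring field lines
            frame.append([(a + b) // 2 for a, b in zip(row, nxt)])
        else:
            frame.append(row[:])  # last line: just repeat it
    return frame

field = [[10, 10], [30, 30]]   # a 2-line field
print(bob_deinterlace(field))  # [[10, 10], [20, 20], [30, 30], [30, 30]]
```

Note that half the lines in the output frame are guesses; none of the other field’s real picture information is used.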
Weave de-interlacing
As the name implies, weave de-interlacing takes successive odd-line and even-line interlaced fields and weaves them back together into a single frame. This works very well for any material that was originally shot on film at 24 frames a second (the international standard for film or drama production). The reason is that with 24 frames a second film sources, the odd and even line fields are from the same original film frame. All you have to do is put them back together and hey presto, you have full resolution! If done correctly, material that was originally transmitted as 1080i can be properly de-interlaced to 1080p with no loss of vertical resolution.
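Weave itself is almost trivially simple, as this little Python sketch shows (fields and frames here are just lists of rows):

```python
# Weave de-interlacing sketch: interleave the odd-line and even-line
# fields back into the full frame they were originally split from.

def weave(odd_field, even_field):
    frame = []
    for odd_row, even_row in zip(odd_field, even_field):
        frame.append(odd_row)
        frame.append(even_row)
    return frame

# Fields taken from the same film frame reassemble losslessly:
original = [[1], [2], [3], [4]]
odd, even = original[0::2], original[1::2]
print(weave(odd, even) == original)  # True
```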
However (and this is a big however!) this only works for film based material, or anything originally shot at 24, 25 or 30 frames a second. This method of de-interlacing cannot be applied to anything that was shot on interlaced video. Why? Interlaced video is shot at 50 or 60 individual fields a second that don’t form complete frames, which means each field is from a “different moment in time”. If you attempt to use weave de-interlacing on 50i or 60i video, it will combine fields that simply don’t belong together, resulting in an effect called combing or mouse teeth. To combat this, most displays or video processors will simply fall back on bob de-interlacing for interlaced video sources, and if clever enough, will use the better weave de-interlacing for film sources. This is done through a process called “cadence detection”, which in simple terms means the video processor is clever enough to work out the origin of the source footage.
Note that in the USA and other 60Hz markets, there is also a process called 3:2 pull-down which must be applied as part of the weave de-interlacing process for good results. This process converts 24 frames a second into 60 fields a second to match their 60Hz refresh rate. In Australia with our 50Hz system we don’t have this to worry about, so I have chosen to leave it out of this article. You can read about 3:2 pull down here and here.
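For the curious, the 3:2 cadence itself is simple enough to sketch in Python (a toy illustration of the field repeat pattern only):

```python
# 3:2 pull-down sketch: 24 fps film frames become a 60 fields/sec sequence
# by alternately holding each frame for 3 fields, then 2 fields.

def three_two_pulldown(frames):
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2  # the 3-2-3-2... cadence
        fields.extend([frame] * repeats)
    return fields

# 4 film frames -> 10 fields, i.e. 24 frames -> 60 fields per second
print(three_two_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

A cadence-detecting de-interlacer essentially runs this pattern in reverse to find which fields belong to the same film frame.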
So is there a better method than bob de-interlacing for 50i and 60i video sources? Yes, but it’s tricky…
Motion adaptive based de-interlacing
This is the most sophisticated type of de-interlacing and as a result is very processor intensive, requiring expensive dedicated chipsets. As such it is currently rare in HD displays (and even external processors, for that matter). Some displays and processors can apply this type of de-interlacing to standard definition sources (480i or 576i), but very few can use it with high definition 1080i. This is expected to change over the next few years.
So how does motion adaptive de-interlacing work? Motion adaptive de-interlacing analyses several successive fields (the more fields it analyses the better it generally is) and looks carefully at differences between those fields. For parts of the image that are motionless or relatively still, it will weave together sections of those successive fields, and for parts of the fields that are under motion, it will use interpolation to fill in the gaps. Think of it as a combination of weave and bob de-interlacing.
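As a rough illustration only (real motion adaptive chipsets are vastly more sophisticated), here’s a toy per-pixel version in Python: it weaves where the same-position fields either side agree (no motion), and interpolates where they don’t:

```python
# Toy per-pixel motion-adaptive de-interlacer. f_prev and f_next are the
# opposite-parity fields before and after the current field f_cur; where
# they agree the picture is still, so we weave their data in; where they
# differ we fall back to bob-style interpolation within f_cur.

def motion_adaptive(f_prev, f_cur, f_next, threshold=8):
    frame = []
    for i, row in enumerate(f_cur):
        frame.append(row)
        below = f_cur[i + 1] if i + 1 < len(f_cur) else row
        filled = []
        for x, pix in enumerate(row):
            if abs(f_prev[i][x] - f_next[i][x]) <= threshold:
                filled.append(f_prev[i][x])           # still area: weave
            else:
                filled.append((pix + below[x]) // 2)  # motion: interpolate
        frame.append(filled)
    return frame

# Still picture: the missing line is woven in from the other field
print(motion_adaptive([[5, 5]], [[9, 9]], [[5, 5]]))  # [[9, 9], [5, 5]]
```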
If done properly this can result in better resolution (as you can get close to the full vertical resolution when there is little movement) but the results vary wildly depending on how sophisticated the motion adaptive de-interlacing is. The best motion adaptive de-interlacing chipsets are generally ones that can de-interlace on a “per pixel” level which means they are powerful enough to look at differences between fields on an individual pixel basis.
So that about covers the basics of de-interlacing. There are other less common methods of de-interlacing available such as vector based de-interlacing, but for the sake of simplicity I’ve chosen to keep them out of this article.
The problem with de-interlacing
As you can see de-interlacing is tricky business, and as a result it’s often not done properly by manufacturers. By far the most common and cheapest implementation of high definition de-interlacing is the bob method, and hence the majority of HD displays on the market will use this method for all high definition 1080i sources, regardless of whether they are film or video based. In other words the display is unable to differentiate between sources, and simply applies basic bob de-interlacing to all HD content. The result is that you may often only be getting a maximum of 540 original lines of resolution (from one interlaced 1080i field) on your 720p, 768p or even 1080p display. To combat this some people invest in sophisticated external video processors which will often do a much better job than the de-interlacers and scalers built into their display.
So how can you tell what your display uses?
In short, unless you know what to look for, it’s very hard. Bob de-interlacing does soften the picture, and can leave ugly defects (aliasing, shimmer, moiré and line flicker, to name a few), but unless you’re familiar with these picture artifacts you’re unlikely to notice.
The simplest way of knowing whether your display incorporates high quality high definition processing is to look for specific mention of it in marketing material or on the manufacturer’s website. The rare HD displays that feature good de-interlacing and processing will generally make a big deal of it. If you see no mention of it anywhere on the website or in other related marketing material, then it’s generally safe to assume it isn’t there.
I don’t at all recommend ringing the manufacturer to ask about processing capabilities, as you’re almost guaranteed to get someone who has no idea what the hell you’re talking about, and even if you do reach someone who does, they will more than likely not be forthcoming with this type of information.
It is worth noting that video processing is improving all the time, and the next few years are likely to see big improvements to high definition de-interlacing and scaling in consumer displays.
So now that we’ve covered de-interlacing, what about scaling? Scaling is a little less tricky than de-interlacing, as the same method of scaling can often be applied to all sources (after de-interlacing) regardless of their origin. However there are still many different methods, which can have a big impact on image quality. Put simply, scaling is the process of taking a given video format (anything from 480i to 1080p) and converting it to match the resolution of the display. For example, when a 720x576p signal (like a progressive PAL DVD) is input into a 1280x720p display, it must “up-convert” that material to 720p so that it matches the physical resolution of the device. Or when a 1920x1080i signal is input into a 1024x768 display, it must “down-convert” it to match the display.
Like bob de-interlacing, scaling is done through a process of interpolation (in the case of up-converting) or by discarding resolution (in the case of down-converting). Sometimes a display must both down-convert and up-convert to match its native resolution (i.e. up-convert horizontal resolution while down-converting vertical resolution), which can result in a very bad picture if not done well.
Basic scaling analyses anywhere between 4 and 16 pixels from a single frame, to average one pixel in the final scaled frame. The number of pixels taken into account gives the “xx-tap” figure of a scaler. For example, a scaler that looks at only 4 pixels to create a final pixel is called a “4-tap scaler”. Top end scaling can go all the way up to 1024-tap, as seen with Silicon Optix’s HQV processing.
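As a rough illustration, here’s a toy 4-tap scaler for a single scan line in Python. The triangle-filter weights are my own simplification, not any particular commercial algorithm:

```python
# Toy 4-tap upscaler for one scan line: each output pixel is a weighted
# average of the 4 nearest source pixels ("taps"), using simple triangle
# weights centred on the output pixel's position in the source.

def four_tap_resize(line, out_len):
    src_len = len(line)
    out = []
    for i in range(out_len):
        pos = i * (src_len - 1) / (out_len - 1)  # position in the source line
        base = int(pos) - 1                      # first of the 4 taps
        total = weight_sum = 0.0
        for t in range(4):
            idx = min(max(base + t, 0), src_len - 1)          # clamp at edges
            w = max(0.0, 1.5 - abs(pos - (base + t)))         # triangle weight
            total += line[idx] * w
            weight_sum += w
        out.append(total / weight_sum)
    return out

print(four_tap_resize([0, 100], 5))  # a smooth monotone ramp (roughly 20, 33, 50, 67, 80)
```

More taps mean each output pixel is built from a wider neighbourhood of source pixels, which (with well-chosen weights) gives smoother, more accurate results.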
There are many other different methods of scaling available, depending on the display in question, all of which are beyond the scope of this article.
Just like de-interlacing, a lot of home theatre enthusiasts now choose to use external scalers, as they give far greater flexibility with output formats, typically incorporate higher quality scaling algorithms, and will therefore nearly always result in a better picture than using the display’s own internal scaler.
Other forms of scaling
Also common these days are “up-converting DVD players” which can not only de-interlace a standard definition interlaced PAL DVD (from 576i to 576p) using weave de-interlacing, but also then up-convert the material to 720p, 1080i or even 1080p before outputting it to the display. The result can be a smoother, more detailed picture with fewer artifacts, depending on the quality of the de-interlacing and scaling built into the player.
All high definition digital set top boxes are also capable of up-converting material to 720p or 1080i, however almost all of them use very basic bob de-interlacing and basic scaling algorithms, and you’ll nearly always get better results by outputting the material in its native format (576i or 1080i) and letting the display or an external scaler do the de-interlacing and scaling.
Some people also use computers (known as “Home Theatre PCs” when built for home theatre use) which can also perform decent quality de-interlacing and scaling at a very affordable price. An example of free software used for this task is the popular Dscaler.
So does your head hurt yet? It should! This is just a tiny article on an incredibly complex area. If you’re brave enough to want to find out more I suggest having a look at some of these links:
De-interlacing video basics – A good basic guide to de-interlacing.
Progressive Scan/De-interlacing Explained – Secrets of Home Theater & Hi-Fi. This is probably the most comprehensive and accurate explanation on the internet of all the various de-interlacing methods (complete with animated pictures). Some of it is related to 60Hz, but most of it is still very relevant to our 50Hz system. Highly recommended reading.
HQV Processing FAQ – Silicon Optix HQV is arguably the most sophisticated consumer range video processing on the planet. This technology guide is a great explanation of what sets its de-interlacing and scaling hardware apart from the competition.
Understanding Interlace - A very good primer to interlaced scan from a broadcast perspective.
Posted 24 March 2006 - 04:15 PM
Much like Motion Adaptive Deinterlacing, it will examine successive fields and use Bob or Weave deinterlacing when it deems necessary, but the special thing about it is that it will interpolate along several vectors (mostly diagonal lines)
Posted 25 March 2006 - 08:43 AM
Much like Motion Adaptive Deinterlacing, it will examine successive fields and use Bob or Weave deinterlacing when it deems necessary, but the special thing about it is that it will interpolate along several vectors (mostly diagonal lines)
I’m aware of vector based de-interlacing Davo (briefly mentioned it in the article) but it’s very rare to see it in any consumer range products, so I left it out for the sake of simplifying things a little.
I think ATi might be using Vector based de-interlacing with AVIVO? Have you seen it in action?
Posted 25 March 2006 - 10:50 AM
Posted 25 March 2006 - 12:14 PM
Given it's still very rare, I think I'll leave it out of the article for now, as the topic is probably overwhelming enough for newbies.
Posted 13 April 2006 - 06:45 AM
Am I missing something here?
Posted 13 April 2006 - 01:49 PM
No confusion here mate
If you’ve read that bob is just line doubling, then that information is out of date. In the early days of video processing and CRTs, bob could simply mean line doubling when taking a 480i or 576i field and converting it directly to 480p or 576p. However bob is now generally accepted as the process of field interpolation. Rather than simply doubling lines, it looks at the surrounding lines and creates new intermediate lines. Bob is more correctly referred to nowadays as “field interpolation” but the bob name has hung around.
Since the move to digital displays this bob process has become part of the basic scaling process itself. Because processing is no longer just doubling 576i to 576p (as it used to on a CRT), it also has to up-convert to a higher resolution at the same time. In other words a 288 line interlaced field has to be bobbed to something like 768p. This is typically done through an interpolation process.
Have a read of how Secrets describes bob. (Secrets of Home Theatre/Hi-Fi is one of the most respected sources of video processing info on the net)
From Secrets of Home Theatre & High Fidelity said:
This just involves taking each field and scaling it to a full frame. The missing lines between each of the scan lines in the field are filled in with interpolated data from the lines above and below. Done badly, the screen looks blocky and pixellated. Even done well, the image looks very soft, as image resolution is unavoidably lost. In addition, thin horizontal lines will tend to “twitter” as the camera moves. These thin lines will fall on just one field of the frame, so they will appear and disappear as the player alternates between the odd fields and even fields. This is the most basic deinterlacing algorithm, and the one that almost every deinterlacer falls back on when nothing else will work.
Ah, well that depends on how you look at it doesn’t it! Bob is effectively throwing away half the available vertical resolution. Unlike weave, which combines successive fields to achieve full 576p resolution, bob simply interpolates a single field to 576p, which means you’re effectively looking at only 288 lines of resolution. The rest is “made up”. So yes, in comparison to native interlaced on a CRT (where we get the perception of woven fields) and weave de-interlacing for film sources on a progressive display, bob definitely discards half of the available vertical resolution.
Posted 13 April 2006 - 07:38 PM
btw the reason I said there was confusion about bob de-interlacing is 'cause people seem to think that their plasmas are "throwing away 50% of the video signal" (see http://www.dtvforum.....dpost&p=379591 , http://www.dtvforum....ndpost&p=379694) after reading your article.
I just don't agree that information is being thrown away, there were 540 lines per frame to begin with, the 540 lines remain after bobbing, where's the loss?
eg. for 1080i50 material that has come from a film source (25fps material), when using bob to de-interlace, field 1 is doubled to 1080p, field 2 is doubled to 1080p, we have two unique progressive frames for the one point in time, all the information is kept. It looks ugly but all the information is there.
I don't see how using adaptive weave de-interlacing results in any more resolution being obtained. There is no extra information, it's just being presented in a way that hides the loss of information in motion areas.
What do you think?
Posted 13 April 2006 - 10:06 PM
De-interlacing is a form of scaling! Even oldschool stand alone line doublers are also referred to as scalers. Bob or field interpolation is certainly a form of scaling, but any scaling method that converts interlaced to progressive is also called de-interlacing.
I just don't agree that information is being thrown away, there were 540 lines per frame to begin with, the 540 lines remain after bobbing, where's the loss?
eg. for 1080i50 material that has come from a film source (25fps material), when using bob to de-interlace, field 1 is doubled to 1080p, field 2 is doubled to 1080p, we have two unique progressive frames for one point in time, all the information is kept. It looks ugly but all the information is there.
I can see why you’d think that Ikari, but you’re looking at it the wrong way. Interlaced scan formats only work properly on native interlaced CRT displays (which is what the format was invented for in the first place, early last century). On a CRT the fields are correctly interleaved, i.e. lines 1,3,5,7 and so on appear in the first field, and lines 2,4,6,8 and so on appear in the following field, in-between the positions of the lines from the previous field. That last part is the key point: all the odd and even information from one field to the next lines up. Combine this proper interleaving with the ‘persistence of phosphors’ effect of a CRT display (which means the odd lines are still visible as the even lines appear on screen) and our brain perceives that both fields are onscreen at the same time. In other words, even though the broadcast format is only comprised of 540 lines per motion update (1080i), you are actually seeing closer to 1080 lines onscreen when there isn’t a lot of movement. When there is movement vertical resolution drops, but you’re always seeing somewhere between 540 and 1080 lines on screen.
It’s a very different situation with bob de-interlacing on a digital display. Because bob interpolates new intermediate data that fills in the gaps, and has to scale to match the display’s native resolution, the correct placement of these original interleaved fields is no longer present. The original resolution and interpolated lines are essentially placed over the top of each other and no interleaving takes place. So one field becomes a complete interpolated progressive frame, followed by the next complete interpolated progressive frame and so on. In addition to this there isn’t the phosphor persistence effect that is seen on CRTs, so there is no longer any perception of the successive fields/frames being joined together. The result is only the perception of 540 lines of original resolution at all times (with interpolated data smoothing out the frames).
To top it off you now also have interpolation jaggies, moiré effect, and line twitter as there can be detail bobbing up and down due to being in different positions from one frame to another.
Put simply, bob does NOT preserve any of the original interleaving effect that makes the interlaced scan format appear acceptable on a CRT. This is the reason good weave de-interlacing (for film sources) and motion adaptive de-interlacing (for video sources) is a must for good image quality with interlaced HD sources on digital displays. Unfortunately very few current displays feature weave de-interlacing for 1080i, let alone motion adaptive de-interlacing for 1080i yet (so yes, people are often only getting 540 unique lines of resolution per motion update on their HD screens) but thankfully this situation is starting to change with next generation displays.
Keep in mind that bob de-interlacing still retains the full horizontal resolution of a 1080i source, so it will still look very good. On a 1366x768 display you would be seeing 1440x540 converted to 1366x768, and some people wouldn’t notice the difference between this and proper weave de-interlaced/scaled 1080i presented at full 1366x768p resolution.
Ok, there are a couple of things being confused here. You have to remember that with film sources there are only 24 or 25 original film frames, so the interlaced fields of 50i are from the same moment in time (both odd and even line fields are taken from the same original film frame). Hence even though the broadcast has arrived as 1440x540/50i (1080i), weave de-interlacing can easily recombine these interlaced fields back into full 1080/25p with no loss of vertical resolution.
Bob de-interlacing of film sourced 1080i material = 1440x540/50p (25 unique frames a second with each frame shown twice) - with some detail bobbing up and down and interpolation artefacts such as aliasing.
Weave de-interlacing of film sourced 1080i material = 1440x1080/50p (25 unique frames a second with each frame shown twice) – perfectly woven fields with twice the vertical resolution, smooth edges and much better detail.
Motion adaptive de-interlacing is only useful for native 50i or 60i video sources where there is movement between two fields (each field is from a different moment in time). In these cases good motion adaptive de-interlacing will weave together parts of the fields that aren’t under motion, and interpolate parts that are under motion. This helps retain some of the vertical resolution, but there is no comparison in image quality to native progressive scan video formats at 50p or 60p.
Posted 14 April 2006 - 01:37 AM
Ok, I'm not disagreeing with any of that
But don't you think it's a bit harsh to say that when given a 1080i signal, a HD plasma will throw away half the picture information? I mean, drop field throws away half the picture information but it's still widely used and nobody complains about it for some reason. Kinda pisses me off! Anyway it doesn't matter, we know the facts of what happens to the video when it's bob de-interlaced but everyone's interpretation will be different. I suppose the finished result of the image on the screen is the main thing. If weave or adaptive motion deinterlacing has the effect of containing more image detail when actually looking at it on the screen then I suppose it would be correct to say that they are in fact more detailed images (even though it technically doesn't contain any more picture information).
Posted 14 April 2006 - 01:21 PM
Why is it “harsh” if that’s exactly what it does? If used for film sources, bob de-interlacing throws away exactly half the resolution: 540 lines per screen update vs 1080 lines per screen update. Remember those 540-line fields are designed to be shown together at the same time (they come from the same original progressive film frame!), not one after the other with the gaps artificially filled in!
The important thing to remember is that with progressive scan frames, detail is not taken into account if it’s not onscreen at the same time. If it were, you could argue that 720p is actually 1440p (if you pretend that the next frame is delivering the other half of the information!)
?? Well that’s a pretty strange contradiction! As an example, weaved 1080p contains exactly twice the original picture information (2,073,600 original pixels) of bob de-interlaced 1080p (1,036,800 original pixels). How exactly does that “not contain any more picture information”? Interpolating to a higher resolution (whether it be 768p or 1080p) doesn’t mean it has any more detail. Weaving, however, combines fields that belong together from the same original frame, so you are getting twice the picture information over bob at all times.
Put it this way, if you took some 1280x720p progressive video footage, and converted it to 1280x360p, then up-converted it back to 1280x720p, would it still be 720p? Would you not argue that the original source footage contained twice as much original detail as the up-converted footage?
If you still doubt this, take any high resolution picture (anything over 1920x1080) on your computer and resize it into two different pictures. The first one resize to 1920x540. The second one resize to 1920x1080. Now take the first 1920x540 pic and resize it back to 1920x1080. I think you’ll find the original 1920x1080 picture has far better detail and sharpness than the one that has been re-interpolated back to 1080p. While only a still pic, it’s exactly the same principle.
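The same experiment in miniature, as a toy Python sketch: halve the vertical resolution of a tiny “image”, interpolate it back up, and compare with the original:

```python
# Down-convert then up-convert experiment: keep every second row (halving
# vertical resolution), then interpolate the missing rows back in, and
# compare the round trip with the original.

def halve_rows(img):
    return img[0::2]                      # keep every second row

def double_rows(img):
    out = []
    for i, row in enumerate(img):
        out.append(row)
        nxt = img[i + 1] if i + 1 < len(img) else row
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])  # interpolate
    return out

original = [[0], [90], [10], [80]]        # alternating fine vertical detail
round_trip = double_rows(halve_rows(original))
print(original)    # [[0], [90], [10], [80]]
print(round_trip)  # [[0], [5], [10], [10]] - the fine detail is gone
```

The interpolated version has the right number of rows, but the detail that lived in the discarded rows can never be recovered.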
By the way, don’t get motion adaptive de-interlacing (only useful for native video sources shot at 50i or 60i) confused with weave de-interlacing (used to recombine interlaced fields that were taken from 24p or 25p film footage). Unlike my explanations of weave above, motion adaptive de-interlacing of native interlaced video footage does not deliver twice the picture information over bob de-interlacing. All it does is restore some of the vertical resolution lost through interlacing. This only works for parts of the image that are still, where it can weave those parts of the successive fields together and display them as a complete frame. For parts that are under motion it uses a similar interpolation process to bob. The result with de-interlacing 1080i is that you’re always seeing somewhere between 540p and 1080p, depending on how much motion is taking place (rather than straight bob, which means you are only seeing a maximum of 540p at all times, regardless of any movement).
Weave is very different as there is no “interfield motion” so full vertical resolution can be restored at all times (proper 1080p).
Posted 15 April 2006 - 02:20 AM
It's not 1080 lines per screen update with weave 'cause the frame is repeated twice on the screen. Bob de-interlaced 1080i always updates 50 new unique images per second, regardless of the source format.
I disagree that the 540 line fields are designed to be shown together at the same time because when you display a 1080i video signal natively, on a HD CRT for example (let's say for argument's sake the Sony KVHR), it will always display field 1, followed by field 2, with equal time in between all field scans. If they were ever meant to be shown together at the same time then any 50fps 1080i video would have extremely poor motion. Remember it's all stored & transmitted 1080i50 (isn't it?).
For native interlaced display modes, as 1080i was designed for, the screen is always scanned field after field after field, never 2 fields at the same time.
But it is! Field 2 of bob de-interlaced 1080i reveals more of the same progressive image that wasn't shown in bob de-interlaced field 1. It's positioned wrongly on the screen with respect to the previous frame, hence the flicker, but you are seeing more unique picture information in that second field, that originated from the progressive film source.
Of course it's not the same as the 720p that was there to begin with because it was down-scaled first, i.e picture information was lost.
Sure, but wouldn't motion adaptive de-interlacing work just the same as weave for film based sources since the algorithm would detect that the 2 fields correspond to the same frame and therefore contain no moving parts, and then weave them anyway?
Posted 15 April 2006 - 08:32 PM
Sorry Ikari but you’re still missing the point. The point is that with weave there are 1080 lines visible on screen at the one time. With bob there are only 540 + interpolated lines.
Sure, the frame is repeated for both bob and weave, the difference being that with bob the frames are placed over the top of each other. They are not interleaved like they are on a CRT! It’s important to remember this! In addition some lines jump up and down due to differing placement from one field to another, creating aliasing and line twitter. You are definitely not perceiving any more detail than what is presented in any one frame (540 original lines of resolution).
It sounds like you’re getting native 50i thoroughly confused with 25p presented as 50i.
With native 50i video (shot in 50i format) the fields are from different moments in time, and this format is designed to be shown on a native interlaced monitor with one field shown after the other. It was invented for use on CRTs back in the 1930s and it was known that it would work at the time because of the nature of CRT technology. No one could ever have imagined it would still be in use for display on progressive digital monitors 70 years later! That’s what makes native interlaced so hard to convert properly to progressive scan (requiring advanced motion adaptive de-interlacing for it to look any good at all).
With 50i derived from 24/25p sources, it’s an entirely different situation, and this is the part you seem to be having a bit of trouble getting your head around. With 25p sources, the fields are from the same moment in time, and come from the same original film frame. If displayed this way on a CRT, you can get the illusion of there being more than 540 lines on screen due to proper interleaving of the fields, and the “persistence of phosphors” effect as explained above. However you have to get your head around the fact that bob de-interlacing does not keep the interlacing effect intact and you are no longer getting the perception of woven together fields.
Weave rectifies this situation entirely as it places the fields back together (back to the way it was actually shot and shown at the cinema!) and each frame is repeated (just as each frame is repeated at 48p at the cinema!) This means you get full vertical detail on screen at all times, and you also do away with ugly interpolation artefacts that are always present with bob.
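The split-and-recombine process described above can be sketched in a few lines of Python (a toy model that treats a frame as a list of rows; the names split_fields and weave are mine for illustration, not any real API):

```python
def split_fields(frame):
    """Split a progressive frame (a list of rows) into two fields,
    as a broadcaster does when delivering 25p film as 50i."""
    top = frame[0::2]      # field 1: rows 0, 2, 4, ...
    bottom = frame[1::2]   # field 2: rows 1, 3, 5, ...
    return top, bottom

def weave(top, bottom):
    """Recombine two fields taken from the same film frame back into
    the original progressive frame, restoring full vertical detail."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

# A tiny 4-line "frame": weaving the split fields restores it exactly.
frame = ["row0", "row1", "row2", "row3"]
t, b = split_fields(frame)
assert weave(t, b) == frame  # lossless for film-sourced material
```

For native 50i video the two fields come from different moments in time, so this recombination would produce combing artefacts instead of the original frame, which is exactly why weave is only valid for film sourced material.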
Yes, but for the umpteenth time you're only seeing 540 lines at once. That's why bob is more accurately referred to as 1920x540/50p (while weaved 1080i is more accurately referred to as 1920x1080/25p).
Given the source only has 24 or 25 frames per second to begin with, there is nothing to be gained by having 50 unique frames, and bob is no smoother in motion (as the frames are still from the same moment in time).
Weave gives you twice as much detail on screen at all times. There is no getting around that point.
If you still don't believe me, have a look at this animated gif (taken from Secrets' guide to progressive scan) which alternates between bob (single field interpolation) and weave. Spot the difference?
Here’s a quote from the same article, which was just above that gif.
"Below, on the left, is a full film frame, with both fields combined together (weave) to make a complete progressive frame. The center image is just the first field of the frame, with the missing scan lines interpolated. Note the loss of resolution and the increased stair stepping on the diagonal lines. Again, the images are zoomed 200%. The image below, on the right, is an animation that demonstrates what it looks like when the deinterlacer switches between weave and single-field interpolation. If motion-adaptive deinterlacing were employed, the image would retain the full vertical resolution. There are two larger versions that you can select from to view (below, right)."
Please read the full article here. This is the most comprehensive and accurate article on de-interlacing I've come across, from one of the most respected A/V sites on the net. If you still have doubts after reading this, I give up.
Well, funny you should say that, as it's exactly the same situation with bob! Think of it this way. Nowadays studios start with a 1080/24p HD master. Agreed? Now this is converted to 1920x1080i for broadcast distribution. Because it's film sourced (24p), no spatial or temporal resolution has to be discarded for the conversion to interlaced scan. Think of it as "lossless compression" (but only when film sourced!). The information is simply split into two fields. Now this can be shown in interlaced form, but only on an interlaced CRT monitor (where, as explained, you can get some of the perception of the full 1080 lines). However, if it is bobbed on a digital display you end up with 1920x540p. Full stop. Just because the line structure is slightly different from the first bobbed frame to the next does not in any way give you the benefit of increased resolution. It's still only 540 lines per frame with all the gaps "filled in".
To try and explain this better, I whipped up a basic picture illustrating the differences in presentation of film sources shown as interlaced scan, bob and weave. Check it out here.
Absolutely, but only where there is no motion whatsoever. Whenever there is motion it must then use a combination of weaving and bobbing for different parts of the picture. This is why it is so processor intensive and up until recently was reserved only for SD de-interlacing (and even then it's not always used).
One very important thing to point out here Ikari is that all of the above is very well accepted information in the A/V world. None of it is under debate (with the exception of this thread). No one with any experience with video processing suggests bob delivers anything other than the amount of resolution it receives in one field (540 lines in the case of 1080i).
I guess the big question is have you seen it for yourself? I’ve seen the difference between bob and weave on numerous displays with a variety of SD and HD sources. The difference in image quality between the two methods for film sourced material is nothing short of dramatic.
It’s no secret that bob sucks.
To sum all this up, you only have to remember one thing:
Interlaced scan only delivers more information than what is present in one single field IF presented in native interlaced scan. As soon as it's interpolated to progressive scan via bob, the line structure (interleaving) is destroyed and filled in with made-up data.
If bobbed 1080i actually presented more data than 540p, then you could argue that 720p is actually 1440p (taking two frames into account) or that native 1920x1080/60p is actually 1920x2160. Obviously that isn’t the case.
Posted 15 April 2006 - 09:04 PM
There are two other things you really need to learn about and that is Video and Film sources. Typically, Video sources will have 50 unique images per second, and are true Interlaced sources. Film will have only 25 unique images per second. When you transfer it to an Interlaced system like 1080i, you take each Progressive frame and make two fields out of it. When you Weave deinterlace it, you're basically recombining the two fields to recreate the original frame. This cannot be done on Video sources, and can only be done on Film sources that have been converted to an Interlaced signal properly. An example of a bad conversion would be Star Wars Episode II on Ten, which for some reason went through an odd Interlacing process which turned a Progressive film into Interlaced video.
Posted 15 April 2006 - 11:05 PM
When bob de-interlaced, the second field is positioned one row lower, not on the same row as the first field, so it still maintains the lower position with respect to the first field and the extra detail of field 2 is still perceived.
Say we have a 1920x1080p still image comprised entirely of white pixels and we place a black pixel at x-y position 1,1. When displaying this still image on a native interlace 1080i video display, we would perceive a white screen with a black dot at the top left of the image that would flicker on and off 50 times per second. If we bob de-interlace the 1080i video to 1080p, what we would see is also a black dot flickering on and off in the top left corner 50 times per second, except it would be 2 pixels tall.
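That thought experiment is easy to model in code. A minimal sketch in Python, using a naive line-doubling bob (a real deinterlacer interpolates rather than duplicates, but the effect on an isolated pixel is similar):

```python
def bob_line_double(field):
    """Naive bob: duplicate each field line to fill out the frame."""
    out = []
    for row in field:
        out.append(row)
        out.append(row[:])  # duplicated ("filled in") line
    return out

W = 0  # white pixel
B = 1  # black pixel

# 4x4 white frame with one black pixel at row 0, column 0 (0-indexed)
frame = [[B, W, W, W]] + [[W] * 4 for _ in range(3)]
field1 = frame[0::2]   # contains the black pixel
field2 = frame[1::2]   # all white

bobbed1 = bob_line_double(field1)
bobbed2 = bob_line_double(field2)

# In the bobbed frame from field 1 the dot is two pixels tall...
assert bobbed1[0][0] == B and bobbed1[1][0] == B
# ...and in the next frame (from field 2) it vanishes entirely,
# which is the on/off flicker described above.
assert all(px == W for row in bobbed2 for px in row)
```

Both fields' pixels do appear in their respective bobbed frames, but never in the same frame at the same time, which is the crux of the disagreement in the posts that follow.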
This proves that every single pixel from the source 1920x1080p image is represented and perceived in the bob de-interlaced end result, so there is no "50% loss of picture information".
We are still seeing all detail in all fields when bob de-interlaced to 1080p, and then some (the interpolated bits). The fact that they're not interleaved properly doesn't nullify the perception of extra detail in those spots. I've compared weave with bob for 576i DVDs and can see all of the detail that is present with weave. It's flickering, but it's all there, and looks nothing like "half the resolution".
I've never said bob is an acceptable method of de-interlacing and I've always said weave will look a lot nicer (though when scaling down to 1366x768 some may have trouble determining if it's been bobbed or weaved). But the point that we are at odds about is whether bob throws away half the picture information, which I don't agree with. The picture information is all there. You could run a reverse process to restore all 50 interlaced fields from the bob de-interlaced 1080i, and then do whatever the hell you want with them, even weave and restore the original 25p, which proves all the information is there. None of the pixels disappeared like in your example of scaling 720p down to 360p, and then up to 720p again.
I'm not accusing you of not knowing the facts; you obviously know your stuff, you run a HT website and a HT business, but it seems that our interpretations of bob de-interlacing are our point of disagreement.
Let's do ourselves a favour and keep this argument succinct -- would you mind explaining to me why bob de-interlacing 1080i (to either 1080p or a digital display resolution such as 1366x768) throws away exactly 50% of the video information? Where the pixels are going, and where they are being lost, is the explanation I'm searching for. I suspect you have been saying that resolution is being lost ONLY in the human perception of the image, yet you maintain that pixels have actually been discarded (i.e. no way to get them back).
I came across this post by Owen:
Using typical 720p or 768p digital displays, you would be REALLY hard pressed to pick it.
This is why I'm bothered by the statement that HD plasmas "throw away 50% of the video information". If they threw away that much there is no way you couldn't tell the difference between weave and bob.
Also, I was re-reading your posts and wasn't sure about these statements:
With bob, frames are never repeated, it doesn't bob field 1 to a progressive frame and then repeat it twice. Is this what you were saying or am I misunderstanding you?
Posted 16 April 2006 - 05:24 PM
But that's just the thing, it is a legitimate comparison! Why? It all comes down to the one thing that I've put in bold text about 5 times. Here I go again...
Once a single field has been interpolated (blended/up-sampled, whatever you want to call it) into a single frame (let's say a 540 line field has been interpolated to 768p on a plasma) it no longer carries any "interleaving line structure" and all the gaps that were originally present (for the next field to fill in on an interlaced CRT monitor) have now been smoothed over with interpolated data. The result is a single blurry frame, and NO PERCEPTION OF FIELD COMBINATION.
If that animated gif was comparing a weaved frame to two successive blurry interpolated frames (both frames bobbed from the original fields) the image quality would be no different! In fact it may be slightly worse, as some parts of the image would be seen bobbing up and down.
I’m really not making this up mate. It’s the way bob works. Bob completely destroys the illusion of interlacing. It’s a known fact.
What I'm trying to tell you (and I really don't think I can make this much clearer) is that it's completely irrelevant that the lines of the original broadcast are there in the next frame, as they no longer appear to be connected to the previous frame. It is 100% accurate to say that bobbing 1080i film sourced material presents it as 540/50p, rather than 1080/25p or 1080/50i (the two ways it's intended to be seen).
With that argument you’re assuming that the odd/even line structure is still in the right place, and that no blurring of detail has taken place through the interpolation process. This is the case with old school straight line doubling of 576i to 576p, but not with single field interpolation on digital displays. Interpolation “averages” the entire picture to scale it. i.e. you’re not simply filling in the gaps, but rather the entire 540 line field is “averaged” to 768p (or 1080p or whatever the resolution of the display is). The field that comes after it will be interpolated differently because the lines are different (even though they are taken from the same original frame).
The result is that the odd/even line structure is no longer present. Often an even line will be placed smack bang over the top of the odd line that came before it. They certainly don’t neatly line up as you incorrectly claim above.
Remember that there is no real accuracy to interpolation. It is “digital guessing”. Plain and simple. That’s why bobbed pictures are rampant with interpolation jaggies (along all diagonal edges) as the two successive frames don’t line up properly at all.
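The effect of interpolation on single-line detail can be demonstrated numerically. A sketch in Python, treating a field as a 1-D column of brightness values and using linear interpolation as a stand-in for whatever filter a display actually uses:

```python
def bob_scale(field, out_lines=768):
    """Bob one 540-line field up to a 768-line frame using linear
    interpolation (an illustrative filter, not any specific display's)."""
    n = len(field)
    out = []
    for y in range(out_lines):
        pos = y * (n - 1) / (out_lines - 1)   # map output line into field
        i = int(pos)
        frac = pos - i
        j = min(i + 1, n - 1)
        out.append(field[i] * (1 - frac) + field[j] * frac)
    return out

# 1080-line source: black except one bright line on an ODD row.
src = [0.0] * 1080
src[101] = 1.0

field1 = src[0::2]   # even rows: misses the bright line entirely
field2 = src[1::2]   # odd rows: the line sits at field row 50

f1 = bob_scale(field1)
f2 = bob_scale(field2)

assert max(f1) == 0.0            # one bobbed frame: the line is gone
assert 0.0 < max(f2) < 1.0       # the other: dimmed by interpolation
assert sum(1 for v in f2 if v > 0.1) >= 2   # and smeared over rows
```

The bright line survives in only one of the two bobbed frames, dimmed and smeared across neighbouring rows, so successive frames flicker and never line up as a neat odd/even interleave.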
I don't think that you are, so I'm sorry if I gave that impression. In fact I think your arguments are interesting ones, but I'm afraid that they are still misunderstandings of the technology, nothing more. It's very frustrating for someone with many years of hands-on experience with video processing trying to explain these tried and tested concepts to someone doubting them, without getting a little condescending at times, so I apologise if any of my comments have come across as harsh.
I also think it’s a shame that you felt the need to challenge these industry accepted explanations of de-interlacing in a pinned thread that is designed for newbies. This discussion is making what should be a simple introductory thread very confusing and intimidating for newcomers. Oh well.
All that aside, you have to get it out of your head that just "because the lines are all still present one frame after another, all the original detail remains". It all comes down to presentation and perception.
You also have to understand that 25p delivered as 1080/50i, is best seen when converted back to 25p. That’s how it started out. It’s simply being delivered in interlaced form (as I stated above 1080/50i can be seen as lossless compression of 1080/25p film sources).
Put it this way, why do you think we have progressive scan DVD players? They take a 720x576/50i DVD (sourced from a 25p master) and weave together the interlaced fields into complete progressive frames. The result is 720x576/25p (presented as 50p with each frame repeated). No one with any HT experience would argue against the virtues of a good progressive DVD player with weave (film based) de-interlacing.
Anyone that has seen an early progressive DVD player that bobbed 288 line fields to 576p (some cheapie players still do this btw) will tell you that weaving results in twice as much detail and sharpness with no interpolation artefacts. I could tell you in under 10 seconds whether a DVD picture was bobbed or weaved on a screen I’m familiar with (and I’m not saying I’m special - most HT enthusiasts I know could!).
No you couldn’t reverse it. And this is a very important point you raise which I will use to illustrate what is happening. If it was a simple line doubling process without interpolation, then yes, theoretically you could reverse the bobbing, by discarding every second “filled in” line (although why and how you would reverse bob processing I’m not sure!). However interpolation averages an entire frame as part of the scaling process. This seems to be the part that you aren’t taking into account.
It’s the same concept as taking a 540 line field in Photoshop (even though you can’t capture a single field, we’ll overlook that for this example) and converting it directly to 768p. The gaps would be filled in, the detail softened, and you would have scaling artefacts visible as “jaggies” along edges.
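The irreversibility point can be checked with the same kind of 1-D model (pure Python; scale_linear is a hypothetical stand-in for a real scaler's filter):

```python
import random

def scale_linear(rows, out_n):
    """1-D linear resample of a column of pixel values."""
    n = len(rows)
    out = []
    for y in range(out_n):
        pos = y * (n - 1) / (out_n - 1)
        i = int(pos)
        frac = pos - i
        j = min(i + 1, n - 1)
        out.append(rows[i] * (1 - frac) + rows[j] * frac)
    return out

random.seed(1)
field = [random.random() for _ in range(540)]  # a 540-line field

# 540 -> 768 -> 540: the round trip does NOT restore the original,
# because interpolation averaged neighbouring lines together.
roundtrip = scale_linear(scale_linear(field, 768), 540)
max_err = max(abs(a - b) for a, b in zip(field, roundtrip))
assert max_err > 0.01   # detail has been irreversibly blended
```

Because each output line is a weighted average of its neighbours, the 540 to 768 and back to 540 round trip cannot recover the original values, unlike simple line doubling, which is trivially reversible.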
Almost. The pixels are actually being thrown away in two ways.
1. The interpolation process that scales the field to a frame removes any interleaving structure, and also blurs/softens detail through the interpolation process itself.
2. Because the interleaving structure no longer remains, and there is no "persistence of phosphors" effect as seen on a CRT, the frames are now completely separate entities, and there is no longer any perception of those frames being combined as one (as you get in native form on an interlaced monitor).
So yes, the detail is “thrown away” because it’s not perceived. Another way of putting it is: You’re seeing 50 blurry half resolution frames rather than 25 full resolution frames (and remember bob also introduces interpolation artefacts, further worsening the picture).
That’s about the crux of it. The result is you only perceive 540 lines per screen update, because that’s all that you are actually looking at.
Remember it all comes down to the fact that 1080/50i is an effective delivery method of 1080/25p (for film) or native 1080/50i (for video). In the former case, it's best viewed when converted back to its native progressive 25p form with full vertical resolution.
Here's one more scenario for you. Please think carefully about this one, as I think this may well be the scenario that makes you see the light.
Think of Channel 7's HD service. We all know it's 576p converted from footage that is internally shot at 1080i (as Southern Cross Hobart enjoys it in unprocessed 1080i format). Agreed? Now, when Seven convert 1080i to 576p, they do so via bobbing. This is as clear as day, as you can see line twitter on shows like Sunrise, and parts of the picture are clearly bobbing up and down. In fact the only method you can use to convert 1080i to 576p is bobbing anyway.
So, we are agreed that Seven transmit 576/50p that has been converted from 540-line fields up-scaled to 576-line frames (bob interpolation).
Using your argument, Seven are still presenting 1080i! They just happen to be transmitting it as 540p up-scaled to 576p! (just as you say is happening on a display that takes 540 lines fields and up-scales them to 768p!) Remember “none of the lines are lost” as you say. They are all still there. Using your argument, shouldn’t Seven HD still look very good?
Now, do some searches of these forums, and I challenge you to find one person that thinks that Channel 7's HD service looks anywhere near as good as the 1080i broadcasts from Nine and Ten. In fact you'll actually find most people believe that Seven's SD service looks better than its HD service (as with film content it can be 576i weaved to full 576p, rather than bobbed 540p interpolated to 576p).
The same goes with ABC and SBS. Both bob de-interlace their content to 576p from 576i (which is also an even interpolation process, showing you that this effect happens regardless of what you’re scaling to!). Now look at the difference in image quality, bob makes on ABC “HD” which is the picture on the right. Before you say it’s an unfair comparison I can assure you that the blurring of the bobbed HD service is just as noticeable under movement with multiple frames as it is with a single capture.
Ok. That’s about it from me. There really is no point trying to convince you any further if you don’t accept the above arguments and examples. All I would ask is that you do some research on the net at respected A/V sites, and better yet do some real world tests for yourself. If you do, you’ll quickly see that bob provides nothing more than is present in any one frame (and introduces a softness to the picture along with many ugly artefacts).
UPDATE: In answer to your post referring to Owen's claims about bob not being that noticeable on 768p plasmas, I disagree, and in any case he was referring to "most people won't notice", so this is simply a matter of perception, and whether the person knows a bad picture when they see one. That being said, it's only a difference of 228 lines of original resolution in the case of bob vs weave on a 768p display (although bob will look softer, and have line twitter and aliasing because of the interpolation process). Although weaved 1080i will certainly look far better on a 768p display, it's not quite as "night and day" as bobbed 540p vs weaved 1080p on a 1080p display, where you are missing out on 540 lines of resolution.
Posted 16 April 2006 - 06:06 PM
Look, there's no point in taking this argument any further as it's gotten quite out of hand. I know I can see all the detail, and that it's all there, when bob de-interlacing 576i video, that's all I need to know.
I apologise for editing my above post as it's quite different to when you replied to it but I decided to go with a different example of bob de-interlacing a 1080i video to illustrate why every pixel is represented in the end result.
Anyway, if you feel this thread of yours has been tarnished by any of my raving please feel free to delete any of it.
Posted 16 April 2006 - 06:16 PM
If you really think that, then I may as well give up.
Bobbed 1080i on a 768p screen = 1920x540 field -> up-converted directly to a 768p frame. Only 540 original lines per frame.
Weaved 1080i on a 768p screen = 1920x540 fields combined to 1080p -> then down-converted to a single 768p frame. Result = 228 lines of extra detail per screen update and no blurring, aliasing or interpolation artefacts.
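The arithmetic behind those two lines, spelled out (trivial, but it assumes the panel is exactly 768 lines tall):

```python
PANEL = 768    # panel height of a typical "768p" plasma/LCD
FIELD = 540    # original lines in one 1080i field (what bob keeps)
FRAME = 1080   # original lines in a weaved 1080i frame

bob_lines = min(FIELD, PANEL)     # bob: 540 original lines per update
weave_lines = min(FRAME, PANEL)   # weave: capped by the 768-line panel

assert weave_lines - bob_lines == 228   # the extra detail per update
```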
What could be clearer? Are you really doubting this? Are you saying the entire A/V industry is wrong and you are right? Is the Earth really flat?
If you think bob preserves detail, then I’m afraid you really need your eyes examined mate. Bob discards detail through the interpolation process, and through its inability to take information from anything other than one single field. It’s that simple.
I’m afraid I have to agree. With all due respect this is now like bashing my head against a brick wall.
I don't have any power to edit or delete threads at this forum. In any case I wouldn't want to delete the thread. I just wish you'd chosen a better place for this discussion. A pinned FAQ using industry accepted terms and explanations (that aren't under debate from anyone except you) isn't the place for it.
Posted 16 April 2006 - 06:20 PM
The idea might just work to some extent and look a lot better than presented as 576p.