OK, my 99 cents
Of course there's no doubt that correctly de-interlaced video looks far better than bob de-interlaced video, and I know that both of you accept that.
The thing that I found interesting was Ikari's challenge to the comment that displays which bob de-interlace are throwing away 50% of the resolution. Of course this comment has been made many times by many people and, in a practical sense, is almost true. But when Ikari challenged it to the letter, I found myself having to agree with him.
These displays are not actually throwing anything away, other than the opportunity to merge the two fields in a manner which is more pleasing to the eye. However, every single pixel that exists in the source is still contributing to the final output, so nothing is being simply discarded.
Sure, it's more difficult for the brain to re-assemble the two fields and glean all of the contained resolution, since these natively progressive displays do not have the phosphor persistence of CRTs. But the information is still there. Scaling alignment issues aside, it would be possible within a controlled environment to actually re-create the original fields and then present the frames in a more pleasing manner, as per what normally happens after "correct" de-interlacing. So this demonstrates that all of the information contained within the source is being displayed. Not one pixel is being discarded.
Of course the result (of bob de-interlacing) looks far inferior. The flicker, for a start, is a big turn-off. Also, the brain is being relied upon to re-assemble the two fields. Interestingly though, the brain does in fact glean the additional resolution, as the animated GIFs that Ikari provided clearly show (you do have to be looking at them in Firefox to get the speed required to correctly witness this effect). It's not pretty, but it is all there.
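To make the weave/bob distinction concrete, here's a minimal sketch (Python, purely illustrative, and certainly not any particular display's actual algorithm) showing that bobbing still presents every source line -- it just never merges the two fields the way a weave does:

```python
# Illustrative sketch only: a "frame" here is just a list of scanlines.
# Weave interleaves the two fields back into one full-height frame;
# bob presents each field on its own, line-doubled up to full height.

def split_fields(frame):
    """Split a progressive frame into top (even lines) and bottom (odd lines) fields."""
    return frame[0::2], frame[1::2]

def weave(top_field, bottom_field):
    """Re-interleave two fields into a single full-height frame."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)
        frame.append(bottom_line)
    return frame

def bob(field):
    """Line-double one field to full height: every source line is still shown,
    but each is simply repeated rather than merged with the other field."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)   # naive duplication; real bobbers interpolate
    return frame

original = [f"line {n}" for n in range(8)]   # a tiny 8-line "frame"
top, bottom = split_fields(original)
assert weave(top, bottom) == original        # weave recovers every line exactly
print(bob(top))     # all of the top field's lines are displayed...
print(bob(bottom))  # ...and all of the bottom field's -- nothing discarded, just never merged
```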
So I guess a lot of this really comes down to semantics. One side says that 50% of the information is being thrown away. The other side says that, well, actually nothing is literally being thrown away. If you're talking to the letter, the latter is true. If you are talking in a practical, effective sense, yes, data appears to have been discarded.
I guess the other issue I have is with the figure of 50%. I believe that it is easy to dispute this figure. A lot of people assume that you get double the spatial resolution when using an interlaced format, if it's correctly de-interlaced and displayed. (And, I'm only talking about still scenes here... obviously during movement you definitely do not get any increase in resolution due to interlace). So logically, they say, if you get double by using interlace correctly, then you are losing half (50%) if you use interlace incorrectly (bob de-interlace). I dispute this.
Interlace does not really provide, and never has provided, double the vertical resolution. In order for interlace to look acceptable, television networks must apply vertical filtering prior to mastering/broadcast in order to avoid/reduce interline flicker. This filtering reduces the vertical resolution dramatically. This is one of the reasons why, if produced correctly, 576p can actually look amazingly good. As a progressive format, no interline flicker-prevention filtering need be applied. So you really are benefiting from 576 true scanlines.
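To picture what that flicker-prevention filtering amounts to, here's a crude sketch: a simple 3-tap vertical low-pass run down each column of the frame before it's split into fields. The taps and structure are my own illustrative assumptions, not the actual filters any broadcaster uses:

```python
# Sketch of interline-flicker reduction: soften the image vertically so that
# no detail is confined to a single scanline (which is what twitters on a CRT).

def vertical_lowpass(frame):
    """frame is a list of rows of pixel values; returns a vertically softened copy."""
    height = len(frame)
    filtered = []
    for y in range(height):
        above = frame[max(y - 1, 0)]
        below = frame[min(y + 1, height - 1)]
        row = [0.25 * a + 0.5 * c + 0.25 * b
               for a, c, b in zip(above, frame[y], below)]
        filtered.append(row)
    return filtered

# A single-line-high detail (one bright row) gets smeared across three lines --
# exactly the loss of vertical resolution being described.
test = [[0.0]] * 3 + [[1.0]] + [[0.0]] * 3
print([row[0] for row in vertical_lowpass(test)])   # [0.0, 0.0, 0.25, 0.5, 0.25, 0.0, 0.0]
```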
Now you may be tempted to say, OK, that's fine for video material, but what about movie-sourced stuff, where correctly de-interlacing requires only a simple -- and lossless -- weave? Well, although the de-interlacing step is certainly easier and does not result in the de-interlacing artifacts that come about from having to choose between weave and bob on a per-pixel basis, the vertical resolution problem is still all there. Television networks still have to apply vertical filtering to video that is based upon a movie source. If they did not, fine vertical details would flicker uncontrollably on CRT displays and those viewers would not be able to stand watching! So, interline flicker is just as much a problem whether the source is truly interlaced or not. It's the interlaced nature of the display device (CRT) -- not the source -- that requires vertical filtering to attend to the problem of interline flicker.
If no one was watching broadcasts in interlaced formats using CRT displays, then the networks could stop applying vertical detail reduction, and all would be fine, since progressive displays do not require this vertical blurring in order to provide a stable image. Obviously this is not the case... there will always be viewers with CRT displays (well, you know what I mean).
This, therefore, is the real difference between a 50i format (showing film-based content) and a 25pSF format. Many people say they are effectively the same because a good de-interlacer can simply weave the 50i format to recreate the 25 discrete frames per second at full vertical resolution. Only problem is, these frames are not at full vertical resolution!
Being an interlaced format, they have already had vertical filtering applied to handle the interline flicker problem. Remember that 576i and 1080i are designed to be viewed on a CRT where interline flicker is inherent if this filtering is not applied. Since 25pSF is not broadcast to consumers, the above is always the case.
So, if weaving a progressive source delivered via an interlaced medium does not yield double the resolution, then it follows that by not weaving, you are not losing half of the maximum obtainable resolution. So how much additional vertical resolution does an interlaced format provide? Takashi Fujio of NHK in Japan measured a factor of about 0.6. I.e. for 576i, you really only obtain an effective vertical resolution of just 345 lines -- far short of 576. For 1080i, you effectively only get 648 -- less even than 720p!
The 0.6 figure is probably a little harsh. 0.66667 and 0.7 are also widely accepted figures. It really depends on the precise amount of vertical filtering that has been applied to the interlaced source. The point is, though, that it's nowhere near a doubling/halving.
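Putting rough numbers on that (this is just the arithmetic implied by the figures above, not anything I've measured):

```python
# Back-of-the-envelope effective vertical resolution of an interlaced format,
# using the 0.6-0.7 factors quoted above.

for nominal_lines in (576, 1080):
    for factor in (0.6, 0.7):
        print(f"{nominal_lines}i x {factor} = ~{nominal_lines * factor:.0f} effective lines")

# 576i:  576 x 0.6 = 345.6 (the ~345 quoted above);   576 x 0.7 = 403.2
# 1080i: 1080 x 0.6 = 648;                            1080 x 0.7 = 756 (vs 720 for 720p)
```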
There's another thing that works against the 50/50 argument, and that is chroma format. All mastered/broadcast video for consumers is in the 4:2:0 chroma format. This means that there is only half as much chroma information -- vertically -- as there is luminance. There's also half as much horizontally, but that's irrelevant to this discussion. If there's half as much chroma information vertically as there is luminance, then even if you are starting with a full resolution source (no interline flicker-prevention filtering having been performed), and you discard half of the vertical resolution, the result is still better than half because not one single line of chroma has effectively been discarded -- there was already only half! The result will have half the vertical resolution, but with 4:2:2 chroma instead of 4:2:0, and this makes a discernible difference. Therefore the overall result is discernibly better than a simple "halving" of vertical resolution because really only the luminance has been halved -- all of the available chroma in the source is still being displayed. I have included this paragraph for completeness only -- it's certainly not the dominant argument against the "50% loss" proposition, and in fact, it does not really help bob de-interlacing's position, since there's still only a single line of chroma per each two lines of luminance on a per-field basis. The overriding point here, though, is to stop assuming that our 1080-line source material really does contain as much true resolution as one could optimally contain within a 1080-line image -- it does not!
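For what it's worth, here's the line-counting behind that chroma point, assuming a 576-line 4:2:0 frame (this is only the bookkeeping for the two framings in the paragraph above, not a claim about any particular decoder):

```python
# Line bookkeeping for the 4:2:0 chroma argument, assuming a 576-line frame.

luma_lines   = 576
chroma_lines = luma_lines // 2          # 4:2:0: 288 chroma lines per frame

# Frame-level view: halve the luma vertically without touching the chroma...
halved_luma = luma_lines // 2           # 288
print(halved_luma, chroma_lines)        # 288 vs 288 -> 1:1 vertically, i.e. 4:2:2-like

# ...but per field, as the caveat above notes, it's still 2:1:
field_luma   = luma_lines // 2          # 288 luma lines in one field
field_chroma = chroma_lines // 2        # 144 chroma lines in one field
print(field_luma, field_chroma)         # 288 vs 144 -> one chroma line per two luma lines
```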
So in summary...
Displays which bob de-interlace are not actually throwing anything away in literal terms, though in practical terms they are certainly not making the best use of the data to display it in a human-friendly manner, and this does affect the human perception of such content.
The reduction in effective vertical resolution when watching something that has been bobbed rather than correctly de-interlaced is not 50%, but more like around 20%. Of course flicker is also a problem for bobbed material, so this also detracts from such an output, but not in actual resolution terms.
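For what it's worth, one way to land in that ballpark (my own back-of-the-envelope reading: I'm assuming the 0.6-0.7 factors from earlier and treating a bobbed 576i picture as delivering roughly the 288 lines of a single field):

```python
# A hedged estimate of the resolution loss from bobbing, relative to good
# de-interlacing of an already vertically-filtered interlaced source.

nominal = 576
bobbed  = nominal // 2                        # ~288 lines from a single field

for factor in (0.6, 0.7):
    deinterlaced = nominal * factor           # effective lines after good de-interlacing
    loss = (deinterlaced - bobbed) / deinterlaced
    print(f"factor {factor}: ~{loss:.0%} loss")   # ~17% and ~29% -- i.e. nothing like 50%
```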
As you both agree, good quality de-interlacing is certainly the preferred way to go, and anyone who actually views the outputs of both scenarios would quickly arrive at this view. However, we have to be careful about using throw-away lines (pardon the pun!) such as "displays that bob de-interlace are throwing away half of the vertical resolution", despite the fact that we've heard and read this stated by others. Sometimes things are said -- even by experts -- to make a point, and even though they do not stand up to technical scrutiny if taken literally, they may have been perfectly reasonable comments within the intended context.
I am happy to be shown why I am wrong in any of the above, though I am not expecting this. (That probably sounds a little arrogant! I don't mean it that way -- basically what I'm saying is bring on the discussion if you have something good to contribute!).
P.S. As a final thought, think about single-wheel DLP displays. They display each colour at a separate moment in time, relying on the brain to re-assemble each frame of the image as a full-colour, full-resolution frame. A small percentage of people suffer from the "rainbow" effect, and as such, this is not the most desirable manner in which to present an image. However, all of the data is displayed, and no one would suggest that a single-wheel DLP display is throwing away two-thirds of the image! The brain is an amazingly adaptive device! Ikari's animated GIFs show that the brain does in fact benefit from the increased resolution, even when using bob; it's just not particularly pleasing. Perhaps, just like the rainbow effect, it's more disturbing to some than others.
P.P.S. As a very final thought, here are just a couple of examples of things regularly said by experts that are technically incorrect, even though they are acceptable in context:
1. That NTSC runs at a field rate of 60 Hz.
When trying to be correct, it's common to state that it's at 59.94 Hz. Even this is not true. NTSC runs at a field rate of 60000/1001 Hz, which is ~59.94005994 Hz! Depending on the context of the discussion, it may be perfectly reasonable, even for an expert, to use the 60 Hz figure for convenience. That doesn't make it true, however, no matter how decorated the expert!
2. That electric current travels from positive to negative.
This is the accepted convention, and was formalised prior to discoveries being made (when valves were invented) that clearly showed that electrons actually travel from negative to positive! The convention remained because it was so entrenched. Does this mean that someone who states that electricity travels from positive and returns to ground is wrong? Well, technically, yes, but we can forgive them for that and we understand what they mean (and if we don't, it really doesn't matter anyway!).