  #2  
04-05-2017, 10:49 PM
old_tv_nut
See yourself on Color TV!
 
Join Date: Jul 2004
Location: Rancho Sahuarita
Posts: 7,184
Selection of test material:

The test committee had the unenviable task of finding high definition test material when there were barely any high definition systems in operation. They attempted to assemble a wide variety: film, computer-generated imagery, and live footage.

The live material was generated at the Ed Sullivan Theater, where simple scenes were constructed and actors were rehearsed to perform a short (ten-second) action repeatedly. Each scene had to be reshot after changing the camera scan parameters.

Some other live material supplied by the Japanese and others had to be converted from interlaced to progressive. Zenith was not satisfied with the conversions initially done by the interlaced proponents (surprise!) and built their own converter.
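As a rough illustration of what such a conversion involves (this is a minimal "bob" deinterlacer, not a description of Zenith's actual converter, which would have used far more sophisticated motion-adaptive processing), a progressive frame can be built from a single interlaced field by interpolating the missing scan lines from their vertical neighbors:

```python
# Minimal sketch of "bob" deinterlacing: reconstruct a progressive frame
# from one field by averaging each missing line's vertical neighbors.
# This is an illustration only, not Zenith's method.

def deinterlace_bob(field, height):
    """field: the even scan lines (0, 2, 4, ...) of a frame with
    `height` total lines, each line a list of pixel values.
    Returns a full progressive frame of `height` lines."""
    frame = [None] * height
    for i, line in enumerate(field):
        frame[2 * i] = list(line)          # copy the lines we have
    for y in range(1, height, 2):          # fill in the missing lines
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < height else above
        frame[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return frame

# Example: a 4-line frame reconstructed from its even field.
frame = deinterlace_bob([[10, 10], [30, 30]], 4)
# line 1 becomes the average of lines 0 and 2; line 3 repeats line 2
```

Simple interpolation like this softens vertical detail and can shimmer on motion, which is exactly why the quality of the converter mattered so much to the progressive-scan proponents.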

Zenith's partner, AT&T, had some of the most advanced computer graphics capability at the time, and devised some test scenes that (surprise!) were especially difficult for interlaced systems - but the interlaced proponents could not object, as the scenes were rendered with full fidelity directly in their own format.

During this period, MPEG-2, which was eventually chosen as the video codec, was still under development, and it illustrates the great importance of having the right assortment of test material. The material used for MPEG-2 development apparently did not include enough fades to black or cross-fades between scenes. As a result, current HD systems have trouble with these.

MPEG-4 and H.264 have specific coding tools to tell the decoder that a scene is being faded. MPEG-2 tends to treat each faded frame as a major change from the previous one, and can only represent this by spending lots of bits describing the change at each pixel. When the coder runs out of bits, the fade gets very blocky and ragged looking.

It should be noted, however, that using the MPEG-2 coding tools, encoder makers have managed to cut the bits needed on average by 50% for the same quality we had in the early tests. This helps make multicasting of HD and SD subchannels possible.
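The core of the fade problem can be shown with a toy calculation. If the predictor can only reuse the previous frame as-is (roughly the MPEG-2 situation), a uniform brightness change leaves a large residual at every pixel; if the bitstream can instead tell the decoder to scale the reference (the idea behind H.264's weighted prediction), the residual for an ideal fade collapses to zero. The pixel values below are invented for illustration:

```python
# Toy illustration of why fades defeat plain inter prediction but are
# cheap with H.264-style weighted prediction. Pixel values are made up.

def residual_energy(pred, actual):
    """Sum of squared differences the encoder would have to code."""
    return sum((a - p) ** 2 for p, a in zip(pred, actual))

reference = [200, 120, 80, 240]           # pixels of the previous frame
faded = [p // 2 for p in reference]       # same scene at half brightness

# Plain prediction: reuse the reference unchanged (the MPEG-2 case).
plain = residual_energy(reference, faded)

# Weighted prediction: signal "scale the reference by 0.5" instead.
weighted = residual_energy([p // 2 for p in reference], faded)

print(plain, weighted)   # plain is large, weighted is zero
```

In a real encoder the fade is rarely this clean, but the gap between the two residuals is the whole story: MPEG-2 must spend bits on the residual at every pixel, while weighted prediction spends a few bits on a scale factor.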

As an example of the improvement of coders, you only have to look at indoor sports like basketball, where the arena is lit by powerful strobes for the professional still photographers. When the strobes flash, the TV camera suddenly gets an all-white frame. The early coders would panic and spend a lot of bits representing this huge change, and then try to spend as many bits as possible to restore the normal image that follows. The human eye could easily see the artifacts in the succeeding frames. Encoder algorithms were improved to recognize the flash frames and spend almost no bits on them, knowing that the eye can't tell if that blast of light is reproduced accurately or not. That way, there would be more bits available for the succeeding normal frames.
__________________
www.bretl.com
Old TV literature, New York World's Fair, and other miscellany