View Full Version : Early testing of HD video coding

04-05-2017, 11:29 PM
I'm starting this thread, prompted by the off-topic discussion of early HD cameras and VTRs in the thread

The BTS KCH-1000 cameras, modified to several different proposed image formats, were used to create otherwise identical video sequences for subjective testing. This involved live shoots at the then otherwise unused Ed Sullivan theater in New York, and also scans of ShowScan 60 fps film on a special telecine installed at Zenith.

The telecine consisted of a newly designed projector with a bed for mounting the BTS camera. Unfortunately, although the projector seemed very sturdy, the weight of the camera at the end of the mounting bed made it oscillate enough to make the pictures unsteady. By this time, we could connect a camera to the six-foot rack of the DVS solid-state frame store, capture 10 seconds of video, and then transfer it to the Sony HDD-1000 tape machine. The projector designer had included a single-frame stepping mode, so I got to adapt an old Zenith/Heath computer to run a script, issuing RS-232 commands to the projector and the frame store. The film would be stepped; then, after a one-second pause for vibrations to die down, the frame store would capture a frame (progressive) or field (interlaced). When ten seconds had accumulated, the clip could be transferred to tape by assemble edit. I spent many nights until 3 a.m. babysitting the transfers.
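The step-pause-grab loop described above can be sketched roughly as follows. This is a toy reconstruction, not the original script: the command strings (STEP, GRAB, EDIT) and the callback interface are hypothetical stand-ins for whatever RS-232 protocol the projector and frame store actually used.

```python
# Minimal sketch of the step-and-capture control loop (hypothetical commands).
import time

FRAME_RATE = 60          # ShowScan film rate, frames per second
CLIP_SECONDS = 10        # capture length per scene
SETTLE_SECONDS = 1.0     # pause for projector vibrations to die down

def capture_clip(send_projector, send_framestore, settle=SETTLE_SECONDS):
    """Step the film one frame at a time, letting vibration settle
    before each grab, until a full clip is in the frame store."""
    frames_needed = FRAME_RATE * CLIP_SECONDS
    for n in range(frames_needed):
        send_projector("STEP")        # advance film by one frame
        time.sleep(settle)            # wait for the camera mount to stop oscillating
        send_framestore(f"GRAB {n}")  # capture one frame into the solid-state store
    send_framestore("EDIT")           # assemble-edit the clip out to tape
    return frames_needed
```

Note that at 60 fps with a one-second settle per frame, a ten-second clip takes over ten minutes of stepping per scene before the tape transfer even starts, which goes some way toward explaining the 3 a.m. sessions.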

Unfortunately, there was an accident that ruined some of the film (scratched it badly), so that the test committee judged it unusable. What happened is that a visiting expert from Kodak decided to be helpful and change the film between runs; but he didn't know that the projector was designed (for some reason I will never know) to run the take-up reel in the opposite direction. He threaded the film the way most projectors work, and when it was turned on, the tension servo went open-loop and rapidly wound the film in a jumble onto the take-up reel. No one was willing to pay for new prints (I guess ShowScan had donated them from prints already in stock), so none of the scenes on that reel were ever used.

04-05-2017, 11:49 PM
Selection of test material:

The test committee had the unenviable task of finding high definition test material when there were barely any high definition systems in operation. The attempt was made to get a wide variety: film, computer generated, and live.

The live material was generated at the Ed Sullivan theater, where simple scenes were constructed and actors were rehearsed to perform a short (ten-second) action over and over. Each scene had to be repeated after changing the camera scan parameters.

Some other live material supplied by the Japanese and others had to be converted from interlaced to progressive. Zenith was not satisfied with the conversions initially done by the interlaced proponents (surprise!) and built their own converter.

Zenith's partner, AT&T, had some of the most advanced computer graphics capability at the time, and devised some test scenes that (surprise!) were especially difficult for interlaced systems - but the interlaced proponents could not object, as the scenes were rendered with full fidelity directly in their own format.

During this period, MPEG-2, which eventually was chosen as the video codec, was under development. It illustrates the great importance of having the right assortment of test material: the material used for MPEG-2 development apparently did not include enough fades to black or cross-fades between scenes. As a result, the current HD systems have trouble with these. MPEG-4 and H.264 have specific coding tools to tell the decoder that a scene is being faded, but MPEG-2 tends to treat each faded frame as a major change from the previous one, and can only represent this by sending lots of bits to describe the change at each pixel. When the coder runs out of bits, the fade gets very blocky and ragged looking. It should be noted, however, that using the MPEG-2 coding tools, coder makers have managed to reduce the number of bits needed on average by 50% for the same quality we had in the early tests. This helps make multicasting of HD and SD subchannels possible.
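A toy numeric example makes the fade problem concrete. This is not any real codec, just an illustration: if the current frame is the previous frame dimmed to 80%, plain copy-the-previous-frame prediction leaves a nonzero residual at every pixel, while a weighted prediction (one scale factor signaled for the whole frame, as H.264-style tools allow) leaves almost nothing to code.

```python
# Toy illustration: why fades defeat plain prediction but not weighted prediction.
prev = [200, 120, 80, 40]      # luma samples of the previous frame
fade = 0.8                     # current frame: same scene at 80% brightness
curr = [int(p * fade) for p in prev]

# MPEG-2-style approach: predict by copying the previous frame, code the residual.
residual_plain = [c - p for c, p in zip(curr, prev)]

# Weighted prediction: signal the single scale factor; residual is near zero.
residual_weighted = [c - round(p * fade) for c, p in zip(curr, prev)]

print(residual_plain)     # [-40, -24, -16, -8]  -- every pixel needs a correction
print(residual_weighted)  # [0, 0, 0, 0]         -- almost nothing left to code
```

The plain residual scales with pixel brightness, so every block in the frame carries a correction; multiply that across a full HD frame for every frame of the fade and the bit budget is quickly exhausted.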

As an example of the improvement of coders, you only have to look at indoor sports like basketball, where the stadium is lit by powerful strobes for the professional still photographers. When the strobes flash, the TV camera suddenly gets an all-white frame. The early coders would panic and spend a lot of bits representing this huge change, and then try to spend as many bits as possible to restore the normal image that follows. The human eye could easily see the artifacts in the succeeding frames. Encoder algorithms were improved to recognize the flash frames and spend almost no bits on them, knowing that the eye can't tell if that blast of light is reproduced accurately or not. That way, there would be more bits available for the succeeding normal frames.
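The flash-frame strategy can be sketched with a simple heuristic. This is an assumption for illustration only (real encoders use far more elaborate scene-change and flash classification): flag a frame whose mean brightness jumps sharply above both its neighbors, and starve it of bits so the normal frames around it get more.

```python
# Sketch of flash-frame detection via a mean-luma jump heuristic (illustrative only).
def is_flash_frame(prev_luma, curr_luma, next_luma, jump=80):
    """Treat a frame as a strobe flash if its brightness jumps up sharply
    and returns to normal on the next frame."""
    return curr_luma - prev_luma > jump and curr_luma - next_luma > jump

def bit_budget(frames, normal=1.0, flash=0.1):
    """Assign a relative bit budget per frame: starve flash frames so the
    bits can be spent on the normal frames that follow them."""
    budgets = []
    for i, luma in enumerate(frames):
        prev = frames[i - 1] if i > 0 else luma
        nxt = frames[i + 1] if i < len(frames) - 1 else luma
        budgets.append(flash if is_flash_frame(prev, luma, nxt) else normal)
    return budgets

# Mean luma per frame: steady scene, one strobe frame, back to normal.
print(bit_budget([60, 62, 235, 61, 63]))  # [1.0, 1.0, 0.1, 1.0, 1.0]
```

The key insight, as described above, is perceptual: the eye cannot judge whether a one-frame blast of white is reproduced accurately, so those bits are better spent repairing the frames the viewer can actually scrutinize.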

04-05-2017, 11:59 PM
Use of existing TV drama film material:

The test committee thought at first they would like to use examples from existing programming, since that would represent common production practice, scenes with various degrees of motion, title crawls, etc. CBS donated a 35mm film clip from "Murder, She Wrote." We ended up not using it for several reasons:

1) The film grain was terrible in high definition. The production companies were using grainy high-speed film because it made lighting much easier. CBS saw this and instituted a new policy that all future shows be filmed on fine-grain stock.

2) Angela Lansbury looked fine in standard definition, but you could see she needed much more makeup to hide wrinkles in HD.

3) The scene included someone facing Angela Lansbury and holding a gun by his side - but the gun was cropped out of the picture when televised in 16x9 aspect ratio. Not knowing about the gun made the dialog very cryptic, and it was judged that test viewers would be distracted by trying to figure out what was happening.