#1, 01-14-2023, 03:54 PM
DVtyro (VideoKarma Member; Join Date: Sep 2022; Posts: 137)

Analog video recording principles

First off, thanks to everyone who has been answering my questions about analog video recording formats. I re-read the answers and dug into a couple of books, but I still have questions!

I would appreciate it if someone could fill in the blanks in my understanding of video signal modulation, storage, bandwidth, resolution, etc., keeping in mind that I am not an electronics engineer and don't plan to become one; I just want a slightly clearer understanding of how things work and why one format is better than another.

I've been reading some introductory texts on analog radio, television and color-under recording systems. I'll go through what I've learned (or think I've learned); please correct me where I am wrong.

First, how many samples per second do we need to send? This is a rather simple idea: take the frame size times the frames per second. Say, 525 total lines at a 4/3 aspect ratio with square pixels gives 525 * 4/3 = 700 samples per line, times 30 fps: 525 * 700 * 30 = 11 million samples per second.

Each wave period has two half-periods, and I thought that each of them could be used to describe the brightness of one pixel, so 11 million / 2 = 5.5 MHz. At this point I had a naive idea of how things work: I thought that this wave could be manipulated (a.k.a. modulated) so that the amplitude of each of its half-periods could be increased or decreased, which would give us 5.5 million unique samples. Based on this, I could not understand why we need a RANGE of frequencies, why we cannot use ONE frequency modulated this way.
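
Just to check my own arithmetic, here it is in plain Python (my sketch of the numbers above, nothing from the books):

[CODE]
lines_per_frame = 525                         # NTSC total lines, including blanking
samples_per_line = lines_per_frame * 4 // 3   # square samples at 4:3 -> 700
frames_per_second = 30

sample_rate = lines_per_frame * samples_per_line * frames_per_second
print(sample_rate)            # 11025000, about 11 million per second

# Two half-periods per cycle, so by this reasoning one cycle could carry
# two samples, and the highest frequency needed is half the sample rate:
print(sample_rate / 2 / 1e6)  # ~5.5 (MHz)
[/CODE]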

Apparently, this is not how things work in real life. Instead, I need 5.5 million DIFFERENT FREQUENCIES, each frequency representing a pixel on the screen. To help me understand this I was reading a book on analog radio, where the example was roughly this: real music has a range of frequencies, say from 100 Hz to 10 kHz, and we want to send all of them. So, if I modulate the carrier with this signal, I get a BAND with two sidebands, corresponding to the carrier minus the signal and the carrier plus the signal. OK, I get this.
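
To see the two sidebands for myself I tried a quick numpy experiment (my own sketch; the 100 kHz carrier and 10 kHz tone are arbitrary round numbers):

[CODE]
import numpy as np

fs = 1_000_000            # simulation sample rate: 1 MHz
t = np.arange(fs) / fs    # one second of time -> 1 Hz FFT bins
fc, fm = 100_000, 10_000  # carrier and a single "music" tone

# Classic AM: the tone rides on the carrier's amplitude.
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(am)) / len(t)
strongest = np.sort(np.argsort(spectrum)[-3:])  # indices are frequencies in Hz
print(strongest)                                # [ 90000 100000 110000]
[/CODE]

Carrier minus signal, the carrier itself, and carrier plus signal: exactly the band the radio book describes.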

OTOH, I am not sure that sending audio over the air is exactly the same as sending samples to a screen… But if this analogy is correct, then each sample on screen is represented by its own frequency, just like each tone in music is represented by its own frequency. Is this correct so far?

So, the position of a sample is represented by frequency, and its intensity can be represented by amplitude. This is how AM radio works, and this is how OTA TV works: luminance uses amplitude modulation (NTSC/PAL color uses AM, SECAM color uses FM, audio uses FM).

Amplitude modulation is easy to represent with a picture and to understand. I've read the arguments for FM, and it seems that all signals on tape are FM-modulated: luminance, chrominance and audio, correct?

Next, broadcast video is converted for recording on tape. Because of various technical difficulties, an AM signal cannot be used for quality video recording, so first the luminance is separated from the chroma, then the luminance bandwidth is reduced, and then the luminance signal is used to frequency-modulate a carrier before it is recorded on the tape.

It is harder for me to form a mental picture of how FM works. The first part of the modulation is the same as with AM: two frequencies are added together, producing two sidebands. When recording on tape, the carrier is chosen high enough that the upper sideband is basically removed automatically, simply because such high frequencies cannot be recorded on tape. So this issue solves itself. The lower sideband is used for the luminance signal; I get it so far.
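
This is how I picture the modulator itself, as a numpy sketch (the carrier and deviation numbers are only illustrative, not a VHS spec):

[CODE]
import numpy as np

fs = 10_000_000              # simulation sample rate: 10 MHz
t = np.arange(200_000) / fs  # 20 ms of time
fc = 4_000_000               # carrier in the VHS luma ballpark
deviation = 500_000          # +/- 0.5 MHz peak deviation (illustrative)

video = np.cos(2 * np.pi * 15_750 * t)  # line-rate test tone (NTSC)

# FM: the instantaneous frequency follows the signal; the phase is the
# running integral of that frequency. The amplitude stays constant.
inst_freq = fc + deviation * video
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm = np.cos(phase)
[/CODE]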

I suppose everything else is FM-modulated as well, including chroma? Including NTSC and PAL chroma?

But the sideband represents the POSITION of a sample onscreen, right? To represent the brightness, the signal modulates the carrier frequency, not the carrier amplitude. This is what DEVIATION is about: it represents brightness. The higher the deviation, the larger the difference between black and white; in other words, better contrast and lower noise.
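
If I plug in the commonly cited NTSC VHS luma figures (sync tip around 3.4 MHz, peak white around 4.4 MHz, i.e. about 1 MHz of total deviation), the level-to-frequency map I have in mind looks like this (my sketch of the idea, not a spec sheet):

[CODE]
SYNC_TIP_HZ = 3_400_000    # commonly cited NTSC VHS sync-tip carrier
PEAK_WHITE_HZ = 4_400_000  # commonly cited NTSC VHS peak-white carrier

def luma_to_carrier_hz(level):
    """level: 0.0 = sync tip, 1.0 = peak white (linear map)."""
    return SYNC_TIP_HZ + level * (PEAK_WHITE_HZ - SYNC_TIP_HZ)

print(luma_to_carrier_hz(0.0))  # 3400000.0, sync tip
print(luma_to_carrier_hz(0.5))  # 3900000.0, mid-gray
print(luma_to_carrier_hz(1.0))  # 4400000.0, peak white
[/CODE]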

What I don't get is how brightness corresponds to the sample position (or, in the case of audio, how loudness corresponds to the audio frequency). If the carrier is frequency-modulated, it moves up and down; but the sideband that describes the position of a sample is relative to the carrier, so how does that work?

The book on radio that I am reading reiterates: the "rhythm" (well, frequency) of the carrier deviation corresponds to the signal frequency, while the amount of the deviation corresponds to the brightness (loudness). But I still don't get how the two parts, the sideband that describes the POSITION and the deviation that describes the BRIGHTNESS, are reconciled. I can see that with AM the carrier does not change, so whatever the distance from the carrier is, that is my frequency (audio tone or sample position). But with FM the carrier constantly changes, so how do I know what actual audio tone or sample position it corresponds to? Please explain!
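
In case it helps anyone explain, here is the numpy sketch I have been staring at (same arbitrary 100 kHz / 10 kHz numbers as before): the sideband SPACING stays locked to the modulating frequency, while raising the deviation just spreads energy into more sidebands (the textbook Bessel-function behavior).

[CODE]
import numpy as np

fs = 1_000_000
t = np.arange(fs) / fs                  # one second -> 1 Hz FFT bins
fc, fm, dev = 100_000, 10_000, 30_000   # carrier, tone, peak deviation

phase = 2 * np.pi * np.cumsum(fc + dev * np.cos(2 * np.pi * fm * t)) / fs
spectrum = np.abs(np.fft.rfft(np.cos(phase))) / len(t)

strongest = np.sort(np.argsort(spectrum)[-7:])  # indices are Hz
print(strongest)  # sidebands cluster at 100 kHz +/- n * 10 kHz
[/CODE]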

Now, looking at this picture, which is one of a series of graphs about VHS/SVHS/Hi-8, I have several questions.

[attached image: VHS frequency-spectrum graph (one of a series for VHS/SVHS/Hi-8), annotated with orange, red and green question marks]

First, what is the "220% peak white limit" (orange question mark)? It is my translation from a non-English text, but it corresponds to Panasonic info about SVHS: "A 210% increase in the peak white level enhances the picture quality even more, with excellent delineation of image borders and highly faithful reproduction of detail." Does this mean that SVHS has a 210% increase in the peak white level compared to VHS, and that the above picture, which describes VHS, is incorrect? Also, "white level" is brightness, and brightness is defined by the deviation of the carrier frequency, not by amplitude, yet the picture shows an amplitude. Is this a mistake, or what does it mean?

Second, the red question mark: what is this frequency from which the sideband is measured? It is not the black level, not the white level, not the middle. Is it the 70% level of the deviation band, because the RMS value of a pure sine wave is about 0.7 of its peak value?

Third, the green question mark: chroma. It has both sidebands. Is it FM- or AM-modulated? If it is FM-modulated, does it have its own deviation? Is the chroma bandwidth of all color-under systems the same? If not, why is it not specified? Does an increase in chroma bandwidth mean an increase in chroma resolution? Apparently that is not what they did; they only ever increased luma resolution. So why is raising the chroma carrier touted as a big deal that improves the picture? Isn't the only thing that matters that chrominance and luminance do not collide?

P.S. Please excuse me for using chroma and chrominance interchangeably.