
Updated: 01/01/2014


Module 8

  

 

 

How the Imaging Process Works

 

Why do you need to know how the film and TV imaging process works?

For one thing, knowledge is power, and the more you know about the process, the easier it will be to use the tools creatively. Plus, you'll be able to solve most of the inevitable problems that crop up during TV production.

Let's start at the beginning with...

Fields and Frames

>>Ironically, both "motion" pictures and TV are based solidly on an illusion. "Motion" as such does not exist in the actual TV and motion picture images. The illusion is created when a rapid sequence of still images is presented.

[Animated illustration: a running race horse]

This illusion was discovered as the result of a $25,000 bet put up by a motion picture foundation in 1877. For decades, an argument had raged over whether a race horse ever had all four hooves off the ground at the same time.

 By the way, the horse in the above illustration should appear to be in motion.  If not, you may need to turn on the animation in your browser in order to see this and other animated illustrations in these modules.

In an effort to settle the horse issue, Leland Stanford, founder of Stanford University, set up an experiment in which a photographer took a rapid sequence of photos of a running horse. (And, yes, they found that for brief moments a race horse does have all four feet off the ground at the same time.)

>>However, this experiment established something even more important. It illustrated that, if a sequence of still pictures is presented at a rate of about 16 or more per second, the individual pictures blend together and give the impression of a continuous, uninterrupted image.

If the series of eleven still photos shown below is presented in rapid succession, it creates the appearance of continuous motion. (Note the animated illustration above.)

[Illustration: a sequence of eleven still photos of a running race horse]

You can see in the sequence of images above that the individual pictures vary slightly to reflect changes over time.

In the circular illustration on the right we've slowed down the timing of the images.  Here, you can see more clearly how a sequence of still images can create an illusion of movement.

We see a more primitive version of this in the "moving" lights of a theater marquee or the "moving" arrow of a neon sign urging passersby to come in and buy something.

>>Although early silent films used basic frame (picture) rates of 16 and 18 per second, when sound was introduced, the rate was increased to 24 frames per second.

This was necessary primarily to meet the quality needs of the sound track. To reduce flicker, today's motion picture projectors use a two-bladed shutter that projects each frame twice, giving an effective rate of 48 frames per second. (Some projectors flash each frame three times.)

Unlike broadcast television, with its frame rates of 25 and 30 per second, film has for decades maintained a worldwide 24-frame-per-second standard.

The NTSC (National Television System Committee) system of television used in the United States, Canada, Japan, Mexico, and a few other countries reproduces pictures (frames) at a rate of approximately 30 per second. (We'll take up the new ATSC digital broadcast standard in the next module.)

Of course, a 30-frame-per-second rate presents a bit of a problem in converting film to TV (mathematically, 24 doesn't go into 30 very well), but we'll worry about that later.
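
If you're curious, a little arithmetic shows the size of the problem. The short Python sketch below is purely illustrative; the frame rates come from the figures above, and how the conversion is actually handled is taken up later.

    from fractions import Fraction

    film_rate = 24     # film frames per second
    video_rate = 30    # NTSC frames per second (approximately)

    # The ratio of the two rates is not a whole number.
    ratio = Fraction(video_rate, film_rate)
    print(ratio)       # 5/4 -- every 4 film frames must somehow fill 5 video frames

    # Since each video frame is made up of 2 fields (explained below),
    # those same 4 film frames must be spread across 10 video fields.
    print(ratio * 2)   # 5/2 -- an average of 2.5 fields per film frame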

>>A motion picture camera records a completely formed still picture on each frame of film, just like the still pictures on a roll of film in a 35mm camera (assuming the camera is not digital and still uses film). It's just that the motion picture camera takes the individual pictures at a rate of 24 per second.

Things are different in TV. In a video camera, hundreds of horizontal lines make up each frame.

Thousands of points of brightness and color information exist along each of these lines. This information is electronically discerned in the TV camera and then reproduced on a TV display in a left-to-right, top-to-bottom scanning sequence.

This sequence is similar to the movement of your eyes as you read.


Interlaced Scanning

>>Originally, to reduce variations in flicker and brightness, as well as to work around some technical limitations, the scanning process was divided into two halves.

The odd-numbered lines were scanned first and then the even-numbered lines were interleaved between these lines to create a complete picture. Not surprisingly, we refer to this process as interleaved or interlaced scanning.

In this greatly enlarged TV image, we've colored the odd lines green and the even lines yellow.

When we remove these colors, we can see how they combine to create the black and white video picture on the right. (Later, we'll describe a color TV picture, which is a bit more complex.)

Each of these half-frame passes (either all of the odd- or all of the even-numbered lines, or the green or the yellow lines in the illustration) is a field. The completed (two-field) picture is a frame.
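
To make the field/frame relationship concrete, here is a minimal sketch in Python (illustration only; the ten-line "frame" is made up, and real video systems do this in dedicated circuitry rather than software):

    # Treat a frame as a numbered list of scan lines.
    frame = list(range(1, 11))     # a tiny 10-line "frame": lines 1 through 10

    field_one = frame[0::2]        # the odd-numbered lines: 1, 3, 5, 7, 9
    field_two = frame[1::2]        # the even-numbered lines: 2, 4, 6, 8, 10

    # Interleaving the two fields re-creates the complete frame.
    rebuilt = [0] * len(frame)
    rebuilt[0::2] = field_one
    rebuilt[1::2] = field_two
    print(rebuilt == frame)        # True -- two fields make one frame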

After scanning the complete picture (frame), the process starts again. But, if the subject matter in the scene changes with time, the next frame will reflect that slight change.

Human perception fuses together these slight changes between successive pictures, giving the illusion of continuous, uninterrupted motion.

The interleaved approach, although necessary before recent advances in technology, results in minor "artifacts," or picture distortions, including variations in color.


Progressive Scanning

>>After several decades of interlaced scanning, most of today's video displays (including flat-screen TV sets and computer monitors) use a progressive, or non-interlaced, approach.

With this approach, the fields (odd and even lines) are combined and reproduced together in a 1-2-3 sequence, rather than an odd (1-3-5) and even (2-4-6) interlaced sequence.
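
Continuing the little sketch from the interlaced section, the only difference is the order in which the lines are refreshed (again, purely illustrative):

    lines = [1, 2, 3, 4, 5, 6]                     # a tiny six-line picture

    progressive_order = lines                      # lines drawn 1, 2, 3, 4, 5, 6
    interlaced_order = lines[0::2] + lines[1::2]   # odd field first, then even field

    print(progressive_order)    # [1, 2, 3, 4, 5, 6]
    print(interlaced_order)     # [1, 3, 5, 2, 4, 6]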

Progressive scanning has a number of advantages, including greater clarity and the ability to interface more easily with computer-based video equipment. However, it places greater technical demands on the TV system.

As we'll see in the next module, the specifications for digital and high-definition ("hi-def") television allow for both progressive and interlaced scanning.
 

The Camera's Imaging Device

>>The lens of a television camera forms an image on a light-sensitive target inside the camera, in the same way a motion picture camera's lens forms an image on film.

But instead of film, television cameras use a solid-state, light-sensitive receptor called a CCD (charge-coupled device) or, more commonly, a CMOS (complementary metal oxide semiconductor).  Both of these "chips" are able to detect brightness differences at different points throughout the image area.

The chip's target area (the small rectangular area near the center of this photo) contains from hundreds of thousands to millions of pixel (picture element) points.  Each point can electrically respond to the amount of light focused on its surface.

A very small section of a chip is represented below -- enlarged thousands of times. The individual pixels (picture elements) are shown in blue.

At each of these points on the surface of the chip, the light falling on the pixel is changed into an electrical voltage that corresponds to the brightness at that point in the image.

Electronics within the camera scanning system regularly check each pixel area to determine the amount of light falling on its surface.

This sequential information is directed to an output amplifier along the path shown by the red arrows.

This information readout results in constantly changing field and frame information. (We'll cover this process, especially as it relates to color information, in more detail in Module 15.)
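
In very rough terms, the readout works like the Python sketch below. The tiny "chip" and its brightness numbers are invented for illustration; a real CCD or CMOS sensor does this with dedicated circuitry, not software.

    # A hypothetical 3-line by 4-pixel chip; each number is the brightness
    # detected at one pixel.
    chip = [
        [10,  40,  90, 40],    # scan line 1
        [20, 200, 230, 60],    # scan line 2
        [15,  80, 120, 30],    # scan line 3
    ]

    # The scanning electronics visit each pixel in turn -- left to right,
    # top to bottom -- and pass its value toward the output amplifier.
    signal = []
    for line in chip:
        for pixel in line:
            signal.append(pixel)

    print(signal)   # one frame's worth of brightness information, as a single stream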

In a sense, your TV receiver reverses this process. The pixel-point voltages generated in a camera are changed back into light, which we see as an image on our TV screens.


Analog and Digital Signals

>>Electronic signals -- as they originate in microphones and cameras --  are analog in form.

This means the equipment detects signals as continuous variations in relative strength, or amplitude.

In audio, this translates into the relative volume or loudness of the sound; in video, it's the relative brightness of different areas of the picture.

As illustrated above, we can change these analog signals (on the left) into digital data (on the right).  The latter is computer zeros and ones (0s and 1s, or binary computer code). The digital signal is then sent to subsequent electronic equipment.

>>Backing up a bit, we need to explain how the analog-to-digital process works. The top part of the illustration below shows how an analog signal can rise and fall over time to reflect changes in the original audio or video source.

In order to change an analog signal to digital, that wave pattern is sampled at a high rate of speed.  The amplitude at each of those sampled moments (shown in blue-green on the left) is converted into an equivalent number.

These numbers are simply the combinations of the 0s and 1s used in computer language.

Since we are dealing with numerical quantities, this conversion process is appropriately called quantizing.
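
Here is a small Python sketch of sampling and quantizing. The tone frequency, sample rate, and 8-bit resolution are arbitrary values chosen only to illustrate the idea.

    import math

    def analog_signal(t):
        """A stand-in for the analog source: a simple 1 kHz wave."""
        return math.sin(2 * math.pi * 1000 * t)

    sample_rate = 8000    # how many times per second we measure the wave
    levels = 256          # 8-bit quantizing: 2**8 possible number values

    samples = []
    for n in range(8):                      # take the first 8 samples
        t = n / sample_rate                 # the moment of this sample
        amplitude = analog_signal(t)        # measure the wave (-1.0 to +1.0)
        number = round((amplitude + 1) / 2 * (levels - 1))   # scale to 0..255
        samples.append(number)

    print(samples)   # the analog wave, now just a list of numbers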

>>Once the information is converted into numbers, we can do some interesting things (generally, visual effects) by adding, subtracting, multiplying, and dividing the numbers.
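
For example, brightening or darkening a picture is nothing more than arithmetic on those numbers. A sketch, with four made-up pixel values:

    pixels = [10, 80, 200, 240]    # four hypothetical brightness values

    brighter = [min(p + 40, 255) for p in pixels]   # add, but never exceed 255
    darker   = [p // 2 for p in pixels]             # divide to darken

    print(brighter)   # [50, 120, 240, 255]
    print(darker)     # [5, 40, 100, 120]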

The faster all this is done, the better the audio and video quality. But this also means that as quality increases, the technical requirements become more demanding.

Thus, we are frequently dealing with the difference between high-quality equipment that can handle ultra high-speed data rates and lower-level (less expensive) consumer equipment that relies on a reduced sampling rate. This, in part, answers the question about why some video recorders cost $50 and others cost $50,000.


What's the Advantage of Digital Data?

>>Compared to a digital signal, an analog signal would seem to be the more accurate and ideal representation of the original.

While this may initially be true, the problem arises in the need for constant amplification and re-amplification of the signal throughout every stage of the audio and video process.

Whenever an analog signal is reproduced or amplified, noise is inevitably introduced, which degrades the signal.

In audio, this can take the form of a hissing sound; in video, it appears as a subtle background "snow" effect. This is exaggerated in the photo below.

By converting the original analog signal into digital form, we can eliminate this noise buildup, even though the signal is amplified or "copied" dozens of times.

Because digital signals are limited to the form of 0s and 1s, no "in-between" information (spurious noise) can creep in to degrade the signal.
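
A tiny Python sketch of that regeneration idea (the amount of noise added with each copy is an arbitrary illustration; an analog signal copied the same twenty times would simply keep accumulating it):

    import random

    original = [0, 1, 1, 0, 1, 0, 0, 1]    # a short digital signal

    signal = original
    for copy in range(20):                  # make 20 generations of "copies"
        # Each copy picks up a little random noise...
        noisy = [bit + random.uniform(-0.3, 0.3) for bit in signal]
        # ...but since only 0 and 1 are legal values, every reading is
        # snapped back to the nearer one, throwing the noise away.
        signal = [1 if value > 0.5 else 0 for value in noisy]

    print(signal == original)   # True -- still a perfect match after 20 copies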

We'll delve more deeply into some of these issues when we focus on digital audio.

>>Today's digital audio and video equipment has borrowed heavily from developments in computer technology -- so heavily, in fact, that the two areas have largely merged.

Satellite services such as DISH and Direct TV make use of digital receivers that are, in effect, specialized computers. And you probably listen to music recorded on a pocket-sized device capable of storing several hours of digitized music.

We discuss some of the advantages of digital electronics in video production elsewhere on this site.

In the next module, we'll look at world television standards.





© 1996 - 2014, All Rights Reserved.
Use limited to direct, unmodified access from CyberCollege® or the InternetCampus®.