Stereo, Quad and 5.1 Sound
Just as we see in 3-D, we also, in a sense, hear in 3-D.
Our ability to judge visual depth is based on interpreting the subtle differences between the images seen by our left and right eyes. Likewise, our ability to locate where sounds originate is possible in part because we have learned to unconsciously interpret the minute and complex time differences between the sounds arriving at our left and right ears.
If a sound comes from our left side the sound waves will reach our left ear a fraction of a second before they reach our right ear. We've learned to interpret this subtle time difference, which, technically, is known as a phase difference.
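The size of this time difference can be roughly sketched in code. The figures below (ear spacing of about 0.21 m, speed of sound of about 343 m/s) are illustrative assumptions, not precise physiological constants:

```python
# Rough sketch of the interaural time difference described above.
# Assumes a simplified head model: ears ~0.21 m apart, sound traveling
# at ~343 m/s in air. Both numbers are illustrative approximations.
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
EAR_SPACING = 0.21       # m, approximate distance between the ears

def interaural_delay(angle_deg):
    """Approximate extra travel time (seconds) to the far ear for a
    sound arriving angle_deg off-center (0 = straight ahead)."""
    path_difference = EAR_SPACING * math.sin(math.radians(angle_deg))
    return path_difference / SPEED_OF_SOUND

# A sound 90 degrees to one side arrives only a fraction of a
# millisecond earlier at the near ear -- yet that is enough for us
# to localize it.
print(round(interaural_delay(90) * 1000, 3), "ms")
```

Even at its maximum (a sound directly to one side), the delay is well under a millisecond, which shows how finely tuned this unconscious interpretation is.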
Depending upon the location of a sound, we may also note a slight difference in loudness between sounds coming from our left and sounds coming from our right, which also helps us place the sound in a three-dimensional perspective.
In stereo production we are dealing with sound intended for our left and right ears, and the inherent differences between them. Therefore, recording and playing back stereo signals requires two audio channels.
Creating the Stereo Effect
In TV production there are several approaches to creating the stereo effect.
First, there is synthesized stereo where stereo is simulated electronically. Here, a monaural (one channel, non-stereo) sound is electronically processed to create the effect of a two-channel, stereo signal. A slight bit of reverb (reverberation, or echo) adds to the effect.
Although this is not true stereo, when reproduced through stereo speakers, the sound will be perceived as having more dimension than monaural sound.
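One simple way this effect can be simulated is by copying a mono signal to two channels and slightly delaying one copy. The sketch below illustrates only that one idea under assumed sample values; real stereo synthesizers also add reverb and phase manipulation, as noted above:

```python
# Minimal sketch of one pseudo-stereo technique: duplicating a mono
# signal into two channels and delaying one copy by a few milliseconds'
# worth of samples. An illustration of the concept, not a production
# processor.
def pseudo_stereo(mono, delay_samples=20):
    """Return (left, right) where the right channel is a delayed copy
    of the mono input, padded with silence at the start."""
    left = list(mono)
    right = [0.0] * delay_samples + list(mono[:len(mono) - delay_samples])
    return left, right

# A short burst of constant-level samples, delayed 20 samples on the right:
left, right = pseudo_stereo([0.5] * 100, delay_samples=20)
```

Because the two channels are no longer identical, the listener perceives a sense of width even though only one microphone was used.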
An elaborate audio board can easily accomplish this.
True stereo is only possible if the original sound is recorded with two microphones or a microphone with two sound-sensing elements.
This process is fairly simple when the output of a stereo mic is recorded on two audio tracks and the two tracks are subsequently reproduced with two speakers. Things get much more complicated when you want to mix in narration, music, and visual effects.
Typically in productions a monophonic (non-stereo) recording of narration is mixed into a background of stereo music or on-location stereo sound. The narration (or primary dialogue in a dramatic production) is typically placed "center stage" and the stereo track adds a left-to-right stereo dimension.
But, what if you are micing a contemporary music session?
In this type of production you typically need to mic each element separately and then create the best sound balance and mix in postproduction while keeping in mind the original visual perspective.
For this type of audio recording you need a multitrack recorder.
Originally, recorders were used that could record from 8 to more than 40 separate analog audio tracks on a single piece of one-inch or two-inch audiotape.
The recorder shown on the right records 16 tracks on two-inch, reel-to-reel tape. (Note the 16 VU meters on the machine.)
Today, audiotape has been largely replaced by computer-type hard disks. This type of digital recording not only makes it possible to record and play back high quality digital sound, but to almost instantly find needed segments.
By recording the various sources of sound on separate audio tracks, they can later be placed in any left-to-right sound perspective.
The unique and creative sound of many of today's recording artists originates in the "mix" created by recording engineers.
In contrast to contemporary music, recordings of classical music and orchestras are generally done with only one (strategically placed) stereo or surround-sound mic. In this case, the sound mix and balance are the responsibility of the conductor rather than an audio engineer.
Two approaches to stereo micing are used: the X-Y and the M-S approaches. Each has its advantages.
The X-Y Stereo Mic Approach
The easiest approach to stereo recording is to use an all-in-one stereo mic, which is basically two mics mounted in a single housing, or, as shown on the left, two mics mounted outside of a housing.
This approach to stereo is referred to as the coincident pair or X-Y technique.
Single unit stereo mics are useful in on-location productions where things need to be kept simple and audio can be successfully miced from one location.
However, this approach can limit stereo separation (a clear and distinct separation between the left and right stereo channels), and the ability to control the left and right sound perspective.
Although not as convenient, two separate mics can also be used for X-Y recording. (See the first illustration below.) With this approach two cardioid mics are pointed toward the subject matter, creating about a 130-degree arc of sensitivity (shown in green below).
The M-S Micing Technique
Although more technically complex, some engineers feel that the mid-side, or M-S technique (on the right in the illustration) provides greater stereo flexibility.
In this case, bidirectional and unidirectional (supercardioid) mics are typically used together.
The directional mic (shown in dark blue in the illustration on the right above) picks up the basic audio in the center of the scene.
The bidirectional mic's polar pattern (shown in green in the center of the illustration) picks up the left and right audio channels. The areas of minimum sensitivity for this mic are oriented toward the camera, thereby suppressing unwanted production and studio noise.
The outputs of both mics are fed through a complex audio matrix circuit that uses the phasing differences of the mics to produce the left and right channels.
By adjusting the level of the mid (center) mic in relation to the side (figure 8) mic level, the stereo image can be made narrower or wider without moving the mics.
As in the case of X-Y mics, M-S mics are available that include both of these mic elements within a single housing. When these single-unit stereo mics are used, however, take care not to mount them upside down, or the left and right stereo perspective will be reversed.
Maintaining The Stereo Perspective
Stereo audio in TV production faces a major problem: camera angle and distance shift with each new shot. Because of this, it would be confusing, if not impossible to follow, if the stereo perspective shifted with each change in camera angle.
For example, in an on-location sequence shot at the beach it would be rather disconcerting if the ocean's audio position jumped from left to right with each reverse-angle shot. So we have to compromise.
In the case of an ocean, an audio engineer might place the ocean (or a sound effect of the ocean) in a left-to-right perspective that matches the initial wide-angle establishing shot (more or less "center stage") and then hold that same stereo perspective in the audio tracks for subsequent close-ups and even reverse-angle shots.
However, for lengthy shots that clearly represent changes in stereo perspective, a pan pot can be used to subtly shift the ocean so that a true left-to-right stereo perspective is simulated.
A pan pot consists of two or more faders (volume controls) ganged together. They can be used on an audio board during postproduction to slowly move a source of sound from one stereo channel to the other. This will avoid jarring shifts in sound perspective as shots are changed.
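The behavior of a pan pot can be sketched as two ganged gain controls. The sketch below assumes the common constant-power (sine/cosine) pan law; actual consoles may use other laws:

```python
# Sketch of what a pan pot does: one control that sets complementary
# gains for the left and right channels, moving a mono source across
# the stereo field. Uses a constant-power (sine/cosine) pan law.
import math

def pan(sample, position):
    """position: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Centered, the source appears at equal level in both channels.
# Sweeping `position` gradually from -1.0 to +1.0 over successive
# samples glides the sound smoothly from left to right.
left, right = pan(1.0, 0.0)
```

A gradual sweep of the position value is exactly the "slowly move a source of sound from one stereo channel to the other" operation described above, which avoids jarring shifts when shots change.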
Changes in the stereo placement end up being a creative decision. There are no rules but there are two guidelines.
First, try to simulate the authentic stereo sound perspective
whenever possible. The second guideline, which is even more important,
is that it's never desirable to use a production technique -- in either
audio or video -- that diverts viewer attention away from production
content. It's better to hold back on authenticity rather than use an
effect that will call attention to itself.
Keeping Dialogue "Center Stage"
For maximum sound clarity the dialogue for dramatic productions should be mixed to keep it in the center of the stereo perspective.
In most cases this will conform to what you see on the screen. The momentary exception might be when someone or something enters from one side of the frame.
Even with center-stage dialogue a stereo perspective can be added by mixing in stereo background music and sound effects during postproduction.
In sporting events background stereo sound of the crowd is typically mixed in with monophonic feeds of play-by-play narration.
If there are two announcers, pan pots can be used to place them slightly to the left and right of center (but never at the extreme ends of the left-right stereo perspective).
For cuts to roving cameras focused on cheerleaders or sideline activity a stereo mic mounted on the camera can be faded into
existing program audio when that camera is switched up.
Although many TV sets have stereo speakers built in, the distance between the speakers can limit the stereo separation and, therefore, the stereo effect.
Ideally, a stereo signal should be reproduced by two good-quality speakers placed about one meter (three feet) on either side of an average-sized TV set.
The distance between the speakers depends on the viewing distance and the size of the screen. The farther back the listener is the greater the distance can be between the speakers.
If a noticeable audio "hole" seems to be present between the left and right sound sources, the speakers are too far apart.
The stereo effect is often enhanced or even to a degree
created in postproduction by introducing phasing differences between
the left and right audio signals.
The ATSC (Advanced Television Systems Committee) standard for digital TV adopted by the United States and Canada includes 5.1-channel surround sound using the Dolby Digital AC-3 format. Compared to the earlier stereo TV broadcast standard, 5.1 audio adds important dimensions to TV audio.
Stereo covers about a 120-degree frontal perspective. Although this provides significant realism, we can actually perceive sounds in a much wider perspective, even behind us.
Surround-sound, quadraphonic, and 5.1 Dolby systems attempt to reproduce sounds both in front of and behind the listener, approaching a 360-degree sound perspective.
Even though the number of homes equipped with full 5.1 surround-sound decoders is limited, some productions are being done in surround-sound.
The Dolby 5.1 Surround Sound system consists of six discrete channels of audio: left, center and right channels in front of listeners, and left-surround and right-surround at the back sides. If you've been counting, that only totals five channels, not six.
The sixth channel (the ".1" in the designation) is a bass channel of limited frequency response (3 to 120 Hz). Although it's capable of room-rattling bass, it takes up only about one-tenth of the bandwidth of a full-range audio channel; hence, the system is referred to as 5.1.
Bass is essentially nondirectional, so the speaker can be placed almost anywhere in the room.
In this illustration we're assuming a TV screen six to eight feet (2 to 2 1/2 meters) from the viewer/listener, the left and right front speakers 22 to 30 degrees off to the sides, the front speaker in the center of the TV, and the back speakers at an angle of about 100 degrees to (or slightly behind) the listener.
Of course, placing all these speakers in appropriate places and distances within a room strains most interior design schemes, so to tackle that problem researchers analyzed the way we hear sounds and came up with a surround-sound system that uses only two (high quality) speakers.
To achieve the expanded effect, multi-channel audio recordings are digitized and fed into a computer during postproduction. Using this technique, even a vertical dimension can be suggested.
Some of the new flat-panel TV sets, which have only two speakers in the front, make use of this approach to simulate sounds that seem to be emanating well off to the left or right of the TV set. While not as good as a five- or six-speaker setup, it's an improvement over standard stereo.
Quad mics that detect sounds in nearly a 360-degree perspective have four mic elements within a single housing. From these mic elements separate channels for five or even six speakers can be derived.
Typically, an upper capsule contains two mic elements and picks up sound from the left-front and right-rear. Another capsule mounted below this one picks up sound from the right-front and left-rear. These are then recorded onto four audio tracks.
During postproduction the four audio tracks are fed through a computer and mixed with tracks of music and effects (M&E) to develop a full surround-sound effect.
When connecting speaker wires to amplifiers, attention needs to be paid to polarity: the positive and negative leads (wires) to the speakers. Generally, one of the wires will be marked in some way; it may be a different color or carry a stripe. Amplifiers often have red and black terminal connections to indicate these differences.
If you do not maintain this consistency (polarity) when hooking up both the amplifier and the speaker connections, the audio will be out of phase. Among other things, you will experience sound cancellation effects and a loss of bass.
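The cancellation effect can be illustrated numerically. The sample values below are arbitrary, chosen only to show the principle:

```python
# Illustration of the phase cancellation described above: if one
# speaker's leads are reversed, its output is inverted, and identical
# low-frequency content from the two speakers cancels acoustically
# instead of reinforcing. Sample values are arbitrary illustrations.
signal = [0.3, -0.7, 0.5, -0.2]

# Correct polarity: the two speakers reinforce each other.
in_phase = [a + b for a, b in zip(signal, signal)]

# Reversed polarity on one speaker: the inverted copy cancels.
out_of_phase = [a + (-b) for a, b in zip(signal, signal)]

print(in_phase)      # every sample doubled
print(out_of_phase)  # all zeros: the bass "disappears"
```

In a real room the cancellation is only partial and frequency-dependent, but the loss of bass is exactly what the math above predicts.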
While we are on the subject, you should also know that it's never a good idea to operate an amplifier without the speakers connected, especially with the volume turned up. Without the "load" of the speakers, some amplifiers can burn out.
In the next section we'll more fully explain digital audio.
© 1996 - 2017, All Rights Reserved.
Use limited to direct, unmodified access from CyberCollege® or the InternetCampus®.