Style and Editing [Television's Style: Image and Sound]


Editing is at once the most frequently overlooked and the most powerful component of television style. We are seldom conscious of a program's arrangement of shots, and yet it is through editing that television producers most directly control our sense of space and time, the medium's building blocks. For many theorists of television, editing is the engine that powers the medium.

At its most basic, editing is deceptively simple. Shot one ends. Cut. Shot two begins. But in that instantaneous shot-to-shot transition, we make a rather radical shift. We go from looking at one piece of space from one point of view to another piece of space from a different perspective. Perspective and the representation of space suddenly become totally malleable. Time, too, can be equally malleable. Shot two need not be from a time following shot one; it could be from hours or years before. The potential for creative manipulation is obvious.

Within broadcast television, however, editing is not completely free of conventions--far from it. Most television editing is done according to the "rules" of two predominant modes of production: single-camera and multiple-camera. By mode of production we mean an aesthetic style of shooting that often relies on a particular technology and is governed by certain economic systems.

As we have seen before, television forever blends aesthetics, technology, and economics. Single-camera productions are filmed with just one camera operating at a time. The shots are not done in the order in which they will appear in the final product, but instead are shot in the sequence that is most efficient in order to get the production done on time and under budget. Consider, for example, a scene between two characters named Eugene and Lydia, in which shots 1, 3, 5, 7, and 9 are of Eugene and shots 2, 4, 6, 8, and 10 are of Lydia.

The single-camera approach to this scene would be to set up the lighting on Eugene, get the camera positioned, and then shoot the odd-numbered shots one after another. Then Lydia's lighting would be set up and the camera would shoot all the even-numbered shots of her. Later, the shots would be edited into their proper order.

Multiple-camera productions have two or more cameras trained on the set while the scene is acted out. In our hypothetical 10-shot scene, one camera would be pointed at Eugene while the other would simultaneously be pointed at Lydia. The scene could be edited while it transpires or it could be cut later, depending on time constraints. Sequences in daily soap operas and game shows tend to be edited while they are shot, but weekly sitcoms are generally edited after shooting.

These modes of production are more than just a matter of how many cameras are brought to the set. They define two distinct approaches, whose differences cut through:

• Pre-production-the written plan for the shoot.

• Production-the shoot itself.

• Post-production-everything after.

And yet, both modes rely on similar principles of editing.

Historically, the single-camera mode of production came first. It developed initially in the cinema and has remained the preeminent way of making theatrical motion pictures. On television, it is the main mode used to create prime-time dramas, MOWs (movies of the week), music videos, and nationally telecast commercials. As it is also the site for the development of most editing principles, we will begin our discussion of editing there. Subsequently we will consider the multiple-camera mode of production, which is virtually unique to television and is only rarely used in theatrical films. Sitcoms, soap operas, game shows, sports programs, and newscasts are shot using several cameras at once. Although multiple-camera shooting has developed its own conventions, its underlying premises are still rooted in certain single-camera conceptualizations of how space and time should be represented on television.

Before discussing the particulars of these modes of production, it should be noted that the choice of single-camera or multiple-camera mode is separate from that of the recording medium (film or video). While most single-camera productions today are still shot on film and not on video, this is becoming less true as high-definition digital video evolves. One notable convert to digital video is George Lucas, who is shooting Star Wars: Episode II in that format.

Multiple-camera productions are also not tied to one specific medium. They have a long history of being shot on both film and video. As we shall see, these modes of production are not determined by their technological underpinning--although that is certainly a consideration. Rather, they depend as much on certain economic and aesthetic principles as they do on technology.


The Single-Camera Mode of Production

Initially it might seem that single-camera production is a cumbersome, lengthy, and needlessly expensive way to create television images, and that television producers would shy away from it for those reasons. But television is not a machine driven solely by the profit motive. Just as we must be cautious of technological determinism (i.e., that television producers will use new technology as soon as it becomes available), we must also be wary of slipping into an economic determinism. That is, we must avoid the mistaken belief that television producers' aesthetic decisions and technological choices will always be determined by economic imperatives. In a study of how and why the Hollywood film industry adopted the single-camera mode of production, David Bordwell, Janet Staiger, and Kristin Thompson contend that technological change has three basic explanations:

1. Production efficiency-does this innovation allow films to be made more quickly or more cheaply?

2. Product differentiation-does this innovation help distinguish this film from other, similar films, and thus make it more attractive to the consumer?

3. Standards of quality--does this innovation fit a conventionalized aesthetic sense of how the medium should "evolve"? Does it adhere to a specific sense of "progress" or improvement?

Although single-camera production is more expensive and less efficient than multiple-camera, it compensates for its inefficiency by providing greater product differentiation and adhering to conventionalized aesthetic standards.

Because single-camera mode offers more control over the image and the editing, it allows directors to maximize the impact of every single image. Consequently, it is the mode of choice for short televisual pieces such as commercials and music videos, which rely on their visuals to communicate as powerfully as possible and need a distinctive style to distinguish them from surrounding messages that compete for our attention.

Stages of Production

Pre-production. To make single-camera production economically feasible, there must be extensive pre-production planning. Chance events and improvisation are expensive distractions in a single-camera production. The planning of any production--whether an MOW or a Pepsi commercial--begins with a script. Actually, there are several increasingly detailed stages of scripting:

• Treatment-a basic outline.

• Screenplay-a scene-by-scene description of the action, including dialogue.

• Shooting script-a shot-by-shot description of each scene.

• Storyboard-small drawings of individual shots (Fig. 1).

---------- Fig. 1

1. Car speeds recklessly down a tree-lined road.

2. Woman and man in front seat. He drives.

3. The road; his point of view.

4. Front seat, same scene as in #2.

5. He looks toward her.

6. Her hand reaches for wheel.


For our purposes it is not important to go into the differences among these written planning stages, but it may be helpful to consider the storyboard, which consists of drawings of images for each shot (with more than one image for complicated shots). Storyboards indicate the precision with which some directors conceptualize their visual design ahead of time. Alfred Hitchcock, for example, was well known for devising elaborate storyboards. For him, the filmmaking process itself was simply a matter of creating those images on film. Commercials and music videos are also heavily storyboarded. Each frame is carefully plotted into a particular aesthetic, informational, or commercial system.

Production. A single camera is used on the set and the shots are done out of order. Actors typically rehearse their scenes in their entirety, but the filming is disjointed and filled with stops and starts. Because the final product is assembled from these fragments, a continuity person must keep track of all the details from one shot to the next--for example, in which hand the actor was holding a cigarette and how far down the cigarette had burned. Nonetheless, small errors do sneak through, illustrating just how disjointed the whole process is. For instance, in Fig. 9, a frame enlargement from a Northern Exposure (1990-95) scene that is analyzed later, a dishcloth is on actor Janine Turner's shoulder. At the very beginning of the next shot, Fig. 10, the dishcloth has disappeared.

The "production" stage of making television is under the immediate control of directors. They choose the camera positions, coach the actors, and approve the mise-en-scene. Most television directors do not write the scripts they direct (which is done in pre-production), and most do not have control over the editing (post-production). However, the actual recording process is their direct responsibility.

Post-production. The task of the technicians in post-production is to form the disjointed fragments into a unified whole. Ideally the parts will fit together so well that we will not even notice the seams joining them. At this point in narrative television production, the sound editor and musical director are called on to further smooth over the cuts between shots with music, dubbed-in dialogue, and sound effects. Of course, in music videos and many commercials the music provides the piece's main unifying force and is developed well before the visuals. Indeed, the music determines the visuals, not vice versa, and becomes part of the pre-production planning.

The post-production process was revolutionized in the 1990s by computer-based nonlinear editing (NLE), on systems such as the Avid Media Composer and Media 100 (Figs. 2-4). Virtually everything in television and film today, with the exception of nightly newscasts, is edited on NLE systems. To understand what makes these systems "nonlinear" and why that is significant, a bit of history is required (see section 9 for further details). Early video editing systems were strictly linear. To assemble shots A, B, and C, you first put shot A on the master tape, then shot B, and then shot C. If you decided later that you wanted to insert shot X between A and B, you were out of luck. You had to start all over and put down shot A, followed by X, and then B, and so on. One shot had to follow the other (there were exceptions to this, but we are simplifying for clarity). In contrast to this linear system for video, film editing was always nonlinear. If film editors wish to insert a shot X between shots A and B, they just pull the strips of film apart and tape them together again.

Digital editors changed video's reliance on linear systems.
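The linear/nonlinear distinction can be sketched as a toy model in Python (illustrative only; neither class corresponds to any real editing system's API):

```python
class LinearTape:
    """Linear videotape editing: shots can only be laid down one after another."""
    def __init__(self):
        self.shots = []

    def lay_down(self, shot):
        # The only operation a linear system allows is appending to the end.
        self.shots.append(shot)


class NonlinearTimeline:
    """Nonlinear editing: a shot may be placed anywhere on the timeline."""
    def __init__(self):
        self.shots = []

    def insert(self, position, shot):
        self.shots.insert(position, shot)


# Linear: to end up with A, X, B, C you must re-lay every shot from scratch,
# in final order, because nothing can be slipped in after the fact.
tape = LinearTape()
for shot in ["A", "X", "B", "C"]:
    tape.lay_down(shot)

# Nonlinear: lay down A, B, C first, then insert X between A and B at will.
timeline = NonlinearTimeline()
for i, shot in enumerate(["A", "B", "C"]):
    timeline.insert(i, shot)
timeline.insert(1, "X")

assert tape.shots == timeline.shots == ["A", "X", "B", "C"]
```

The point of the sketch is simply that the nonlinear model pays no penalty for a late insertion, whereas the linear model forces the editor to rebuild everything downstream of the change.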

NLE systems typically use two computer monitors--as is illustrated by editor Niklas Vollmer's project, Fit to Be Tied, which was edited on the Media 100 (Figs. 2-4). In Fig. 2 (taken from the left-side monitor) you see lists of available image and sound clips and a preview window that shows what the finished project will look like. In Fig. 3 (taken from the right-side monitor) is the project's timeline. All NLE systems use timelines to structure the editing. In the detail of the Fit to Be Tied timeline (Fig. 4), each shot is signified by a rectangle with a label, such as "Monster sings" and "Big Al walk by." Unlike in linear video editing, Vollmer may place any shot anywhere on the timeline--inserting shots between other shots if he wishes. NLE also permits fancy transitions from one shot to the next. In the Media 100 timeline, the editor may specify two simultaneous image tracks (labeled "a" and "b" in Fig. 4) and create special effects between shots--as is signified by the small arrows between tracks a and b. In this manner, the NLE editor may create fades, dissolves, and more elaborate transitions. Also visible in this detail of the timeline is one audio track (labeled "A1"), with the relative loudness of the audio indicated by the graph-like line. Several other overlapping audio tracks can also be added--allowing editors to create sound mixes.

NLE is a big part of the digital overhaul of the television industry. Its computers are cheaper than old-fashioned video editing equipment, and it provides television editors with much greater aesthetic flexibility. Moreover, it is part of the motivation behind the move to digital video (DV). Analog video and film must be converted to a digital format before they can be sucked into an NLE computer, but images shot in digital video can skip this process since they are already digital. The ease and relative lack of expense of DV and NLE are changing the face of post-production and facilitating work by independent video producers-such as the people behind The Blair Witch Project (1999) and Time Code (2000).

Fig. 2

Fig. 3

Fig. 4

The Continuity Editing System

In section 2 we discussed Hollywood classicism as the major narrative system in theatrical film. Accompanying this narrative structure is a particular approach to editing that has come to be known as continuity editing. It operates to create a continuity of space and time out of the fragments of scenes that are contained in individual shots. It is also known as invisible editing because it does not call attention to itself. Cuts are not noticeable because the shots are arranged in an order that effectively supports the progression of the story. If the editing functions correctly, we concentrate on the story and don't notice the technique that is used to construct it. Thus, the editing is done according to the logic of the narrative.

There are many ways to edit a story, but Hollywood classicism evolved a set of conventions that constitute the continuity system. The continuity editing system matches classicism's narrative coherence with continuities of space and time. Shots are arranged so that the spectator always has a clear sense of where the characters are and when the shot is happening-excepting narratives that begin ambiguously (e.g., murder mysteries) and clarify the "where" and "when" later. This spatial and temporal coherence is particularly crucial in individual scenes of a movie.

A scene is the smallest piece of the narrative action. Usually it takes place in one location (continuous space), at one particular time (continuous time). When the location and/or time frame change, the scene is customarily over and a new one begins. To best understand the continuity system, we will examine how it constructs spatial and temporal continuity within individual scenes.

How these scenes then fit together with one another in a narrative structure is discussed in section 2.

Spatial Continuity. In the classical scene the space is oriented around an axis of action. To understand how this axis functions, consider Fig. 5, an overhead view of a rudimentary two-character scene. Let's say that the action of this scene is Brent and Lilly talking to one another in a cafeteria. The axis, or line of action, then, runs through the two of them. The continuity system dictates that cameras remain on one side of that axis. Note the arc in Fig. 5 that defines the area in which the camera may be placed. If you recall your high school geometry, you'll recognize that this arc describes 180°. Since the cameras may be positioned only within the 180° arc, this editing principle has come to be known as the 180° rule.

The 180° rule helps preserve spatial continuity because it ensures that there will be a similar background behind the actors while cutting from one to the other. The cafeteria setting that is behind Brent and Lilly recurs from shot to shot and helps confirm our sense of the space of the room. A shot from the other side of the axis (position X) would reveal a portion of the cafeteria that had not been seen before, and thus might contain spatial surprises or cause disorientation.

Fig. 5

More important than similar backgrounds, however, is the way in which the 180° rule maintains screen direction. In the classical system, the conventional wisdom is that if characters are looking or moving to the right of the screen in shot one, then they should be looking or moving in the same direction in shot two. To cut from camera A to camera X (Fig. 5) would break the 180° rule and violate screen direction. In a shot from camera A, Lilly is looking screen left. If the director had cut to a shot of her from position X, Lilly would suddenly be looking screen right. Even though the actor herself had not changed position, the change in camera angle would make her seem to have changed direction. This is further illustrated by camera position B. A cut from Brent (camera B) to Lilly from the hypothetical X position would make it appear as if they were both looking to the right, instead of toward one another.

Breaking the 180° rule would confuse the spatial relationship between these two characters.

Maintaining screen direction is also important to action scenes filmed outdoors. If directors are not careful about screen direction, they will wind up with car chases in which the vehicles appear to be moving toward each other rather than one pursuing the other. And antagonists in confrontational scenes might appear to be running in the same direction rather than challenging one another.

There are, of course, ways of bending or getting around the 180° rule, but the basic principle of preserving screen direction remains fundamental to the classical construction of space. For this reason, the continuity system is also known as the 180° system.
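The rule itself is simple plane geometry: the axis of action is the line through the two actors, and every camera setup must fall on the same side of it. This can be sketched in Python using the sign of a 2-D cross product; the floor-plan coordinates below are invented for illustration, not taken from Fig. 5:

```python
def side_of_axis(actor1, actor2, camera):
    """Which side of the axis of action (the line through the two actors)
    the camera sits on: +1 for one side, -1 for the other, 0 if on the line."""
    (x1, y1), (x2, y2), (cx, cy) = actor1, actor2, camera
    cross = (x2 - x1) * (cy - y1) - (y2 - y1) * (cx - x1)
    return 1 if cross > 0 else -1 if cross < 0 else 0

def obeys_180_rule(actor1, actor2, cameras):
    """True if every camera stays within one 180-degree arc, i.e. no two
    setups fall on opposite sides of the axis of action."""
    sides = {side_of_axis(actor1, actor2, cam) for cam in cameras}
    return len(sides - {0}) <= 1

# Hypothetical coordinates: Brent and Lilly face each other along y = 0.
brent, lilly = (0, 0), (10, 0)
cam_a, cam_b = (3, 5), (7, 4)   # both within the permitted arc
cam_x = (5, -5)                 # the forbidden side of the axis

assert obeys_180_rule(brent, lilly, [cam_a, cam_b])
assert not obeys_180_rule(brent, lilly, [cam_a, cam_b, cam_x])
```

Crossing to camera X flips the sign of the cross product, which is the algebraic version of the reversal of screen direction described above.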

Built on the 180° rule is a set of conventions governing the editing of a scene. Although these conventions were more strictly adhered to in theatrical film during the 1930s and 1940s than they are on television today, there are several that still persist. Some of the most prevalent include:

• The establishing shot

• The shot-counter shot editing pattern

• The re-establishing shot

• The match cut-including the match-on-action and the eyeline match

• The prohibition against the jump cut

This may best be illustrated by breaking down a simple scene into individual shots. In Fig. 6, the basic camera positions of a Northern Exposure scene are diagrammed. While examining the frame captures from this scene, keep in mind that this was a single-camera (film) production. That is, multiple cameras were not used. Just one camera was on the set at the time of filming.

Fig. 6

The first shot of a classical scene is typically a long shot that shows the entire area and the characters in it, as in the long shot of Maggie and Joel in Fig. 8 (camera position A), preceded by an exterior shot of her cabin (Fig. 7). This establishing shot introduces the space and the narrative components of the scene: Maggie, Joel, her cabin, a dinner cooked by her. In a sense, the establishing shot repeats the exposition of the narrative, presenting specific characters to us once again. If the establishing shot is from a very great distance, it may be followed by another establishing shot that shows the characters clearly in a medium shot or medium long shot.

From there the scene typically develops some sort of alternating pattern, especially if it is a conversation scene between two people. Thus, shots of Maggie are alternated with shots of Joel, depending on who is speaking or what their narrative importance is at a particular point (camera positions B and C, Figs. 11 and 12). Note that once again the 180° rule is adhered to, as the cameras remain on one side of the axis of action. Note also that the angles of positions B and C crisscross each other, rather than being aimed at Joel and Maggie from positions D or E. These latter two positions do not violate the 180° rule, but positions B and C are preferred in the continuity system for two reasons.

First, these angles show more of the characters' faces, giving us a three-quarter view rather than a profile. We look into their faces without looking directly into their eyes and breaking the taboo against actors looking into the camera lens and at the viewer. Second, since we see Joel's shoulder in Maggie's shot (Fig. 11) and vice versa (Fig. 12), the space that the two share is reconfirmed. We know where Maggie is in relationship to Joel and where he is in relationship to her.

Since shots such as C in Fig. 6 are said to be the counter or reverse angle of shots such as B, this editing convention goes by the name shot-counter shot or shot-reverse shot. Shot-counter shot is probably the most common editing pattern in both single-camera (such as Northern Exposure) and multiple-camera productions (e.g., soap operas). Once shot-counter shot has been used to detail the action of a scene, there is often a cut back to a longer view of the space. This re-establishing shot shows us once again which characters are involved and where they are located. It may also be used as a transitional device, showing us a broader area so that the characters may move into it or another character may join them. Often it is immediately followed by another series of shot-reverse shots.

The Northern Exposure scene does not contain this type of re-establishing shot, but provides a variation of it. After a series of 15 shots in fairly tight close-up (framed as in Figs. 13 and 14), the camera cuts back to a medium close-up (Fig. 17) as the tone of Joel and Maggie's conversation shifts. The scene is then played at medium close-up for seven shots (Figs. 17-23), as Joel and Maggie drift apart emotionally. Just when Maggie is most disenchanted with Joel (Fig. 24), he compliments her and their intimacy is regained. This is marked in the framing with a tighter shot of Joel (Fig. 25), as he raises his glass to toast her. She reciprocates his intimacy and is also framed tighter (Fig. 26). After one more close-up of Joel (Fig. 27), the camera cuts to the original medium shot of the two of them (Fig. 28; compare with Fig. 8), which tracks back and out the window (Fig. 29).

Figs. 7-12

Figs. 13-18

Figs. 19-24

Figs. 25-30

Fig. 31

Thus the framing has gone from medium shot to medium close-up to close-up, coming closer to the characters as the scene intensifies. But it does not remain at close-up. The camera cuts back to medium close-up and then returns to close-up before ending the scene with a track backward from a medium shot.

The key to any classically edited scene is variation, closer and farther as the narrative logic dictates.

Two other editing devices are among those used to maintain space in the continuity system: the match cut and the point-of-view or subjective shot.

In a match cut, the space and time of one shot fits that of the preceding shot. One shot "matches" the next and thereby makes the editing less noticeable.

Matching may be achieved in several ways. Two of the most common are the match on action and the eyeline match.

In a match-on-action cut, an activity is continued from one shot to the next. At the end of shot two in the Northern Exposure scene, Maggie begins to sit down (Fig. 9); at the start of the next shot she continues that movement (Fig. 10). The editor matches the action from one shot to the next, placing the cut in the midst of it. This, in effect, conceals the cut because we are drawn from one shot to the next by the action. We concentrate on Maggie's movement, and the cut becomes "invisible." We probably don't even notice the vanishing dishcloth.

An eyeline match begins with a character looking in a direction that is motivated by the narrative. For instance, in L.A. Law (1986-94), legal boardroom scenes are edited based on the looks of the characters. Jonathan looks in a specific direction in one shot (Fig. 32) and the editor uses that look as a signal to cut to Leland (Fig. 33), toward whom Jonathan had glanced. Jonathan's eyeline provides the motivation for the cut and impels the viewer toward the new space.

In an eyeline match such as this, the second shot is not from the perspective of the person who is looking, but rather merely shows the area of the room in the eyeline's general direction. The shot of Leland is from a camera position in the middle of the table, not from the chair where Jonathan was sitting, even though his glance cued the shot of Leland.

Fig. 32-Fig. 33

A shot made when the camera "looks" from a character's perspective is known as a point-of-view shot--a type of framing in which the camera is positioned physically close to a character's point of view.

The shots of Joel and Maggie in Figs. 17-23, for example, are all point-of-view shots. In each, we could see from Joel's or Maggie's point of view. If the camera were positioned as if it were inside the character's head, looking out his or her eyes, then it would be known as a subjective shot. Frequently, point-of-view and subjective shots are incorporated in a simple editing pattern: in shot one someone looks and in shot two we see what he or she is looking at from his or her perspective. In Fig. 34, from another Northern Exposure scene, Maggie draws Joel's attention to his brother, Jules. Joel turns and looks in the first shot.

The camera cuts to a close-up of the brother in shot two that is taken from Joel's perspective (Fig. 35). Subjective shots such as this are very similar to eyeline matches, but the eyeline cut does not go to a shot from the character's perspective.

Fig. 34

Fig. 35

Fig. 36

Fig. 37

The opposite of a match cut is a jump cut, which results in a disruptive gap in space and/or time, so that something seems to be missing. Jump cuts were regarded as mistakes in classical editing, but they were made fashionable in the 1960s films of Jean-Luc Godard and other European directors. Godard's first feature film, Breathless (1960), features numerous jump cuts, as is illustrated in Figs. 36 and 37. The camera maintains similar framing from one shot to the next while the woman's position shifts abruptly and a mirror appears in her hand.

Today, jump cuts similar to this are quite common in music videos and commercials, and even find their way into more mainstream narrative productions.

Homicide: Life on the Street (1993-99) is peppered with them (e.g., Figs. 38 and 39, which are taken from two shots that were edited together). But then, Homicide is not a conventionally edited show. In most narrative television programs, match cuts remain the norm and jump cuts are generally prohibited.

Fig. 38

Fig. 39

Sample Decoupage. The best way to understand editing is to take a scene and work backward toward the shooting script, thereby deconstructing the scene. The process of breaking down a scene into its constituent parts is known as decoupage, the French word for cutting things apart.

In our discussion of Northern Exposure we have created a sample decoupage.

You may want to perform a similar exercise with a videotape of a short scene of your own choosing. Watch the tape several times with the sound turned off.

Try to diagram the set and each of the camera positions from a bird's-eye view.

Draw a shot-by-shot storyboard of the scene. Ask yourself these questions:

1. How is the scene's space, the area in which the action takes place, introduced to the viewer? Does an establishing shot occur at the start of the scene (or later in it)?

2. What is the narrative purpose or function of each shot? What does each shot communicate to the viewer about the story?

3. Why was each shot taken from the camera position that it was? Do these angles adhere to the 180° rule? Is screen direction maintained? If not, why is the viewer not disoriented? Or if the space is ambiguous, what narrative purpose does that serve?

4. If the characters move around, how does the editing (or camera movement) create transitions from one area to another?

5. Is an alternating editing pattern used? Is shot-counter shot used?

6. How does the camera relate to the character's perspective? Are there point-of-view or subjective shots? If so, how are those shots cued or marked? That is, what tells us that they are subjective or point-of-view shots?

7. Is match-on-action used? Are there jump cuts?

8. How does the last shot of the scene bring it to a conclusion?

9. In sum, how does the organization of space by editing support the narrative?

Temporal Continuity. Within individual scenes, story time and screen time are often the same. Five minutes of story usually takes 5 minutes on screen. Time is continuous. Shot two is presumed to instantaneously follow shot one. Transitions from one scene to the next, however, need not be continuous. If the story time of one scene always immediately followed that of another, then screen time would always be exactly the same as story time. A story that lasted 2 days would take 2 days to watch on the screen. Obviously, story time and screen time are seldom equivalent on television. The latter is most commonly much shorter than the former. There are many gaps, or ellipses, in screen time. In addition, screen time may not be in the same chronological order as story time.

Through flashbacks, for example, an action from the story past is presented in the screen present. So, both time's duration and its order may be manipulated in the transition from one scene to the next.

To shorten story time or change its order without confusing the viewer, classical editing has developed a collection of scene-to-scene transitions that break the continuity of time in conventionalized ways, thus avoiding viewer disorientation. These transitions are marked by simple special effects that are used instead of a regular cut.

• The fade. A fade-out gradually darkens the image until the screen is black; a fade-in starts in black and gradually illuminates the image. The fade-out of one scene and fade-in to the next is often used to mark a substantial change in time.

• The dissolve. When one shot dissolves into the next, the first shot fades out at the same time the next shot fades in, so that the two images briefly overlap one another. The conclusion of the Northern Exposure scene illustrates this. The final shot is a long shot of Joel and Maggie, as seen through the window of her cabin (Fig. 29). From there it dissolves to a close-up of Joel's face in his own bed (Fig. 31). The two shots both appear on screen for a short period of time, overlapping one another (Fig. 30). Here the dissolve serves to mark the transition from Joel's dream state to "reality." Dissolves are more conventionally used to signal a passage of time; and the slower the dissolve, the more time has passed.

• The wipe. Imagine a windshield wiper moving across the frame. As it moves, it wipes one image off the screen and another on to take its place. This is the simplest form of a wipe, but wipes can be done in a huge variety of patterns. Wipes may indicate a change in time, but they are also used for an instantaneous change in space.
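Fades and dissolves are, at bottom, simple arithmetic on pixel brightness. A minimal Python sketch, using a handful of invented brightness values in place of real frames: a dissolve weights the outgoing shot by (1 - t) and the incoming shot by t, while a fade-out scales the image toward black.

```python
def dissolve(frame_a, frame_b, t):
    """Crossfade two frames: t = 0 is all shot A, t = 1 is all shot B,
    and in between the two images overlap."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def fade_out(frame, t):
    """Fade to black: brightness shrinks toward zero as t goes 0 -> 1."""
    return [(1 - t) * p for p in frame]

# Invented brightness values standing in for the same pixels in two shots.
shot_a = [200, 180, 160]
shot_b = [20, 40, 60]

assert dissolve(shot_a, shot_b, 0.0) == shot_a            # pure shot A
assert dissolve(shot_a, shot_b, 0.5) == [110, 110, 110]   # midpoint overlap
assert dissolve(shot_a, shot_b, 1.0) == shot_b            # pure shot B
assert fade_out(shot_a, 1.0) == [0, 0, 0]                 # fully faded to black
```

A fade-out followed by a fade-in is just this scaling applied to each scene in turn, with black in between; the dissolve skips the black and overlaps the two scalings.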

In addition to these transitional devices, classical editors also use special effects to indicate flashbacks. In films of the 1930s and 1940s, the image may become blurry or wavy as the story slips into the past (or into a dream). The special effect signals to the viewer, "We're moving into the past now." During the prime of the classical era, changes in time were inevitably clearly marked, and these techniques continue to be used (as is suggested by the dissolve in Northern Exposure). Fades, dissolves, and wipes were part of the stock-in-trade of the film editor during the cinema's classical era, and they are still evident in today's single-camera productions. Historically, however, narrative filmmakers have used these devices less and less. Initially, this was due in large part to the influence of 1960s European filmmakers, who accelerated the pace of their films through jump cuts and ambiguous straight cuts (no special effects) when shifting into the past or into dream states. The jump cuts in Godard's Breathless revolutionized classical editing, breaking many of its most fundamental "rules." And Luis Buñuel's films enter and exit dream states and flashbacks without signaling them to the viewer in any way, creating a bizarre, unstable world.

Classical editing is not a static phenomenon. It changes according to technological developments, aesthetic fashions, and economic imperatives. Current fashion favors straight cuts in narrative, single-camera productions; but fades, dissolves, and wipes are still in evidence. Indeed, the fade-out and fade-in are television's favorite transition from narrative segment to commercial break and back. In this case, the fade-out and fade-in signal the transition from one type of television material (fiction) to another (commercial).

Non-narrative Editing

Not all television material that is shot with one camera tells a story. There are single-camera commercials, music videos, and news segments that do not present a narrative in the conventional sense of the term. They have developed different editing systems for their particular functions. Some bear the legacy of continuity editing, while others depart from it. The specifics of editing for music videos and commercials are discussed in sections 10 and 12, respectively, but we will here consider some aspects of editing for television news.

News Editing. Although the in-studio portion of the nightly newscast is shot using multiple cameras, most stories filed by individual reporters are shot in the field with a single video camera. The editing of these stories, or packages (ranging in length from 80 to 105 seconds), follows conventions particular to the way that the news translates events of historical reality into television material (see section 4). The conventional news story contains:

• The reporter's opening lead.

• A first sound bite, consisting of a short piece of audio, usually synchronized to image, that was recorded on the scene: for example, the mayor's comment on a new zoning regulation or a bereaved father's sobbing.

• The reporter's transition or bridge between story elements.

• A second sound bite, often one that presents an opinion contrasting with that in the first sound bite.

• The reporter's concluding stand-up, where he or she stands before a site significant to the story and summarizes it.

This editing scheme was inherited, with variations, from print journalism and a specific concept of how information from historical reality should be organized. The reporter typically begins by piquing our interest, implicitly posing questions about a topic or event. The sound bites provide answers and fill in information. And, to comply with conventional structures of journalistic "balance" (inscribed in official codes of ethics), two sound bites are usually provided. One argues pro, the other con, especially on controversial issues. The news often structures information in this binary fashion: us/them, pro/con, yes/no, left/right, on/off. The reporters then come to represent the middle ground, with their concluding stand-ups serving to synthesize the opposing perspectives. Thus, the editing pattern reflects the ideological structure of news reporting.
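Because the news package is so rigidly conventionalized, its structure can be sketched as a simple checklist. The element names and the 80-to-105-second range below come from the text; the individual timings in the sample package are invented for illustration:

```python
# Hypothetical sketch of the conventional news-package structure.
# The five-element order and the 80-105 second total come from the text;
# the per-element durations below are made up for illustration.

CONVENTIONAL_ORDER = ["lead", "sound bite 1", "bridge", "sound bite 2", "stand-up"]

def is_conventional_package(elements):
    """Check that a package follows the standard order and total length.

    `elements` is a list of (name, seconds) pairs.
    """
    names = [name for name, _ in elements]
    total = sum(seconds for _, seconds in elements)
    return names == CONVENTIONAL_ORDER and 80 <= total <= 105

package = [
    ("lead", 15),           # reporter piques our interest
    ("sound bite 1", 20),   # e.g., the mayor, arguing pro
    ("bridge", 10),         # reporter's transition
    ("sound bite 2", 20),   # a contrasting, con perspective
    ("stand-up", 25),       # reporter synthesizes the two sides
]
```

The binary pro/con structure is visible in the two opposed sound bites, with the reporter's bridge and stand-up occupying the "middle ground" the text describes.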



Top 10 Prime-Time Shows: 1998-99

All of the following are multiple-camera productions except for ER, Touched By an Angel, and CBS Sunday Movie.

1. ER

2. Friends

3. Frasier

4. NFL Monday Night Football

5. Jesse

5. Veronica's Closet

7. 60 Minutes

8. Touched By an Angel

9. CBS Sunday Movie

10. 20/20

Source: Nielsen Media Research.



Although a good deal of what we see on television has been produced using single-camera production, it would be wrong to assume that this mode dominates TV in the same way that it dominates theatrical film. The opposite is true. It would be impossible to calculate exactly, but roughly three quarters of today's television shows are produced using the multiple-camera mode. Of the top 10 most popular prime-time shows in the 1998-99 season, only three were shot in single-camera mode (Table 1). This doesn't even take into consideration non-prime-time programs such as daytime soap operas, game shows, and late-night talk shows--all of which are also done in multiple-camera. Obviously, multiple-camera production is the norm on broadcast television, as it has been since the days of television's live broadcasts--virtually all of which were also multiple-camera productions (Table 2).

It is tempting to assume that since multiple-camera shooting is less expensive and faster to produce than single-camera, it must therefore be a cheap, slipshod imitation of single-camera shooting. This is the aesthetic hierarchy of style that television producers, critics, and even some viewers themselves presume. In this view, multiple-camera is an inferior mode, a necessary evil. However, ranking one mode of production over another is essentially a futile exercise. One mode is not so much better or worse than another as it is just different. Clearly, there have been outstanding, even "artistic," achievements in both modes. Instead of getting snarled in aesthetic snobbery, it is more important to discuss the differences between the two and understand how those differences may affect television's production of meaning. In short, how do the different modes of production influence the meanings that TV conveys to the viewer? And what principles of space and time construction do they share?



Top 10 Prime-Time Shows: 1950-51

Of the following, all but the Westerns (The Lone Ranger and Hopalong Cassidy) and Fireside Theatre were telecast live using multiple-camera technology.

1. Texaco Star Theater

2. Fireside Theatre

3. Philco TV Playhouse

4. Your Show of Shows

5. The Colgate Comedy Hour

6. Gillette Cavalcade of Sports

7. The Lone Ranger

8. Arthur Godfrey's Talent Scouts

9. Hopalong Cassidy

10. Mama

Note: 1950-51 was the first season during which the A.C. Nielsen Company (which became Nielsen Media Research) rated programs.


Stages of Production

Pre-production. Narrative programs such as soap operas and sitcoms that utilize multiple-camera production start from scripts much as single-camera productions do, but these scripts are less image-oriented and initially indicate no camera directions at all. Sitcom and soap opera scripts consist almost entirely of dialogue, with wide margins so that the director may write in camera directions; a page from a script for No Business of Yours (an unproduced sitcom) is shown in Fig. 40. Storyboards are seldom, if ever, created for these programs.

This type of scripting is emblematic of the emphasis on dialogue in multiple-camera programs. The words come first; the images are tailored to fit them.

Non-narrative programs (game shows, talk shows, etc.) have even less written preparation. Instead, they rely on a specific structure and a formalized opening and closing. Although the hosts may have lists of questions or other prepared materials, they and the participants are presumed to be speaking in their own voices, rather than the voice of a scriptwriter. This adds to the program's impression of improvisation.

Production. A multiple-camera production is not dependent on a specific technological medium. That is, it may be shot on film, on video, or even broadcast live. Seinfeld was filmed; Roseanne was videotaped. All talk shows and game shows are videotaped. Some local news programs and Saturday Night Live are telecast live. If a program is filmed, the editing and the addition of music and sound effects must necessarily come later, after the film stock has been digitized and imported into a nonlinear editor (NLE). If a program is videotaped, there are the options of editing later on an NLE or while it is being recorded live-on-tape. (Obviously, a live program must be edited while it is telecast.)

Time constraints play a factor here. Programs that are broadcast daily, such as soap operas and game shows, seldom have the time for extensive editing in post-production. Weekly programs, however, may have that luxury.

The choice of film or video is, once again, dependent in part on technology, economics, and aesthetics. Since the technology of videotape was not made available until 1956, there were originally only two technological choices for recording a multiple-camera program: either film live broadcasts on kinescope by pointing a motion-picture camera at a TV screen, or originally shoot the program on film (and then broadcast the edited film later). Early-1950s programs such as Your Show of Shows (1950-54) and The Jack Benny Show (1950-65, 1977) were recorded as kinescopes. In 1951, the producers of I Love Lucy (1951-59, 1961) made the technological choice to shoot on film instead of broadcasting live. Although this involved more expense up front than kinescopes did, it made economic sense when it came time to syndicate the program. A filmed original has several benefits over a kinescope in the syndication process: it looks appreciably better, and it is easier and quicker to prepare, since all the shooting, processing, and editing of the film has already been done for the first broadcast. Since producers make much more money from syndication than they do from a program's original run, it made good economic sense for I Love Lucy to choose film over live broadcasting and kinescope. Moreover, its enormous success in syndication encouraged other sitcoms in the 1950s to record on film.


Fig. 40


Sep 13 1993 (George, Allen, Mr. Franklin)


(2/J) ACT TWO

SCENE J

INT. BARBER SHOP - DAY

GEORGE CUTS MR. FRANKLIN'S HAIR. ALLEN ENTERS.

GEORGE
Well, now, what do you know? Look who's here!

ALLEN
Hi, Mr. Shearer.

GEORGE
(LOUDLY) Mr. Franklin, you remember Allen Scott? Used to work summers next door at the grocery?

MR. FRANKLIN IS STARTLED AWAKE. GEORGE INTERPRETS THIS TO BE A NOD.

GEORGE (CONT., TO ALLEN)
Out of school and everything. Did you get the graduation present from Winnie and me?

ALLEN
Sure did. Thanks. I appreciate it.


After the introduction of videotape, the economic incentive for multiple-camera productions to shoot on film no longer held true. A videotape record of a live broadcast may be made, and that videotape may be used in syndication. This videotape--unlike a kinescope--looks just as good as the original broadcast.

Today, producers who shoot film in a multiple-camera setup do so primarily for aesthetic reasons. Images in live broadcasts and on videotape are certainly better than the old kinescopes, but film still holds a slight edge over video in terms of visual quality. However, the introduction of high-definition TV may spell the end for film's visual superiority.

Narrative programs that are filmed and those videotaped narrative programs that are edited in post-production follow a similar production procedure. The actors rehearse individual scenes off the set, then continue rehearsing on the set, with the cameras. The director maps out the positions for the actors and the two to four cameras that will record a scene. The camera operators are often given lists of their positions relative to the scene's dialogue. Finally, an audience (if any) is brought into the studio (see Figs. 5.6, 5.7). The episode is performed one scene at a time, with 15- to 20-minute breaks between the scenes--during which, at sitcom filmings/tapings, a comedian keeps the audience amused. One major difference between single-camera and multiple-camera shooting is that, in multiple-camera, the actors always perform the scenes straight through, without interruption, unless a mistake is made. Their performance is not fragmented, as it is in single-camera production. Each scene is recorded at least twice and, if a single line or camera position is missed, that individual shot may be filmed in isolation afterwards.

Further, in multiple-camera sitcoms, the scenes are normally recorded in the order in which they will appear in the finished program--in contrast, once again, to single-camera productions, which are frequently shot out of story order. This is done largely to help the studio audience follow the story and respond to it appropriately. The audience's laughter and applause are recorded by microphones placed above them, and flashing "applause" signs channel their response, which is recorded for the program's laugh track. The laugh track is augmented in post-production with additional recorded laughter and applause, a process known in the industry as sweetening.

The entire process of recording one episode of a half-hour sitcom takes about 3 to 4 hours--if all goes as planned.

Live-on-tape productions, such as soap operas, are similar in their preparation to those edited in post-production, but the recording process differs in a few ways. Once the videotape starts rolling on a live-on-tape production, it seldom stops. Directors use a switcher to change between cameras as the scene is performed. The shots are all planned in advance, but the practice of switching shots is a bit loose; the cuts don't always occur at the conventionally appropriate moment. In addition to the switching/cutting executed concurrently with the actors' performance, the scene's music and sound effects are often laid in at the same time, though they may be fine-tuned later. Sound technicians prepare the appropriate doorbells and phone rings and thunderclaps and then insert them when called for by the director. All of this heightens the impression that the scene presented is occurring "live" before the cameras, that the cameras just happened to be there to capture this event--hence the term live-on-tape. The resulting performance is quite similar to that in live theater.

In soap operas, individual scenes are not shot like sitcoms, in the order of appearance in the final program. Since soap operas have no studio audience to consider, their scenes are shot in the fashion most efficient for the production. Normally this means that the order is determined by which sets are being used on a particular day. First, all the scenes that appear on one set will be shot--regardless of where they appear in the final program. Next, all the scenes on another set will be done, and so on. This allows the technicians to light and prepare one set at a time, which is faster and cheaper than going back and forth between sets.

As we have seen, narrative programs made with multiple cameras may be either filmed or videotaped and, if taped, may either be switched during the production or edited afterward, in post-production. Non-narrative programs, however, have fewer production options. Studio news programs, game shows, and talk shows are always broadcast live or shot live-on-tape, and never shot on film. This is because of their need for immediacy (in the news) and/or economic efficiency (in game and talk shows). Participants in the latter do not speak from scripts; they extemporize. And, since these "actors" in non-narrative programs are improvising, the director must also improvise, editing on the fly. This further heightens the illusion of being broadcast live, even though most, if not all, such programs are on videotape.

Post-production. In multiple-camera programs, post-production (often simply called "post") varies from minimal touch-ups to full-scale assembly. Live-on-tape productions are virtually completed before they get to the post-production stage. But similar programs that have been recorded live, but not switched at the time of recording, must be compiled shot by shot. For instance, sitcoms often record whatever the three or four cameras are aimed at without editing it during the actual shoot. The editor of these programs, like the editor of single-camera productions, must create a continuity out of various discontinuous fragments--using an NLE system.

It might appear that sitcoms and the like would have a ready-made continuity, since the scenes were performed without interruption (except to correct mistakes) and the cameras rolled throughout. What we must recall, however, is that there are always several takes of each scene. The editor must choose the best version of each individual shot when assembling the final episode. Thus, shot one might be from the first take and shot two from the second or third.

The dialogue is usually the same from one take to the next, but actors' positions and expressions are not. Inevitably, this results in small discontinuities. In one Murphy Brown scene, for instance, TV producer Miles argues with his girlfriend, Audrey, and her former boyfriend, Colin. In one shot, Colin, on the far left of the frame, is holding a sandwich in his right hand (Fig. 41). The camera cuts to a reverse angle and instantaneously the sandwich has moved to his left hand (Fig. 42). Evidently, the editor selected these two shots from alternative takes of the same scene.

To hide continuity errors from the viewer, the editor of a multiple-camera production relies on editing principles derived from the single-camera 180° editing system (e.g., match cuts, eyeline matches, etc.). Also, the soundtrack that is created in post-production incorporates music, dubbed-in dialogue, sound effects, and laugh tracks to further smooth over discontinuities and channel our attention.

Fig. 41

Fig. 42

Narrative Editing: The Legacy of the Continuity System

It is striking how much multiple-camera editing of narrative scenes resembles that of single-camera editing. In particular, the 180° principle has always dominated the multiple-camera editing of fiction television. This is true in part because of the aesthetic precedent of the theatrical film. But it is also true for the simple, technologically based reason that to break the 180° rule and place the camera on the "wrong" side of the axis of action would reveal the other cameras, the technicians, and the bare studio walls (position X in Fig. 5). Obviously, violating this aspect of the 180° system is not even an option in television studio production.

However, acceptance of the continuity editing system in multiple-camera production goes beyond maintaining screen direction due to an ad hoc adherence to the 180° rule. It extends to the single-camera mode's organization of screen space. As you read through the following description of a typical scene development, you might refer back to the description of single-camera space.

Note also that the following applies to all narrative programs shot in multiple-camera, whether they are filmed or videotaped (or recorded live-on-tape). A scene commonly begins by introducing the space and the characters through an establishing shot that is either a long shot of the entire set and actors, or a camera movement that reveals them. On weekly or daily programs, however, establishing shots may be minimized or even eliminated because of the repetitive use of sets and our established familiarity with them. In any event, from there a conventionalized alternating pattern begins--back and forth between two characters. In conversation scenes--the foundation of narrative television--directors rely on close-ups in shot-counter shot to develop the main narrative action of a scene. After a shot-counter shot series, the scene often cuts to a slightly longer view as a transition to another space or to allow for the entrance of another character. Standard, single-camera devices for motivating space (match on action, eyeline matches, point-of-view shots, etc.) are included in the multiple-camera spatial orientation. Try watching a scene from your favorite soap opera with the sound turned off and see if it doesn't adhere to these conventions.

The differences between multiple-camera programs and single-camera ones are very subtle and may not be immediately noticeable to viewers. But these differences do occur, and they do inform our experience of television. The main difference between the two modes is how action is represented. Although multiple-camera shooting arranges space similarly to the space of single-camera productions, the action within that space--the physical movement of the actors--is presented somewhat differently. In multiple-camera shooting, some action may be missed by the camera and wind up occurring out of sight, off frame, because the camera cannot control the action to the degree that it does in single-camera shooting. For example, in one scene from the multiple-camera production All My Children, the following two shots occur:

1. Medium close-up of Erica, over Adam's shoulder (Fig. 43). She pushes him down (Fig. 44) and is left standing alone in the frame at the end of the shot (Fig. 45).

2. Medium close-up of Adam, seated, stationary at the very beginning of the shot (Fig. 46).

Here, the camera operators had trouble keeping up with Adam's actions and consequently his fall happens off-screen. If this scene had been shot in single-camera mode, the fall would have been carefully staged and tightly controlled so that all the significant action was on-screen. Multiple-camera editing frequently leaves out "significant" action that single-camera editing would include.

Single-camera continuity editing might have used a match-on-action cut in this instance--editing these shots in the middle of Adam's fall, showing his action fully, and establishing his new position in the chair.

Fig. 43-Fig. 44

Fig. 45-Fig. 46

Small visual gaps such as this and other departures from the continuity editing system occur frequently in multiple-camera editing. What significance do they have? They contribute to the programs' illusion of "liveness." They make it seem as if the actors were making it up as they went along and the camera operators were struggling to keep up with their movements, as if the camera operators didn't know where the actors were going to go next. Of course, in reality they do know the actors' planned positions, and yet they cannot know exactly where the actors will move. In single-camera shooting the action is controlled precisely by the camera, bound by the limits of the frame. In multiple-camera shooting that control is subtly undermined. As a result, in their editing, multiple-camera narrative programs (soap operas and sitcoms, principally) come to resemble talk shows and game shows. The visual "looseness" of multiple-camera editing comes to signify "liveness" when compared to the controlled imagery of single-camera productions. The spatial orientation of the two modes is quite similar, but the movement of actors through that space is presented a bit differently.

Non-narrative Editing: Functional Principles

The non-narrative programs that are shot with several cameras in a television studio include, principally, game shows, talk shows, and the portions of news programs shot in the studio. (Sports programs and other outdoor events such as parades also use several cameras at once, but that is a specialized use of multiple-camera production.) These programs do not share the need of narrative programs to tell a story, but their approach to space is remarkably similar to that of narrative programs. Typically, their sets are introduced with establishing long shots, which are followed by closer framings and inevitably (in conversation-oriented genres such as talk shows) wind up in shot-counter shot patterns. Game shows also follow this pattern of alternation, crosscutting between the space of the contestants and that of the host (Fig. 5-6). The mise-en-scene of non-narrative programs is quite distinct from narrative settings (see section 5), but the shot-to-shot organization of that mise-en-scene follows principles grounded in the continuity editing system.


In our consideration of editing on television, we have witnessed the pervasiveness of the continuity system. Although originally a method for editing theatrical films, its principles also underpin both of the major modes of production for television: single-camera and multiple-camera.

The continuity system functions, in a sense, to deceive us-to make us believe that the images passing before us compose one continuous flow, when actually they consist of many disruptions. Or, in other terms, one could say this system constructs a continuity of space and time. Many techniques are used to construct this continuity. The 180° rule maintains our sense of space and screen direction by keeping cameras on one side of an axis of action. Shot-reverse shot conventionally develops the action of a scene in alternating close-ups. Match cuts (especially matches-on-action and eyeline matches) and the basic point-of-view editing pattern motivate cuts and help prevent viewer disorientation.

Time on television is not always continuous. Indeed, gaps and ellipses are essential to narrative television if stories that take place over days or months are to be presented in half-hour, hour, or 2-hour time slots. Through editing, the duration and order of time may be manipulated. Within the continuity system, however, our understanding of time must always be consistent. We must be guided through any alteration of chronological order. Fades, for instance, are used to signal the passage of time from one scene to the next.

These principles and techniques of the continuity system are created in both single-camera and multiple-camera modes of production. An understanding of the stages of production--pre-production, production, and post-production--helps us see their subtle differences. The key distinction is that single-camera productions shoot scenes in discontinuous chunks, while multiple-camera ones (especially live-on-tape productions) allow scenes to be played out in their entirety while the cameras "capture" them. Even so, both modes of production must find ways to cope with discontinuity and disruption, and it is here that the continuity system's principles come into play, regardless of the actual production method used to create the images.

Non-narrative television is not as closely tied to the continuity system as narrative programs are, yet it does bear the legacy of continuity-style editing. Establishing shots, shot-reverse shot editing patterns, and the like are as evident on talk shows and game shows as they are on narrative programs.

The power of editing, the ability to alter and rearrange space and time, is a component of television that is taken for granted. Its "invisibility" should not blind us, however, to its potency.


Editing style and mode of production are discussed in many of the readings suggested at the end of section 5.

The evolution of single-camera production is comprehensively described in David Bordwell, Janet Staiger, and Kristin Thompson, The Classical Hollywood Cinema: Film Style and Mode of Production to 1960 (New York: Columbia University Press, 1985). John Ellis, Visible Fictions (New York: Routledge, 1992) is not as exhaustive, but it does begin the work of analyzing the multiple-camera mode of production. Few other sources make such an attempt.

In the cinema, the principles of editing have long been argued. This stems from the desire to define film in terms of editing, which was at the heart of the very first theories of the cinema. These initial forays into film theory were carried out in the 1920s by filmmakers Eisenstein, Kuleshov, and Pudovkin.

See, for example, Sergei Eisenstein, Film Form: Essays in Film Theory, edited and translated by Jay Leyda (New York: Harcourt, Brace & World, 1949); Lev Kuleshov, Kuleshov on Film, edited and translated by Ronald Levaco (Berkeley: University of California Press, 1974); and V. I. Pudovkin, Film Technique and Film Acting, translated by Ivor Montagu (New York: Bonanza, 1949). Editing has also been a central component of debates within film studies over the position of the spectator, as can be seen in Jean-Louis Baudry, "Ideological Effects of the Basic Cinematographic Apparatus," in Narrative, Apparatus, Ideology, ed. Philip Rosen (New York: Columbia University Press, 1986), 286-98; Nick Browne, "The Spectator-in-the-Text: The Rhetoric of Stagecoach," in Rosen, 102-19; and Daniel Dayan, "The Tutor-Code of Classical Cinema," in Movies and Methods, ed. Bill Nichols (Berkeley: University of California Press, 1976). Kaja Silverman, The Subject of Semiotics (New York: Oxford University Press, 1983) reviews this debate.

Thomas A. Ohanian, Digital Nonlinear Editing: Editing Film and Video on the Desktop (Boston: Focal Press, 1998); Thomas A. Ohanian and Michael E. Phillips, Digital Filmmaking: The Changing Art and Craft of Making Motion Pictures (Boston: Focal Press, 2000); and Michael Silbergleid and Mark J. Pescatore, eds., Guide to Digital Television (New York: Miller Freeman PSN, 2000) approach television editing from a hands-on perspective--explaining editing principles and the operation of editing systems. Ken Dancyger, The Technique of Film and Video Editing, 2nd ed. (Boston: Focal Press, 1996) offers a broad historical and critical overview of film editing that includes a limited section on editing for television.

Despite the obvious impact of editing on television style, television criticism has been slow to articulate its significance. However, this work has been begun in Jeremy G. Butler, "Notes on the Soap Opera Apparatus: Televisual Style and As the World Turns," Cinema Journal 25, no. 3 (Spring 1986): 53-70; and the previously cited Herbert Zettl, Sight Sound Motion.


1. David Bordwell, Janet Staiger, and Kristin Thompson, The Classical Hollywood Cinema: Film Style and Mode of Production to 1960 (New York: Columbia University Press, 1985), 243-244.

2. Many people use "point-of-view" and "subjective" interchangeably. Here, however, we will distinguish between subjective shots from within the head of the character and point-of-view shots that are nearby, but not through the character's eyes.
