Color Television--Television Service Manual (1984)




The development of color television was governed by two requirements: The color signal had to be inserted within the 6 MHz-channel bandwidth; and the color signal had to be compatible with the monochrome (black-and-white) television system.

In other words, the signal must be such that it can be received on a monochrome receiver in black and white without any modification of the receiver. In addition, the part of the signal that conveys color must be transmitted in such a manner that it does not appreciably affect the quality or type of picture reproduced by a monochrome receiver tuned to the color signal.

The signal must represent the scene according to its color, and the colors must be transmitted in terms of the three chosen primaries: red, green, and blue. By some means, the three physical aspects of brightness, hue, and saturation must be conveyed by the signal for each color in the scene because the eye sees color in terms of these aspects. Figure 14-1 shows the basic plan of the color-TV system.

COLOR-TV CAMERA

In order to make the color system compatible, the specifications of the standard monochrome signal had to be retained. This meant that such things as the channel width of 6 MHz, the aspect ratio of 4:3, the number of scanning lines per frame (525), the horizontal- and vertical-scanning rates (15,750 and 60 hertz, respectively), and the video bandwidth of 4.25 MHz had to remain the same within narrow tolerances. To these basic specifications, provisions had to be added to convey the color elements by means of a signal that will hereafter be known as the chrominance signal.


Fig. 14-1. Basic plan of the color-TV system.


Fig. 14-2. RF frequency-response curve showing relations of color, sound, and picture carriers.

Even if the same specifications were retained, the color system would not be compatible if the composite color signal did not contain a signal that would convey brightness. To satisfy this requirement, a signal that is representative of the brightness of the colors in the scene must be transmitted together with the chrominance signal. This brightness signal is very much the same as the video signal used in standard monochrome transmission, and it will be referred to hereafter as the luminance signal. It is transmitted by amplitude modulation of the picture carrier in such a manner that an increase in brightness corresponds to a decrease in the amplitude of the carrier envelope.

Putting a chrominance signal in the allotted channel of 6 MHz created a difficult problem since this chrominance signal had to be transmitted along with the luminance signal and had to be included without objectionable interference to the luminance signal. This was accomplished by proper placement of the chrominance signal within the band of video frequencies. The color-signal carrier has the relations shown in Fig. 14-2 to the picture- and sound-signal carriers.

The chrominance and luminance signals are included within the 4.25-MHz video band by an interleaving process. This process is possible because the energy of the luminance signal concentrates at specific intervals in the frequency spectrum. The spaces between these intervals are relatively void of energy, and the energy of the chrominance signal can be caused to concentrate in these spaces, as depicted in Fig. 14-3. The chrominance signal is conveyed by means of a subcarrier. The frequency of this subcarrier was chosen so that its energy would interleave with the energy of the luminance signal. The energy of each of these signals is conveyed by the video carrier. The subcarrier frequency is high enough in the video band that the subcarrier sidebands, when they are limited to a certain bandwidth, do not interfere with the reproduction of the luminance signal by a monochrome receiver.
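The relationship between the subcarrier frequency and the luminance energy clusters can be illustrated numerically. The short sketch below is an added illustration, not part of the manual; the multiple of 455 and the slightly offset color line rate of 15,734.264 Hz are assumed values taken from the NTSC standard.

```python
# Illustrative sketch (assumed values, not from the manual): the chrominance
# subcarrier is an odd multiple (455) of half the horizontal line rate, so its
# sideband energy falls midway between the luminance energy clusters.
line_rate_hz = 15_734.264                     # color line rate; monochrome nominal is 15,750 Hz
subcarrier_hz = (455 / 2) * line_rate_hz      # ~3.579545 MHz
print(f"chrominance subcarrier = {subcarrier_hz / 1e6:.6f} MHz")

# Luminance energy clusters near whole multiples of the line rate; the
# subcarrier lands half a line rate away from the nearest cluster below it.
cluster_below = int(subcarrier_hz // line_rate_hz)      # 227th harmonic of the line rate
offset_hz = subcarrier_hz - cluster_below * line_rate_hz
print(f"offset from nearest luminance cluster = {offset_hz:.1f} Hz "
      f"(half the line rate is {line_rate_hz / 2:.1f} Hz)")
```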


Fig. 14-3. Placement of the color signal.

The chrominance signal must convey energy that represents the primary colors. Color could be transmitted by three separate signals, each representing a primary color, but a channel width of at least 12.75 MHz would then be required. Since this would make compatibility impossible, color had to be represented in some other manner in order to utilize the standard 6-MHz channel width.

Three signals-red, green, and blue-are obtained from the camera. Portions of these three signals are used to form the luminance signal, which leaves three signals that are referred to as color-difference signals. Two of these signals are proportionately mixed together to form two other signals that are used to modulate the chrominance subcarrier.

A method of modulation known as divided-carrier modulation may be employed in order to place two different signals upon the same carrier. The subcarrier is effectively split into two parts, and each portion of the subcarrier is modulated separately. Then the two portions are combined to form the resultant chrominance signal. The amplitude and phase of this signal vary in accordance with variations in the modulating signals. A change in phase of the chrominance signal represents a change in hue, and a change in amplitude represents a change in saturation.

A reference signal of the same frequency as the subcarrier frequency is transmitted in the composite color signal. This reference signal, called the color burst, has a fixed phase angle and is employed by the color receiver in order to detect properly the colors represented by the chrominance signal. Figure 14-4 shows the appearance of the color burst. An expanded display of the color burst obtained with a triggered-sweep oscilloscope is shown in Fig. 14-5.


Fig. 14-4. Color burst is placed on the back porch of the horizontal sync pulse.

In the foregoing discussion, it has been stated that the composite color signal contains a luminance signal and a chrominance signal. A color-burst signal is also transmitted along with the conventional blanking and sweep-synchronizing signals. Let us now examine in greater detail the methods employed in making up the composite color signal.


Fig. 14-5. Expanded display of the color burst as obtained with a triggered sweep oscilloscope.

CHARACTERISTICS OF COLOR

In order to see, we must have a source of light, just as we must have a source of sound in order to hear. If sound waves were not present, nothing would be heard; likewise, if a source of light were not present, nothing would be seen.

When we speak of light, we usually think of light coming from the sun or the light that is emitted from some artificial lighting source, such as electrical lighting. This type of light is referred to as direct light. Another type of light is indirect, or reflected, light, which is given off by an object when direct light strikes it. The difference between these two types of light is that the indirect light is dependent upon the direct light. When light is not shining upon an object, no light will be given off unless the object has self-luminous properties.

Direct light falling upon an object is either absorbed or reflected. If all of the light is reflected, the object appears white.

If the direct light is entirely absorbed, the object appears black.

The larger the amount of light that is reflected by an object, the brighter the object will appear to the eye. In addition, the more intense the direct light source, the brighter the object will become. This can be demonstrated by casting a shadow upon a portion of an object and noting the difference in brightness of the two areas. The portion without a shadow will, of course, appear brighter.


Fig. 14-6. Radiant-energy spectrum.

Light is one of the many forms of radiant energy. Any energy that travels by wave motion is considered radiant energy. Classified in this group, along with light, are sound waves, x-rays, and radio waves. As shown in Fig. 14-6, light that is useful to the eye occupies only a small portion of the radiant-energy spectrum.

Sound is located at the lower end of the spectrum, whereas cosmic rays are at the upper end; light falls just beyond the middle of the spectrum. Along the top of the spectrum shown in Fig. 14-6 is the frequency scale, and along the bottom is the angstrom-unit scale (10^-8 cm). Wavelengths in the region of light may be designated in microns (1 micron = 10^-4 centimeters). These units are also shown along the bottom of the spectrum in Fig. 14-6. Light is made up of that portion of the spectrum between 400 and 700 nanometers (nm).

When all wavelengths of the light spectrum from 400 to 700 nm are presented to the eye in nearly equal proportions, white light is seen. This white light is made of various wavelengths that are representative of different colors. This composition can be shown by passing light through a prism, as shown in Fig. 14-7.


Fig. 14-7. White light dispersed by a prism.


Fig. 14-8. Relationship between colors and wavelengths in the light spectrum.

The light spectrum is broken up into its constituent wavelengths, with each representing a different color. The ability of a prism to disperse light stems from the fact that light of shorter wavelengths travels slower through glass than does light of longer wavelengths. Figure 14-8 shows the relationship of wavelengths and the colors of the light spectrum. The spectrum ranges from violet on the lower end to red on the upper end. In between fall blue, green, yellow, and orange. A total of six distinct colors is visible when passing light through a prism. Since the colors of the spectrum pass gradually from one to the other, the theoretical number of colors becomes infinite. It has been determined that about 125 colors can be identified over the visible gamut.

Three color attributes are used to describe any one color or differentiate between several colors: (1) hue, (2) saturation, and (3) brightness. Hue is a quality used to identify any color under consideration, such as red, blue, or yellow. Saturation is a measure of the absence of dilution by white light and can be expressed with terms such as rich, deep, vivid, or pure. Brightness defines the amount of light energy contained within a given color and can be expressed with terms such as bright, dark, or dim.

We might consider an analogy between a color and a radiated radio wave. Hue, which defines the wavelength of the color, would be synonymous with frequency, which defines the wavelength of the radio wave. Saturation, which defines the purity of the color, would be synonymous with the signal-to-noise ratio, which defines the purity of the radio wave. Brightness, which is governed by the amount of energy in the color, would be synonymous with amplitude, which defines the amount of energy in the radio wave.

Brightness is a characteristic of both white light and color, whereas hue and saturation are characteristic of color only. Saturation and brightness are often visualized as identical or interrelated qualities of color, whereas they should be considered as separate qualities. It is possible to vary either one of the qualities without changing the other.

In nature, however, a change in saturation is usually accompanied by a change in brightness, which is exemplified by the fact that a pastel color generally appears brighter than a saturated color of the same hue when they are directly lighted by the same source. By changing the lighting on the pastel color (such as by placing it in a shadow), one can decrease the brightness of the pastel color; and it is conceivable that both colors can be made to have the same brightness. Thus, two colors of the same hue but of different saturation can have equal brightness levels.

Any given color, within limitations, can be reproduced or matched by mixing three primary colors. The additive process of color mixing used in color television employs colored lights for the production of colors. The colors in the additive process do not depend upon an incident light source; self-luminous properties are characteristic of the additive colors. Phosphorescent signs that glow in the dark are good examples of this process. Cathode-ray tube phosphors are self-luminous, and so it is only logical that the additive process would be employed in color television.

The three primaries for the additive process of color mixing are red, green, and blue. Two requirements for the primary colors are that each primary must be different, and that the combination of any two primaries must not be capable of producing the third. Red, green, and blue were chosen for the additive primaries because they fulfilled these requirements and because it was determined that the greatest number of colors could be matched by the combination of these three colors.

The basic principle of the color-TV system is shown in Fig. 14-9. When the red camera is energized at the transmitter, the red gun in the color picture tube at the receiver is energized. Similarly, a signal from the blue camera results in an output from the blue gun, and a signal from the green camera results in an output from the green gun. Since three kinds of signals must be transmitted in a channel that normally accommodates only one signal, a multiplexing, or encoding, method of transmission must be used.

This method involves modulating one carrier with two signals, as explained next.


Fig. 14-9. Basic principle of the color-TV system.

DIVIDED-CARRIER MODULATION

It has been pointed out that, for purposes of color transmission, a chrominance signal is required; moreover, the chrominance signal must represent two color signals (saturation and hue) separable from each other. One subcarrier at a frequency of 3.579545 MHz above the picture carrier is available to convey both color signals. Consequently, some method of modulating one carrier with two signals must be utilized at the transmitter.

Figure 14-10 is a fundamental block diagram showing the manner in which one carrier may be modulated by two signals. A subcarrier generator produces a sine wave of constant frequency and amplitude. This subcarrier is then applied to two doubly-balanced modulator circuits, represented by blocks A and B. The subcarrier coupled to the modulator in block B has been subjected to a 90° phase shift. One of the two modulating signals is applied to modulator A, and the other is applied to modulator B.


Fig. 14-10. Block diagram of the divided-carrier system of modulation.

In Fig. 14-10, there are two blocks representing doubly balanced modulators. The modulator in block B operates in the same manner as the one in block A, with the exception that the subcarrier input is delayed 90°. This delay causes the output of modulator B to be displaced 90° in phase with reference to the output of modulator A. In this respect, it should be remembered that the output of modulator B can either lead or lag that of modulator A by 90°, depending upon the polarity of the modulating signal introduced into each balanced modulator circuit.

The voltages from the outputs of modulators A and B are combined in the adder stage. The output of this stage is a single waveform, which varies in amplitude and phase in accordance with the amplitude and phase of each of the two signals introduced to modulators A and B. Thus, two modulating signals are impressed upon a single subcarrier, and these two signals can be recovered by reversing the modulation process at the receiving end.
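The divided-carrier process can be modeled numerically. The sketch below is an added illustration, not a circuit from the manual; the two modulating inputs are treated as constants over a short interval, and the signal values are arbitrary.

```python
import numpy as np

F_SC = 3.579545e6                          # chrominance subcarrier frequency, Hz
t = np.arange(0, 2e-6, 1e-10)              # about seven subcarrier cycles

def divided_carrier(sig_a, sig_b):
    """Model of the two balanced modulators and the adder of Fig. 14-10.

    sig_a drives modulator A (in-phase subcarrier); sig_b drives modulator B
    (subcarrier shifted 90 degrees).  Both are held constant for simplicity.
    """
    out_a = sig_a * np.cos(2 * np.pi * F_SC * t)   # modulator A output
    out_b = sig_b * np.sin(2 * np.pi * F_SC * t)   # modulator B output (90-degree shifted carrier)
    return out_a + out_b                            # adder output

sig_a, sig_b = 0.3, 0.5
chroma = divided_carrier(sig_a, sig_b)

# The summed waveform is a single 3.58-MHz wave whose amplitude and phase
# depend on BOTH modulating inputs:
amplitude = np.hypot(sig_a, sig_b)
phase_deg = np.degrees(np.arctan2(sig_b, sig_a))
print(f"resultant amplitude = {amplitude:.3f}, phase = {phase_deg:.1f} degrees")
print(f"peak of summed waveform = {chroma.max():.3f}")   # approximately equals the amplitude
```

Reversing the process at the receiver amounts to multiplying the combined wave by each reference phase in turn, which is essentially what the demodulators described later in this section do.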

COLOR SYNCHRONIZATION

The chrominance signal changes phase with every change in the hue of the color it represents, and the phase difference between the chrominance signal and the output of the subcarrier generator identifies the particular hue at that instant. When the chrominance signal reaches the receiver, the receiver must have some means of comparing the phase of the signal with a fixed reference phase identical to that of the subcarrier generator at the transmitter. This reference phase is provided in the receiver by a local oscillator that is synchronized with the subcarrier generator by means of a color-burst signal transmitted during the horizontal blanking period. The color burst consists of a minimum of 8 cycles of the 3.579545-MHz subcarrier frequency.
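A quick arithmetic check (added here for illustration) shows why the burst fits easily within the horizontal-blanking interval; the back-porch width used below is an assumed typical figure of about 4.7 microseconds.

```python
F_SC = 3.579545e6            # subcarrier frequency, Hz
burst_cycles = 8             # minimum number of cycles in the color burst
back_porch_us = 4.7          # assumed typical back-porch width, microseconds

burst_duration_us = burst_cycles / F_SC * 1e6
print(f"minimum burst duration ~ {burst_duration_us:.2f} us")     # about 2.2 us
print(f"fits on the back porch: {burst_duration_us < back_porch_us}")
```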

As shown in Fig. 14-11, the color burst is placed on the back porch of the horizontal-blanking pedestal. When located at this point, the burst will not affect the operation of the horizontal oscillator circuits because the horizontal systems used in existing receivers are designed to be immune to any noise or pulse for a short time after they have been triggered. (Remember, the horizontal oscillator will have been triggered at the start of the horizontal-sync pulse.) Since the average voltage of the color burst is the same as the voltage of the blanking level, the burst signal will not produce spurious light on the picture tube during the retrace period.


Fig. 14-11. Specifications for the color burst.

A color receiver is designed to extract the color burst from the transmitted signal. This reference signal is used to synchronize the color section of the receiver in much the same manner as the horizontal and vertical pulses are used to synchronize the horizontal- and vertical-sweep sections. If the color burst is attenuated, the receiver is likely to lose color-sync lock (see Fig. 14-12).

Fig. 14-12. Progressive attenuation of the color burst.

COLOR SIGNAL

As shown previously, the color-picture signal consists of two separate signals: luminance signal and chrominance signal. We shall now discuss the makeup of these two signals at the transmitter.

Shown in Fig. 14-13 is a drawing of the basic components of a tricolor camera that employs three camera tubes and two dichroic mirrors for the separation of light. Regular image orthicons may be used in this camera; however, newer tubes that have a response to light frequencies more like that of the human eye are also used. One of the camera tubes receives only the light frequencies corresponding to the color red and is called the red camera tube. Another tube receives only the light frequencies corresponding to the color blue and is called the blue camera tube. The third tube receives only light frequencies corresponding to the color green and is called the green camera tube.


Fig. 14-13. Basic components of a tri-color camera.

To illustrate the operation, let us assume that the color camera is focused on a color scene. The light is broken up in the following manner: All the light frequencies pass through the objective lens, which is mounted on the turret, and through a pair of relay lenses. Then the light is affected by the dichroic mirrors. This type of mirror permits all the light frequencies of the spectrum to pass except those of the primary color which it is designed to reflect. By the use of this type of mirror, white light can be separated into the light frequencies of the three primary colors: red, green, and blue. This principle is depicted in Fig. 14-14.

Through correct placement, only two dichroic mirrors are needed in the color camera. The blue dichroic mirror is positioned at a point indicated by A in Fig. 14-13. When light arrives at this point, all the light frequencies except those representing the color blue are passed through the mirror. The frequencies representing the blue portion of the spectrum are reflected. The tilt angle of the mirror at point A is such that the mirror directs the blue light to a front-surface mirror at point C. Then the blue light is reflected onto the face of the blue camera tube.


Fig. 14-14. Color filters permit only the primary colors to enter the color camera.

The light passed by the dichroic mirror at point A goes on to the red dichroic mirror, which is positioned at point B (Fig. 14-13). This mirror is designed to pass all the light frequencies except those which represent red. The red light is reflected to a front-surface mirror at point D, where it is again reflected so that it falls on the face of the red camera tube.

Both the blue and the red portions of the incoming light frequencies have been removed, and only the green portion remains, which is allowed to fall directly on the green camera tube.

In this manner, the light is broken up into the three primary colors.

At the output of the color camera, there are three voltages that are representative of the three colors. From these voltages, the luminance and chrominance signals are formed. Figure 14-15 shows the plan of a basic color-TV transmitter.


Fig. 14-15. Plan of a basic color-TV transmitter.

LUMINANCE SIGNAL

The luminance signal is the portion of the color-picture signal utilized by monochrome receivers. For this reason, the luminance signal must represent the scene only according to its brightness. It is very similar to the video signal specified for standard monochrome transmission.

The human eye does not see all colors equally bright. The specifications for the luminance signal take into consideration the sensitivity of the eye to light frequencies. Definite proportions of each of the color signals from the color camera are used to form the luminance signal. These proportions are: 59 percent of the green signal, 30 percent of the red signal, and 11 percent of the blue signal. Figure 14-16 provides a visualization of these facts.


Fig. 14-16. Y component of the color-TV signal.

If an all-white portion of a scene is being scanned at the camera, the luminance signal will contain these proportions of the three color signals. The luminance signal is commonly called the Y signal. The proportions of the red, green, and blue signals do not change when brightness changes; only the amplitude of the Y signal changes. Figure 14-17 shows a display of the Y signal on an oscilloscope screen.


Fig. 14-17. Display of the Y signal on an oscilloscope screen.
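A short numeric sketch (added here for illustration, not from the manual) shows how the Y signal is formed from the stated proportions; the camera outputs are assumed to be normalized so that 1.0 represents maximum output.

```python
def luminance(red, green, blue):
    """Form the Y (luminance) signal from normalized camera outputs."""
    return 0.30 * red + 0.59 * green + 0.11 * blue

# An all-white area drives all three camera outputs to maximum:
print(luminance(1.0, 1.0, 1.0))   # ~1.0 -> maximum brightness

# A darker gray keeps the same 30/59/11 proportions; only the Y amplitude changes:
print(luminance(0.5, 0.5, 0.5))   # ~0.5
```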

CHROMINANCE SIGNAL

The chrominance signal must represent only the colors of a scene; therefore, the luminance signal is subtracted from each of the three output signals of the color camera. This results in three signals, which represent red minus luminance, green minus luminance, and blue minus luminance. These signals are denoted as the color-difference signals. When an all-white scene is being scanned, the color-difference signals are equal to zero so that no color information is transmitted; only luminance information is transmitted, as shown in Fig. 14-18. A display of a color-bar signal on an oscilloscope screen is shown in Fig. 14-19.
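The statement that the color-difference signals vanish for a white (or gray) scene can be checked with a short sketch; this is an added illustration using normalized camera outputs, and the helper name is arbitrary.

```python
def color_difference(red, green, blue):
    """Return the (R-Y, G-Y, B-Y) color-difference signals."""
    y = 0.30 * red + 0.59 * green + 0.11 * blue
    return red - y, green - y, blue - y

print(color_difference(1.0, 1.0, 1.0))   # ~(0, 0, 0) for white: no color information transmitted
print(color_difference(0.0, 1.0, 0.0))   # ~(-0.59, 0.41, -0.59) for a saturated green
```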

COLOR-RECEIVER CIRCUITS


Fig. 14-18. Elements of TV color bars.


Fig. 14-19. Display of a color-bar signal on an oscilloscope screen.


Fig. 14-20. Complete block diagram of a color receiver. Sections similar to those used in monochrome receivers are shaded; sections used only in color receivers are unshaded.

A complete block diagram of a color receiver is shown in Fig. 14-20. The shaded blocks represent sections that are similar to those used in monochrome receivers. The unshaded blocks represent the new sections used only in color receivers.

From the block diagram in Fig. 14-20, it can be seen that the tuner, the video IF, and the sound IF sections of a color receiver are not too different from those of a monochrome receiver. With the exception of the block representing the sound detector, this drawing could represent the RF, IF, and sound circuits of any television receiver; nevertheless, the following discussions about circuits show some important differences.

RF Tuner

The function of the RF tuner in a color receiver is the same as in monochrome receivers, and the physical appearance of the unit has not changed. However, the RF circuits in a color receiver must have one feature that is not necessarily required of the RF circuits in monochrome receivers. This concerns the allowable tolerance in the frequency response of the tuner.

It has been pointed out previously that a color-picture signal is composed of both a luminance and a chrominance signal. As seen in Fig. 14-21, the bandwidth of this signal extends from 0.75 MHz below to 4.2 MHz above the picture carrier and falls to zero at 1.25 MHz below and at slightly less than 4.5 MHz above.


Fig. 14-21. Frequency distribution of the color picture signal.

If a tuner designed for a monochrome receiver were to be used in a color receiver, nonuniform amplification of frequencies might result and cause poor color reception. Although a tilt or sag in the response of the RF tuner might be compensated for in the IF amplifier section, it is necessary to provide uniform bandpass characteristics in the tuner for proper reception of color telecasts on all channels. An RF circuit that has a frequency response similar to that shown in Fig. 14-22 would produce excellent results in a color receiver.

A defective tuner that attenuates high frequencies might continue to provide satisfactory results during monochrome transmission; however, such a tuner would very probably cause poor reception during color transmission. Such a condition would obviously result in a complaint. When the tuner in a color receiver is serviced, particularly during alignment, the bandpass requirement of the tuner must be kept in mind. Many of the compromises that are in common practice in servicing tuners for monochrome receivers cannot be made in tuners for color receivers.


Fig. 14-22. Ideal frequency response of tuner used in color receiver.

Video IF Amplifiers and Video Detector

Although the bandpass characteristics of the video IF section are more restrictive in a color receiver, the function is essentially the same as in monochrome receivers. Before examining any circuits, let us consider what is required of the video IF section.

First, the purpose of this section is to provide amplification and selectivity to a specific band of frequencies. This band should extend from 0.75 MHz below to around 4.2 MHz above the IF picture carrier in order to include both the luminance and the chrominance signals. Ordinarily, four or five stages are needed for this function. The video IF strip should also attenuate the sound IF carrier much more severely than is common in monochrome receivers. An interference pattern will appear on the screen if the sound carrier and color subcarrier are allowed to beat together.

A curve illustrative of good overall frequency response through the RF and IF sections of a color receiver can be seen in Fig. 14-23A. Compare this curve to the one in Fig. 14-23B, which is representative of the overall frequency response of late-model monochrome receivers. It can be noted particularly that the response of the color receiver is very critical in the region of the sound carrier, where the slope of the curve is very steep. Frequencies only 0.35 MHz away from the maximum attenuation point at the sound-carrier frequency are provided with at least 90 percent amplification. The reason for this is that the upper sidebands of the chrominance subcarrier extend to this portion of the frequency curve.


Fig. 14-23. RF and IF frequency-response curves. (A) Color receiver. (B) Late-model monochrome receiver

For example, the frequency limits of the color-picture signal have been superimposed on the response curve of Fig. 14-23A. Although the frequency response indicated by the curve in Fig. 14-23B would produce good results during a monochrome transmission, it would severely attenuate the chrominance signal during color transmission. This loss of chrominance would result in poor color reproduction or a complete loss of color.

The video detector demodulates the IF signal so that the luminance, chrominance, and sync signals are available at the output of the detector circuit. A crystal diode with an IF filter is commonly used for this purpose. The video detector in a color receiver may employ a sound-carrier trap in its input. This trap attenuates the sound carrier and ensures against the development of an undesirable 920-kHz beat frequency, which is the frequency difference between the sound carrier and the color subcarrier. When the sound carrier is attenuated in this manner, the sound takeoff point is located ahead of the video detector.
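The 920-kHz figure is simply the spacing between the 4.5-MHz intercarrier sound frequency and the color subcarrier; the short check below is an added illustration.

```python
sound_carrier_mhz = 4.5              # intercarrier sound frequency
color_subcarrier_mhz = 3.579545      # chrominance subcarrier frequency
beat_khz = (sound_carrier_mhz - color_subcarrier_mhz) * 1000
print(f"beat frequency ~ {beat_khz:.0f} kHz")   # about 920 kHz
```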

A basic IF amplifier configuration is shown in Fig. 14-24. The third IF transistor operates at higher power than the other two transistors because the video detector requires appreciable power input. Transistor Q25 operates with a collector voltage of 15 volts and an emitter current of 15 mA. The collector-load impedance is about 1,000 ohms; however, T8 provides some step-up voltage transformation for the video detector. The power gain of this stage is about 18 dB.

The first stage in Fig. 14-24 operates with a collector voltage of 15 volts and an emitter current of 4 mA. Q27 utilizes reverse AGC. The minimum collector current of Q27 under strong-signal conditions is about 50 µA. Q27 has a dynamic range of 40 dB.

Diode D12 is a clamp diode that becomes reverse-biased to prevent Q27 from being completely cut off at high AGC bias. Q27 is neutralized by the 1.5-pF capacitor connected from its base to T6.


Fig. 14-24. Typical video-IF-amplifier configuration.

Two traps are connected into the input circuit of Q27. The first trap is an inductively coupled trap, and the second is a bridge-T configuration. These are the accompanying sound and adjacent sound traps, respectively. The collector load for Q27 is a simple resonant circuit, with a tap to provide an out-of-phase neutralizing signal. Q26 is base-driven via capacitance coupling. This second IF stage is not AGC-controlled and operates continuously at maximum gain. The basic bias circuit for Q26 also provides some negative feedback to obtain a properly shaped response curve. The collector load for Q26 is a tuned bifilar transformer.

The third stage is neutralized by a 1.5-pF capacitor from the base of Q25 to the secondary of the collector-output transformer.

A 0.005-µF capacitor is connected in the base bias network to prevent signal feedback from collector to base. This stage is not AGC-controlled and operates continuously at maximum gain. All of the collector loads are shunted by resistance to obtain proper bandwidth.

Sound IF and Audio Sections

With the exception of the separate sound IF detector, the sound IF and audio sections of color receivers follow conventional monochrome design. The reader who knows the theory of intercarrier operation should have little difficulty in understanding and working with these sections in color receivers.

It has been mentioned that the sound IF carrier is severely attenuated in the video IF strip; consequently, the output of the video detector contains virtually no 4.5-MHz beat signal. The sound signal must be obtained from a point ahead of the video detector. This takeoff point is usually the output of the final video IF amplifier. The signals available at this point are in the IF range, and in order to obtain the 4.5-MHz sound signal, a separate detector is necessary.

Video Amplifier

The main function of the video amplifier, often called the luminance channel, is to amplify the luminance portion of the video signal. This signal is comparable to a monochrome signal in that it represents the brightness variations of the image. From this standpoint, the function of the luminance channel can be compared with that of the video amplifier in a monochrome receiver.

The luminance channel may use two or three stages so that the desired brightness signal may be obtained.

A secondary function of the luminance channel is to introduce a specific time delay in the brightness signal. This is necessary because every video signal undergoes a time delay that is inversely proportional to the bandwidth of the circuits through which it passes; the delay increases as the bandpass is narrowed. Since the luminance channel must pass a wider range of frequencies than the chrominance channel, the bandpass of the luminance channel is much wider than that of the chrominance channel. Were it not for a special design, it would take a longer time for the chrominance signal to pass through the chrominance channel than it would for the luminance signal to pass through the luminance channel. The associated picture elements of these two signals must arrive at the picture tube at the same time; therefore, the luminance signal must undergo an extra time delay. This delay is accomplished through the use of a special delay circuit in the luminance channel.
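A very rough order-of-magnitude sketch is added below to put numbers on this reasoning; the bandwidth figures and the 1/(2 x bandwidth) rule of thumb are assumptions for illustration only, and the actual delay is set by the specific circuit design.

```python
# Rough rule of thumb (assumed): the transit delay of a band-limited stage is
# on the order of 1 / (2 * bandwidth), so the narrow chrominance channel delays
# its signal more than the wide luminance channel does.
luma_bw_hz = 4.2e6        # assumed luminance-channel bandwidth
chroma_bw_hz = 0.5e6      # assumed chrominance-channel bandwidth

luma_delay_us = 1e6 / (2 * luma_bw_hz)
chroma_delay_us = 1e6 / (2 * chroma_bw_hz)

print(f"luminance channel ~ {luma_delay_us:.2f} us, chrominance channel ~ {chroma_delay_us:.2f} us")
print(f"the luminance delay line must add roughly {chroma_delay_us - luma_delay_us:.2f} us")
```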

A typical direct-coupled video-amplifier configuration is shown in Fig. 14-25. A dc coupling diode is employed in the output stage. This diode tends to compensate for the drooping transfer characteristic of the output transistor when driven to maximum output. Blanking pulses are applied to the picture tube via C252. SC204 is the video detector and is followed by a low-pass filter comprising C232, L212, the base-input capacitance of Q210, and L214. This low-pass filter removes IF feedthrough and also serves a peaking function.


Fig. 14-25. Typical video-amplifier configuration.

The first stage operates in the common-collector mode and provides an impedance match to the output stage. This output stage operates in the common-emitter mode. T206 serves both as a sound-takeoff transformer and as a 4.5-MHz sound trap for the video amplifier. Note that 4.5-MHz interference causes sound "grain" in the picture, and a combination of 4.5- and 3.58-MHz interference produces a 920-kHz beat, a wormy effect that has the characteristic appearance shown in Fig. 14-26.


Fig. 14-26. "Wormy" effect produced by 920-kHz interference.

The AGC Circuit

Although conventional in operation, the AGC circuit plays an important part in the color receiver. This importance can be realized when it is considered that variations in the amplitudes of the incoming signal will affect the color as well as the brightness of the image. In order to stabilize the operation of the receiver, a good AGC circuit is a necessity.

The preceding material described the color-receiver circuits that correspond very closely to those found in monochrome receivers. We shall now discuss the circuits that deal with the proper reproduction of color. These circuits are represented by the unshaded blocks in Fig. 14-20.

In order to be utilized in the color receiver, the chrominance signal, which is in the form of a 3.58-MHz signal, must first be separated from the composite color signal. An amplifier stage having a frequency-limiting filter network is used for this purpose. This stage is called the bandpass amplifier. The chrominance signal is fed from the bandpass amplifier to two demodulators where two color-difference signals are extracted from the 3.58-MHz signal. In order for the latter function to take place, two continuous-wave (CW) signals are required by the demodulators. These CW signals are generated and controlled by a section referred to as the color-sync section of the receiver. A burst amplifier, keyer, 3.58-MHz oscillator, and control circuit are used in the color-sync section.

During the reception of a monochrome signal by the color receiver, a means of cutting off the chrominance channel is pro vided. This function is performed by the color-killer section, which automatically disables the chrominance channel when there is no color signal being received.

Bandpass Amplifier

The purpose of the bandpass amplifier is to separate the chrominance signal from the composite color signal and feed it to the demodulators. The signal at the takeoff point for a typical bandpass-amplifier section is the composite color signal, which includes the chrominance, luminance, burst, synchronizing, and blanking signals. The takeoff point for the chrominance signal is usually in the first video amplifier stage, but, depending on the gain in the bandpass amplifier, it can be anywhere in the video section.

Only the chrominance portion of the composite color signal appears at the output of the bandpass-amplifier circuit. Between the signal takeoff point and the input of the demodulators, any remaining 4.5-MHz signal has been attenuated, the luminance signal has been blocked, and the color-burst and synchronizing signals have been keyed out.

A typical bandpass-amplifier configuration is shown in Fig. 14-27. Diode CR205 functions as a bias-voltage regulator for Q206 to compensate for temperature drift. Diode CR206 operates as a rectifier to ensure that only a positive-going pulse is applied to the emitter of Q206 for blanking the chroma signal during horizontal retrace. Note that base and emitter voltages for Q206 are specified for both color and black-and-white reception.


Fig. 14-27. Bandpass-amplifier configuration.

This change in base and emitter voltage is produced by color killer action and often provides useful clues when troubleshooting bandpass amplifier malfunctions. Figure 14-28 shows the frequency-response curve of a typical bandpass amplifier.

The setting of the COLOR control in the bandpass amplifier determines the amount of chrominance signal applied to the demodulators as well as the saturation of the colors in the picture.

This chrominance signal is coupled to the demodulator stages, where it is detected. The chrominance signal varies in both phase and amplitude. It is the function of the demodulator stages to detect correctly the differences between the phase and amplitude of the chrominance signal in order for the receiver to reproduce the proper colors.


Fig. 14-28. Frequency-response curve of typical bandpass amplifier.

Fig. 14-29. Input and output waveforms for the bandpass amplifier. (A) Input. (B) Output.

Normal input and output waveforms for the bandpass amplifier are shown in Fig. 14-29. The input waveform is a keyed rainbow signal; it consists of a series of color bursts at a frequency of 3.56 MHz. Horizontal-sync pulses are included in the keyed-rainbow signal. After this standard test signal is processed through the bandpass amplifier, the bursts appear in amplified form, with the horizontal-sync pulses keyed out. Keyed-rainbow generators are discussed in Section 15.

Let us review briefly how the color signal varies in phase and amplitude. Figure 14-30 shows how the primary colors (red, green, and blue) and the complementary colors (magenta, cyan, and yellow) have different amplitudes and phases. The burst phase is at 0°; in turn, yellow has a phase angle of 12°, red has a phase angle of 76.5°, and so on. The relative amplitudes of yellow and red are 45 and 63 percent with respect to the amplitude of white.


Fig. 14-30. Vector representation of relative phases of the TV color signal.
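These phase angles and relative amplitudes can be reproduced numerically. The sketch below is an added illustration; it assumes the standard NTSC weighting factors of 0.877 for R-Y and 0.493 for B-Y (factors not discussed in this section) and measures phase from the burst, which lies along the negative B-Y axis.

```python
import math

def chroma_vector(red, green, blue):
    """Return (amplitude, phase in degrees from burst) of the chrominance
    signal for a fully saturated color.  The 0.877 and 0.493 weighting
    factors are assumed NTSC values, not figures from this section."""
    y = 0.30 * red + 0.59 * green + 0.11 * blue
    v = 0.877 * (red - y)      # weighted R-Y component
    u = 0.493 * (blue - y)     # weighted B-Y component
    amplitude = math.hypot(u, v)
    phase_deg = math.degrees(math.atan2(v, -u)) % 360   # burst lies along -(B-Y)
    return amplitude, phase_deg

for name, rgb in [("yellow", (1, 1, 0)), ("red", (1, 0, 0))]:
    amp, phase = chroma_vector(*rgb)
    print(f"{name}: amplitude ~ {amp:.2f} of white, phase ~ {phase:.1f} degrees from burst")
# yellow: ~0.45 at ~12 degrees; red: ~0.63 at ~76.5 degrees, matching the text.
```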

The chief chrominance signals that concern us at this point are R-Y, B-Y, and G-Y, as depicted in Fig. 14-31. Note that B-Y is 180° from burst and R-Y is 90° from burst. We observe that +(G-Y) is 300° from burst, and that -(G-Y) is 180° from +(G-Y). R-Y and B-Y are called quadrature signals because they are 90° apart. Similarly, a signal displaced 90° from G-Y is in quadrature with both +(G-Y) and -(G-Y). These chrominance signals are provided by standard color-bar generators.


Fig. 14-31. Phases of G-Y, R-Y, and B-Y signals.


Fig. 14-32. Signal amplitudes for 100 percent saturated color-bar pattern.

Figure 14-32 shows how the amplitude of a given chrominance signal varies for different colors. The amplitudes of the Y (black-and-white) signal are also indicated. For example, a saturated green is built up from Y = 0.59, R-Y = -0.59, B-Y = -0.59, and G-Y = 0.41. Similarly, a saturated yellow is built up from Y = 0.89, R-Y = 0.11, B-Y = -0.89, and G-Y = 0.11. Therefore, when a color program is being transmitted, the chrominance signal and the Y signal are varying rapidly in both phase and amplitude.

Color Synchronization

In order to reproduce properly the colors of a televised image, the modulation of the chrominance subcarrier at the transmitter must be reversed at the receiver. It may be recalled that the modulation process at the transmitter involves the use of a 3.58-MHz subcarrier. This subcarrier is applied in quadrature to two doubly-balanced modulator circuits. Simultaneously, the saturation signal is applied to one balanced modulator, and the hue signal is applied to the other. The 3.58-MHz subcarrier is cancelled, and the resultant output is the chrominance signal, which is a 3.58-MHz signal that varies in amplitude and phase.

Recovery of the color-difference signals in the receiver is accomplished by reversing the modulation process. This requires that a 3.58-MHz CW reference signal that is locally generated should be applied in quadrature to two demodulator circuits.

Accuracy in the demodulation process is attained by regulating this reference signal so that a definite phase relationship with the subcarrier is maintained. It is the function of the color-sync section to generate the local 3.58-MHz reference signal and regulate its frequency and phase.

A circuit diagram of a typical color-sync, bandpass-amplifier, and color-killer section is shown in Fig. 14-33; it is an example of a ringing-crystal color-sync arrangement. In other words, the quartz crystal is shock-excited into 3.58-MHz oscillation each time that a color burst is applied. Although the ringing waveform tends to die out slightly between bursts, the crystal Q is very high, and the decay is small. Q6 operates as a limiter to provide a completely uniform output. The quartz crystal is energized by the output from the burst amplifier.

In normal operation, the burst-amplifier-output waveform appears as shown in Fig. 14-34. Note that the color killer operates as an electronic switch to turn the second bandpass stage on or off. The color killer is energized by the output from the subcarrier oscillator. Thus, if there is no color burst present, the second bandpass amplifier is disabled. The burst killer is energized from the horizontal-output transformer. It keys out the color burst from the signal in the bandpass amplifier, thereby preventing color contamination in the image from the color burst.

Color Killer

The purpose of the color killer in a color receiver is to prevent any signal from getting through the chrominance channel during the time a monochrome signal is being received. This prevents any signal other than the luminance signal from reaching the picture tube. Signals are prevented from passing through the chrominance channel by employing a color-killer stage to bias one or more stages in the chrominance channel to cutoff.

Matrix and Demodulator Section

At this point in the discussion of color-receiver circuits, three video signals have been described: the luminance signal and the two color-difference signals at the output of the chrominance demodulators. It is the function of the matrix section to combine these three signals in the correct proportions so that three color signals are produced. These color signals must correspond to those which appear at the output of the color camera: one for red, one for green, and one for blue. The color signals are amplified and applied to the picture tube, where they reproduce the hues of an image in terms of the three primary colors.

We shall find that various chrominance and demodulator arrangements are used in different models of color-TV receivers.

In one basic arrangement, R-Y and B-Y signals are demodulated, as depicted in Fig. 14-35, and the G-Y signal is recovered in a chrominance matrix. It can be shown that G-Y can be produced by mixing -0.51(R-Y) with -0.19(B-Y). An R-Y signal is changed into a -(R-Y) signal by passing the R-Y signal through a phase inverter. The G-Y matrix is simply a mixer for the -(R-Y) and -(B-Y) signals.
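The mixing proportions quoted above follow directly from the luminance equation Y = 0.30R + 0.59G + 0.11B. The short check below is an added illustration; the sample values are arbitrary, and the small disagreement reflects the rounding of the -0.51 and -0.19 coefficients.

```python
def g_minus_y_from_matrix(red, green, blue):
    """Recover G-Y by mixing -0.51(R-Y) with -0.19(B-Y), as the matrix does."""
    y = 0.30 * red + 0.59 * green + 0.11 * blue
    return -0.51 * (red - y) - 0.19 * (blue - y)

def g_minus_y_direct(red, green, blue):
    """Compute G-Y directly from the camera signals."""
    y = 0.30 * red + 0.59 * green + 0.11 * blue
    return green - y

for rgb in [(1, 0, 0), (0, 1, 0), (0.2, 0.7, 0.4)]:
    print(rgb, round(g_minus_y_from_matrix(*rgb), 3), round(g_minus_y_direct(*rgb), 3))
# The two results agree to within about one percent for any input.
```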



Fig. 14-33. Configuration of typical color-sync bandpass amplifier and color-killer section.

In order to understand how the colors of a scene are reproduced, it is necessary to know something about the picture tube.

The tube used in the basic color receiver is known as the tricolor picture tube. Three separate electron guns are incorporated in this type of tube in order to accommodate the three color signals.

The coating on the face of the tube consists of phosphor dots, which are arranged in triangular groups of three. One dot in each trio emits red, one emits green, and one emits blue light.

When operating properly, the electron beam from the red gun activates only the red dots, the beam from the blue gun activates only the blue dots, and the beam from the green gun activates only the green dots. The phosphor dots are closely spaced so that when more than one dot in each trio is activated, the total light emission will blend to form one color. For example, when the light emissions from all three phosphors are equal, the screen appears white. If the red and green phosphors are activated equally, the resultant hue appears to be yellow. The blending into a single hue of the color dots on the picture-tube screen is based upon the principle that the human eye cannot resolve the separate colors at normal viewing distance. As a result, the total light emitted from the dot combinations appears as a single color.

Figure 14-36 shows how the Y, R-Y, B-Y, and G-Y signals produce a red bar on the screen.
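The signal values for a saturated red bar, as in Fig. 14-36, can be checked numerically: adding Y back to each color-difference signal recovers the original camera signals, so only the red gun is driven. The sketch below is an added illustration using normalized values.

```python
# Signal values for a 100 percent saturated red bar (normalized):
y = 0.30            # luminance of saturated red (0.30R + 0.59G + 0.11B with R = 1, G = B = 0)
r_minus_y = 0.70
g_minus_y = -0.30
b_minus_y = -0.30

# Adding Y back to each color-difference signal gives the gun drives:
red_gun = y + r_minus_y      # 1.0 -> red gun fully driven
green_gun = y + g_minus_y    # 0.0 -> green gun cut off
blue_gun = y + b_minus_y     # 0.0 -> blue gun cut off
print(red_gun, green_gun, blue_gun)
```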

The foregoing information should be helpful in understanding the requirements of the matrix section. A detailed discussion of the picture tube will be presented later.


Fig. 14-34. Output waveform from the burst amplifier.


Fig. 14-35. Basic matrix arrangement.


Fig. 14-36. Waveform amplitudes at the picture tube for a 100 percent saturated red bar.

In order to produce the colors of an image correctly, the matrix section must fulfill certain requirements. For instance, when a fully saturated red portion of the scene is to be reproduced, the amplitude of the video signal applied to the red gun must be at a maximum value, and the amplitudes of the signals applied to the green and blue guns must be zero. During the time when a fully saturated green is to be reproduced, the amplitudes of the signals applied to the red and blue guns must be zero, and that of the signal applied to the green gun must be at maximum value. The amplitude of the blue signal must be at maximum value, and those of the red and green signals must be at zero, when a fully saturated blue is being reproduced. If white is to be reproduced, the amplitudes of all three color signals must be at maximum values because white contains all colors.

Demodulation of the chrominance signal is the reverse of the modulation process at the transmitter. It could be accomplished by a mechanical switching action, but at a frequency of 3.58 MHz, a mechanical switch is impossible; switching tubes are used instead. Figure 14-37 shows the common chrominance-demodulator arrangements.


Fig. 14-37. Common demodulator systems.


Fig. 14-38. An R-Y/B-Y/G-Y chroma-demodulator arrangement.

A widely used chroma-demodulator arrangement is shown in Fig. 14-38. Diode phase-amplitude detectors are utilized. The chroma-signal output from the bandpass amplifier is applied to the R-Y, B-Y, and G-Y demodulator diodes. Also, suitably phased subcarrier voltages from the subcarrier oscillator are applied to the diodes. Thus, the subcarrier voltage applied to the B-Y demodulator has the B-Y phase, that applied to the G-Y demodulator has the G-Y phase, and that applied to the R-Y demodulator has the R-Y phase. The diodes are turned on and off at suitable time intervals by the subcarrier voltage to pass the R-Y, B-Y, and G-Y signals, respectively. At the same time, the subcarrier is reinserted into the three chroma signals, and they are demodulated by the rectifying action of the diodes. This process will be explained in greater detail in a following section.

CHARACTERISTICS OF THE THREE-BEAM PICTURE TUBE

Some of the characteristics of the three-beam tube are: (1) It has a phosphor-dot screen made up of three different phosphors; (2) it has a shadow mask to allow each beam to strike only the correct set of phosphor dots; and (3) it has three beams, originating from three electron guns, to energize the different phosphors.

There are therefore three major parts in a color picture tube: a phosphor viewing screen, a shadow or aperture mask, and an electron-gun assembly. A diagram showing the location of these parts appears in Fig. 14-39.

Let us investigate the characteristics of the three-beam color picture tube by first discussing the three major parts.


Fig. 14-39. Location of the three major parts of the color picture tube.

Phosphor Viewing Screen

The screen of the monochrome picture tube is made up of a mixture of phosphorescent material, which, when energized by an electron beam, will emit white light. This material is placed on the faceplate of the tube in the form of a solid screen. The viewing screen of a color picture tube is also made up of phosphorescent material, but since three different phosphors are used, the screen of the color picture tube is different from that of the monochrome tube. Because color has to be reproduced on the screen, the phosphors are of a type that emits colored light when energized by electrons.

Since three additive primaries are employed in color television, three different phosphors are used. The phosphors are deposited on the viewing surface in the form of dots in a set pattern. These dots are placed very close together, but they do not overlap or touch each other. One-third of all the phosphor dots emit red light, another third of them emit green light, and the other third emit blue light. When an electron beam strikes a red-phosphor dot in a triad, that dot will glow with a red light. The blue phosphor dot will glow with a blue light when it is energized with a beam.

The characteristics of the human eye are such that the light emissions from the three phosphors cannot be distinguished separately at normal viewing distance. Instead, the eye blends the light from the three sources to give the appearance of a single color. For example, when the light outputs of all three phosphors are equal, each dot will glow with its respective color but the eye blends the three lights together so that the screen will appear to be white.

By controlling the energization of the phosphors, it is possible to produce a variety of colors that correspond to the hues in the visible light spectrum. For instance, when only the red and the green phosphors are energized, the two light sources are blended together by the eye and the color yellow is seen. If the green and blue phosphors are energized, the eye sees the color cyan.

Shadow, or Aperture, Mask

It has been stated that the color picture tube has three electron beams: One beam is used for energizing the red-phosphor dots, one for the green-phosphor dots, and the other for the blue phosphor dots. These three beams must be made to strike their respective set of dots at all times. To make this possible, a shadow mask is placed in the path of the electron beams directly behind the phosphor screen.

The mask consists of a thin sheet of metal that has been etched with a series of very small holes by a photoengraving process.

This mask is made large enough to cover the entire phosphor screen. There are as many holes in the mask as there are triads on the phosphor screen: one hole for each dot triad. The placement of the mask with respect to the phosphor dots is shown in Fig. 14-39.

A red, green, and blue dot can be seen through each hole in the mask.

A drawing showing the relationship of the electron beams, aperture mask, and phosphor-dot screen is presented in Fig. 14-40. The blue beam is shown as originating from the source on the top, the red beam from the source on the lower right, and the green beam from the source on the lower left. The three beams are controlled in such a way that they converge and diverge at the same holes in the aperture mask as they are scanned across the screen, and therefore each beam strikes only its respective set of color dots. The blue beam hits the blue-phosphor dot of the particular triad indicated in Fig. 14-40, and the red and green beams hit their respective dots in this triad.


Fig. 14-40. Relationship between beams, aperture mask, and phosphor screen in the color picture tube.

This triad of dots can be likened to the spot produced on a monochrome tube as the electron beam strikes the phosphor screen. Just as the brightness of this spot can be controlled in the monochrome tube by varying the intensity of the beam, the brightness of the triad in the color tube can be changed by controlling the total intensity of the three beams. In addition, however, the beams can be controlled individually, which makes possible the reproduction of any desired hue.

Electron-Gun Assembly

As stated previously, the color picture tube employs three electron guns. Each is a complete unit in itself, and all three guns are identical in physical appearance and operation. Each gun contains a heater, a cathode, and grids Nos. 1 to 4. Grids Nos. 1 and 2 serve the same purpose as they do in a monochrome picture tube, No. 1 being the control grid and No. 2 the accelerating anode.

Grid No. 3 in the color tube is the focus electrode. The focus electrodes of the three guns are electrically connected. The high-voltage anode of the color picture tube consists of the inside coating, to which the aperture mask and phosphor screen are connected.

The three electron beams are aligned so that they are equidistant from each other and from the central axis of the gun structure. In order to obtain this result, each beam is acted upon by a separate beam-positioning magnet, which is mounted outside the neck of the tube. By proper adjustment of the three magnets, each beam is positioned with respect to the other two beams.

With the three beams correctly aligned with respect to each other, the entire system of beams has to be oriented with respect to the central axis of the tube. This is accomplished by a purity coil, or magnet, which is placed around the neck of the tube and affects all three beams equally. The adjustment of this control will move all three beams the same amount until they are properly aligned with the central axis of the tube.

The three beams are brought into focus by the action of the No. 3 grids. These grids are electrically connected, which means that all three grids will have the same potential with respect to ground. After being focused, the beams enter a convergence field, which causes them to cross over (converge) at the aperture mask and strike the dots of the correct color. In the latest color tubes, this convergence field is obtained through the use of electromagnets, which are mounted around the neck of the tube.

During horizontal and vertical deflection of the three beams, they will converge at the center of the screen but not at the edges.

To eliminate this condition, dynamic convergence coils are positioned around the neck of the tube. These coils are powered by currents from the horizontal and vertical output stages. The waveforms of these currents are adjustable so that convergence can be obtained over the entire screen.


Fig. 14-41. Constructional plan of an aperture-grille picture tube.

Another type of color picture tube is called the in-line design, or Trinitron. As shown in Fig. 14-41, the chief feature of this picture tube is an aperture grille instead of a shadow mask. Also, the three electron guns are mounted in line horizontally. Note that all three electron beams pass through the same slot in the grille at a given time, but they pass through at different angles. Therefore, each beam strikes only the phosphor stripes of its own color.

The same chroma and Y waveforms are used with the aperture-grille picture tube as with the shadow-mask picture tube. However, the convergence arrangement is quite different. Note that there has been a trend toward the use of elaborate shadow-mask picture tubes that have their deflection yokes and convergence units permanently mounted on the neck of the picture tube. This type of picture tube is converged at the factory and does not require reconvergence during its useful life. When this type of picture tube is replaced, the yoke and convergence assembly are discarded with the defective picture tube.


Fig. 14-42. Sampling the R-Y and B-Y signals.



Fig. 14-43. Typical circuitry for color stages.

TRANSISTOR COLOR-TV RECEIVERS

Transistor color-TV receivers use various circuits that are essentially the same as the circuits explained previously for transistor black-and-white receivers. However, the chroma processing circuits may be basically different from those in tube-type color receivers. Therefore, a typical chroma processing circuit used in a modern transistor color-TV receiver will be explained in this section.

It is helpful to start with a brief analysis of chrominance-signal demodulation, as depicted in Fig. 14-42; a numerical sketch of this sampling action follows the list. Note the following facts:

1. The subcarrier-oscillator voltage is injected into the R-Y demodulator in the R-Y phase.

2. The subcarrier-oscillator voltage is injected into the B-Y demodulator in the B-Y phase.

3. Conduction of the R-Y demodulator occurs briefly on the peak of the injected subcarrier voltage.

4. Conduction of the B-Y demodulator occurs briefly on the peak of the injected subcarrier voltage.

5. When an R-Y signal is applied to the demodulators from the bandpass amplifier, the R-Y signal is sampled on its peak by the R-Y demodulator. On the other hand, the R-Y signal is going through zero at the instant of sampling in the B-Y demodulator, and there is zero output from the B-Y demodulator.

6. When a B-Y signal is applied to the demodulators from the bandpass amplifier, the B-Y signal is sampled on its peak by the B-Y demodulator. On the other hand, the B-Y signal is going through zero at the instant of sampling in the R-Y demodulator, and there is zero output from the R-Y demodulator.

7. A minus (R-Y) signal is sampled on its negative peak by the R-Y demodulator and is going through zero at the instant of sampling in the B-Y demodulator. There is a negative R-Y output from the R-Y demodulator and zero output from the B-Y demodulator.

8. A minus (B-Y) signal is sampled on its negative peak by the B-Y demodulator and is going through zero at the instant of sampling in the R-Y demodulator. There is a negative B-Y output from the B-Y demodulator and zero output from the R-Y demodulator.
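The sketch below (an added illustration, not a circuit from the receiver) puts numbers on the sampling action just listed: the chroma waveform is evaluated at the instants when each injected reference phase peaks, so an R-Y input produces full output from the R-Y demodulator and essentially zero from the B-Y demodulator, and vice versa.

```python
import math

F_SC = 3.579545e6   # subcarrier frequency, Hz

def sample_demodulators(chroma_amplitude, chroma_phase_deg):
    """Sample a chroma signal at the peaks of the R-Y and B-Y reference phases.

    For this sketch the B-Y reference is taken at 0 degrees and the R-Y
    reference at 90 degrees; only the relative phases matter.  Sampling on
    the reference peak stands in for the brief conduction of each demodulator.
    """
    def chroma(t):
        return chroma_amplitude * math.cos(
            2 * math.pi * F_SC * t - math.radians(chroma_phase_deg))

    period = 1 / F_SC
    t_b_peak = 0.0                   # instant when the B-Y (0 degree) reference peaks
    t_r_peak = period * 90 / 360     # instant when the R-Y (90 degree) reference peaks
    return chroma(t_r_peak), chroma(t_b_peak)   # (R-Y output, B-Y output)

print(sample_demodulators(1.0, 90))    # pure R-Y input:     ~(1.0, 0.0)
print(sample_demodulators(1.0, 0))     # pure B-Y input:     ~(0.0, 1.0)
print(sample_demodulators(1.0, 270))   # minus (R-Y) input:  ~(-1.0, 0.0)
```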

Next, let us observe the chrominance demodulation arrangement depicted in Fig. 14-43. Note that the color signal is fed into the color IF amplifier from the video-detector output amplifier.

In other words, the color IF amplifier operates in the same manner as the bandpass amplifiers discussed previously. After the chrominance signal is stepped up through two stages of amplification, it is mixed with the brightness (Y) signal from the contrast control. In other words, the red, blue, and green demodulators are energized by the complete color signal, as depicted in Fig. 14-44, which is a basic difference compared to the other demodulator arrangements described previously.


Fig. 14-44. The complete color signal.

Because the complete color signal is fed to the demodulators in Fig. 14-43, we find that a combined demodulation-matrixing action takes place. In other words, this system employs color demodulators instead of chrominance demodulators. Basically, the subcarrier oscillator injects the red phase into the red demodulator, the blue phase into the blue demodulator, and the green phase into the green demodulator, as shown in Fig. 14-45. Thus, matrixing is accomplished along with the demodulation process.


Fig. 14-45. Basic demodulation phases in color de modulators.


Fig. 14-46. Magenta is 180° from the green phase.

The red, green, and blue signals are then fed to video drivers and amplifiers and are finally applied to the cathodes of the color picture tube.

RGB demodulation entails a problem that is not encountered in conventional chrominance demodulation; that is, spurious pulses, called blips, are produced in a simple RGB demodulation system. Therefore, circuit means must be provided to cancel out the blip interference. This is done by reversing the polarities of the demodulator diodes in the green demodulator section, as seen in Fig. 14-43. In turn, we would obtain a magenta color-signal output from the green demodulator (Fig. 14-46) unless the injected subcarrier phase is reversed. Therefore, the subcarrier is injected into the green demodulator in the magenta phase, and a green color-signal output is obtained owing to the fact that the diode polarities are reversed.

The automatic chroma control (ACC) section in Fig. 14-43 is simply an AGC arrangement to maintain the chrominance signal at a fixed level in case the incoming signal tends to fade. If the color burst decreases in amplitude, the color IF gain increases.

The color killer operates as explained previously. Similarly, operation of the color sync and color oscillator (subcarrier oscillator) is the same as described previously.

SUMMARY

When color television was first developed, it had to meet two requirements: The color signal had to be compatible with the monochrome system, and it had to be inserted within the 6-MHz channel bandwidth.

Three color attributes are used to describe any one color or several colors: hue, saturation, and brightness. Hue is used to identify any color under consideration. Saturation is a measure of the absence of dilution by white light and can be expressed as rich, deep, vivid, or pure. Brightness defines the amount of light energy contained within a given color and can be expressed in terms of bright, dark, or dim.

The color picture signal consists of two separate signals: luminance and chrominance signals. The luminance signal is the portion of the color-picture signal utilized by monochrome receivers. The chrominance signal must represent only the colors of a scene. Therefore, the luminance signal is subtracted from each of the three output signals of the color camera.

QUIZ

1. What are the three basic primary colors?

2. What two basic requirements were needed in developing color TV?

3. What is meant by the chrominance signal?

4. What are the three color attributes used to describe any one color or several colors?

5. Brightness is a characteristic of what two factors?

6. What are the two requirements for the primary colors used in color mixing?

7. The color-picture signal consists of two separate signals. What are the two signals?
