TECHNICAL TALK: What Price Distortion? (Oct. 1981)



By Julian D. Hirsch

What Price Distortion?

A READER recently posed an interesting question: commenting on the "distortion race," in which amplifier manufacturers seem to be striving endlessly to reduce the distortion in their products to zero, he observed that some of the most highly esteemed (and expensive) amplifiers make no claims to have vanishingly low distortion, and that in fact there appears at times to be an inverse relationship between price and distortion! His question naturally was, "Why do these very expensive amplifiers have such 'bad' specifications?" He was referring to those selling in the $2,000 to $5,000 range (for a power amplifier only) with distortion ratings of 0.2 to 1 percent, while others selling in the $300 to $600 range are often rated at between 0.001 and 0.05 percent.

Several possible explanations were suggested: (1) some manufacturers claiming extremely low distortion may not be entirely truthful, (2) manufacturers of high-price amplifiers with high distortion ratings may be too conservative in their specifications, or (3) distortion ratings may not be a key criterion of good amplifier performance.

My experience indicates that neither (1) nor (2) is a valid explanation. Even in the lower price ranges I usually find amplifiers to be honestly and even conservatively rated in accordance with the rigorous FTC rules for power and distortion ratings, and distortion ratings of 0.02 percent and less are not uncommon. The few tests I have done on very expensive amplifiers with unexceptional distortion ratings indicate that they behave pretty much as rated, which is to say with much higher distortion than one will usually find in others selling for a fraction of their price.

Most of the very expensive amplifiers on the market are sold (and bought) on the basis of their presumed special listening qualities, and the usual "specsmanship" factors are not in force to the extent they are in lower price brackets. To me they sound just fine, and since I have yet to hear any amplifier whose sonic qualities would make it a clear choice over any other, it follows that the higher measured distortions in some of the ones I have used have not affected their sound in any way I could detect.

THAT leaves us with the third hypothesis: harmonic distortion, at least below some threshold level, really doesn't matter very much in a high-fidelity amplifier. That view may shock some readers who are overly impressed by low distortion specs, but it is a fact of life for anyone who has had experience in evaluating amplifiers. Note that I said evaluating, not designing. Many amplifier designers I have known are somewhat lacking in objectivity, especially where their own creations are concerned. It is therefore easy for them to "hear" differences, always in favor of their own products, and it is equally easy for them to devise "objective" measurements that will validate this subjective response. In my experience, given two amplifiers with similar power-output ratings, flat frequency response, and negligible noise levels, one of which has a distortion of 0.001 percent and the other a distortion of 1 percent, it is unlikely that any difference between them would be audible in a controlled, double-blind A-B listening test if they were both operated below their clipping points.
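To put those two figures on a common scale: a distortion rating expressed in percent converts to a level relative to the fundamental through 20 log10 of the ratio, so 1 percent sits 40 dB below the fundamental while 0.001 percent sits a full 100 dB below it. The short sketch below simply performs that conversion; the particular ratings fed to it are illustrative, not drawn from any specific amplifier.

```python
import math

def thd_percent_to_db(thd_percent: float) -> float:
    """Convert a THD rating in percent to dB relative to the fundamental."""
    return 20 * math.log10(thd_percent / 100.0)

# Illustrative ratings only: the two figures compared above, plus a
# middling specification for reference.
for thd in (1.0, 0.05, 0.001):
    print(f"{thd:>6}% THD -> {thd_percent_to_db(thd):7.1f} dB re fundamental")
# 1% is 40 dB down, 0.05% about 66 dB down, 0.001% a full 100 dB down.
```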

What I have been discussing is simple harmonic distortion caused by a curvature in the amplifier's transfer characteristic. This usually produces lower-order distortion products, principally second and third harmonics, which are relatively inoffensive to the ear. Crossover notches and other sharp discontinuities, on the other hand, create many higher-order harmonics, which may be audible even at low levels (although the audible importance of these effects has been greatly exaggerated, given modern amplifier performance). Ordinary intermodulation distortion (IM) is simply a different way of measuring the same electrical "problem" using different test signals.
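A minimal numerical sketch of that distinction, assuming two invented transfer characteristics rather than any measured amplifier: a gently curved (cubic) characteristic and a crude crossover "dead zone." The smooth curve puts essentially all of its distortion into the third harmonic, while the dead zone spreads slowly decaying products well up into the higher odd harmonics.

```python
# Sketch only: two invented transfer characteristics, not measurements
# of any actual amplifier.
import numpy as np

fs, f0, n = 48_000, 1_000, 48_000                     # sample rate, test tone, 1 second
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

smooth = x - 0.03 * x**3                              # gentle curvature
notch = np.sign(x) * np.maximum(np.abs(x) - 0.02, 0)  # crude crossover dead zone

def harmonic_levels_db(y, max_order=9):
    """Levels of harmonics 2..max_order in dB relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(y)) + 1e-20         # bin spacing is exactly 1 Hz
    fundamental = spectrum[f0]
    return [20 * np.log10(spectrum[k * f0] / fundamental)
            for k in range(2, max_order + 1)]

print("smooth curvature:", [int(round(d)) for d in harmonic_levels_db(smooth)])
print("crossover notch :", [int(round(d)) for d in harmonic_levels_db(notch)])
# The cubic curve yields a single low-order product (the third harmonic);
# the dead zone produces a whole series of higher-order odd harmonics.
```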

So why do we have this bizarre situation, in which amplifiers selling for a few hundred dollars are rated at 0.001 to 0.05 percent distortion while others selling for ten times as much carry distortion ratings ten to a hundred times greater? There is probably no single, simple answer to the question, since many factors other than distortion must be considered.

For example, the amplifiers in my correspondent's list range in power output from about 50 to 250 watts per channel, and that alone can account for a substantial price difference. On the other hand, even if price and power do tend to follow each other roughly, there are so many exceptions to the rule that it must be viewed with suspicion.

It is possible that the explanation may lie simply in the relative ease with which ultra-low distortion can be secured with the output transistors and circuit designs currently available, especially when they include large amounts of overall negative feedback.

Recognizing the sales appeal of extremely low distortion ratings, especially to the lay public, many manufacturers cannot resist the temptation to shoot for a sub-0.002 percent specification; if it can be achieved without any large cost penalty, why not? However, if one believes that "transient intermodulation distortion" (TIM) and related phenomena that are said to result from the use of large amounts of negative feedback are serious problems in modern hi-fi systems, then it follows that we would be better off with much less overall negative feedback. This would result in higher harmonic distortion, but if an increase of one or two orders of magnitude is still not audible in that area, and if there is a beneficial reduction or even elimination of TIM, the trade-off is well worth it, according to proponents of this philosophy.
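The arithmetic behind that trade-off is the standard feedback relation: distortion generated inside the loop comes out divided by roughly one plus the loop gain. The sketch below uses invented gain, feedback-fraction, and open-loop-distortion figures purely to show the scale of the effect.

```python
# Standard textbook feedback relation, with invented numbers; no real
# amplifier is being modeled here.

def closed_loop(open_loop_gain: float, beta: float, open_loop_thd_pct: float):
    """Closed-loop gain and THD when the distortion arises inside the loop."""
    improvement = 1 + open_loop_gain * beta        # the feedback factor
    return {
        "gain": open_loop_gain / improvement,
        "thd_percent": open_loop_thd_pct / improvement,
    }

# A hypothetical output stage with 1 percent open-loop distortion, closed
# around a feedback network set for a gain of about 100:
print(closed_loop(open_loop_gain=2_000, beta=0.01, open_loop_thd_pct=1.0))
print(closed_loop(open_loop_gain=200_000, beta=0.01, open_loop_thd_pct=1.0))
# Modest loop gain leaves roughly 0.05 percent; heavy loop gain drives the
# same stage below 0.001 percent, which is exactly the regime the TIM
# camp distrusts.
```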

Luckily, the present state of power-transistor and circuit development makes it possible to have low distortions of all kinds without "excessive" feedback. All it takes is money. This may be part of the reason for the high price of some amplifiers, but I don't think so. The extremely high prices of a few amplifiers more probably result from very limited production runs that entail a large amount of hand labor, plus the generally high quality of the mechanical and electrical materials used in the product (many of the parts and transistors are much more expensive in small quantities than they are when bought in lots of many thousands). And, of course, the need to amortize the heavy development and engineering costs of a sophisticated product over a few units results in an inordinately high selling price as well.

My correspondent notes, by the way, that all the amplifiers he listed (about fifteen, from as many different manufacturers) are excellent products. Keep this in mind if you decide to make a similar study of distortion ratings; do not, in other words, give them any undue importance. Price, appearance, power, and many other things can, or should, outrank distortion in the selection process. As I have often said, reliability and ruggedness are paramount for me; no amplifier is worth having if it regularly goes "down" under reasonable home operating conditions (or during lab tests, although I tend to make allowances for their sometimes unrealistic severity).

One final thought: have you considered how many $400 amplifiers you could afford to replace for the price of just one $4,000 amplifier? Even if the cheaper amplifier were less reliable than the expensive one (and in my experience the reverse is more likely to be true), you could afford to keep several on hand as spares and never be without a system in the event of an amplifier failure.

Source: Stereo Review (USA magazine)
