Loudspeaker Testing and the Listening World (High Fidelity, Jun. 1981)






It takes a heap of computations to make a speaker measurement in a home--or anything approaching one.

by Robert Long


------ Ed Foster of Diversified Science Laboratories glances up from the Apple computer that forms the heart of HF's new real-room speaker testing procedure.

Beginning with this issue, we are turning our backs on the anechoic chamber loudspeaker measurements that we have used (with some updating over the years) since the late Ben Bauer and his colleagues at CBS Laboratories (now CBS Technology Center) devised them for us a decade ago. If there is a tinge of regret in that statement, it's not to be wondered at. Our position is a little like that of someone trading in his first car for a newer and better model: Sentiment must be kept at bay by a firm grasp on the improvements at hand.

The seminal improvement is that Diversified Science Laboratories now is measuring speakers in a real listening room, rather than an anechoic chamber.

That statement, so easily made, implies a vast quantity of work, much of it done by an Apple computer that was far over the horizon ten years ago. At that time, the concept of measuring in a real room was unthinkable because there was no practical way to remove the room from the measurements and, therefore, make them reflect loudspeaker behavior alone.

That's what anechoic chambers were (and are) for, but they necessarily subjected the speakers to an artificial environment, making the test results useful only after their inherent artificiality had been taken into account.

All audio measurements are, to some extent, creatures of the process.

Conversely put, the ideal of a single measurement that would say all that must be said for comparative product rankings is still very, very far off because we don't know how to measure all of the parameters that must be taken into account or how to weight them against the real needs of those on whose behalf we make the measurements. (This portion of the ideal can never be achieved, of course, since readers don't share a single set of criteria and, therefore, an individual weighting scale would be required for each reader.) So we measure what we have the instrumentation and the understanding to measure, knowing that it will never precisely match the desiderata.

That improvements in the match could be gained by moving away from anechoic testing was manifest before DSL began development of its methodology. Several companies had begun using computers to accumulate, correlate, and adjust loudspeaker data gathered in typical listening rooms or, at least, in environments that simulated their properties. Among these companies was Acoustic Research, and when DSL approached Tim Holl, AR's vice president of engineering, for advice on this type of testing, he and chief engineer Alex De Koster gave generously of their time and their information; they even offered to share with DSL some of the computer programs they had developed for the purpose.

This gave us a solid foundation for the building of the new test method, and the superstructure outlined below rose directly from it, retaining some familiar features from the old method in an effort to retain a degree of comparability (albeit a modest one) with the data that we had been publishing on the basis of the anechoic measurements. The very fact that the testing environment is "echoic" guarantees that measurements will not be identical to those in anechoic space even if all other variables are kept identical. Some ballpark comparisons thus are possible, but product ranking on the basis of measurements alone--even when they are identically derived--is a task into which angels, however accomplished at pin-sitting, are not likely to rush.

Now let's examine one at a time the specific parameters that will appear among the DSL measurements, noting how the measurement is made and how it should be regarded within the context of the overall report. Keep in mind that, since the speaker no longer is, acoustically, floating in infinite space, some relationship between the speaker and the listening-room boundaries must be postulated. DSL follows the manufacturer's instructions in this regard--if there are any. Too often, of course, the manufacturer represents the model as all things to all placements, and DSL must make its own decision. In that event, a "standard" position tends to put the tweeter near listening height (approximately 1 meter off the floor) and the speaker is set either with its back against the wall or four feet in front of it--whatever seems most consistent with the design approach and whatever instructions are supplied.

Room response characteristics, as the graphs in this month's test reports show, are measured from two positions: directly in front of the speaker and approximately 30 degrees off to one side.

The mike is kept 15 feet from the back wall (and thus in the "far field," instead of the rather arbitrary 1 meter used in our former tests, among others, which were far-field at short wavelengths but essentially near-field at very long ones) and at listening height. This does not mean that it is precisely 1 meter off the floor; the mike is moved about in an imaginary window about 1 foot square and centered on the listening position.

And at least 100 measurements are made this way at each of the two basic positions. Since full-spectrum pink noise (rather than frequency sweeps or individual noise bands) is used as the source, all frequencies are excited at once; spectrum analysis "chops" the information into one-third octave bands and averages the 100-plus tests in each, while the movement of the testing mike averages the local spatial effects that otherwise would color the measurement. The computer accomplishes all this data averaging in an astonishingly short time.
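The averaging step described above can be sketched in a few lines. This is a hypothetical illustration, not DSL's actual program: the band count, pass count, and levels are invented, and the only point demonstrated is that repeated spectra are averaged in the power (pressure-squared) domain, band by band, before converting back to decibels.

```python
import numpy as np

# Illustrative sketch of third-octave spectrum averaging; all values invented.
rng = np.random.default_rng(0)

n_passes = 100   # repeated measurements as the mike moves about its window
n_bands = 30     # roughly 25 Hz to 20 kHz in one-third-octave steps

# Each pass yields one level (dB SPL) per third-octave band.
spectra_db = 85.0 + rng.normal(0.0, 2.0, size=(n_passes, n_bands))

# Average in the power domain, then convert back to dB -- the standard way
# to average sound pressure levels.
power = 10.0 ** (spectra_db / 10.0)
avg_db = 10.0 * np.log10(power.mean(axis=0))

print(avg_db.shape)  # one averaged level per band: (30,)
```

The spatial averaging from the moving microphone and the temporal averaging of the 100-plus passes both collapse into that single mean per band.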

The computer also corrects for any response irregularities of the testing system: the pink-noise generator, the microphone, the spectrum analyzer, or the room. This last correction is the trickiest. It is based on a set of five reverberation-time measurements at each of the points in the room where microphones may be placed. Room modes and the room's damping characteristics are identified in the reverberation data and stored in the computer as correction information. Overall room-coupling effects are not corrected, of course; to take the room completely out of the response data would defeat the purpose of testing in the "real" environment.

When you look at the response curves, you will see immediately that the calibration has been altered. Formerly, we showed absolute sound pressure level for a nominal 0-dBW drive and a microphone distance of 1 meter. Since our "0 dBW" now is different, following the same practice would tempt unwarranted comparisons with past reports.

Instead, we take the maximum and minimum response figures within the working range of the speaker (generally, the range used in the sensitivity measurement) and arbitrarily call the mean between these two extremes our 0-dB reference. In the graph, you can easily see whether the spread above and below this reference is greater or less than ± 5 dB (the calibrations adjacent to the 0-dB reference).
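The arithmetic behind that reference is simple enough to show directly. In this sketch the response levels are invented; the calculation itself is exactly the one just described: the 0-dB line is the mean of the highest and lowest levels within the working range.

```python
# Hypothetical response levels (dB SPL) across the speaker's working range.
levels = [82.0, 85.5, 88.0, 86.5, 84.0, 87.5, 83.5]

ref = (max(levels) + min(levels)) / 2.0   # mean of the two extremes = 0 dB
spread = max(levels) - ref                # excursion above (and below) reference

# The curve as re-plotted about the 0-dB reference:
relative = [round(x - ref, 1) for x in levels]

print(ref, spread)  # 85.0 3.0 -- this example falls within +/- 5 dB
```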


-------- Diagram shows one "standard" position of speaker and mike during on axis and off-axis room response analysis.

In addition to the smoothness of the on-axis (solid) curve, look at its general lie and at any tendency to tilt one way or the other toward the frequency extremes, where response anomalies that affect perceived tonal balance may be visible. Comparison of the two curves will give you some idea of the speaker's beaminess; the closer the curves lie, the more uniform the distribution of sound in space and the lower the likelihood that tone color will alter radically depending on your listening position.

Sensitivity is measured with band-limited pink noise (250 Hz to 6 kHz), with the microphone at 1 meter, just as it used to be. One major change in the technique affects sensitivity results in particular, however. Our past reports keyed all their figures to "nominal impedance" (defined as the impedance minimum falling just above woofer resonance) and calculated power in watts from the voltages needed at that impedance--whether or not the frequency under consideration happened to coincide with that of the impedance minimum. So if the nominal impedance measured 5 ohms, "100 watts" would require a drive of 22.36 volts (the square root of the product of 100 times 5), which would deliver only 50 watts at any frequency where the impedance happened to rise to 10 ohms.

We now dispense with the notion of "nominal impedance" altogether and specify actual test voltages instead of their theoretical power equivalents.

Since the reference resistive load used in testing amplifiers is 8 ohms, DSL makes this test with the drive--2.8 volts rms--necessary to deliver 0 dBW (1 watt) into an 8-ohm load. Thus both systems, old and new, use nominal 0-dBW drives; but since their actual calculation differs, so will the test results. And since most speakers' nominal impedance is below 8 ohms (though their average impedance may not be), the new technique tends to yield higher numbers.
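The figures quoted above follow directly from the V = sqrt(P x R) relation, and working them out side by side makes the old-versus-new difference concrete. All the numbers here come straight from the text:

```python
import math

# Old method: "100 watts" keyed to a 5-ohm nominal impedance.
v_old = math.sqrt(100 * 5)        # 22.36 V rms, as quoted in the text
p_at_10_ohms = v_old ** 2 / 10    # only 50 W where impedance rises to 10 ohms

# New method: fixed voltages referenced to an 8-ohm resistive load.
v_0dbw = math.sqrt(1 * 8)         # ~2.83 V rms for 0 dBW, quoted as 2.8 V
v_20dbw = math.sqrt(100 * 8)      # ~28.3 V rms for 20 dBW (100 W)

print(round(v_old, 2), round(p_at_10_ohms), round(v_0dbw, 1), round(v_20dbw, 1))
# 22.36 50 2.8 28.3
```

The same 22.36-volt drive thus represents anywhere from 50 to 100 "watts" depending on where the impedance curve sits, which is exactly why the new method specifies voltage and drops the power fiction.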

Continuous output follows past practice in driving the speaker only to "20 dBW or 100 watts" (28.3 volts rms in the new procedure) or to excessive distortion (10% THD or "buzzing"), whichever comes first, and measuring the sound pressure level at this drive.

Many speakers will accept more without misbehavior, of course, but it's hard to imagine why a home system-even a very inefficient one-would need to handle more on a continuous basis.

Many would surely self-destruct if we made it a practice to feed them with higher continuous levels, even at the relatively "safe" test frequency of 300 Hz.

Pulsed output, again at 300 Hz, can pursue these upper power reaches more safely because of the brief duty cycle of the pulse, which, to that extent, more closely resembles musical signals.

This test can be (and frequently is) conducted at the maximum unclipped output of the driving amplifier. DSL stops short of this drive level if the distortion becomes excessive. Since audible change may be evident in the pulse reproduction before outright clipping occurs, audibility (rather than visibility on an oscilloscope) has been adopted as the criterion. Evidently the ear is more sensitive than the oscilloscope; the maximum allowable pulse distortion seems distinctly lower by this criterion than by the former visible-deformation judgment method. So pulse-handling comparisons should not be attempted with a combination of old and new measurements.

Distortion, like sensitivity, is measured at 1 meter. Drive voltages for the desired sound pressure levels are calculated from the sensitivity data (so that these levels will be averaged in the band--250 Hz to 6 kHz--used for the sensitivity test), and a series of tones one-third octave apart are fed to the speaker at these voltages. The second and third harmonics of the test frequencies are measured with a spectrum analyzer and the results stored and plotted with the aid of the computer.

Because of the quantity of data involved, we continue to give overall characterizations of it in the main text, rather than trying to list or plot it in some form in the data column.

-------------

(High Fidelity, USA print magazine)

Also see: The Critics Go Speaker Shopping [June 1981]

A Spring Speaker Survey [June 1981]




