
The Audio Critic Seminar on the State of the Art: Part 1, Part A (Vol. 2, No. 1, Winter/Spring 1979)



It lasted 15 hours, with eight stalwart men of audio contributing their best thoughts, but when it was over we had barely scratched the surface. Even so-some surface, some scratch! We're publishing the edited transcript in two installments, of which this is the first.

On a clear, sunny Monday in the latter part of the winter, the following eight persons sat around a long table covered with green baize and yellow note pads at the Editor's house and discussed audio from 9 AM until midnight, with only brief interruptions for meals, leg stretching and other calls of nature.

In alphabetical order:

Peter Aczel, your Editor and moderator of the seminar.

Mitchell A. Cotter, polymath technologist, possibly the only audio practitioner equally at home in tensor calculus, quantum mechanics, solid-state physics, electronic circuitry, precision tools-you name it-as well as music, now in the business of manufacturing phono system components under his own name.

Julius Futterman, senior member of the group and inventor of the famed Futterman OTL tube amplifier, a man who knows vacuum tube circuitry like few, if any, others.

A. Stewart Hegeman, second in seniority, a legend among audiophiles for the past 30 years, a pioneer whose engineering roots go back to the old Western Electric days and whose unconventional speakers, amplifiers, preamps and other audio products have been celebrated under names as diverse as Lowther Hegeman, Brociner, Westminster, Citation, and Hapi.

Dr. Matti Otala, former professor at the University of Oulu, Finland, and now director of the Technical Research Center of Finland, an all-around scientist who has virtually lived inside amplifiers, done extensive psychoacoustic research, and is probably best known in audio circles for his seminal work on TIM.

Andrew S. Rappaport, by far the youngest participant, well known to readers of this publication both for his remarkably original and sonically superior audio designs and for his thought-provoking letters, probably the only authentic whiz kid in audio.

Max Wilcox, our music man and Contributing Editor, producer of Artur Rubinstein's famous albums and innumerable other records at RCA, and now an independent free-lancer experimenting with super simple, purist recording techniques.

Bruce I. Zayde, mathematician and EE, accomplished organist, trombonist and licensed commercial jet pilot, one of the earliest champions in this country of the Thiele/Small mathematical approach to low-frequency speaker design, and one of the few people we trust on the subject of filter theory and other heavy stuff.

By courtesy of Tandberg of America, Inc., we had a brand-new Tandberg TD 20A four-track tape deck recording in stereo what was being said, at 3 3/4 ips on 10.5” reels, through a pair of Tandberg TM6 dynamic microphones. (This is in no way a review or an endorsement of the Tandberg equipment, but we can certainly report that everything worked smoothly and reliably; indeed, we're willing to venture the subjective opinion that the TD 20A is an unalloyed pleasure to use.) What follows is a very lightly edited transcript of this continuous recording, which was interrupted only during meals and intermissions and takes approximately 10 hours to play in its entirety. Asterisks (***) indicate omitted sections, which are relatively brief and not terribly important.

Here it is, then: possibly the most important contribution of The Audio Critic so far to the realistic education and general enlightenment of audio enthusiasts.

EDITOR: Gentlemen, welcome to the first seminar of The Audio Critic. I say first because I'm hoping that similar meetings will be taking place in the future, and just for openers I'd like to say that I'm very proud and happy to have a gathering of people such as this to launch this program, because I can't imagine a better group that could have been put together for this purpose. I'm really pleased that all of you could come. It wasn't easy, as you know, to get everybody together in the same place at the same time. The theme of this discussion is the State of the Art in audio. Now that should be interpreted very broadly, namely: What is possible in the light of present-day knowledge? Are these possibilities being implemented today? Is there a likelihood that they will be implemented in the near future? In other words, how should it be done? And how shouldn't it be done? We'll take the typical audio system piece by piece, and I thought, unless you gentlemen have a better idea, unless you have some sort of objection to the approach I'm about to suggest, I would like to start from the listener's point of view. The listener is faced with a loudspeaker, or a pair of loudspeakers, or a number of loudspeakers, which is the thing that he actually listens to, and we could start with that and trace the signal back to the power amplifier, preamplifier, the phono system and various other sound sources.

I thought that even before we do that, so that we don't duplicate observations and efforts and discussions later on, we could lay down some ground rules and talk about the listener himself. And that includes the ear, the listening environment, the air, and whatever else you would like to talk about. Is there anyone here who thinks that this may not be the best approach? Because we're going to be very flexible.

FUTTERMAN: It's as good as any, I suppose.

WILCOX: Why not? Try it.

EDITOR: I suppose the traditional way is to trace the signal from the source.

COTTER: You're going to start with the listener.

EDITOR: I think we should start with the listener because whatever is true of the listener will be true of the whole discussion.

COTTER: Let me mention something which I think is an ultimate distortion that occurs outside of the audio system as traditionally conceived. I encountered this problem when I was doing research on quad. And I think that it's a serious problem that will affect judgments about what is happening and its quality and character, and we have to, I think, account for it in establishing our criteria. The concept I know is around that an audio system is an absolutely transparent sonic channel. But I think there is a problem of human disorientation that occurs when you have, let's say, a perfectly transparent medium, although you don't have the supporting visual sensations. A good example of this would be-let us suppose that you are actually in a concert hall. We are simulating your home living room environment by means of some superior three-dimensional TV thing, so that there you are, sitting in the middle of this concert hall, but you're surrounded by this holographic, utterly complete visual simulation of your living room. And I'm doing it backwards deliberately, because you are there in the sonic environment.

It's completely transparent. The holographic system is utterly transparent to the sound, but you are sitting in your living room; in fact you're not in a concert hall chair at all, you're sitting in your favorite chair in your living room. And there you are with the living room constructed around you. I submit that this is a very disturbing illusion and that you would not hear the concert hall quite as well or quite the way you would want to, because the whole human response is not purely sonic. And there is a tendency, I think, to think that we are going to recreate a total sensory impression with the sonic illusion. And that's a defective idea.

OTALA: Are you talking about a physical, sensory illusion or a psychological illusion?

COTTER: Perhaps both, Matti, perhaps both, because I'm talking basically I think about a visual context that is all out of keeping with the sonic impression.

“. . . it has, in my opinion, been clearly demonstrated by various researchers that hearing theory as it stands now is not valid.”

And I tell a story about my now very large son. When he was very small, he was very at home in my lab, and utterly unafraid of anything except one place, the anechoic chamber, where he refused to go under any conditions, accompanied or unaccompanied, because it was so grossly strange. And I think that we have an effect very much like that when we're presented with a very perfect sonic illusion, but the visual stimuli are all out of keeping with it. And, in evaluating the perfection of either the recording/reproducing sonic channel, or the character of the image that's created, I think this becomes a very important factor. Max might comment because, from an aesthetic point of view, I'd be interested in what you feel goes on. I think there may be significant tendencies to compensate for, or overcompensate for, some of these missing ideas.

WILCOX: I wrote a whole article about this phenomenon in, I think, the second issue. It was about listening without the visual stimulus; in my experience, if you remove some of those visual stimuli, you really hear better. I don't know if you read that article or not. It was about the whole idea that your perception of what you hear is based a great deal on what you're seeing at the same time, too. On the other hand, what you hear is being filtered by what you're looking at and your attention is being diverted by what you're looking at. That's perhaps a whole other subject.

OTALA: Here we have two factors, I believe. The first being, of course, that the complexity of the total image is, as you say, a multisensory matter. But in our psychoacoustic experiments, which we have done during the last five years, it has become very clear and evident that hearing and auditory images are at least fifty percent-probably very much more-psychological. I mean, the problem is not that we would not hear accurately. The problem is what we seemingly make of those crude pieces of mosaic that we hear. We insert the missing details in our mind and not through our ears.

And this explains a lot of things which came out in those psychoacoustic experiments. For instance, a very trivial relationship which everybody knows: the more active the musician who was tested, the poorer was his sensory illusion of the sound itself, and the better was his illusion of the musical texture. That's one of the important things. A second important thing which came out was that if we picked people who were basically the extrovert kind of people, interested in many things, they proved to be very much more sensitive-up to two decades more-than a group of introverts. These kinds of things, they seemingly play a very, very important role.

COTTER: How you use the sonic information to construct your sense of awareness.

OTALA: That's true . . .

EDITOR: Let me ask this. From the point of view of the audio designer, isn't the task accomplished when an exact replica of the original sound field has been created around the ears of the listener?

RAPPAPORT: The thing is, sometimes that's not good enough.

COTTER: That's the point. I think that is the whole point. That's why I was anxious to have Max perhaps recapitulate, but I think that people who do recording often do exactly what Andy is saying. They correct for the absent visual stimuli.

RAPPAPORT: And also the interesting thing, relating to Matti's first point, is that a listener's experience of a reproduced event is really anticipatory. When you sit down with a record and you know what orchestra it is, what piece of music it is, you expect something from that, because you've heard the piece performed before, you've heard the orchestra before, or you've heard a recording on that particular label, or produced by that person. You expect something, and you listen to it, and I think-and I haven't done the psychoacoustic tests that Matti's done--I think you expect to hear that, and even if it isn't there you may hear it anyway.

And it depends on the type of listener you are. I think a musician is more keenly aware of the musical content, and an engineer would be more keenly aware of some of the technical aspects, and that kind of thing.

EDITOR: Does this mean--let us say--that technologists have succeeded in duplicating the original sound field in your listening environment? I'm not suggesting that this is possible today or will even be possible tomorrow. Let us say that there is an exact wave-for-wave duplicate of the sound field around you, and then for psychological reasons the listener says, “I'm not satisfied. This is not the real thing.” That does not imply at that point that the loudspeaker designer or the amplifier designer should go back to the drawing board and do something different. It may suggest manipulations outside that area, but it doesn't suggest that. Would you agree?

COTTER: I'm not sure.

FUTTERMAN: Yes, but I'd like to leave psychoacoustics out for the moment and read something that I think is apropos.

This is the IRE Transactions on Audio, March/April, 1961, approximately 18 years ago. The Editor's Corner: “Nothing New in Audio.” And I'll read it, it's quite short.

Editor's Note: To save space and avoid copyright problems, we'll just summarize the timeless editorial by Marvin Camras that Julius Futterman read into the record. It raises the basic questions about phony “breakthroughs” vs. genuine innovation in audio, about the technical criteria of “perfect realism,” and about fooling the ear with doctored sound. Accuracy from source through electronic system to listener is conceded to be desirable, but the obstacles are seen as somewhat unyielding. It could have been written in 1979.

FUTTERMAN: And this was eighteen years ago, and I think it's very apropos at the moment.

EDITOR: It certainly is. The thing is, if perfect reproduction in the objective sense is possible, if indeed we can duplicate, we can transport a sound field from here to there, what else is there left for audio technologists? Isn't that what this particular discipline is about? To try to transport an exact sound field from here to there?

COTTER: Not altogether.

OTALA: My opinion is that, yes, it is. If we change the sound field, at least in the main signal channel, then we contaminate it. Let everybody contaminate it in his own equipment, if he so wishes. But let's transmit it to him whichever way you choose-record, or radio, or whatever-in an uncontaminated form.

That's one of the important things. But the second point is that we apparently do not know how to do this. I would like to say that we are living in the Stone Age of audio, because we do not know what is relevant in this context. I'll just make one suggestion. Up to the end of the '60s, beginning of the '70s, it was okay if you did the amplitude transmission all right.

Well, then came TIM and other new findings, which showed that it is not sufficient alone that you reproduce amplitudes. You have to reproduce the first derivative, too. Now, okay, that is the rate-of-change type of thing. Right now, we have been pondering for a while whether the second derivative is important, and indeed it seemingly is. In mechanics-and remember that we are discussing a mechanical device-we have clearly defined properties of derivatives up to the seventh derivative. What goes beyond that, that's another question, and of course we have diminishing gains when we go further, to more and more planes, or spaces, in this respect. But who knows? The third thing is that it has, in my opinion, been clearly demonstrated by various researchers that hearing theory as it stands now is not valid. It certainly must be changed quite a lot. We have now had several theories of how the ear works: the theory of Cardozo and the theory of Keidel, both completely different from present thinking. They offer some explanations, but most probably-whatever the mechanism may be-some new finding will yield us some kind of hearing theory which is far superior to anything we have done in instrumentation today.

COTTER: I think that's a very important point. But I would like to get a little bit more understanding of what you've said. I don't think you really mean that what fact is known about hearing is invalid. What you're saying is that some of the theories that seek to construct a scheme-a predictive scheme, an analytic scheme-with the facts that are known, have holes, or inadequacies. No one who is doing hearing research, I think, thinks of it as a total theory.

OTALA: No, no, there is a hearing theory, which is generally accepted. But seemingly it is not valid. Seemingly . . .

Let me explain what Keidel says as being the most up-to-date theory of hearing that he knows. It's as simple as this: the mechanical device, the ear, is just a pure mechanical device. Forget about it, because its characteristics have practically no relevance in the hearing situation.

Practically no relevance meaning that, quite often, we find that hearing is better if you have hearing damage. I mean, the sensory illusion is better. Also, a fact which came clearly out in our psychoacoustic experiments: the worse the ears of the subject were, the better he was in pinpointing different deficiencies.

EDITOR: Matti, do you mean even the perception of phenomena that are many dB down?

OTALA: Yes. Especially those. Let me continue then. What Keidel says is that we should probably revise our hearing theory so that sound perception doesn't occur in the amplitude and frequency domain, as we've thought so far, but instead, the upper auditory pathway is composed of three detectors. And he names these as being the transient detector, the vowel detector, and the consonant detector. He shows some experiments which quite clearly show that this could possibly be the case. He says that hearing is an optimized computer, that it was developed by natural choice--the law of survival. And in the early times it was different kinds of transient sounds which were important, because of the fact that these conveyed clues of possible danger.

Later, the two detectors evolved for the understanding of speech. And they are especially trimmed, those computer programs, for extracting from a complex signal different patterns which might be considered as vowels, or consonants-I mean, just to understand speech. He doesn't claim that this is the whole picture, but what he says is that this is at least one logical approach to it. Therefore, we probably have to revise our thinking. I don't take any sides in that, neither pro nor contra, but I just note that if the present hearing theory is valid then our measurements must be wrong.

Or vice versa. We came to the conclusion that in stereophonic music, for instance, using very pure sources, but musical sources, we found 0.003% rms TIM distortion being audible. Now that is completely impossible because it is below the hearing threshold of the subjects.

COTTER: No, it isn't, because you proved it wasn't. The problem is in how we construct our analogy.

OTALA: Yes, but let me cite some other ...

HEGEMAN: Was that the only thing you heard?

OTALA: examples as well. We probably are dealing with a very much more complicated beast than we ever thought of. Because it is the brain we are dealing with.

COTTER: Let me say some things. Because I think it's appropriate that we talk about hearing theory, but there are various theories-there are theories that have come and have gone. Anybody who's lived through the last 30 or 40 years of hearing research and followed it knows that we had a great breakthrough occur through the efforts of Georg von Bekesy. Bekesy's efforts were very much concerned with answering the question of what is this thing, this cochlea, this mechanism-what goes on? Bekesy's early work was done on dead cochleas, relatively fresh dead cochleas, of human beings, that were examined with various mechanical types of studies to reveal the mechanism of the basilar membrane and the organ of Corti, which contains the hair cells that pick up some of this motion.

But there was a very substantial error, not in any neglect sense, but a very substantial error that arose because there's a very significant difference in the elastic properties of the basilar membrane and the cochlear process between live and dead tissues. The significance of it was not really fully worked out until somewhere over the last 10 or 12 years. And it amplifies, it extends the peripheral mechanism. In other words, if you look at the hearing sensory process as an ear auditory process-and there's some question as to whether that's adequate or not-for instance, the whole body responds. I mean, we talk about foot-tapping and body-shaking and gut-rubbing bass-that's a very important part of the . . .

HEGEMAN: We've had some rather weird discussions over the dinner table, how do you hear, and with what do you hear?

COTTER: You can hear with your liver, too.

OTALA: But let's put it this way. I think that we know quite well what the cochlea does, and we know how the . . .

COTTER: No, I wouldn't agree, Matti. I think we have yet to learn even what the cochlea does. But there is a split in the thinking of the hearing researchers: the psychologists, or the psychoacoustic research people, come at it from a very different point of view, from the standpoint of the sensory apparatus, divided up essentially into what they call the peripheral mechanisms and the cerebral mechanisms. And I would greatly concur, and I'll tell you some of the researches I know of and the things we've done. People look at the classical hearing loss curves-in fact they view as some tragedy what's called presbycusis, the progressive loss of hearing with age. It turns out that that's largely a noise exposure problem, and it's viewed with some alarm. But precisely the mechanisms you're talking about seem to be taking place, in that hearing acuity-the ability to make the discriminations-seems not to be affected in quite the way you would think. The psychologists understand this very clearly as an adaptive mechanism, and the body, the mind, the whole human apparatus, has its survival capability largely because we don't have just one cylinder. We have a great deal of redundancy.

OTALA: I don't talk about that at all.

I'm dividing, engineering-wise, the whole thing in a number of elements. Take the ear. I would say, well, we know reasonably well how the ear functions.

What happens next is the transmission of the sensory information to the brain.

And it's exactly there where Keidel found out these things. He doesn't make any claims of hearing, of the ear itself; he only says that he has done some neurosurgery and he has pinpointed these sensory responses, he has pinpointed reactions. He says that from the ear, the reactions, the neuron responses are basically those of amplitude and frequency. There's a transformation of the signal in the upper auditory pathway; the neuro-system conveys the responses into the brain. And there he finds three major nodes which react strongly to these phenomena that he describes. And what he says is that apparently the input to the brain is primarily characterized by these three variables, and not frequency and amplitude.

COTTER: You're saying that the people doing the hearing work today don't think in terms that resemble electrical network theory, which is where a lot of the findings and the studies and the interpretation and the experimental construction came from when hearing research started. It was dominated by telephone workers. And we owe a great deal of our background to Bell Laboratories.

FUTTERMAN: Most of what Mitch and Matti have been talking about is a little over my head-or over my ears. I know as far as my own hearing goes, if it came to a choice of listening to some rock music or Beethoven's Ninth, I would prefer the latter. And I hear-maybe not as well as younger people, but I hear. I think this business that we're talking about now could go on for hours.

EDITOR: I'd like to bring it into focus.

ZAYDE: In terms of the network presentation that you were describing, is it basically leaning towards lumped parameter assessments? Because that I think is suspect to some . . .

HEGEMAN: I don't think it is.

ZAYDE: You don't think so, or is that . . .

HEGEMAN: Well, I just quote from Dr. McIntyre's article on string instruments, which was published about a year or so ago, in which he comments that nobody knows the psychoacoustics of hearing, and that if you were trying to make yourself a model of the thing, you would make a model so goddamn complicated it would take ten years for you to associate and translate all the data that you were going to get on the thing. Now this agrees a great deal with what Matti said, where there's an analysis of string tone: as you hear the bite of the bow on it, you hear the impact, you hear the body of it, and the decay. And again, more or less your three divisions of hearing that you were talking about. And while Dr. McIntyre isn't trying to make a model for hearing, he's trying to make a model of a violin. And he admits that it's a very complicated and very difficult thing to put back into a mathematical presentation.

Yet dammit, any good listener can hear the difference between a good and bad violin, and a good and bad string tone.

OTALA: Let me inject here one thing.

My point for this rather long treatise was a very simple thing. We don't know at present how we are listening, what we hear. Therefore, we're trying to quantify the things with engineering methods, or engineering analyses, like the different variables that we know, and the different measurement methods, which are purely engineering sequences, may have no relevance whatsoever in hearing.

COTTER: I think the problem is that a lot of the predictive, analytic things that are based on these engineering, electrical system approximations to hearing suggest approaches that lead, I think, to a dead end. And that's what Matti is saying. You can begin to pursue the flatness of frequency response and the minimization of total harmonic distortion and those classical things to a point where they no longer matter, and in fact where perhaps many other things go to hell in the meantime. And you may not be looking at those things at all, or sensing those differences, but your ears will. The problem with an inadequate model is that you're likely to pursue the wrong thing, or you're in a quandary as to how to make improvements. Now Peter's original assignment here was a discussion of the State of the Art and then an approach to what is possible, what may be done, and what directions we might take. So we are saying, I think, all of us, that there are a lot of things wrong with these approaches because they are not based on hearing, on what you hear.

Because we don't actually have a model of hearing. Let me give one specific example which is something of a mystery-and it's a mystery of several kinds. There are phenomena called interaural harmonics and interaural beats. When you present stimuli dichotically--that is, a different stimulus to each ear--the mind, which is the hearing organ of most significance, constructs relationships that represent beats and interaural harmonics, which have no existence in physical reality. There is no cross-linking between the ears; even the bone conduction values are eliminated from this equation. So obviously there are peripheral mechanisms that produce kinds of phenomena that represent interpretations of whatever it is that's being transmitted. There's also the question of just what peripheral processing takes place. The organ of Corti, the basilar membrane and that whole “mechanical” mechanism do things to these time relationships, patterns, which obviously-not from Keidel's work but from Moeller's, and many others' . . . pattern recognition is what the whole human apparatus is about. And differential pattern recognition. We're all familiar with the fact that we can go into a room, there's a little fan going, a little background noise; you not only hardly notice it upon entering the room, you accept it as the environment. It's there, and you sort of lose track of it. But if it should be turned off, or increased or decreased, you instantly notice it. The nature of the human response is to notice differences.

Another thing that Matti mentioned which is very, very important is that when you look at the human hearing responses, even as measured in the frequency domain and that sort of thing, and you look at these peculiar relationships-the decreasing threshold at lower frequencies, the contraction of the range of loudness at low frequencies, and all of this kind of appearance-one has to ask a very important question that isn't often asked and hasn't been dealt with much but can produce a lot of understanding, and recently has begun to produce understanding. You look at this and you say, why? Why this relationship? What relationship to nature does it have? And as Matti mentioned, a number of people doing the research, notably in Holland and a few other places, have begun to look at this as a matched signal, matched filter, matched detector mechanism, very, very well adapted to the way things happen in nature. If you go looking for information, you only want to spend your money, as it were-use your mechanism-in a place where there really is some information. And this low-frequency loss property and this nonlinear gain thing at low frequencies are an ideally matched relationship if you take the outdoor world in the forest or on the plain as the environment in which you want to perform detection. And the frequency range, or the wavelength range, the time relationship range, of the human being is very much related to his size and his mobility and the distances and range at which threats and events will influence his behavior. And you can see some similarities in adaptive response of the hearing mechanism in the elephant ear and in the smaller animal ears as somewhat related to those variables. And this work is very, very recent. So we've been missing for many, many years, for decades, important ideas about the nature of hearing that have to do with the fact that somehow or other it has a value, it has a meaning, that these things are the way they are.

And we lose sight of this as long as we stick to these engineering interpretations and then try to translate them into amplifier measurements and so on.

RAPPAPORT: The interesting thing is that everything you're saying may very well be true or it may very well not be true. There are a few interesting points.

One is that where the human ear is concerned, as far as our pursuit-namely the pursuit of higher fi-it's totally unimportant in one sense, because the same human being under the same circumstances-and there are some emotional considerations that also enter into this-but the same person in the same sound field will react the same way, whether the sound field is original or reproduced. So in one sense, the human ear and our hearing mechanism cancel out of the equation when we're dealing with translating a musical event, or an aural event, into a reproduced event.

HEGEMAN: That would be great, Andy, except for the fact that everybody has their own subjective interpretation of what they're hearing, so therefore you would still get . . .

RAPPAPORT: For the moment I'm eliminating . . .

COTTER: Andy's talking about an ideal translation as being a substituted sound field. If it was totally, perfectly recreated, then you're the same in both cases.

RAPPAPORT: That's right. Allowing for the same emotional environment, the same subjective environment ... it is relevant in one sense, because right now we're dealing with imperfections, and the idea is that we can't lose sight of the fact that the only reason we need the ear is in order to determine what's important and how we're going to respond to various imperfections. There are kinds of distortions, I firmly believe, that occur in nature--and you touched on this-that our ear is going to be used to. I think that one of the reasons we can hear distortions in amplifiers through speaker systems, for instance, which have, at least quantitatively, ten times or an order of magnitude greater distortion is because the distortion is different. There are different kinds, and they're things we hear in nature and that kind of thing. I think in discussing the ear it's important to realize that the only reason we need to bring the ear into it is because what we're doing is far from perfect.

OTALA: Let me inject here something that might support you. Showing how wrong-how basically, terribly wrong--we have been doing things in engineering is the simple example of headphones.

All the listening in psychoacoustic research has been done using headphones.

We tried that, too, and we found out to our great astonishment that hearing, or the distortion threshold-that level of distortion that was audible, at least in TIM studies--was three to five times higher with all the headphones. We tried five different headphones, the best there were, and invariably, we had to inject three to five times more distortion into the signal before it became audible. What we learn from this is simple: that your model of differential pattern recognition is really valid. But somehow that model doesn't work when you have headphones.

But see how simple these things are. First we put headphones on our heads, say. We say, okay, the sound is fantastic, isn't it? We don't hear any distortion. Then we say, oh yes, that's because of the fact that headphones of course are very much better transducers, aren't they? Therefore we say the amplifiers are quite okay, and it's only the lousy loudspeakers that create the distortion. Whereas the situation is just the converse. There's nothing wrong with the headphones themselves; they are probably very much better than the loudspeakers. But it's the hearing geometry or differential pattern recognition which is important, and therefore the headphones and the hearing geometry in that sense have a masking effect on imperfections.

EDITOR: Wasn't this the root cause of the incorrectness of the Fletcher-Munson curves originally?

HEGEMAN: Are they incorrect?

OTALA: Well, perhaps ... they are or aren't. Who knows?

COTTER: They are.

HEGEMAN: Are they? It's impossible to set them up in any kind of listening condition.

EDITOR: The Robinson-Dadson curves show totally different low-frequency contours. Because of the headphone situation.

OTALA: Most probably.

COTTER: Well, based on the Robinson-Dadson free-field threshold curves and the comments Matti made, one of the facts before us, though, is that you need the head diffraction effect in order to work with the whole hearing mechanism. This data shows that very, very clearly. And the minute you remove that from the sound field, there are gross alterations in the way in which you are hearing.

FUTTERMAN: You mean with headphones you don't hear that?

COTTER: With headphones you don't hear that unless you've contrived a very, very different sort of system than what is used to put music over headphones.

HEGEMAN: I believe that no one really considers the value or the importance of the bone conduction aspects of the hearing mechanism.

COTTER: We've begun to now, in the last five or ten years.

OTALA: Remember, the important thing is really that, so far, there is not a single publication I know of that was done on distortion perception without headphones. They're always done with headphones.

COTTER: There was a good set of work done rather a long time ago by Feldtkeller at the Technische Hochschule in Stuttgart, actually done not by Feldtkeller directly, but done by a man named Gassler, who found, for instance, that in a lot of the classical nonlinear distortions there was a very great difference in the distortion threshold depending upon what the tones were that were presented. And they presented thirds and fifths and things of this sort, and he found another very interesting thing in this classical, traditional way, that there were closed contours to the distortion thresholds. And that the closed contours happened to center on a rather low value of sound field-that is, the centroid of these closed contours was in general in the region of 68 or 72-82 dB. Now when Robinson and Dadson did their work, and they began to assemble all the data on loudness that had been collected, they noticed very significant differences in loudness measurements-and loudness difference measurements. And they invented a sort of pulling function that said that, in effect, there was some optimum level at which you preferred to hear for the maximum acuity and that as things got softer than that or louder than that, you mentally contracted the scale to bring it into that range. It's very interesting and noteworthy that the centroid of that distortion threshold of Gassler's work and the centroid of Robinson's correction function all lie in the same general area of intensity. This shows in effect how very different the hearing function is, since it's these differences that we're working with, than the common data that is presented as in this function. We tend to hear, I think, this temporal pattern of loudness change very differently from the way this flat-network, two-port kind of construction is . . .

HEGEMAN: What I was going to say about that is, you have to be very careful to set your average level in order to make those curves mean anything. If you're off base 10 dB, hell, they don't mean a damn. And incidentally, Mitch, as you were talking about your hearing and so forth, and the low-frequency end, we all know very well that your ear reconstructs the fundamental when it's given a harmonic structure.
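Hegeman's last point--that the ear supplies a missing fundamental when given only its harmonics--can be checked with a toy autocorrelation model of pitch. The sketch below is purely illustrative; the 200 Hz fundamental, the sample rate, and the lag range are assumed values of ours, not figures from the seminar.

```python
import numpy as np

fs = 8000                      # assumed sample rate, Hz
f0 = 200                       # the "missing" fundamental, Hz
t = np.arange(fs) / fs         # one second of signal

# Harmonic complex: harmonics 2 through 5 of 200 Hz,
# with NO energy at 200 Hz itself
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# Crude autocorrelation pitch detector, searching 80-500 Hz
lags = np.arange(fs // 500, fs // 80 + 1)
ac = np.array([np.dot(x[:-lag], x[lag:]) for lag in lags])
best_lag = lags[np.argmax(ac)]
pitch = fs / best_lag
# pitch comes out at 200.0 Hz: the strongest periodicity in the
# waveform sits at the fundamental's period, even though no
# 200 Hz component is present in the spectrum
```

An ear model even this crude already "hears" the fundamental, which is one reason a small loudspeaker with no deep-bass output can still convey the pitch of a low note.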

FUTTERMAN: I think Peter Walker said it much more simply in his advertising, in the brochure on the Quad speaker. He says it's very important to adjust the volume so that it's right for your listening environment. And that's much more important than how the bass sounds, how the treble . . . In other words, the volume level is most important.

HEGEMAN: I have yet to hear a decent harpsichord record that doesn't sound as if you were paying a visit to a boiler factory and clang, clang, clang, instead of hearing the picked strings that a harpsichord represents.

OTALA: There is one record--Afka Records in Boston.

COTTER: You used that for the tests . . .

EDITOR: Gentlemen, could I try to pull all this together? The way it seems from this discussion is that the relevance of a correct model of the ear from the point of view of the audio designer would be that, if we did have a 100% accurate and relevant model, we could then concentrate on those aspects of design that could satisfy this model and not sweat endlessly over those aspects that are irrelevant to it. Because, as Andy says, the hearing mechanism is the same in Carnegie Hall as it is in this room. From that point of view . . .

FUTTERMAN: But that's why we need the golden ears!

EDITOR: From that point of view there is no difference. But instead of concentrating on 79 different things, if we had a correct model maybe we could zero in on 17 of them and have a perfect system of reproduction.

HEGEMAN: Hey, what's the good spot to sit in Carnegie Hall, after all?

FUTTERMAN: How much can you afford?

OTALA: Agreed on one condition: that is, that the hearing mechanism is not the same in Carnegie Hall and here. Because we just discussed the other extreme, the headphones. That's another extreme of a contracting listening environment, isn't it?

RAPPAPORT: The hearing mechanism is the same; the environment is different.

OTALA: Yes, but we introduce other things which we don't know . . .

COTTER: The minute you remove the head diffraction, I think you've greatly disturbed the aural perception.

RAPPAPORT: It all depends on how you define the mechanism-whether that includes the head diffraction or not.

FUTTERMAN: Is that true with the binaural recording?

COTTER: Well, only if you have a binaural head that translates. I think the essence of what we're saying is, and I think we all agree, that there are grave errors made in extending these very elemental kinds of frequency and loudness or amplitude relationships to the criteria for the system. And that we are probably missing-this was Matti's original point, since there seems to be a very important temporal pattern interpretive mechanism involved-we are missing probably the most important aspects of hearing that are not continuous sound effects. For one thing, all this data on loudness is based on a more or less continuous effect. And the whole nature of hearing is to sense differences and temporal pattern, and we're not looking at those mechanisms that make disturbances of that kind when we use this oversimplified analysis.

EDITOR: Mitch, are you suggesting that a system of audio reproduction is imaginable that is far less perfect, according to certain criteria, than some of the systems we have today, but more perfect in various neglected areas, that would sound more real to us than these other systems?

COTTER: Absolutely. I know for a fact that we can have a tenth and a quarter and even a half percent of certain kinds of what some people consider today to be ugly distortions, that are inaudible, absolutely inaudible, and that other factors which are not looked at are optimized that produce a totally different impression. Matti's point was, that when you read an rms value of something, in the form of TIM as read by classical technique, you are getting rid of time in interpreting, or in giving, that number. Because that is in effect a long-time average, a time average between stimuli.

“You're saying the ear is an oscilloscope, not a spectrum analyzer.”

OTALA: 200 milliseconds, though.

HEGEMAN: That's a long time.

COTTER: That's a long time perhaps for hearing. On a short-term basis, perhaps a 1-millisecond or perhaps even a shorter basis, that effect becomes magnified. Just, even in looking at the engineering parameters, to values like 10%. And we are not doing the right kind of thing in interpreting . . . Well, for instance, one of the great tragedies was the ready availability of spectrum analyzers.

It has misguided us. It is so convenient to buy a very expensive piece of machinery, perhaps that you've had to write very lengthy justifications for; they cost kilobucks, many kilobucks. Then you get the spectrum analyzer, and you proceed to use it. And it's very easy to see interesting, intriguing kinds of displays.

The fact is that a spectrum analyzer inherently wipes out time as a consideration. It is a grand averager.
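Cotter's point--that a magnitude spectrum averages away time--can be made concrete with a short numerical sketch. The signal and length below are our own illustrative assumptions: scrambling the phase spectrum of a sharp click leaves every magnitude bin, and hence the analyzer display, untouched, while completely destroying the transient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# A sharp click: all of its energy packed into a few samples
click = np.zeros(n)
click[100:108] = 1.0

# Randomize the phase spectrum while keeping every magnitude bin
spec = np.fft.rfft(click)
phases = rng.uniform(0, 2 * np.pi, spec.shape)
phases[0] = 0.0     # DC bin must remain real
phases[-1] = 0.0    # Nyquist bin must remain real
scrambled = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

# Identical magnitude spectra...
same_display = np.allclose(np.abs(np.fft.rfft(scrambled)), np.abs(spec))
# ...but the click's sharp peak has been smeared into a faint wash:
# same_display is True, while scrambled.max() is far below click.max()
```

The two waveforms would be indistinguishable on a spectrum analyzer, yet grossly different to the ear--one is a click, the other a hiss.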

OTALA: Let me give here a practical joke, which is a true joke, also. I'm a member of the International Electrotechnical Commission Audio Standards Committee. Five years ago, IEC adopted an average loudness curve of program, of normal musical program material, which was used for rating channels and especially loudspeakers. In the last conference in Budapest, one year ago in November, I started to wonder about that curve because it showed a very anomalous low-frequency content. It went something like this, and it extended down to 20 Hz or so with about something like a 10 dB loss. I started to wonder how they ever arrived at this, because it was contrary to my experience that this would really be the case. Well, it turned out that it was done in England, and the results were confirmed in Hungary by two groups of researchers.

Both had used a spectrum analyzer connected to normal radio programs. And you know what a spectrum analyzer does. Well, all this garbage down here at the low-frequency end was syllable pauses--the inter-word pauses registered as 2-Hz or 5-Hz components. Of course, there is "boop"--that is, sound and no sound, and that's the way it registers.

How often we have this kind of thing! Well, this has now been corrected.
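Otala's anecdote is easy to reproduce numerically. In the sketch below (the rates and durations are assumed values of ours), broadband noise is gated on and off at a syllabic 2.5 Hz; a level detector that rectifies the signal then registers a genuine spectral line at the pause rate--exactly the kind of phantom low-frequency content he describes.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                          # assumed sample rate, Hz
t = np.arange(8 * fs) / fs         # eight seconds of "program"

# Broadband noise gated on and off at a syllabic rate
gate_hz = 2.5
gate = ((t * gate_hz) % 1.0) < 0.5          # 50% duty on/off pattern
program = rng.standard_normal(t.size) * gate

# A level detector rectifies the signal before averaging; the
# rectified envelope now contains real energy at the pause rate
envelope = np.abs(program)
mag = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs > 0.5) & (freqs < 10.0)       # hunt below 10 Hz
peak_hz = freqs[band][np.argmax(mag[band])]
# peak_hz lands on the 2.5 Hz gating rate: the inter-word pauses
# masquerade as low-frequency program content
```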

OTALA: Again, we have exactly the same fallacy. If we connect a spectrum analyzer or any kind of narrow-band system, this low-frequency end comes from there-I mean these syllabic pause frequencies.

COTTER: You're more interested in the higher frequencies than those data suggest because we're using in a sense a resonating device, which is all out of keeping with what the nature of the hearing mechanism is.

OTALA: Right. We are doing temporal hearing and not long-time temporally averaged hearing. That's the basic engineering fallacy.

FUTTERMAN: You just saved me $5800.

EDITOR: You're saying the ear is an oscilloscope, not a spectrum analyzer.

OTALA: Yes. More or less.

COTTER: The ear in fact is-what Matti is saying is that the data are very misleading if you use the spectrum analyzer because the nature even of the peripheral mechanism in hearing is a transversal filter, a traveling wave system which is inherently much more sensitive to the fine structure in time than it is to the long-term thing. And maybe the low-frequency things . . . the liver, literally, is a much more important integrator-the body response-and the liver is one of the biggest and heaviest things, and if you've got bass that'll move your liver then you really are satisfied, you know?

HEGEMAN: Here's how you listen to an organ, Mitch.

COTTER: That's right, that's right.

EDITOR: I think we have to pass on to the next subject, whatever that may be. I think we're all agreed that an understanding of the hearing mechanism is essential in order to zero in on relevant and irrelevant aspects of audio design. But let us postulate that the fabrication of a clean channel is possible-clean from the point of view of satisfying the human hearing needs totally. Let us say that such a channel exists from microphone diaphragm to loudspeaker diaphragm. In that case, do you feel it's possible to transport a sound field from here to there?

HEGEMAN: Well, you've left out one very, very important thing--the environment in which it's going to be reproduced.

COTTER: Well, I think we started talking about that whole problem and I said that there's a subjective disorientation effect. What we proceeded to do in this last hour of discussion is to discard-largely discard-the traditional methods of evaluating the channel, saying in effect that if one got some modicum of some of these amplitude and frequency things in hand, that thereafter further pursuit in that direction was pointless because there were much more important other mechanisms. And Matti keeps pointing to the time domain as the relevant kind of thing that is missed by all the traditional methods, the spectrum analysis and so on. If we could transpose a sound field, we would have achieved some kind of an ideal. There's no question. We're all agreed on that. We still have the subjective disorientation, and I think we accept that it's going to be a disturbing factor. And I asked Max about that point because you try to substitute some enriched stimuli to make up for some of that. But the problem is the design problem: just precisely how do you achieve this transparent channel? If you could make this channel, you would have a certain kind of transposition.

EDITOR: Are we at the point where we can analyze the channel? I'm not sure whether we are. We all agree we need more than one channel; we generally use two channels.

COTTER: A sound field.

HEGEMAN: Do you want stereo, do you want ambiphonic or just which?

EDITOR: The channel may be clean, or at least clean as defined by psychoacoustic criteria, but is the availability of clean channels a guarantee of accurate sound reproduction; in other words, of the accurate transposition of the sound field from here to there?

OTALA: But Peter, isn't that an academic question, because we haven't got that kind of channel, and we will not in the foreseeable future? Because of all the contamination we've got-think about those compressors and Dolbys and everything that's used . . .

HEGEMAN: The engineer is designing a product. He's actually working on probably one of-I use 30 black boxes-it may be only 20 black boxes. You go in between the chain, the way music is made, and where you actually listen to it.

One thing I've certainly discovered over the years--two wrongs don't make a right. Any time you try to compensate for something that's going on up here, you've ended up lousing it up. So, from a design standpoint, you basically look at gain, bandwidth, signal-to-noise, and your time domain. If you can work yourself down to the best possible performance you can get out of whatever apparatus you're using, you have come up basically with a successful design. If that's the state of the art, you're lucky.

And the state of the art changes daily.

ZAYDE: The contamination that Matti was talking about is quite a bit different, I think, than what a lot of the practitioners have been led to believe using spectrum analysis as a means through which one can determine this contamination level. It could be, and in fact is, as we're discovering, that these aberrations are very different from what we've ever thought them to be. And that we're leaning towards a reality of the time domain as giving us a significant handle on what is going on as opposed to spectrum analysis, when in fact most spectrum analyses only dealt with a portion of the general Fourier transformation, just the amplitude characteristic, and eschewed every connection with the time domain. So I think there are some significant points here.

RAPPAPORT: The idea is-getting back to what Peter was getting at before-as Matti was saying we're still in the Stone Age of audio, and we're still dealing with some very basic phenomena such as tonality. Peter is talking about recreating the sound field and I think by that he means to a certain degree the holographic aspects of the performance--being able to pinpoint where everything is. We're still being able to work on how things sound. I think we have to take it one step at a time. I would like to be able to reproduce the sound of an instrument, even the sound of a single instrument, accurately, much less where it is, and how big it is, and that kind of thing. And that's a very, very primitive aspect of reproduction.

HEGEMAN: How loud it is, how big it is, is part of the basic inherent quality of that instrument. It has to be played against very close limits.

RAPPAPORT: Well, the bigness determines the tonality to a certain extent.

ZAYDE: Our tools are limited by our own practical experience in determining what is the “mathematical representation” of the hearing mechanism.

RAPPAPORT: The interesting thing is that, with the determination of the composition of the hearing mechanism or its characteristics, we're doing it from two ends. There's the biological end, which is trying to examine the hearing mechanism itself and determining how it works, and then there's our end, which is looking at various phenomena and determining how that relates to the hearing mechanism. And we're coming up with an understanding of the hearing mechanism based on simple empirical knowledge that we've derived from experimentation.

COTTER: Andy's got a good simple case for us.

OTALA: So let's dismiss Peter's approach, I mean the hypothetical approach that if we could reproduce the sound pattern as we should . . . We cannot-and in the foreseeable future we will not, simply because of the fact that there will not be that kind of sound sources available. Our present problem in my opinion is not that we would not be able to do that in the long run-after 20 or 50 or 100 years, we certainly will. But the problem will be a very simple one: all those listening tests, for instance, conducted nowadays with present equipment, relatively often, as you know, yield as a result that nobody hears any difference with any component of the system. The reasons are very simple. There's so much contamination that no matter how much you add to that, nobody hears anything.

COTTER: I would like to sound a note of optimism in the midst of all this pessimism. I think that Andy's simple example is a very good touchstone, because many of the contaminating influences act in very simple ways to affect the sound of a single instrument. And what we're missing in the approaches at the present time is dealing with the ordering of the events that that single sound represents. Since it's a certain kind of pure case, let me tell you of an interesting experiment which I think makes it much more optimistic than Matti's 20 or 50 or 100-year projection.

HEGEMAN: I could care less about that.

COTTER: When we were doing these experiments in quadraphony-we were using a four-channel independent system--we discovered very quickly that the least subjectively disturbing, psychoacoustically disorienting kind of sound was in trying to reproduce some small, simple instrument-not a piano, but some very small, very simple instrument that could conceivably be right there in the room.

HEGEMAN: Guitar.

COTTER: Guitar was one we used. The guitar taught me a lot of interesting things about the nature of the problem.

The propriety of that sound to the room removed a lot of other spatial, disturbing kinds of ideas and problems. And we then zeroed in on, I think, some of the very important time domain effects that have affected my thinking ever since.

One of the things we discovered in the course of that particular piece of work, and which actually had been in my head for a long time, with respect to phonograph records, was that virtually all of the significant distortion processes in the phonograph mechanism, which is still our primary medium, and to a certain extent even in the tape recording processes, were time-disturbing effects--were time-modulating effects--rather than the kind of thing that had been analyzed. In fact, when you look back over the literature, there were striking examples of the analyses having been correctly conceived in the approach but immediately lost as the analyst or author sought to present to his fellows some kind of easy engineering handle, which was more like the spectrum analysis kind of thing. It occurred to me then that we were neglecting that understanding, that kind of representation. Efforts to examine this time relationship and the mechanisms led to a very different approach to what we were doing.

And the sonic effect was a remarkable improvement in spite of all these other limitations in the system. I was very heartened, because it made a bigger improvement in this clarity, this ability to transpose, for that simple sound than any of the other traditional kinds of things. I think we may be a lot closer to that accomplishment than we realize. What was also very heartening and very interesting was that in the phonograph record we had a medium that had considerably more capacity than had been appreciated. Because as we made these changes in approach-and I'll talk more about the details as we get into it-it seemed to me there was more there than had been brought out. We've been able to further that end over the years since. But we had maybe by sheer fortune, sheer good luck, gotten into the medium more than we were taking out because our approach to what was to be done, and our understanding of the mechanisms of the distortion, had obscured for us what really was going on. I am more optimistic. I think that we are much closer to the ability to translate the sound in Andy's sense-as a very vital, uncontaminated sound. I've gone around talking about intellectual honesty in assessing the sound, and I have invented the perfect observer in the form of a 4-year-old, impudent and irreverent little girl who simply listens and tells you the truth because she has no big investment in a hi-fi system, and she has no particular concern. And it seems to me that such an observer listening to most hi-fi instantly knows that it's not live; Ella's commercial notwithstanding, there's no doubt. It seemed to me that a sort of ideal was that if this little girl ever paused for a moment to reflect, if she had a moment's doubt, we would already be pretty successful. And that the question, is it live or is it recorded, can be examined from the standpoint of different kinds of criteria than we commonly use. 
For instance, to remove this environmental problem, which Stew I know feels strongly about too, one of the cute things you can do in assessing system quality-system purity, the lack of contamination--is to walk outside the room. How about that? You've done this, I know.

HEGEMAN: Many times.

COTTER: You walk outside the room, and in effect you have transposed your interpretation.

HEGEMAN: You ask yourself, are they playing it there?

COTTER: Yes. Is there a live sound going on in the room? Because I'm not there, I'm outside there. What is it that's arriving at your ears? For one thing, your frequency response in the traditional sense is screwed up beyond belief. It's tortuous. Your amplitude relationships are obviously affected. In fact, as you walk down the hall, you're introducing a lot of attenuation. But you know, dammit, it's very easy to tell whether it's live or recorded in most cases. And if you've got a system that sounds pretty good down the hall, when you walk in the room it still sounds pretty good, only you begin to become influenced by the lack of realism. I think the answer is along the lines of what Matti introduced as the main consideration, which is that the time relationships with which these sounds arrive are obviously much, much more important than frequency response.

HEGEMAN: And they're more real if you get outside the doorway of the room.

COTTER: The reality is improved. And Max, from the standpoint of the reality of the musical performance . . .

WILCOX: What I'm thinking of is that sometime during the day we should get into the differences between software and hardware. Because I'm the only software manufacturer here.

HEGEMAN: I'm a software user, though.

WILCOX: Okay. Just three weeks ago I recorded the Schubert B flat piano sonata with Richard Goode in RCA Studio A. And we had a very good little piano there, CB 409. What's always difficult is to walk out into the room and position yourself in a certain place, because the piano sounds different, of course. When I'm going out of the control room door, the piano's quite far away and I hear mostly reflected sound, a kind of clangy, bright reflected sound--which is what makes a piano not sound very good in certain seats in Carnegie Hall, because you get that kind of clangy sound. The closer I get to the piano, the better it sounds, up to a point; then if I get too close to the piano, it doesn't sound well anymore. If I'm leaning over the piano, speaking to Richard--or to Mr. Rubinstein until a couple of years ago--the piano actually sounds very bad there.

“. . . some of the difficulty is that many recording producers and engineers never go out into the hall to begin with.”

To a pianist that's a rather bad place for the piano to be listened to. In any case, what I try to do is go out there and position myself. Now when I come inside the room, even though I'm down to what I think is a vastly superior way of recording the piano than I used in previous years, because it's two rather widely spaced omnidirectional microphones going on to a 2-track tape, I still always come back, and I say, “Ha, don't have it yet, do we?”

HEGEMAN: That's true.

WILCOX: Now there are a lot of things involved there-you're involving the room, the loudspeakers, the console, and all kinds of things. I'm constantly faced with that difference, and that's many steps closer to the original than most of us. Because then the tape is processed, then the record is made, then it finally gets on to the equipment that you gentlemen either analyze or manufacture or do other things to. For better or for worse, it depends on characters like me to provide you with the source material that you play. And you were talking about distortions of Dolbys and so forth, if you manufacture equipment to make the average phonograph record sound good, you may be doing very strange things, because the average phonograph record, I submit, still is a rather crudely engineered product.

ZAYDE: Your discussion of capturing the piano deals with the piano as an entire entity. But isn't it true that some people deal with the piano as, really, components--they look at the soundboard, they look at the strings, and they try to capture different elements of the piano, such that, when you sum all these things together you get "piano".

WILCOX: The ear doesn't hear it that way.

ZAYDE: That's right, that's the whole point. But that's a fallacy in the practice.

WILCOX: My whole approach to recording-which I always approached as a musician-but certainly the techniques were inherited establishment techniques that I learned from great engineers like Lewis Layton, who were great primitive engineers in that they didn't really know why they were doing it but they did a very good job. The Chicago-Reiner recordings are not the product of a sophisticated engineer but the product of a guy who could hear.

EDITOR: The hall helped.

WILCOX: And a great hall, and a great orchestra.

HEGEMAN: Max, as you transfer between the control room and the studio, and so forth, if this is the fiftieth time you've been in there, you do make your own emotional adjustment to the differences in the sound, do you not? The answer is you walk out in the studio, and you hear how the piano sounds, and you have your more or less particular spot where you want to judge how the piano sounds live in the hall . ..

WILCOX: And that's a subjective judgment itself.

HEGEMAN: Forgetting any kind of recording mode, or anything else. That's where the orchestra's playing, and that's ...

WILCOX: That's the original subjective thing-where do you want to be?

HEGEMAN: Yeah. That sets your guideline. Okay, now you walk back into the studio, and you start scratching your head and so forth and so on, flip mikes, you do this, that and the other thing to try and recreate that. But you do have your own format of the differences between that performance and what you hear in the session, or in the control room. And you make your own mental adjustments, or acoustic adjustments and so forth, and say, "Oh, that's going to be all right." You take these disturbances on there and you say, okay, if it sounds like this here in the control room it's gonna be all right.

WILCOX: I think some of the difficulty is that many recording producers and engineers never go out into the hall to begin with. They sit in front of their Altec 604's and they produce what my great old engineer Richard Gardner used to call "typical recorded sound." That's all they're trying to do.

OTALA: It isn't uncommon, really. You've probably heard about Svein Erik Borja's--he's a Norwegian broadcast-man--his experiments . . .

HEGEMAN: That's a very interesting article. I enjoyed that.

OTALA: He comes out with comparisons that are just fantastic. For instance, he reinforced Bob Ashley's earlier remark that the equalization curve that was found in most records was the JBL studio monitor loudspeaker's inverse frequency response, first of all. Secondly, he recently showed four different recordings, where the nominal input, the recording itself-the 24-track recording-was made in a hall. It was mixed down by the same mixer in various studios using various control rooms.

HEGEMAN: And he came out with four different records.

OTALA: He came out with such incredible differences that when I heard them I took the table and said, hey this is impossible. And he even included the Rosenborg studios in Oslo before alteration and after alteration. And you wouldn't believe the different sound. So that's the rubbish that you are putting out.

HEGEMAN: No, that's where he has to have a great big shovel to shovel the shit.

EDITOR: Who says we need a mix down?

HEGEMAN: Hey, how about that?

COTTER: In defense of Max . . .

WILCOX: Let me defend myself for a moment. Then you can defend me.

Without making this a sounding board before dying, coming clean with life . . . I still was the product of, and was working for, RCA for 17 years, with an establishment kind of engineering approach. In the last few years-and I think my musical instincts were always very good-but being involved in this kind of electronic thinking has changed my approach to making records. Rather than using many, many, many Neumann cardioid microphones, I have now done things like purchase, or let Unitel Television purchase for me, because they live in my apartment, several Schoeps microphones. When we first bought them we bought 10 cardioids and 6 omnis. But I don't ever use the cardioids anymore; I'm gradually trading them off, because I can't stand to listen to cardioid microphones anymore. We recently recorded the Schubert Eighth Symphony, which Peter has in the sound room, with 6 omnidirectional microphones. I also recorded it with a pair of coincident figure-8's, which despite the scientific purity of the whole idea I didn't like so much . . .

HEGEMAN: You too, eh?

WILCOX: because there was a certain kind of thing that isn't right about it.

HEGEMAN: Dr. Blumlein . . .

WILCOX: I'd like to hear what you people say about that, because maybe it's just that I'm not doing it right. Anyway, what I brought this morning was Peter Serkin playing some Chopin variations, Opus 12, recorded on an ATR 100 at 30 IPS, with no Dolby and a very minimal console-and someday I'd like somebody to build me a really fancy minimal console; oh, I doubt, there probably is no talent in this room that could do that-and fed into two Schoeps omnidirectional microphones that were set in a pattern I had established in another hall; I carry this little drawing around with me, and within a certain few inches it seems to work. The mike goes here, and that one goes here, and I sort of have it measured off from the toe of the piano; and it works equally well in varying places . . .

HEGEMAN: Top off, or high stick?

WILCOX: Well, it even has a high stick; as a matter of fact, I stole that from Charlie Fisher. I built a stick that makes the lid about this much higher . ..

HEGEMAN: A real high stick?

WILCOX: Yeah. But anyway, there's no mixdown involved, because the record's going to be cut from a 2-track tape. And I agree, I've seen mixing rooms where you start doing all these crazy things. If you don't have it correct at the date, then the chances for transformation of the tape are infinite. And all you're really doing is adding further distortions of perception and . . .

HEGEMAN: What you really do-you could do this geometrically very easily-you set up a pair of mikes; what you're really recording is the sound field and so forth. Now you start playing in the control room, you start taking pieces out of that pattern. It no longer is the rectangle, no longer is the volume that you started with; it has holes, and this, that and the other thing.

ZAYDE: When you do amplitude match-when that's done-what that does necessarily to the time-domain relationships, it just blows them apart.

HEGEMAN: Well, what do you think these guys are doing? “That's too loud; let's cut that one down a little bit, let's balance out the sound”--they're not doing anything to the time domain except destroying it.

WILCOX: I don't think this is what Peter wants to get into now.

EDITOR: We can get into this later on. It doesn't really matter. The nature of this seminar is such that subjects keep cropping up that will be caught up with later on. That's completely unavoidable. I don't want to constrain this discussion into any straitjacket here.

WILCOX: I just wanted to say that software is a serious problem for you gentlemen, and there are some people in the world who are trying to give better software to people and I'm just one of them-I hope I'm one of them.

COTTER: I started to say “in defense of Max” because actually you've got a very much more difficult problem than the usual audio designer, who can always resort to specs to prove his case. He can launch a sheet which shows a bunch of numbers and proves unquestionably that he has more zilchus factor and less zorkus factor than anything else in existence. Max has to deal with the sound. He has to deal with the music.

OTALA: There's one added comment on records. We're presenting at the AES 62nd Convention in Brussels, in March, a paper on the signal rates of change that we've measured on records. We show the distribution curves, reproduced for quite a lot of records. There's nothing new about the fact that the slew rates, at a 100-watt amplifier output level, point to something like 2 to 4 volts per microsecond, worst case. But what is important there is, seemingly, one thing. Every record we tried-except the Sheffield Lab direct-cut records, which were the only examples that showed an anomalous curve-seemed to be slew-rate limited in such a way that the signal rates of change measured in those records had an abrupt end. It was like this, and then pop!

HEGEMAN: Topped off?

OTALA: Topped off. There was an end.

And this is completely impossible as far as I can understand basic physics.
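As a rough sanity check on the numbers Otala cites, the peak slew rate of a full-power sine wave follows directly from power, load, and frequency. This is a sketch only; the 8-ohm load and the 20 kHz test frequency are assumptions for illustration, not figures from the transcript:

```python
import math

def peak_slew_rate(power_w, load_ohms, freq_hz):
    """Peak slew rate (V/s) of a sine wave at the given power into the given load."""
    v_peak = math.sqrt(2 * power_w * load_ohms)  # peak voltage of the sine
    return 2 * math.pi * freq_hz * v_peak        # max of d/dt [Vp * sin(wt)]

# A 100 W amplifier driving 8 ohms with a full-power 20 kHz sine:
sr = peak_slew_rate(100, 8, 20e3)
print(f"{sr / 1e6:.1f} V/us")  # ~5.0 V/us, the same order as the 2-4 V/us measured
```

The worst-case 2 to 4 V/µs measured from records thus sits just below the theoretical ceiling for a full-power, full-bandwidth sine, which is what makes the abrupt cutoff in the distributions so striking.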

COTTER: What pickup did you play the records with?

OTALA: MC-20-that's the new Ortofon-very fast.

COTTER: With what type stylus?

OTALA: That has a biradial elliptical stylus, I believe. The important thing here is that there's a slew limit, a seemingly very sharp limit. There is a particular limit for every record-the distribution curves just run vertical after that, so that if you try to measure the signal rate of change when it becomes double-the measuring window becomes double-you go about 3 to 4 decades down in probability. That only points out that we must have an inherent limiting in the process. Where it is . . .

HEGEMAN: You certainly don't want that kind of cutoff in the system.

WILCOX: Where do you think that happens in the chain? These are direct-to disc records that you're talking about?

OTALA: No, they were all possible records.

WILCOX: No, but I mean the Sheffield Lab ones-were they direct-to-disc records?

OTALA: Yes, they were direct-to-disc. But they were the only exceptions, and even they showed not abrupt . . .

COTTER: Gradual.

RAPPAPORT: Which is what you would expect.

OTALA: Yes. But based on basic physics, it isn't possible that it would be that steep. Acoustical signals do have a distribution of rates of change, certainly . . .

COTTER: Did you try measuring acoustical signals, though?

OTALA: No, we didn't in this experiment. But the important factor is, seemingly, that we already have in the recording studio somewhere, or in the cutter portion of the equipment, something which is slewing, and based on this kind of rapid decrease-first almost flat spectrum and then rapid decrease-it's also slewing at quite an appreciable time percentage.

EDITOR: Mitch asked whether you tried any acoustical signals. Did you mean just live through the microphone-is that what you meant?

COTTER: Yes. Because we did do some experiments along those lines quite a few years ago with very small microphones and we found interestingly enough that there were cutoffs taking place, that good-sounding and bad-sounding instruments differ in the cutoff.

OTALA: You're talking about frequency cutoffs.

COTTER: No, I mean rates of change.

The pressure gradient effects were very important to the sound; and if you think for a moment about what the nature of the basilar membrane process is, if you think in terms of traveling wave and gradient effects, then it would seem as though there ought to be a cutoff in rates of change. That if you got too steep a gradient what you were doing in effect was mechanically and acoustically producing accelerations that are proportional to a power function of the event, and that you would rapidly climb into regions of stress in the mechanism that would excite other undesirable factors.

That in fact many a good-sounding violin-and we have Carleen Hutchins's work and others to show that there are interesting differences in the attack and edge effects-that in effect when some people tell you that a violin doesn't sound very good, it sounds scratchy-that “scratchy,” in effect, is excessively high rates of change as compared with the smoothness of the sound of instruments that don't have those effects. So I suspect that there may in fact be an auditorily determined rate of change that is acceptable, beyond which it is probably painful, since I think there are second-order response mechanisms even at low levels.

Some people, for instance, are very, very irritated by the sound of a piece of chalk screeching on a blackboard. And that is in some ways perhaps an example.

Because those are very abrupt edges on that kind of sound. Even in reeds, which tend to be square-wave kinds of excitation, a good reed and a bad reed, and a good player and a bad player, differ in the control, in the wetness you might say of the edge on that sound. It's interesting that many of the distortion processes that you find in the time domain that mess up sound add what we sometimes call “fur” in the form of excessive edges. In fact, one of the things you referred to, Matti, in the TIM kind of process, is that a very tiny bit of an edge that rms-es at the double-oh level on a short term basis may be very, very irritating and yet time-wise occupy a very small interval. So all these things suggest there may be a naturally desirable slew-rate limit.

HEGEMAN: It's very interesting to listen to a plucked string, which is to me one of the guidelines for good reproduction, and there you hear-it has a short enough time that one of the things that you want to hear, of course, is the attack, the body.

But in most, a great deal, of the reproducing equipment you hear the aftertaste, an after-ring when the note stops. I've never tried to do much in the lab measurement on the doggone thing, but a guitar string, a harp string, pizzicato on a stringed instrument, it's what you don't hear-that lack of aftertaste which tells you that your loudspeaker isn't ringing and your wave shapes are good. That gets very significant as part of your listening experience.

ZAYDE: To elaborate on what Mitch said, which is rather interesting: brass instruments in particular are capable of sounding incredibly unpleasant. And there is a very interesting effect that takes place when you propagate the signal yourself and feel through your own mechanisms when it becomes unpleasant. There are changes that take place that you can feel. When you get into this excessive rate-of-change modality, there is a sensation that takes place-at least I speak about when I play, on my lips-you can feel it. This blossoms into this rather unpleasant, edgy sensation. There are some instruments that may have this as a general structure that you cannot dissociate from the overall balance of the tone. For example, if we were to listen here in this room to a Stradivarius and compare it to a Guarnerius del Gesù, the effects would be profoundly different. It's seen that the Stradivarius has this bright, almost edgy quality that can become somewhat irritating, depending upon the specific instrument and all this kind of stuff, as opposed to the Guarnerius.

HEGEMAN: But when you get down into there, you have to figure out who's playing it, whether it's their [unintelligible] instrument, and whether they are working to get a tone out that they want to get out of the instrument. A good violinist can make six different instruments sound the same.

ZAYDE: Oh sure, you adapt your playing to the instrument. I'm saying, if we eliminate that and just play the instrument raw, we find that there are some very exciting changes there going on.

HEGEMAN: Generally, I'd consider the Stradivarius a little bit over-bright and a little bit hard, but I've heard people play them like you can't believe.

ZAYDE: We can adapt ourselves to this aspect.

RAPPAPORT: It’s a feedback mechanism.

ZAYDE: But a healthy one.

RAPPAPORT: That’s right. It's good feedback.

EDITOR: This is an interesting consideration. It's possible to sit in an audience and hear unpleasant sounds live that are quite reminiscent of unpleasant sounds through a high-fidelity system.

COTTER: Let me mention something to you, some experiments that go in this general direction. As a matter of fact, Max, I think this is scientifically interesting; it's also interesting from a music point of view. It's a kind of example of how we live dangerously, you might say-artistically. There were some studies done that examined the dynamics of artistic performances, and it was found by a group of people working from Bell Labs and by several other researchers, a couple of people in Europe too, that interestingly enough there was a pretty large correlation, a pretty strong correlation, between the range of dynamics used and the artistic ranking of the performers. The inexperienced and less able performer gave an acceptable performance using a smaller range of dynamics, less kind of stress.

Now that fact is interesting, but it's made very much more interesting by some experiments that were done in the development of companders and expanders. Two different systems altogether; one set of work involved digital; another set of work involved an analog circuit of a kind developed by Bob Grodinsky.

In both cases, interestingly enough, live piano, live fiddle, live performances of good artistry-I wouldn't say the greatest, Max, I didn't have access to your level of artist-but really very, very good performances. In each of these cases you could hardly notice-you only began to notice-what were significant downward, compressive effects. It took rather a significant difference to pick up the downward compression. But the least little bit of expansion, like 1 dB per 10 dB, became very quickly painful, irritating.

And it suggested that an artist who is a consummate artist knows how to dance just along the edge of a cliff. And that, in effect, great artistry is just the maximum satisfying stress, and then no more.

So, I would like to know more about the slew rate in the acoustic reality. And I suspect that artistry is concerned with avoiding excessive accelerations that would impart irritating things-particularly short, edgy things-that the ear and the brain are so adapted to picking up. And that in fact, the “stone wall” effect that you found may lie in artistic considerations rather than in any equipment limitation. If it does, it's very important to know this from the equipment point of view, because maybe we just don't need to do any more than a certain amount, because if we were to, then we would encounter harshness and hashiness.

EDITOR: Mitch, do you really think that this natural limiting by the performer would appear as a stone wall on actual measurement instrumentation?

COTTER: Yes, yes.

OTALA: I don't think so.

HEGEMAN: I don't agree with that one bit.

COTTER: I think this is the control of instruments, something along the line that Bruce is talking about.

OTALA: We have to separate two domains here. First of all, what you're talking about is the acoustically enjoyable domain. That goes for every instrument separately. However, if you take a multi-instrument orchestra, for instance, and you play that on a record, there's a definite possibility that the signals sum in such a fashion that a high rate of change occurs now and then. If that is not reproduced, if the final record played does not show those kinds of effects, there's something wrong with it.

Just take 10 flutes playing slightly different notes-once a minute at least, their waveforms will orient in such a way that there is a high slew rate, a high rate of change produced.
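Otala's flute example can be sketched numerically: sum ten slightly detuned tones and the peak rate of change of the ensemble periodically reaches roughly ten times that of a single tone, whenever the waveforms line up. The ~1 kHz frequencies and fractional-hertz detunings below are hypothetical choices for illustration:

```python
import numpy as np

fs = 200_000                         # sample rate, well above the tone frequencies
t = np.arange(0, 1.5, 1 / fs)        # 1.5 s: covers a full alignment cycle
freqs = 1000 + np.arange(10) * 0.7   # ten "flutes", detuned by fractions of a hertz

single = np.sin(2 * np.pi * freqs[0] * t)
ensemble = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Estimate peak rate of change from sample-to-sample differences
sr_single = np.max(np.abs(np.diff(single))) * fs
sr_ensemble = np.max(np.abs(np.diff(ensemble))) * fs

print(round(sr_ensemble / sr_single, 1))  # ~10: the slopes add when the tones align
```

The instantaneous slopes add at the moments of alignment, which is why a multi-instrument sum can demand a much higher slew rate than any single instrument in it.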

COTTER: I don't know that that's really true. I think we ought to look at that; I think we need to, because I feel that the ear will run into a problem, also the air itself. It'd be interesting to look back at what you did find and try to determine whether or not it represents a pressure gradient that carries us into a highly nonlinear region. I suspect that again we are dancing very close to the limits. When you talk about the propagation of sound-I know we encountered an interesting thing just recently, and we've implemented it-there appears to be a very significant difference for certain kinds of sounds, and virtually no difference at all for other kinds of sounds, in the absolute phase with which the energy is reproduced.

OTALA: Well, my basic approach here, my way of thinking at least, is that these were multimiked recordings, at least 99% of them, or 95%, or heaven knows exactly . . .

WILCOX: 99, I'm sure.

OTALA: 99. And there, if you have those instruments playing, you add them electrically, there must be some kind of distribution in that . ..

RAPPAPORT: Even if what Mitch is saying about the artistic limitations is correct, then, as you say, at some point in time they have to line up, and also there are other considerations. You would expect to see distortions created by the equipment contributing to very high rates of change, and out-of-band information, that kind of thing. You would expect to see it, even if what Mitch is saying is true about an individual musician.

EDITOR: Do you think we're ready to discuss channels of reproduction at this point, components?

* * *

EDITOR: You may talk about anything you would really be interested in talking about. I just feel that from the point of view of this seminar we should cover components one by one because, let's face it, what our subscribers are interested in-that's components.

RAPPAPORT: You can't cover components one by one, because components interrelate, and in addition to that they all have the same problems.

EDITOR: Of course. Shall we say, then, that we should mention them?

RAPPAPORT: Independent of what they are, they all have the same problems.

COTTER: Hear, hear. In fact, I would go further. I would say that Andy's point is very well taken, that in fact you can assemble a whole raft of components that all have these gorgeous specs, and they differ in ways that aren't determined, and they all differ in sound, and they all have similar kinds of problems-and none of them are identified by the traditional methods of measurement. And I'm for one very interested in pursuing further the ideas that the assembled disillusioned souls here have, with respect to both the inadequacy of present methods and the relative importance of the time domain effects.

OTALA: Let me stimulate your thoughts with some examples of components and distortions. Let me cite one which is printed also in our paper, “Correlation of Audio Distortion Measurements.” We had an amplifier and we measured it with all the methods that we use. It just went past those methods with very good figures. Then we tried the noise transfer method-you know, putting in pink noise in the frequency range of 10 to 20 kHz and looking at what comes down into the range of 0 to 10 kHz. And it showed a very anomalous behavior there; we don't know exactly what causes it. But it is a deep problem. Let's go further. I was recently faced with the problem of different transistors sounding different. You plugged in transistors and they sounded . . . I know you know that effect. That started to intrigue me so much that I looked very carefully at it. What happened was exactly those things that we have been talking about here. It was a time or phase modulation effect. The simple thing was this: in those transistors the ft varied considerably with current.

This affected the sound in such a manner that although the ft was on the order of 15 to 20 MHz, and the stage cutoff frequency around 200 kHz, the first pole, the dominant pole at that circuit shifted back and forth with the signal.

COTTER: To be less abstract, you're saying that the timing of the event at the output circuit, compared to the input, became subject to the value of the current.

OTALA: Yes, you can phrase it that way.

COTTER: It's time modulation, to be less abstract.

OTALA: Time modulation or, if you take sinusoidal signals it's a phase modulation, but all right.

COTTER: But since we're concerned with transients, primarily . . .

OTALA: Yes. All right.

HEGEMAN: Which is a function of top end bandwidth, right?

OTALA: Right. So the top-end bandwidth went up and down with low-frequency information, and that time- or phase-modulated the high-frequency end.
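The mechanism Otala describes can be put in numbers with a single-pole model: if the dominant pole sits near the 200 kHz he mentions but is dragged around by signal current, the phase of a 20 kHz component swings with it. The ±25% pole excursion here is a hypothetical figure for illustration, not a measured one:

```python
import math

def phase_lag_deg(f_signal, f_pole):
    """Phase lag of a single-pole low-pass at f_signal, in degrees."""
    return math.degrees(math.atan(f_signal / f_pole))

# Dominant pole nominally near 200 kHz, pumped up and down by the signal current:
lag_low_pole = phase_lag_deg(20e3, 150e3)   # pole dragged down
lag_high_pole = phase_lag_deg(20e3, 250e3)  # pole pushed up

swing = lag_low_pole - lag_high_pole
print(round(swing, 2))  # ~3 degrees of phase modulation at 20 kHz
```

Three degrees at 20 kHz corresponds to a timing shift of a few tenths of a microsecond, modulated at the rate of the low-frequency signal, which is exactly the time/phase modulation being discussed.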

Well, that was easy. We later discovered a similar effect in coupling capacitors at the low end. We had a number of coupling capacitors which were electrolytics, and under very special conditions they created an anomaly in the low-frequency response.

HEGEMAN: Which is like a hysteresis loop?

OTALA: You might call it that way, yes.

What was the reason then? Quite simple.

Namely that they did have a voltage-dependent capacitance, especially at those bias voltages, which were very low.

Nothing happened with normal signal components, but since that's an RC network, with C in the series arm, when we had rumble and record-warp signals coming in, they developed an appreciable voltage across the cap, so the capacitance changed, pumped up and down. Consequently, for the low-frequency signals passing through this network at the same time, the time or phase was modulated at the rate of the warp. That was very much audible.

COTTER: You modulate mostly the low frequencies, of course, because the high frequencies have no voltage across the capacitor.

OTALA: Yes. Exactly. Below 250 Hz; we didn't detect that at higher frequencies.
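The coupling-capacitor effect can be sketched the same way: in a series-C, shunt-R coupling network, a warp-driven change in C shifts the low-frequency phase. The component values and the ±20% capacitance swing below are hypothetical, chosen only to show the size of the effect:

```python
import math

def highpass_phase_lead_deg(f, r, c):
    """Phase lead of a series-C, shunt-R (first-order high-pass) network at f, in degrees."""
    return math.degrees(math.atan(1 / (2 * math.pi * f * r * c)))

R, C0, f = 47e3, 1e-6, 50   # hypothetical coupling network, 50 Hz signal component

# Warp/rumble voltage across the electrolytic pumps its capacitance by, say, +/-20%:
lead_small_c = highpass_phase_lead_deg(f, R, C0 * 0.8)
lead_large_c = highpass_phase_lead_deg(f, R, C0 * 1.2)

pumping = lead_small_c - lead_large_c
print(round(pumping, 2))  # degrees of phase pumped at the warp rate
```

The phase of the passed low-frequency signal wobbles at the warp rate, while higher frequencies, which develop almost no voltage across the capacitor, are left alone, matching Otala's observation that the effect vanished above about 250 Hz.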

COTTER: Well, I think we've all been looking at these kinds of phenomena because we feel that the present-day approaches to evaluating the clarity of amplifiers are pointless and reveal nothing.

OTALA: Don't put it that way. They are not pointless, but they are not necessarily sufficient.

COTTER: Yeah. If you get below a certain value, it ceases to have any relevance and you start to look in other directions. As a matter of fact, one of the things I've come to feel very strongly is that the mindless pursuit of some of these commonplace figures-lower, lower and lower distortion, obtained by more and more and more feedback-induces an excess of some of these problems that we are talking about.

HEGEMAN: Creates as many problems as it fixes.

COTTER: Maybe more.

OTALA: This is in fact what I've always been naming the subjective optimum.

For instance, let's take a very crude model of your increased feedback: the static distortions go down and the dynamic distortions go up. Somewhere there's an optimum, and that optimum is particular to each different combination or situation.

HEGEMAN: That's a hardware problem.

It's going to be different for every change in hardware that you use.

OTALA: That's true.

EDITOR: We seem to be on the subject of amplifiers. So let's change our sequence, because everybody's warmed up to the subject, and let's just go on talking about amplifiers. We'll get back to loud speakers later on.

FUTTERMAN: We never did start on loudspeakers.

EDITOR: I was going to start with loudspeakers . . .

HEGEMAN: That's gonna happen. The next three days, is that the way it is, Pete? We'll get to loudspeakers in two more?

EDITOR: We may get to them a little bit sooner. But let's continue talking about amplifiers, and let's continue to talk about feedback, because it's one of the really hot issues. I would like to have some more detailed opinions on the subject, particularly as to no feedback vs. optimal feedback vs. too much feedback. I'd like to have everyone's opinion on the subject.

HEGEMAN: As a real old-timer, who grew up before the days of feedback . . .

FUTTERMAN: Me too.

HEGEMAN: Yes, I know, you're right in that class. We lived with some awfully good sound, particularly triodes . . .

EDITOR: You bet.

FUTTERMAN: The Lincoln Walsh . . .

HEGEMAN: Western Electric 300B's. At the time I was working for the Bell System, I used to hear that stuff, and I used to say, Oh God, if only I could make something like that, that sounded like that, in my living room. Okay. Bode, Dr. Black, they came up with feedback; and, commercially, Western Electric went to pentodes, pentodes with feedback. Various other circuits have come up. But they never quite sounded as good as I remember those triodes.

FUTTERMAN: That's nostalgia.

HEGEMAN: I'm sure it's nostalgia.

COTTER: You were at Bell-you recall the caution with which they exercised themselves about 12 dB of feedback, 8 dB of feedback?

HEGEMAN: Oh yes, oh yes.

EDITOR: And it was much less dangerous with the kind of circuitry they were using then than some of the circuitry they use now.

OTALA: There's a good example, however, where feedback just works miracles. You remember how Poulsen arrived at the “good” tape recorder. It was very simple.

FUTTERMAN: An iron wire.

OTALA: It wasn't the wire. Because the sound was so bad he used feedback, and when he increased feedback, suddenly, like that, there was the sound that was good. And it was fantastically good, better than anything even imaginable at that time. That was the birth of the wire tape recorder. And you know what happened. He had the transformer windings reversed, so the amplifier went into ultrasonic oscillation, and that was the invention of bias.

COTTER: That's marvelous.

HEGEMAN: I think feedback has been used as a cure-all and a catchall. You do a lousy job, and it's supposed to wipe out all the problems. Working with amplifiers-I guess this is 20 years ago, when I was actively doing that kind of thing-one of the things is very simple, you know. You measure your distortion without feedback, you put 20 dB of feedback in, and it should reduce your distortion by 10 times. Suddenly you find it only works maybe 3 times, or 4 times, so you start to look. And the big problem in a distortion circuit is your time delay, the phase delays that are inside your loop. There were amplifiers on the market in which every stage was linearized with its own individual feedback, but that gave an internal bandwidth situation such that in the end you could put some feedback around the outer loop. And frankly, these things sounded cleaner than most everybody else's. You couldn't measure any particular difference in them, but they just sounded better. Every time you increase the feedback you're screwing around with the top-end characteristic; you have to put in more phase compensation to keep the thing from oscillating and so forth. It's an area of very little return. So yes, as Mitch said, 8 dB, 10 dB, 12 dB? But something in the art as it was there turned out to be very, very useful. And it was useful on that thing. This 20, 30 dB feedback kind of thing-you're just building yourself into a hole.
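Hegeman's arithmetic is worth spelling out: N dB of feedback ideally divides distortion by 10^(N/20), but what matters is the loop gain available at the frequency of the distortion products, and phase compensation has usually rolled it off up there. A sketch of the standard relation, not a model of any particular amplifier:

```python
def reduction_factor(feedback_db):
    """Factor by which distortion is ideally divided where the loop has feedback_db of margin."""
    return 10 ** (feedback_db / 20)

print(reduction_factor(20))            # 10.0: the textbook expectation for 20 dB
# A harmonic landing where compensation has left only ~10 dB of loop margin
# gets cleaned up far less - roughly the "3 or 4 times" Hegeman observed:
print(round(reduction_factor(10), 2))  # 3.16
```

The shortfall Hegeman measured is therefore consistent with loop gain falling off at the frequencies where the distortion products actually live.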

FUTTERMAN: Well, I went through all the things that Stew mentioned, and I wanted to make a perfect amplifier even in the 1950's. I figured the limiting factor was the output transformer.

HEGEMAN: So you threw it out.

FUTTERMAN: So I threw it out. And I don't know if you remember this, Stew, but when you were working with Vic Brociner on the UL-1 on Second Avenue, I brought up one of my prototypes and demonstrated it. And you wanted me to leave it, either Vic or you, and I didn't have any protection at the time, so I didn't. My idea was, if we could eliminate the output transformer we could use feedback. And I worked on that, and it seemed to work. I made amplifiers, and I wrote a paper for the AES in 1954, October, where I described my circuit. I had a pot where you can vary the feedback from nothing to 60 dB-and it was stable, even at 60 dB. It was a class B design. Now there's nothing wrong with feedback; it's very useful.

Here is some data on my latest amplifier.


Output at clipping: 16 ohms, 115 watts; 8 ohms, 78 watts; 4 ohms, 45 watts. Frequency response, 8-ohm load, 10 watts: 4 Hz to 110 kHz. Open-loop frequency response, 10 watts, with the low-pass filter in: 13 Hz to 22 kHz. With it out: 13 Hz to 24½ kHz.

The input low-pass filter, 3 dB point: 110 kHz. Gain: 26 dB. Feedback: 8-ohm load, 37½ dB; 16-ohm load, 50 dB. And I think it sounds good.

HEGEMAN: Probably does.

EDITOR: We know it sounds good. The question is, why does it sound good, and why do other amplifiers with that much feedback sound bad? That's really the issue, isn't it? Both are true: yours sounds good and some of the others sound bad with equal amounts of feedback. Obviously something's different.

HEGEMAN: I think it's the passband inside the loop that's the greatest criterion.

COTTER: Well, you say feedback, but this is a vacuum tube system, where the transit-time limitations of the vacuum tube don't appear until you talk about phenomena in the several-nanoseconds area, and the time-dependent delay time, or the current- or signal-level-dependent delay time-these modulations are trivial even in the second order on a several-nanoseconds basis. So I think what you're dealing with in a vacuum tube system of this kind is something that is grossly-maybe 4, 5, 6 orders of magnitude-different in the delay-time change as a function of signal current or thermal history.

RAPPAPORT: Exactly. The biggest problem with feedback is that it tries to take us backwards in time. What it attempts to do is erase a distortion that has already occurred. And you can't do that.

The signal occurs in real time, and the reproduction of that signal, the amplification of that signal should occur in real time. Now if you get the transit time, as Mitch said, down to virtually nothing then maybe you are operating in real time.
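Rappaport's point about correcting "backwards in time" can be caricatured with a one-sample-delay loop: the correction always acts on an output that has already happened, and if the loop gain around that delay is large, the correction overshoots worse on every pass instead of converging. This is a toy discrete-time model with arbitrary gains, not any real amplifier:

```python
def delayed_loop(a, beta, steps=40):
    """Feedback around a one-step delay: y[n] = a * (x[n] - beta * y[n-1]), x = unit step."""
    y, history = 0.0, []
    for _ in range(steps):
        y = a * (1.0 - beta * y)
        history.append(y)
    return history

gentle = delayed_loop(a=2.0, beta=0.25)    # loop gain 0.5: settles near a/(1+a*beta)
harsh = delayed_loop(a=100.0, beta=0.09)   # loop gain 9 around the delay: diverges

print(round(gentle[-1], 3))  # 1.333, the ideal closed-loop value
print(abs(harsh[-1]) > 1e9)  # True: each late correction overshoots the last
```

This is the same intuition as Cotter's boat at the dock: a small, cautious correction around a delay settles; a large, aggressive one rams the dock harder on every swing.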

ZAYDE: There's a well-defined aperture that you can operate within.

EDITOR: Let's talk about that.

COTTER: Anybody who ever tried to park a big boat at a dock knows the importance of the delay time and the rate of response at the rudder. The inexperienced soul can send it out to sea or ram it into the dock in direct proportion to the speed with which he swings the rudder. The cautious soul does an anticipatory sort of thing and inches up to it. And basically what Andy is saying is that if your output-the idea of feedback really is a very simplistic idea. It says, you look at the output, and you compare it with the input, and you correct for the difference. There's a little presumption in there-rather, not a little one, right?

RAPPAPORT: It's a very large presumption. That you don't know what happens in between input and output.

HEGEMAN: When you find a negative time constant that you can put in between the output and the application of the feedback, which will compensate inversely for the delay time through the amplifier, and by God, we've all got it made.

RAPPAPORT: That's exactly right.

OTALA: That is, by the way, not necessarily a difficult thing.

HEGEMAN: I don't have the apparatus on my desk.

EDITOR: Would everyone around this table agree that it's not the number of dB of feedback that must be watched, but the time modulations?

HEGEMAN: Yes.

RAPPAPORT: Actually, it's a combination of both.

OTALA: I don't agree. That's an oversimplification of a complicated matter. First of all, of course, starting from trivial things: people seem to think in terms of feedback being a free entity by itself; you can juggle with it as you wish.

This is not true. Firstly, of course, the stability considerations-that is, the compensation and so on-are feedback-dependent. This simple reason, as simple as it is, has so far been the reason for about one hundred or so comments on my TIM papers. I've always taken it as being such a trivial thing that it need not even be mentioned.

Well, I mentioned it anyway, but nevertheless, that's only one thing-the pair, feedback and compensation. The second thing is how you apply it-because in a given situation, in a given amplifier topology, if you increase feedback you also alter the compensation.

But you have to alter something else too, and that's a third variable. And there it breaks. For instance, you have a fixed output level, by virtue of the fact that you are designing a 100-watt or 500-watt amplifier, whatever; you have a fixed input level, and if you increase feedback you have to increase gain somewhere in order to increase the feedback. Because your input and output levels are fixed, and the total gain is fixed. Now how do you do that-that's the crucial point. The problem is that we cannot make a stereotype by saying feedback is bad or good. We have to say how it is applied.
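Otala's gain budget can be written out in dB: with the input and output levels fixed, the closed-loop gain is fixed, so every dB of added feedback must come from a dB of added open-loop gain somewhere. This uses the standard large-loop-gain approximation, with Futterman's 26 dB gain figure from above as the fixed closed-loop gain:

```python
closed_loop_db = 26   # fixed by the given input and output levels (Futterman's figure)

for feedback_db in (20, 40, 60):
    # open-loop gain (dB) ~ closed-loop gain + feedback, for large loop gain
    open_loop_db = closed_loop_db + feedback_db
    print(f"{feedback_db} dB of feedback needs ~{open_loop_db} dB of open-loop gain")
```

That extra open-loop gain has to be built out of real stages with their own nonlinearities and poles, which is exactly the "third variable" Otala says cannot be juggled for free.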

RAPPAPORT: Granted, in analyzing feedback as it relates to a specific circuit topology you have to understand everything that's going on. And when you do change the feedback you have to change something else. It could very well be that high-feedback amplifiers have a problem because their open-loop gain has to be so high, and increased distortions occur from that.

However, you can analyze feedback in general. A basic feedback system is a feedback system.

OTALA: Assuming that everything else remains constant except that the gain somewhere is linearly increased. The problem is that in a practical situation, when you increase the gain, it's not only the gain that increases . . .

RAPPAPORT: Well of course, you're changing all the other parameters. However, what I'm saying is, forgetting about the fact that feedback is used around a circuit, you can analyze exactly what feedback is. Feedback is more of a philosophy than a technique. You're taking the output signal and returning it to the input.

FUTTERMAN: You're comparing it with the input.

RAPPAPORT: You're comparing it with the input.

FUTTERMAN: And you have an error signal.

RAPPAPORT: Okay, you're creating an error signal. But the problem is that quite independently of what any other circuit parameters are, you're creating an error signal that does not exist in real time.

COTTER: If there's delay. The whole point here is, basically, that if you look at the amplifier-inside of whatever the outside world connections are, which may or may not be feedback-if you look at the amplifier, all an amplifier does basically is to amplify the error signal.

Because it doesn't know that it's a feedback amplifier.

FUTTERMAN: It amplifies the true signal and the error signal.

RAPPAPORT: It amplifies the input less the error signal.

COTTER: Well, the error signal is the net difference, however it's taken. The whole point is that an amplifier that has feedback around it does not differ from the amplifier without feedback-as an amplifier. It is simply amplifying the error signal.

RAPPAPORT: Well, it amplifies its input signal, very basically.

ZAYDE: Yes, which is the sum of both.

OTALA: We can simply say that under the nonrealistic assumption that we would change feedback and feedback only-including then, of course, compensation because that's the pair-then we can say that if we are operating in the static domain of the amplifier, or the static operating area of the amplifier, then feedback does good things.

COTTER: You say static because you're saying static and dynamic become synonymous.

HEGEMAN: Without time, you're right.

OTALA: Not really, no; I have to define that for you later, perhaps. Nevertheless, if you're operating in the purely static operational area of the amplifier, then . . .

COTTER: The DC response.

OTALA: The DC type of response. That means where you have no signal derivative effects. Then it is purely a good thing. Now, the individual statement, where it is good and where it is bad, depends on where the limit between static and dynamic is, and whether it is inside the band where the amplifier is to be used.

COTTER: Let me ask you a different question which puts the problem in a different light. Let's say that we concern ourselves with the important area of distortions that we discussed a little while ago, which is essentially time modulation effects that arise from the signal or from its recent thermal history. Could we have TIM-like processes without feedback? Let's forget altogether about feedback.

And I think the answer is yes, we could have these problems, because we say, in effect, that if there are signal-dependent or thermal-dependent delays-thermal being in a sense a time integral of the signal-we're going to produce delay modulation effects and we have a problem. Now why in the world is feedback used, to correct for these effects? It seems to me that the basic problem is it doesn't. That's the flaw, the flaw in the thinking.

RAPPAPORT: Once a time modulation effect has occurred, once a distortion like that has happened, you can't take it away.

HEGEMAN: Once you have a modulation you can't linearly correct it.

OTALA: Within certain limits you are right.

COTTER: The thing is you need a predictive kind of thing, or you need something different. The whole point, though, comes back to an interesting thing which Stew and Julius started talking about, and I had some of these experiences too. That is in the beginning, if you look at the early Bell System papers and you look at the approaches to feedback amplifiers, when one went about using feedback in the Bell System carrier amplifier systems, the first thing you did was to linearize the static behavior of the system, which in those days meant vacuum tube considerations, and the static and even the 108-kHz carrier range meant in a sense virtually no time effects.

Then you gingerly applied the feedback, because they were very concerned in those days about the order multiplication of distortions, and about the time magnification effects that occurred, in effect, from the fact that, going back to this simplistic idea that the amplifier amplifies the error signal, if you had a lot of feedback from a delayed replica of the signal, you would in effect be moving the position of the thing. As Andy said it tries to jump backwards in time.

OTALA: Let's put it in another perspective still. If you have got a number of devices that you have decided to use, then they have an intrinsic gain-bandwidth product. Now it boils down to the question, how do you divide this gain-bandwidth product-you may use local feedback in every stage. You make some kind of proportions for the stage gains. Then after that you apply an overall feedback.

It's more or less a partitioning problem then. This goes back to my first comment: for a given budget, it becomes a question of optimal balancing. When we are talking about these time modulation effects and phase modulation effects, whatever, then I am not completely certain that leaving the overall feedback, for instance, completely out and only applying different, various forms of local feedback would yield, in that respect, an optimum solution. I think that everything applied cautiously is the best . . .

COTTER: Yeah, but let's go back to something even more basic. I tried to bring to focus the idea that if there's time modulation taking place, then however optimistic your DC virtue seems to be, we have to face one basic question. Why use feedback? What are we trying to accomplish when we use feedback? If there are delay modulations, if there is a significant delay time, and the delay is “modulatable,” why use feedback?

OTALA: My point was, if we are operating in the static area, then . . .

COTTER: Ah. But this is the problem. In solid state today, what devices are we possessed of where the delay and the delay modulation components are so trivial as to make that assumption . . .

EDITOR: How about MOSFET's?

RAPPAPORT: Even with bipolars, you can create an amplifier-you don't have to use exotic devices-you can create an amplifier with minimal delay modulation effects.

COTTER: If you're looking for them.

RAPPAPORT: If you know what you're looking for, you can create an amplifier with minimal delay modulation effects and in that case-in that amplifier, for instance, you can apply feedback, and some of the gross effects of feedback will be reduced. Because you don't have these delay modulations to begin with; you're dealing with a constant delay, and some of the dynamic distortions created by feedback will be minimized. But there's another side of the coin. There are two factors-one is, if you have time modulation distortion why use feedback? The other is, even if you don't, why use feedback, or why not use feedback? Because there are definite reasons why even in that situation you shouldn't use feedback.

HEGEMAN: One thing feedback will do-it will straighten out a nonlinearity in the transfer characteristic.

ZAYDE: Steady-state. That's the whole point.

OTALA: In a static transfer characteristic, not in the dynamic transfer characteristic.

HEGEMAN: Now, what feedback will not do-and this word is thrown around like mad-once you have a modulation, feedback is no good at all. Because modulation is a multiplication and we don't have any good electronic division . . .

COTTER: De-multiplier.

HEGEMAN: De-multiplier, all right, that's a good word. You cannot de-multiply something, once there. This is even in our old AM broadcasting system-once modulated, by God, you're dead. You can't un-modulate what you've got.

RAPPAPORT: Especially if it's modulated in a pseudo-random fashion, as occurs in these amplifiers.

OTALA: Stew, you're perfectly right except for one thing. In a well-designed amplifier, these modulations by themselves are so small that the piecewise linear approach goes. I usually draw this circuit diagram for everybody who says hey, let's use much feedback. I say look, we've got here an amplifier. We go and we measure the output distortion. What is the distortion level that we are likely to find in, say, an 80-dB feedback amplifier? Say it would be 0.01%.

HEGEMAN: Perfectly adequate.

OTALA: Right. Let's then look what's inside the amplifier, because if all our equations are correct the internal distortion must be the output distortion times the feedback factor. And what we find is 100%. Well, 100% is impossible. So, what went wrong? Well, there are many possible reasons, but the best reason here would be possibly that the feedback equations do not apply. And that the whole Laplace transform does not apply in this situation. So why?
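
[Otala's reductio can be checked with one line of arithmetic; the numbers below are his, the function name is not.]

```python
# If the textbook feedback equation held, the implied open-loop
# ("internal") distortion would be the measured output distortion
# multiplied by the feedback factor.

def implied_internal_distortion(output_dist_pct, feedback_db):
    """Open-loop distortion (%) implied by the linear feedback equation."""
    return output_dist_pct * 10 ** (feedback_db / 20)

# 0.01% measured at the output of an 80-dB feedback amplifier:
print(implied_internal_distortion(0.01, 80))  # -> 100.0, i.e. 100%
```

An amplifier that is 100% distorted open-loop is absurd, which is Otala's point: the linear equations were never valid for that circuit in the first place.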

COTTER: The math is hyperbolic.

OTALA: Yes, but nevertheless the problem mostly seems to be that the circuit was not linear to start with. And the basic, real problem in all these theories is that they assume perfect linearity.

Now Stew, you wouldn't make that kind of amplifier--you would linearize it first; so would I. But if we then, in our open-loop characteristic, without feedback--I'm talking about 0.001% open-loop distortion, and say an open-loop bandwidth of half a megahertz or whatever--then . . .

EDITOR: Then why do you need feedback at all?

OTALA: Well, then I feel at least confident that I may apply a small amount of feedback without penalties.

EDITOR: For what purpose, in that case?

HEGEMAN: If nothing else, Pete, to wash out the variations in the hardware you're going to be using.

OTALA: There are some reasons, yes.

One of those for instance is that in a practical amplifier, a power amplifier for instance, I know no other way than moderate feedback to make the closed loop output impedance low enough.

RAPPAPORT: When do you consider it to be low enough?

OTALA: Say below 0.1 ohm or so; 0.05, something like that.

COTTER: Well, why is that important?

EDITOR: You have a paper that says it's the open-loop output impedance that's important.

“Isn't it true that the situation in which you can apply feedback correctly and fearlessly is the very situation where you hardly need it?”

OTALA: That is it, yes. But there are also some effects which occur with closed loop output impedance. But nevertheless, I would point out some other factors. Today I'm not afraid of using feedback as such, because it doesn't give me that much of a headache, if it is used moderately. But the problem is, for instance, a typical interface reaction--the Interface Intermodulation Distortion paper, you probably have seen it. That IIM phenomenon is in fact a very good example of something injected into a loudspeaker, coming back from it, propagating via the feedback into the . . .

COTTER: Let's spell that out. We have a loudspeaker. A loudspeaker contains a system that stores energy. If we had a loudspeaker that didn't store any energy we'd have a rather interesting loudspeaker indeed.

HEGEMAN: Almost an adiabatic, huh?

COTTER: Almost. But the fact is that we store energy in a loudspeaker, and . . .

OTALA: We release it backwards.

COTTER: We release it backwards- there are reactions, some comes back, and it comes back at rather variable and different times. In fact it can come back spread out over a whole period of time, much greater than the initial event.

OTALA: We measured about 50% of the energy coming back during the next 50 milliseconds.

COTTER: Which is a hell of a long time compared to the dimensions of most rooms-or the dimensions of time for most musically important events. The basic problem is, what happens to that energy?

OTALA: Well, let me continue then.

Firstly of course you try to dissipate it in a physical resistance-the physical resistance being in this case the open-loop output impedance-but if that does not help, then the feedback will take care of it, inject it into the input as an error signal. And we've measured amplifiers in which the ratio of the nominal forward signal to the loudspeaker-generated feedback signal is approximately 6 to 9 dB. That means they're almost equal in amplitude, and there may be an intermodulation, certainly, between the primary signal and this loudspeaker-generated, delayed and frequency-transformed version.
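
[To see why "6 to 9 dB" justifies "almost equal in amplitude," convert the ratio with the standard amplitude formula; this sketch is illustrative, not a measurement from the seminar.]

```python
# Convert a level difference in dB to a linear amplitude ratio.
def db_to_amplitude_ratio(db):
    return 10 ** (db / 20)

for db in (6, 9):
    print(f"{db} dB -> forward/returned amplitude ratio "
          f"{db_to_amplitude_ratio(db):.2f}")
# 6 dB is only about a 2:1 ratio, 9 dB about 2.8:1 -- the returned
# signal is genuinely comparable to the program signal, so
# intermodulation between the two is plausible.
```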

COTTER: Perhaps something even 20, 30, 40, 50 milliseconds away.

OTALA: Yup, that's true.

COTTER: And we know that's long enough to be a discretely different, highly different sort of sound; and these mental time-ordering processes we talked about clearly would recognize even a little teeny bit of some of this.

OTALA: Right, that's true, especially if we have a signal which for instance is decreasing at that very moment so that there's less masking. But the important point here, in my opinion, is that we're quite often mixing our measurement results and our conceptual thinking-feedback by itself in a perfect circuit like an amplifier itself-and describing the bad effects of feedback due to, for instance, time modulation. Although I support time modulation, for heaven's sake--but describing it as being that, whereas it in fact comes from other properties . . .

COTTER: These are some of the things that Andy mentioned when he said that feedback presents some problems that are very different.

RAPPAPORT: Yes, the idea is, let's take your analysis of a feedback amplifier connected to an energy storage device which is going to kick back some of that energy. Now you have a feedback amplifier with a moderate open-loop output impedance, and not all of the energy is going to be dissipated by the open-loop output impedance of the amplifier, and a portion of it is going to be recirculated back, effectively, to the input as an error signal. Now what happens if that error signal is then different from the signal needed to properly dissipate the energy present at the output of the amplifier by the time that error signal has passed through the amplifier, which has some finite delay? There's going to be another error signal created, due to the difference between the original error signal and the actual error at that time, recirculated back. And what can happen in certain cases is that an error signal which was not adequately dissipated by the output impedance of the amplifier, that may have lasted a certain length of time, is being continually regenerated through the amplifier, via the feedback loop, until gradually it decays.

OTALA: Well, this is a known phenomenon, but there's only one objection to that. That is that these kinds of interface problems are at their worst at low-frequency cone resonances. And they're practically nil above 5 kHz. So the delay-if we've got a feedback amplifier by itself, then the delay inside the loop cannot be that long, otherwise the amplifier would oscillate.

RAPPAPORT: What happens, though-and I honestly haven't done the mathematical analysis necessary to prove this--but what I feel is happening is that the error signal created by the feedback, which is much different than the error signal which the feedback is intentionally creating-in other words, that is the error between the ideal feedback amplifier that is delay-less and the actual feedback amplifier, which can be very short, and its duration is equal to the delay of the amplifier plus the delay of the feedback-creates, in many cases, a regenerative kind of effect which can take a couple-of-hundred-nanoseconds, say, error, and if it is recirculated through the amplifier a hundred times, can make an error signal or a distortion which is clearly audible. And this happens not only from energy being fed to the output of the amplifier by the loudspeaker but it happens because the distortions created in the amplifier itself are being recirculated. I feel this is a very significant effect in feedback amplifiers.
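
[Rappaport says he hasn't done the analysis; the toy model below is our construction, not his, but it shows the scale of his back-of-the-envelope numbers. The attenuation figure is an arbitrary assumption.]

```python
# A single error pulse of duration tau goes around the loop `trips`
# times, attenuated on each pass. The regenerated error then spans
# tau * trips in time while its amplitude decays geometrically.

def recirculation_smear(tau_ns, trips, attenuation_per_trip):
    """Return (total time span in ns, final relative amplitude)."""
    total_span_ns = tau_ns * trips
    final_amplitude = attenuation_per_trip ** trips
    return total_span_ns, final_amplitude

# Rappaport's figures: a ~200 ns error recirculated a hundred times,
# with an assumed 10% loss per trip:
span, amp = recirculation_smear(tau_ns=200, trips=100,
                                attenuation_per_trip=0.9)
print(f"spans {span / 1000:.0f} us, decayed to {amp:.1e} of initial level")
```

A nanosecond-scale event smeared over tens of microseconds is still far below the 5 kHz-and-up region Otala objects about, which is why the two of them disagree on whether this matters.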

HEGEMAN: We're talking about feedback only in terms of a power amplifier.

In a preamplifier circuit, for instance, it's very . . . in the first place, anybody who uses a feedback loop as part of the output of the preamp is, I think, a little bit out of their minds. So you put a buffer in there, so the feedback portion of the circuit is buffered from the device, basically by putting one of Mitch's blank-wall interfaces in there--you get rid of almost all of this reaction kind of effect from the load back to the source. That's a whole lot more difficult to do in a power amplifier.

RAPPAPORT: However, the effects of feedback as I said are not entirely due to interface, but they're due to actual distortions created by the amplifier and created by the feedback.

COTTER: It's interesting that the early Bell System papers that discussed feedback discussed precisely this regenerative order multiplication type of problem.

The tendency to stretch things out in time and increase the order of the problem. The thing that is very different about the old 300B triode, or the triode amplifiers that Stew referred to . . .

HEGEMAN: I've got a couple of them in the lab. Precious!

COTTER: Yeah, they're really exquisite, precious. But the thing that's interesting about these systems is not only did they share this very low time dispersal, very low delay property, but in effect, you had this terribly inefficient plate resistance of the tube, which in the case of the 300B was a very linear resistor-that is, it didn't vary very greatly-but it absorbed quite a large part of the power of the system.

HEGEMAN: The 283's had a plate resistance of about 700 ohms, I believe. And the 300B was a little lower than that, between 400 and 500.

COTTER: 500 to 600 ohms, because I know that they did a very nice match to the 600-ohm line circuit which was so popular in the transmission characteristics. The fact is, though, that what you had was an amplifier that could be envisioned analytically as essentially a current source, shunted by a fairly fat resistor, a fairly power-grabbing resistor, in parallel with whatever the load was. So that if this energy, from any energy storage system, whether it was a network or a mechanical loudspeaker, did come back, it didn't meet perhaps a stone wall, but it met a purely non-time-dispersive energy absorber, which did a neat little job of damping it. If you had no feedback on such a system, then it reflected it very little if the damping was decent. It's interesting that in a talk Les Paul gave to the AES many years ago about the early days of recording, electrical recording, he talked about some of these things you mentioned, Stew, and he said that as they kept improving the amplifiers, the sound kept getting worse and worse and worse.
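
[Cotter's "reflected it very little" can be quantified with the standard reflection-coefficient formula; the 500-ohm plate resistance and 600-ohm line are the figures quoted above, and treating the plate resistance as the terminating source impedance is our simplification.]

```python
# Fraction of a returning wave's amplitude reflected where a line of
# impedance line_z is terminated by a source resistance source_r.
def reflection_coefficient(source_r, line_z):
    return (source_r - line_z) / (source_r + line_z)

gamma = reflection_coefficient(500, 600)
print(f"|Gamma| = {abs(gamma):.3f}")
# About 0.09: over 99% of the returned energy is absorbed in the
# plate resistance rather than bounced back at the load.
```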

ZAYDE: What's interesting is that the transit times of the devices very largely determine the shape of the envelope that is generated. The historical significance becomes profound as you stretch out these internal transit problems.

COTTER: Or as you increase the amount of feedback.

ZAYDE: Precisely. There's a very close relationship.

EDITOR: Can this be quantified?

RAPPAPORT: If the number of recirculations is fixed, and that is roughly fixed by the network independent of the delay, then the net result is going to be determined only by the delay. And obviously the shorter the delay for a given number of recirculations, the less the audible effect is going to be.

ZAYDE: This is a by-product of the convolution profile completely.

FUTTERMAN: You people all puzzle me. Here I've designed an amplifier with loads of feedback, which Andy says shouldn't sound good, so does Matti . . .

Wait a minute, let me finish, let me conclude. In the final analysis, we're interested in the way it sounds in a component system, right? And wait a minute... I believe my amplifier sounds good.

(cont to part b)

---------

[adapted from TAC]

---------
