
Richard C. Heyser's Klipschorn review


Arkytype


You know......it's just an observation......but the 2-Channel audio forum is like "The Young And The Restless" soap opera.

You can be away from the soap opera for like a year. And in one 30 minute episode, you can catch up on everything that happened in a year.

And so it is here........

Leave this place for a year, come back for a visit, read about 4 or 5 threads and you're caught up. LOL........

Tom

Well said, Tom.


mas, my people skills are fine. I get along with everybody... but that is because I respect everybody, even though I might not agree with them. You, however, like to go on the offensive when someone doesn't understand things the same way you do, or doesn't agree with you.

I've read many of your posts, because it is quite evident you are an intelligent person and I am certain I can learn a few things from you. I can also catch the humor in a lot of your posts. What I don't like is when you go on the offensive and try to make everyone else here feel small because they don't understand Heyser the way you do.

It's true this place is about sharing ideas, but it's also true that a number of people come here because of the folks that post here. A lot of us have met one another. We like one another. We like listening to music together and hearing different amps, preamps, turntables, speakers, etc. This is also another way of sharing ideas, thoughts, etc. It's also a good way to hear new equipment without having to buy it. [:D] I choose to come here because I really like most of the people here, and try and learn a thing or two in the process.

So, yes, I attacked you. Something I'm not particularly proud of. I don't make it a habit like some here do. But when I see someone belittling others the way you have in this thread, I'm gonna say something. Regardless, I will continue to read your threads because I know there is still a lot of good in what you say. I just wish you would say it with a little more tact.

Mike


It should look familiar; if my guess is correct, this same article was floating around with about the same issues last year, this time with the addition of questioning Heyser's ability to measure the frequency response. And of course they held that Heyser was wrong then too.[:P]

But you are right, nothing much has changed.


..........We don't hear the onset of all frequencies simultaneously. It takes a minimum number of completed cycles of a tone to be defined enough for processing and perception of it. Lower frequency tones take longer to hear when they begin. In the time it takes a 40Hz wave to complete just one cycle a 400Hz wave beginning at the same time will complete 10 cycles, a 4000Hz wave 100 cycles. So in essence, we naturally hear different frequencies after variable delays, the lower the slower........

No.......

.......And various wavelengths/frequencies do not have different velocities! ...........

Just a minor point, but this caught me before my morning coffee. In a dispersive medium, velocity is dependent on frequency. For a normally dispersive medium, the lower frequencies will travel faster than higher frequencies. For a non-dispersive medium, velocity is not dependent on frequency. Air is considered a non-dispersive medium for audio-range wavelengths.

Thanks for thinking about it.

I never implied that the physical propagation of different frequencies occurred at different speeds.

Mas incorrectly assumed I thought so. Until he addresses the questions we can't be certain why he made the error.


Just a minor point, but this caught me before my morning coffee. In a dispersive medium, velocity is dependent on frequency. For a normally dispersive medium, the lower frequencies will travel faster than higher frequencies. For a non-dispersive medium, velocity is not dependent on frequency. Air is considered a non-dispersive medium for audio range wavelengths.

What is an example of a dispersive medium, if air is non-dispersive?

I'd always interpreted what I read as the velocity of sound being the same in a given medium (e.g., air, water, steel) given constant (or standard) physical parameters.
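That interpretation is easy to sanity-check numerically. The minimal Python sketch below (function names are illustrative, not from any library) uses the textbook approximation for the speed of sound in dry air: it depends on temperature, not on frequency, so air is effectively non-dispersive at audio wavelengths, while wavelength still scales inversely with frequency.

```python
# Sketch, assuming dry air and the usual ideal-gas approximation.
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (deg C).

    Note there is no frequency term: a 40 Hz wave and a 4 kHz wave
    travel at the same speed in air.
    """
    return 331.3 * (1 + temp_c / 273.15) ** 0.5

def wavelength(freq_hz, temp_c=20.0):
    """Wavelength (m) of a tone at freq_hz; velocity is shared, wavelength is not."""
    return speed_of_sound(temp_c) / freq_hz

c = speed_of_sound(20.0)      # ~343 m/s at 20 C
low = wavelength(40.0)        # ~8.6 m
high = wavelength(4000.0)     # ~8.6 cm (exactly 1/100 of the 40 Hz wavelength)
```

Dispersive media do exist (deep-water surface waves are the classic example), but for sound in air at audio frequencies the dispersion is negligible.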


Richard C. Heyser wrote over thirty articles for the Audio Engineering Society, IREE including eight written for Audio magazine. I'm posting one of his Audio articles that appeared in the March 1978 issue.

Perhaps this paper, Hearing vs. Measurement, will help temper some of the discourse in this thread. I would recommend reading the opening paragraph several times before continuing. Whether you agree with his philosophy or not, he will make you think.

If there is interest from the members, I can post the other seven articles that appeared in Audio magazine from June 1977 to May 1979.

Any suggestions for posting multiple pages would be welcome.

Lee

Hearing vs. Measurement 1 of 3.pdf


After reading and re-reading Heyser's review, my feeling is that perhaps he is wondering how a speaker that measures that badly can sound so good. Although he does point out the time misalignment, he does not suggest that it be dealt with in any way. What he does take the time to point out is the perceived (not measured) accuracy of the K-horn sound (the car door experiment).

I'm not suggesting that the clarity cannot be improved with the use of an active xover with signal alignment. I for one would like to hear from those who use or have used this approach. Is it really a noticeable improvement or not?


Now, if I can just find a high enough quality EQ...I'll be set...

[:P][*-)]

Ever'body knows ....you have to have a Digital EQ, Mark .....[;)]

Note: It is my nightmare that someone is sitting there now, after reading the above, and saying to themselves, "Hey, I have some of that stuff! I can do it too!"

Why am I sufficiently dismayed to come back and add the addendum that the above common procedure is absolutely and utterly worthless for this purpose!? I shouldn't have to...


Geez... they should be talking about accuracy. This is a big issue, guys; forget about the other stuff.

Tom, the very inaccuracy that concerned Heyser (and was fundamental to his thinking) was precisely "that other stuff", namely accurately reproducing the signal (and in particular the transitions between drivers) within the time domain.

So one wonders what "other stuff" it is that we should forget.

It's pretty bad when you have first-order room reflections arriving BEFORE individual direct signals! And if one is familiar with the concept behind the establishment of the ISD in a room, having direct signals violate this concept is a bit laughable.

Not aligning the various signals in time renders the speaker "a mess". But once it is properly aligned, many issues easily resolve themselves.

Can it be done? Sure! Is it worth it? Absolutely. Would I pay big bucks for another fancy passive crossover that fails to solve this issue, regardless of how fancy the caps are? Heck no!

This problem was objectively identified and detailed in 1986. Yet over 20 years after the fact, and among those who claim to be aficionados of the units, we still are not addressing such a fundamental problem.

What in hell does it take, folks?

The real irony is that none of this should be new or a surprise to anyone.

...Likewise for the Cornwall, LaScala and the Jubilee.

Don Davis is right: "If bad sound were fatal, audio would be a leading cause of death!"

Sounds to me like you are a candidate for the Single Driver Forums elsewhere. You do make good points about the time delay stuff, but the Klipschorns and Jubilees sound so damn GOOD and more REAL on the DYNAMICS. All this great Klipsch sound can be had without overly intellectual discussions or joining the AES. Other companies require you to buy $50,000 worth of amplifiers and more motors to do one tenth of the work half as well as a garage-sale 2-channel receiver from the last 20 years.


That's a great point Tom. In fact, the Khorn has been one of the most annoying speakers I've come across because of that very fact...(and the Heyser article got me thinking about it)

Just curious and not being critical, but you are very clear in the Klipsch speakers that you don't like. What do you like?

I meant annoying in a good way...(well, good way for the speaker).

It is all an illusion anyhow. Even a million-dollar budget and all the concrete, wood, and measuring tools in the world cannot duplicate the live binaural experience of instruments in 3-dimensional space with their directional characteristics, heard from just one point in that same space, whether it is an acoustic guitar by a campfire or a full symphony. All this stuff is BS anyhow.


A good active electronic crossover to use for the Khorn would be the new Accuphase unit.....But at $6000, I'm sure the smart-a$$ intelligentsia would not attempt to purchase one.....And no, they can't borrow mine.....I dance alone. As on Star Trek, I'm light years ahead and I left my clothes behind.


..And no, they can't borrow mine.....I dance alone. As on Star Trek, I'm light years ahead and I left my clothes behind.

Can't add anything to this discussion (surprise, surprise). But I must say, among all the wonderfully interesting people here I've encountered over the last few years, my appreciation for Maron just grows and grows. Unassuming technical chops and a razor-sharp dry wit. Great combination.

You may have shed your clothes, but no need to dance alone. That ringing doorbell is a visit from Tawny and Tickles (watch out for Tawny, she's scary dangerous).


Does it matter? (referring to time alignment).



It depends. It's like all the other problems speakers have; it depends how bad it is and how it's used. Many folks go through life listening to tinny, tiny sugar-cube speakers and are perfectly happy, others have boom boxes, and at the other extreme are those who refuse to use speakers and go super headphones. Also, there is no reason not to think that hearing is like seeing or tasting: everyone has somewhat different sensitivities.



Time alignment usually refers to placing the drivers in a physical location that allows the arrivals from each to be in sync at the listening position at crossover.

I say at crossover because a little-appreciated fact about loudspeakers is that they have an equivalent position in time, which is not static; it changes with frequency.

Unlike the electronics one is used to, loudspeakers spread signals out in time compared to how they are presented electrically as a signal.

(If interested, look up Richard Heyser's "Determination of Loudspeaker Signal Arrival Times" for one of the earliest works.)
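The physical side of that alignment is simple arithmetic: the arrival-time offset between two drivers is the difference in their acoustic path lengths divided by the speed of sound. A minimal sketch (the 2.7 m path difference is an illustrative number, not a measurement of any particular speaker):

```python
# Sketch, assuming air at ~20 C (speed of sound ~343 m/s).
C = 343.0  # m/s

def arrival_delay_ms(path_difference_m):
    """Time offset (ms) between two sources whose acoustic paths differ
    by path_difference_m metres at the listening position."""
    return 1000.0 * path_difference_m / C

# A folded bass horn whose effective path runs ~2.7 m longer than the
# tweeter's arrives roughly 8 ms late:
delay = arrival_delay_ms(2.7)   # ~7.9 ms
```

This is why "time alignment" can be done either physically (moving or sloping the drivers) or electronically (delaying the earlier-arriving driver by the same number of milliseconds).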



Group delay is a plot usually shown with filters; it (in audio) shows the rate of change of the delay, not the delay itself. Acoustic phase shows delay in that the period in time is directly tied to the frequency and phase. This acoustic phase is what one measures when all the fixed delays are removed (like the distance from the speaker to microphone, etc.).

Acoustic phase is what governs whether a speaker can reproduce a complex waveshape like a square wave; most speakers can't do this over any significant range because they have too much variance in time (acoustic phase).
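The square-wave point above can be demonstrated in a few lines. This sketch (illustrative, not from any measurement) builds a square-wave approximation from its odd harmonics, then shifts each harmonic's phase: the magnitude spectrum is untouched, so an amplitude-response plot cannot tell the two signals apart, but the waveshape, which is exactly what acoustic phase governs, is destroyed.

```python
import numpy as np

t = np.linspace(0, 1, 1024, endpoint=False)   # one period, 1024 samples
ks = range(1, 20, 2)                          # odd harmonics of a square wave

# Phase-aligned harmonics sum to (an approximation of) a square wave:
aligned = sum(np.sin(2 * np.pi * k * t) / k for k in ks)

# Shift each harmonic's phase (here by 90 degrees times its order):
shifted = sum(np.sin(2 * np.pi * k * t + k * np.pi / 2) / k for k in ks)

# Identical magnitude spectra -- amplitude response can't distinguish them...
same_magnitudes = np.allclose(np.abs(np.fft.rfft(aligned)),
                              np.abs(np.fft.rfft(shifted)), atol=1e-6)

# ...but the time-domain waveshapes differ completely:
different_waveshape = not np.allclose(aligned, shifted, atol=1e-3)
```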



What do you hear?

If you wear good headphones and with DSP generate the phase shift associated with a normal crossover (90 degrees per order), one would conclude the effects of the phase shift (change in time) associated with the crossover were minor but audible.

In real life, with speakers, sometimes the difference is quite noticeable.



What can you fix with DSP?

Lots, it is a really powerful tool.



DSP also allows the use of IIR or FIR filters which can be configured to not only not have the phase shift of normal filters but to correct the phase of the driver in its coherent range. These filters work by having time-delayed signals which are added back to the original with mathematical coefficients. The only downsides are cost, needing the right data, the latency associated with the lowest frequency one wants to correct, and the temptation to try to fix local problems with global fixes.
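The "time-delayed signals added back with coefficients" description is exactly a direct-form FIR filter. A minimal sketch of the idea (illustrative coefficients, not a real crossover design): when the taps are symmetric, the filter delays every frequency by the same (N-1)/2 samples, i.e. it adds no relative phase shift between frequencies.

```python
import numpy as np

taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # symmetric taps -> linear phase

def fir(x, b):
    """Direct-form FIR: y[n] = sum_k b[k] * x[n-k],
    a weighted sum of time-delayed copies of the input."""
    return np.convolve(x, b)

# Numerically confirm the constant delay: evaluate the frequency response
# H(w) and take the (negative) slope of its phase.
w = np.linspace(0.01, np.pi - 0.01, 64)       # frequencies in rad/sample
H = np.array([np.sum(taps * np.exp(-1j * wi * np.arange(len(taps))))
              for wi in w])
gd = -np.diff(np.unwrap(np.angle(H))) / np.diff(w)   # group delay, samples
# gd is flat at (5 - 1) / 2 = 2.0 samples across the whole band
```

That constant delay is the latency cost mentioned above: correcting lower frequencies needs longer filters, hence more delay through the system.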



What it can't fix are problems caused by sources which are too far apart to add coherently. This would be like a typical large multi-way loudspeaker where there is a low, mid, and high source (or more than one), each more than the critical distance away from its mate at crossover, or more than about 1/3 wavelength apart from a brother covering the same range.

When sources are too far apart in X, Y or Z, they do not add uniformly; they produce an interference pattern.
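That interference pattern is the comb filter discussed elsewhere in the thread. Two sources reproducing the same signal with a relative delay tau sum to H(f) = 1 + e^(-j2πfτ): nulls at odd multiples of 1/(2τ), +6 dB peaks at multiples of 1/τ. A sketch with an illustrative 1 ms offset (about 34 cm of extra path in air):

```python
import numpy as np

tau = 0.001  # seconds of relative delay between the two sources

def comb_magnitude(f):
    """|1 + e^(-j 2 pi f tau)|: combined response of two equal sources."""
    return np.abs(1 + np.exp(-2j * np.pi * f * tau))

peak = comb_magnitude(1000.0)   # f = 1/tau     -> 2.0 (in phase, +6 dB)
null = comb_magnitude(500.0)    # f = 1/(2 tau) -> 0.0 (complete cancellation)
```

Note the nulls are total cancellations, which is why (as argued later in the thread) no amount of EQ boost can fill them in.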

With a loudspeaker depending on DSP with issues in X, Y or Z, DSP can make nice amplitude and fix time (Z) but can only do so in one direction. Moving changes the path lengths to each source, and so each position out front requires a different correction.

So, all this goes back to why some prefer a single full-range driver, which doesn't have the problems of multiple sources, or people go with a multi-way speaker, which doesn't have the restricted bandwidth and dynamic range of the single driver.



Like Tom B., I lean towards horns. They are more difficult to design compared to modern cones and domes but have a distinct advantage in reduced room interaction and dynamic range. It has been my observation that in a stereo configuration, as one reduces the amount of speaker self-interference, one's ability to point to the speaker's physical location or distance is greatly reduced, while the solidity of the stereo panorama (or mono center image) is increased.

Not that this matters in most applications, but I think your ears have an ability to locate the interference a speaker produces in space while it's producing music.



Best,



Tom Danley



I wonder if time alignment correction is a solution without a problem.


No.



I don't see how time alignment of electronic signals and driver response can ensure that the ear will hear those signals as time aligned unless one deliberately advances the low frequencies well ahead of the higher ones, to such a degree that the resultant sound would no longer resemble the natural sound of live music.


We don't hear any sounds instantaneously. The transmission from eardrum to basilar membrane in the organ of Corti is first mechanical through the three ossicles, then hydraulic from the round window to the hair cells, then mechanical shear on the hair cells, then chemical ion release from the hair cells to the neurons. Once the neurons are depolarized the signal is passed through four independent channels of preprocessing in progressive levels of auditory nuclei before reaching the auditory cortex. So in essence, it takes time to hear sound well after it reaches the listener's ears.


We don't hear the onset of all frequencies simultaneously. It takes a minimum number of completed cycles of a tone to be defined enough for processing and perception of it. Lower frequency tones take longer to hear when they begin. In the time it takes a 40Hz wave to complete just one cycle a 400Hz wave beginning at the same time will complete 10 cycles, a 4000Hz wave 100 cycles. So in essence, we naturally hear different frequencies after variable delays, the lower the slower.


How many cycles of a bass note need to be completed before you hear it? I don't know what the minimum number of cycles is but it has to be a few... If it takes 5 cycles to hear a 50Hz note, that's 100ms. Same with 3 cycles of 30Hz. 3 cycles of 60Hz would be 1/20s or 50ms. These are low guesses at the cycle counts for perception, but aren't these processing-to-perception delays all much longer than the time alignment delays between the higher horn drivers and bass horn driver? The 8ms figure mentioned to correct the bass horn delay is the time for a 125Hz note to execute only a single cycle. Even in a perfectly aligned system you would naturally hear the higher frequencies well before the bass cycles had much of a start on defining the waveform. The lowest ones would still be finishing their first cycle well after you had heard the higher tones.
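The arithmetic in that paragraph checks out, and it reduces to one line: n cycles of a tone at f Hz take n/f seconds, regardless of anything downstream in the ear. A sketch (the function name is illustrative):

```python
def cycles_to_ms(n_cycles, freq_hz):
    """Time in milliseconds for n_cycles full periods of a tone at freq_hz."""
    return 1000.0 * n_cycles / freq_hz

cycles_to_ms(5, 50)    # 100.0 ms -- five cycles of a 50 Hz note
cycles_to_ms(3, 30)    # 100.0 ms
cycles_to_ms(3, 60)    # 50.0 ms
cycles_to_ms(1, 125)   # 8.0 ms  -- one period at 125 Hz, the delay figure above
```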


No.


Psychoacoustics is a well-understood area. And the amazing thing is that examination of the acoustic signal in the time domain not only illuminated many aspects of this, but it also facilitated the resolution of many problems experienced as well.


It is signal alignment specifically that is responsible for the increased intelligibility of misaligned signals. Also, the impairments of image, intelligibility, and localization caused by later-arriving signals are all resolved by aligning signals in the time domain.


You don't have to correct the conductive and transmissive means by which the ear-mind functions.


And the treatment of sound waves as successive 'cycles of cycles' is an erroneous model. A sound wave is a pressure wave, not some series of pulses experienced as discrete events. And our ability to discern a signal is based upon the base frequency; it does not vary with our exposure to it. And one must beware of the models we use to illustrate a concept, as they too often risk imposing extraneous connotations upon the 'thing' itself.




Is it an assumption of time alignment that the listener should be hearing the onset of all frequency pitches at the same time? Wouldn't this require advancing the lowest notes way out ahead of the higher ones?



No.



Or is it an assumption of time alignment that the drivers should be phased so that the onset of all frequency pitches produces an aligned sound wave front? It seems the lower frequency notes naturally resolve themselves in the listener's ear/mind so much more slowly than the higher tones that the aligned sound wave front would not be a sufficient correction to be noticed.



I am not sure exactly what the distinction between the conditions is, but what is important to imaging, intelligibility, and localization is the coherent (aligned) arrival of the various frequencies comprising an acoustical event. The Henry precedence effect and the Haas effect corollaries specifically address the inability to resolve various closely arriving signals within a finite period of time. Intelligibility and the ability to resolve a discrete event are not aided by skewing an incident signal in the time domain. Quite the opposite.



Again, you don't have to worry about how the ear-mind processes the signal. You only need to understand what external conditions contribute to its ability to do so, and to do so with optimal clarity and intelligibility. The reason for this is simple: assuming that your assumption above is valid, discrete impulse events would not be experienced as discrete finite events.



And these psychoacoustical foci (the Henry and Haas effects) speak directly to the primacy of the time domain. These are well-understood issues.



Alignment of various sources of the same material within the time domain addresses not only psychoacoustical issues, but it also mitigates physical aspects that manifest themselves in the frequency and temporal-spatial realm: namely comb filtering and polar lobing.



Thus signal alignment in the time domain addresses not only the problem of psychoacoustic limits of resolution, but it also resolves issues such as comb filtering and polar lobing that occur in the physical world.



But there is still a limitation to multi-point-of-origin signal alignment: it is limited to a single spot.



Oh, and I am sorry if the reference to headphones confused some. But their distance from diaphragm to the eardrum precisely violates your supposition of necessary signal offsets. And the ~13,500Hz figure is the minimum frequency that could be heard which had sufficient space to resolve just one complete cycle... let alone many cycles of cycles.
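That ~13,500 Hz figure is easy to sanity-check: it is roughly the frequency whose full wavelength just fits in one inch (~25.4 mm), on the order of the diaphragm-to-eardrum distance with an earphone. A sketch, assuming the usual ~343 m/s speed of sound in air (the one-inch path length is an illustrative assumption):

```python
C = 343.0  # m/s, speed of sound in air at ~20 C (assumption)

def freq_fitting_one_wavelength(length_m):
    """Lowest frequency (Hz) whose full wavelength fits in length_m of air."""
    return C / length_m

f = freq_fitting_one_wavelength(0.0254)   # ~13,500 Hz for a one-inch path
```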





So, MAS, if an electronic crossover will solve the K-horn's problems, why don't YOU implement one, take all the measurements you feel are important, and report back? It's called experimentation; that's how knowledge grows.

Walk the walk...




Experimentation? Really?! Gee, do you mean to imply that all this time domain crap is just speculative? ROFLMAO!



Well, I wonder where much of the results to which I refer originated? And I wonder where the impetus to achieve such systems originated? I have worked with too many systems where signal alignment is used to correct for exactly such problems on many levels. Many speaker manufacturers have attempted to address this issue in a variety of ways, ranging from sloped baffle boards in an attempt to physically vertically align the acoustical centers, by manufacturers such as Thiel and Snell, to more ambitious attempts to develop coaxial systems whereby the acoustic origins are aligned both vertically and horizontally. Additionally, this goal has been behind the coaxial driver concept as well, from UREI to Gauss, to McCauley and many others.



One might wonder where you have been while all of this experimentation has been going on over the past 40 years?


Sometimes it takes more than simply using your feet; sometimes it helps to use the brain too.



Sounds to me like you are a candidate for the Single Driver Forums elsewhere.



There are quite a few companies who wish that they could accomplish exactly this goal in a single speaker, and the coaxial speaker models are but one attempt to do exactly this. Unfortunately, physics has a few ideas about the practical aspects of this.




It is all an illusion anyhow. Even a million-dollar budget and all the concrete, wood, and measuring tools in the world cannot duplicate the live binaural experience of instruments in 3-dimensional space with their directional characteristics, heard from just one point in that same space, whether it is an acoustic guitar by a campfire or a full symphony. All this stuff is BS anyhow.




Absolutely incorrect! Oh oh, more will be accusing me of personally attacking someone because I have the audacity to thoroughly disagree based upon a real technology that not only exists, but which I have experienced firsthand.



The Etymotic ITE (In The Ear) microphones studied and advanced by Don and Carolyn Davis at SynAudCon, and now in the possession of Richard Clarke (of car audio fame and owner of one of the largest CD duplication facilities in the country), who has taken it on as his mission to pursue the technology, do EXACTLY that!



They retain the 3Space acoustical cues completely, and in the most uncannily eerie, realistic manner imaginable. Especially as you spin around to respond to acoustic cues present at the time of recording but now long since passed.





As far as what effects are apparent due to the root cause of signal offsets in the time domain from multiple real or virtual sources?


There are several primary issues.



One is in the realm of the psychoacoustic, more specifically in regards to the issues explored by Henry and subsequently Haas regarding intelligibility, localization, and imaging.



Another is in the realm of frequency, in the form of comb filtering and polar lobing anomalies: audible problems that cannot be resolved by EQ.



These are real and verified, contrary to the views expressed by some that these may or may not exist.




What I find really scary is that some, who will posit the existence of such problems in a listening room, express doubts that the EXACT same problems can affect the intelligibility, imaging, and localization of a speaker! (Oh, and by the way, the exact same issues are pertinent in large rooms with arrays, multiple speakers, and distributed systems!)



The ONLY difference is in the particulars of the topology. The basic fundamental concepts are EXACTLY identical!



The result is comb filtering and polar lobing in the region where multiple sources reproduce the same signal. Relative to the psychoacoustics we have group delay, the rate of change of phase, as this is most often explored at the micro (speaker) level (where some debate the minimal amount of group delay that is audible) rather than at the much larger macro level, where group delay is very apparent in large signal offsets.



Now, whether the additional source is virtual, via a reflection from a room's wall or diffraction from a speaker cabinet, or whether the source is real, as in an additional speaker, the effects are the same, albeit on scales relative to the speaker placement and separation.



The only difference is in the scale of the system focused upon, be it distributed speakers, speaker-room interaction, or driver-to-driver interaction within the same speaker; what changes is the order of magnitude (magnitude here meaning the level of focus).



And are these effects real, and have they been verified? Absolutely!


What catches me a bit by surprise is the degree to which so many are ignore-ant of these advances, regardless of how aware one may be of the basic theory.



But what is sad is the lengths to which too many (but not all) will go to avoid becoming familiar with recent advances in acoustics, instead preferring to willfully and sanctimoniously dwell in the past. And while this is fine in itself, one wonders why these folks wish to enter into discussions of modern theory.



The basic practical aspects of what Heyser and many others have postulated regarding the primacy of the time domain, and which have since been demonstrated and accepted, are not difficult to learn. Nor are they difficult to implement and to receive benefit from.



And while some may become upset when someone dismisses outmoded models, the dismissal of newer paradigms based upon one's ignorance of them (for whatever reason) has a long and proud history on the forum.






Oh, and I will simply laugh at the notion that it requires a $6000 active crossover to effectively implement signal alignment.


But if passives go for in excess of $1K, this sounds about right. At this price point they should start flying off the shelves and showing up in the homes of audiophiles everywhere.


Mas, thanks for going the extra mile to provide some support for your positions.

And the treatment of sound waves as successive 'cycles of cycles' is an erroneous model. A sound wave is a pressure wave, not some series of pulses experienced as discrete events. And our ability to discern a signal is based upon the base frequency; it does not vary with our exposure to it. And one must beware of the models we use to illustrate a concept, as they too often risk imposing extraneous connotations upon the 'thing' itself.

I agree that the wave is pressure, not pulses. My question concerned how much of a single waveform, or how many periods of waveform, are needed for the detection of the base frequency. I am sure that it takes a few periods to establish the perception of that base frequency. It has to take progressively more of these periods for lower frequencies; therefore the ability to discern the signal does vary with our exposure to it (it is well known that pitch discrimination varies dependent on the duration of the pitch), and varies depending on the base frequency, the lower the slower.

Again, you don't have to worry about how the ear-mind processes the signal. You only need to understand what external conditions contribute to its ability to do so, and to do so with optimal clarity and intelligibility. The reason for this is simple: assuming that your assumption above is valid, discrete impulse events would not be experienced as discrete finite events.

You are assuming that discrete impulse events under my reasoning would not be perceived as such. The perception of sound is not that linear. Part of the mental processing of sound includes integrating information, even information that is not quite simultaneous. An example from the visual system will make this clear. You watch a tennis game and see a round, yellow ball in motion. You perceive it as a unitary whole object with those properties. In reality, the parts of the brain that process each of these attributes (round, yellow, and in motion) are in separate structures within the brain. Yet the integration into a coherent, seamless perceptual object is perfect.

Oh, and I am sorry if the reference to headphones confused some. But their distance from diaphragm to the eardrum precisely violates your supposition of necessary signal offsets. And the ~13,500Hz figure is the minimum frequency that could be heard which had sufficient space to resolve just one complete cycle... let alone many cycles of cycles.

My supposition of offsets does not have anything to do with a wave's wavelength in space. It is about the time required for enough periods of wave passage to be discerned. The resolution of the frequency does not depend on the wavelength having sufficient space. The detection is made by transduction of the wave pressure variations at a specific plane normal to the direction of propagation in space, not by measuring out the space for a wavelength.

Again, I appreciate the discussion.

