
Article on the K-Horn and Interview


WMcD


We are discussing the effect of two sources at different distances from the listener. When the many players in an orchestra spread out on a stage or in a studio, do they play in time to what they individually hear, or do they compensate (those furthest from the conductor leading their notes, those closest leading less) so that the conductor, microphone, or audience receives the sound of all the musicians at the same time? I think the answer is the former, and the latter couldn't be accomplished even if desired unless all the musicians wore headphones, each differentially adjusted from back to front with decreasing amounts of compensatory signal delay. This is not a difference of inches; it's more like 50 feet from the back row of the orchestra to the front.
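For a sense of scale, here is a minimal sketch (my numbers, not from the post: roughly 1130 ft/s for the speed of sound in air) of what that 50-foot stage depth means in time:

```python
# Back-of-the-envelope: arrival-time spread across an orchestra,
# assuming sound travels roughly 1130 ft/s in air (~343 m/s).
SPEED_OF_SOUND_FT_S = 1130.0

stage_depth_ft = 50.0  # back row to front row, per the figure above
spread_ms = stage_depth_ft / SPEED_OF_SOUND_FT_S * 1000.0
print(f"Arrival-time spread: {spread_ms:.0f} ms")  # ~44 ms
```

That is an arrival-time spread of around 44 milliseconds between the back row and the front, far larger than the fractions of a millisecond at stake between drivers in a speaker.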

Is time alignment important to live music? Does reproducing live music need to pay attention to time alignment?

in my opinion, the speaker should try to be a reproducer, not a producer of sound.

have a blessed day,

roy



So it sounds like a digital crossover with adjustable delay would improve the sound of a La Scala. Would it be necessary to get something like the one recommended for the Jubilee (around $1000?) or is there a less expensive alternative?

And for the purists/snobs, are most digital crossovers stereo units, or do some come as monoblocks, one for each speaker?


So sorry to confuse the issue. I thought Islander was inquiring about an active crossover that was non-digital. Analog-based active crossovers (Ashly XR series, Rane AC series, TDM, etc.) generally don't have signal delay capability, though BSS offers what it calls Band Edge Phase Adjustment; see page 6 of http://www.jesther.com/BSS%20FDS%20360%20manual.pdf. We are very much in agreement on the digital side of the crossover issue, including your methodology. The timbral shift I was mentioning is just an unfortunate side effect of over 40" of time/distance between woofer and tweeter in a stock La Scala. Remember, when you are 3' in front of the speaker, you are almost six and a half feet from the actual woofer. Time-distance measurements I have made show that the low-frequency level drop with distance tracks the driver as the source, not the horn mouth. If you've ever wondered why they sound so thin way up close, this is the reason (among other factors). The radial mid was designed for no baffle, just plenty of Mortite. La Scalas were never known to be good near-field monitors... much of the reason is that they were likely voiced from further back.

This is simply incorrect and I couldn't disagree more on a bunch of issues here!

Lots of points are being confused and some are simply being misrepresented.

First, active crossovers have had signal delay for ages; it's the norm! And since when does the Rane AC series lack delay?

Band edge phase delay is a function of the digital filter design, NOT signal delay! The fact that both may employ the term "delay" does not render them equivalent or interchangeable.
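To illustrate the distinction, a minimal sketch (with a hypothetical 1 ms delay; not a model of any particular unit): a true signal delay produces phase that grows in proportion to frequency across the whole band, whereas a filter's band-edge phase shift is localized near its corner frequency.

```python
# A pure signal delay tau shifts phase linearly with frequency:
# phi(f) = -360 * f * tau (degrees). A crossover's band-edge phase
# shift, by contrast, is concentrated around the corner frequency.
tau = 1e-3  # 1 ms of signal delay (hypothetical value)
for f in (100, 1_000, 10_000):  # Hz
    print(f"{f:>6} Hz: {-360.0 * f * tau:>9.1f} degrees")
```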

And the signal delay is NOT referenced by a measurement from the driver or from the mouth! It is referenced to the "acoustical origin" or "acoustical center." And while this may correspond in some devices to a dynamic driver's voice coil, it does NOT correspond to a typical horn's driver; rather, it corresponds to a point within the throat. In fact, a typical CD horn will exhibit a slight astigmatism here, as the apparent center or origin will appear to move a bit with frequency! Keele demonstrated this over 20 years ago!

And regarding the value of signal alignment relative to the proximity of the source...The irony is that the closer you are to the source, the greater the need for this signal alignment, not less!

The reason for this is simple! The further one gets from the drivers, the smaller the inter-driver spacing becomes relative to the total distance. In other words, think of a right triangle. The height, corresponding to the inter-driver spacing, remains constant. Close to the speaker, that height is large relative to the base (the distance from the unit), so the hypotenuse is long compared to the base. As you move further away, the height stays constant while the base grows, and the difference between the hypotenuse and the base shrinks; thus the path-length difference between driver sources becomes smaller and smaller relative to the total distance to the source.
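A quick numerical sketch of that triangle (hypothetical numbers: 1 m inter-driver spacing, listener on-axis with one driver) shows the path difference collapsing with distance:

```python
import math

# Path-length difference between two vertically spaced drivers and a
# listener on-axis with the lower one: hypotenuse minus base. The
# numbers are hypothetical: a 1 m inter-driver spacing is assumed.
offset_m = 1.0
for base_m in (1.0, 2.0, 4.0, 8.0, 16.0):
    diff_m = math.hypot(base_m, offset_m) - base_m
    print(f"{base_m:>5.1f} m away: path difference = {diff_m:.3f} m")
```

At 1 m the extra path is over 0.4 m; at 16 m it has shrunk to about 3 cm.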

At a sufficient distance the multiple drivers will begin to act as a summed source - a 'point' source, if you will (taken a bit loosely). This is precisely why it is difficult to use many modeling tools for a small acoustic space: the driver arrays are modeled as summed point sources rather than as individual drivers!

I am glad that you folks are thinking about this, but you are simply recreating the misunderstandings (and blatant fallacies regarding equipment claims that were never made) that were common 20 years ago, before TEF and time-domain measurements arrived on the scene.

The irony is that if I were presented with this info I might be arguing against the value of signal delay!


We are discussing the effect of two sources at different distances from the listener. When the many players in an orchestra spread out on a stage or in a studio, do they play in time to what they individually hear, or do they compensate (those furthest from the conductor leading their notes, those closest leading less) so that the conductor, microphone, or audience receives the sound of all the musicians at the same time? I think the answer is the former, and the latter couldn't be accomplished even if desired unless all the musicians wore headphones, each differentially adjusted from back to front with decreasing amounts of compensatory signal delay. This is not a difference of inches; it's more like 50 feet from the back row of the orchestra to the front.

Is time alignment important to live music? Does reproducing live music need to pay attention to time alignment?

Other than some electronic instruments, are there many typical instruments whose range spans all three drivers? Regarding time delay, does it really matter if the bassist sounds like he's standing a little further back than the guitarist or vocalist? If high notes from a guitar solo seem to jump forward in time (hmm, mini time-travel effects) as the tweeter comes into play, it might make them really stand out, but now that I think of it, that's yet another type of distortion, albeit a possibly seductive type.

When a pianist runs all the way along the keys, does time misalignment become audible as the notes move from one driver to the next?


So it sounds like a digital crossover with adjustable delay would improve the sound of a La Scala. Would it be necessary to get something like the one recommended for the Jubilee (around $1000?) or is there a less expensive alternative?

And for the purists/snobs, are most digital crossovers stereo units, or do some come as monoblocks, one for each speaker?

There are MANY units that will work just great in both mono and stereo configurations.

A majority of the units coming to market presently combine many functions and can be programmed very precisely by computer, either locally or remotely. And while these units provide a convergence of technology that is advantageous for a touring SR affair, the home user seldom needs the variety of features they offer. Prices for some of these units are greater than what many have spent on their entire system. So please don't let this scare you away! There are still many great units for much less.

Almost all of the major active crossovers feature signal delay. Perhaps the cheapest option is the Behringer unit mentioned above, which retails for $249. Also, any of the 3-way (3 bandpass) units of the "AC" series by Rane will suffice. You have your choice of single- or 2-channel configurations; the AC 23 is such a 2-channel, 3-way design.
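For a ballpark of what you would dial into any of these units, a minimal sketch (assuming the ~40" woofer-to-tweeter offset quoted earlier; the correct value must be measured, since the delay references the drivers' acoustical origins, not physical positions):

```python
# Rough delay setting to align a deeper driver with a shallower one,
# assuming the ~40" woofer-to-tweeter offset quoted earlier. A real
# setting should come from a time-domain measurement of each driver's
# acoustical origin, not from a tape measure.
offset_m = 40 * 0.0254         # ~1.016 m
speed_of_sound_m_s = 343.0     # at room temperature
delay_ms = offset_m / speed_of_sound_m_s * 1000.0
print(f"Delay the forward driver by roughly {delay_ms:.2f} ms")  # ~2.96 ms
```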

As Doc mentioned, there are varying degrees (at varying orders of magnitude) of signal alignment within the time domain. There is the alignment of individual drivers, on which we are focused here. There is also the next 'larger' order of magnitude when we place the speaker unit within a room.

What we have learned is that all of these issues are important, and the goal is to minimize the misalignment as best we can. It is not a question of which one we can ignore. But that is not to say that there are no real-world limiting factors that force compromise. On the driver level, this concern has manifested itself in a variety of attempts at designing coaxial drivers as well as complete speaker systems (e.g., the Frazier Cat). Designs to this effect are not going away, nor is the issue of signal alignment. In fact, these concepts are becoming more fundamental each day.


Other than some electronic instruments, are there many typical instruments whose range spans all three drivers? Regarding time delay, does it really matter if the bassist sounds like he's standing a little further back than the guitarist or vocalist? If high notes from a guitar solo seem to jump forward in time (hmm, mini time-travel effects) as the tweeter comes into play, it might make them really stand out, but now that I think of it, that's yet another type of distortion, albeit a possibly seductive type.

When a pianist runs all the way along the keys, does time misalignment become audible as the notes move from one driver to the next?

Most recording techniques fail to capture these source phase relationships. Again, we go back to modern recording techniques.

One method does capture these phase relationships, and Richard Clarke has taken over development of the technology: the In-The-Ear recording technique pioneered by Don and Carolyn Davis in the 1980s and '90s using the Etymotic ITE Pressure Zone microphones.

But there is an order of magnitude of difference between the performance and the reproduction of a recording where phase differences between performers have been replaced by artists in vocal isolation booths, multi-miked and overdubbed sources performed on, say, different days, and localization provided via the pan knob of the engineer. No amount of signal synchronization will correct for this, short of perhaps a time machine. So please do not assume the recording to be the pristine source to be worshiped - all the King's men will not put Humpty Dumpty back together again, and the playback can never be more accurate than the source with its inherent limitations. And no playback/reproduction system can remedy problems not addressed at this fundamental stage.

And please be aware that the imaging that so many worship is an acoustical illusion, accounted for by the Henry precedence effect.

Misalignment in playback manifests itself at the speaker level via group delay errors and via multi-source interference, in the form of superposition that results in comb filtering and frequency-variant polar response anomalies. These errors are further exacerbated at the room level by reflections within the room, at the next larger order of magnitude in the signal chain.
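To see what that superposition looks like numerically, a minimal sketch (two equal-level sources and a hypothetical 1 ms arrival offset):

```python
import numpy as np

# Two equal-level sources summed with a relative arrival delay tau
# form a comb filter: |1 + exp(-j*2*pi*f*tau)|. Nulls fall at odd
# multiples of 1/(2*tau); +6 dB peaks at integer multiples of 1/tau.
tau = 1e-3  # 1 ms arrival-time offset (hypothetical)
for f in (250.0, 500.0, 1_000.0, 1_500.0, 2_000.0):  # Hz
    mag = abs(1 + np.exp(-2j * np.pi * f * tau))
    db = 20 * np.log10(mag) if mag > 1e-6 else float("-inf")
    print(f"{f:>7.0f} Hz: {db:>8.1f} dB (re a single source)")
```

With a 1 ms offset the response swings from +6 dB at 1 kHz and 2 kHz to deep nulls at 500 Hz and 1.5 kHz - the comb filtering referred to above.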

I might add, one thing that makes this discussion so difficult is that we so easily jump from one end of the acoustical factor spectrum to the other in the course of the discussion! It is hard enough to describe what is going on at any particular level without jumping from the studio/recording process to the speaker level to the room level of interaction! But the behavior of sound in the time domain is fundamental and critical at every step in the process.


We are discussing the effect of two sources at different distances from the listener. When the many players in an orchestra spread out on a stage or in a studio, do they play in time to what they individually hear, or do they compensate (those furthest from the conductor leading their notes, those closest leading less) so that the conductor, microphone, or audience receives the sound of all the musicians at the same time? I think the answer is the former, and the latter couldn't be accomplished even if desired unless all the musicians wore headphones, each differentially adjusted from back to front with decreasing amounts of compensatory signal delay. This is not a difference of inches; it's more like 50 feet from the back row of the orchestra to the front.

Is time alignment important to live music? Does reproducing live music need to pay attention to time alignment?

in my opinion, the speaker should try to be a reproducer, not a producer of sound.

have a blessed day,

roy

Ah yes, but which sound? Not the sound any of the individual musicians hear, for they are each individually in a "wrong place"; maybe the sound heard by an audience member - but which one? The one up front to the right, or in back, or in the center, or behind the lady with the big hair, or wherever? Maybe the conductor; but he moves around and turns his head (Doppler and balance shifts), and the applause (and coughs and sniffs and seat squeaks) all come to the conductor from behind.

Each location has its own unique sound. Multiple microphones mixed to stereo will create an additional synthetic location, and thereby an additional unique sound that could not be heard by anyone in attendance, even if they showed up with a cherry picker, rope, and tackle harness.

I agree that the speaker should be a reproducer, not a producer, of the signal, but the question here is about the sound that makes the signal.


Paul, you are confusing two different things.

Consider the fact that a bunch of instruments playing the same thing sounds different than a single instrument because the sound from each instrument arrives at a slightly different time. Change the relative arrival times and it will sound different! When you place a microphone in the performance hall, it will capture the important relative arrival-time differences between the instruments. All that matters is that the recording engineer picked the location of the mic because he wanted that particular sound captured on the recording... and part of that sound involves the very specific time and phase information.

The purpose of the speaker is to play back that which was intentionally put onto the recording medium. Any amount of time and phase distortion deviates from what the recording engineer intended.

It's one thing to criticize the decisions of the recording engineer, but that has absolutely zero relevance to speaker design. The speaker should reproduce that which was recorded. That means not introducing time and phase distortions. It really doesn't matter how much time and phase distortion the recording engineer allowed into the recording - it is the job of every component in the signal path to not add more. Besides, some of that distortion may even be intentional on the engineer's part - kinda like guitars and tube amps in most rock.

You don't get to add more distortion just because you can conjure up a way to describe the original sound as having the same distortion.

You keep bringing this argument up over and over, and I have to wonder if you're honestly trying to find an answer to your question. It's really only adding to the total confusion.

I don't mean this directly at you, but I have to wonder if the reason audiophiles are so resistant to new ideas is that they inherently think someone is telling them their system sounds like crap - and far be it from golden-ear audiophiles to enjoy a system that has distortion!

Surely someone out there can understand the frustration when someone tries to argue that the world is flat or that gravity doesn't exist. Or is there something else at play? Is it the messenger?


Dr Who,

You think I'm confusing the capturing of time and phase information (when done by the engineer) with time and phase distortion (when done by the speaker). You summarized it as:

"It
really doesn't matter how much time and phase distortion the recording
engineer allowed into the recording - it is the job of every component
in the signal path to not add more."

Think about sound instead of
signal and please refrain from accusing me of not asking legitimate
questions. I don't claim to be an audiophile, I just came for the
music, stayed for the chicks. Pauln


Dr Who,

You think I'm confusing the capturing of time and phase information (when done by the engineer) with time and phase distortion (when done by the speaker). You summarized it as:

"It

really doesn't matter how much time and phase distortion the recording

engineer allowed into the recording - it is the job of every component

in the signal path to not add more."

Think about sound instead of

signal and please refrain from accusing me of not asking legitimate

questions. I don't claim to be an audiophile, I just came for the

music, stayed for the chicks. Pauln

Unfortunately, the processing path is not so linear!

Yes, there are fundamental limitations in the recording process involved with capturing the relative phase information (from one instrument to another) in the recording. (And note, except in one method, there is no capture of any absolute phase information - all instruments to a common point - assuming that it even IS an ensemble recording with no sweetening!)

But considering that this information is then assigned to an input channel and relegated to a pan pot (and phase in the recording process... most recordings are lucky if they are able to sync the punch-ins with the click tracks!), this discussion NEEDS to be broken down into the record process AND a separate playback/recreation process! The gap between the two is equivalent to the cartoon with several blackboards of equations connected by an inconspicuous "and then a miracle happens" or an even more innocuous "and it naturally follows". And replying with the equivalent of "Hey, how can I worry about why your car doesn't work when I am concerned with the state of the UNIVERSE?" does not make the task more manageable or understandable.

Sorry, but Doc is right. While the entire process bears examining, confusing the two 'parts' of the process does not add to a greater understanding of the whole. A better organization would be to discuss the limitations of the recording process in a recording-process discussion, and to address the limitations of the playback/reproduction process in a discussion of the reproduction process.

But this confusion does point to a fundamental misunderstanding by many of the entire process - and this misunderstanding is apparently the source of many assumptions and misconceptions about what it is exactly that a system is capturing and reproducing... assumptions that are simply not supported by the facts.**

But those assumptions aside, there are still the various orders of magnitude with regard to the various electromechanical and passive elements that make up the speaker and the speaker-room interface - the reproductive environment. And even if we assume that everything else is ideal, we still have our hands full with this aspect. And that is the area on which Roy, Doc, and others are trying to focus.

Edit - You know, it would be nice if we could simply define a convenient theory of everything in these discussions. But what I would be happy to see is if we could simply try to focus on, and understand, just a very small part of the whole for a change. And unfortunately, few seem willing to try this (even if it means that other questions are raised! In fact, if one's thinking changes, new questions should follow!). Now THAT would be a nice start! But then this involves understanding just a bit of the basic physics... And unfortunately that is the rallying call for the subjectivists to run in and say, "Well, how can you know anything anyway! We don't, and look how happy we are!"

{** And without needing to allude to things technical, there is no better indicator of this than someone buying into the hooey that 'doing sound' gets you chicks! Yup, 3:30 AM and you're sweaty and you stink and you are trying to figure out just where in the hell you are while standing in the loadout in the 'best' part of some strange town with the bouquet of overflowing dumpsters wafting through the early morning air... Yup! The chicks just flock to you like moths! No, wait, those ARE moths! Anyone who had even a glimmer of insight into 'sound' dumped those fantasies long ago. And that fantasy is just the start! ...You might want to sit down... ;-) }


We are discussing the effect of two sources at different distances from the listener. When the many players in an orchestra spread out on a stage or in a studio, do they play in time to what they individually hear, or do they compensate (those furthest from the conductor leading their notes, those closest leading less) so that the conductor, microphone, or audience receives the sound of all the musicians at the same time? I think the answer is the former, and the latter couldn't be accomplished even if desired unless all the musicians wore headphones, each differentially adjusted from back to front with decreasing amounts of compensatory signal delay. This is not a difference of inches; it's more like 50 feet from the back row of the orchestra to the front.

Is time alignment important to live music? Does reproducing live music need to pay attention to time alignment?

in my opinion, the speaker should try to be a reproducer, not a producer of sound.

have a blessed day,

roy

Ah yes, but which sound? Not the sound any of the individual musicians hear, for they are each individually in a "wrong place"; maybe the sound heard by an audience member - but which one? The one up front to the right, or in back, or in the center, or behind the lady with the big hair, or wherever? Maybe the conductor; but he moves around and turns his head (Doppler and balance shifts), and the applause (and coughs and sniffs and seat squeaks) all come to the conductor from behind.

Each location has its own unique sound. Multiple microphones mixed to stereo will create an additional synthetic location, and thereby an additional unique sound that could not be heard by anyone in attendance, even if they showed up with a cherry picker, rope, and tackle harness.

I agree that the speaker should be a reproducer, not a producer, of the signal, but the question here is about the sound that makes the signal.

that dumb tree that fell in the forest!!

have a blessed day,

roy


that dumb tree that fell in the forest!!

Dang blang...

1. I never heard it

2. Do you think it caused harmonic distortion when the beaver cut it down? (Perhaps some intermodulation when it hit the beaver on the head?)

3. Was any time delay needed such that when it hit the ground, it made the noise?


You think I'm confusing the capturing of time and phase information (when done by the engineer) with time and phase distortion (when done by the speaker).

Correct me if I'm wrong, but you are trying to claim that because it is impossible to maintain perfect time and phase information during the recording process, the playback system doesn't need to maintain it. If so, that's not how it works.

Think about sound instead of signal and please refrain from accusing me of not asking legitimate questions. I don't claim to be an audiophile, I just came for the music, stayed for the chicks.

Sound = signal. And I'm not accusing anyone of not asking legitimate questions. I'm just curious whether you're trying to make a point or genuinely asking a question. If the former, I dunno what to say, other than perhaps you should go talk to some of the experts - like perhaps the dudes at Syn-Aud-Con...


You think I'm confusing the capturing of time and phase information (when done by the engineer) with time and phase distortion (when done by the speaker).

Correct me if I'm wrong, but you are trying to claim that because it is impossible to maintain perfect time and phase information during the recording process, the playback system doesn't need to maintain it. If so, that's not how it works.

Think about sound instead of signal and please refrain from accusing me of not asking legitimate questions. I don't claim to be an audiophile, I just came for the music, stayed for the chicks.

Sound = signal. And I'm not accusing anyone of not asking legitimate questions. I'm just curious whether you're trying to make a point or genuinely asking a question. If the former, I dunno what to say, other than perhaps you should go talk to some of the experts - like perhaps the dudes at Syn-Aud-Con...

from the recording's perspective, it does not matter what was captured. what matters is that it is reproduced as faithfully as possible. how bad or good a recording is should never depend on the speaker; at least that is the goal. if it does, something is wrong in bodcaw.

have a blessed day,

roy


Coytee,

Does Chloe know that you are near the confuser again?


It IS the lips, man, it's the lips!

OK, Mr. 19 Word Count... Sugarlips is Hattie. This one is Chloe... Next we'll show you Otis, then Tubby, then Fanny, then Newt, then Carrot Top, then Blacky...


Link to comment
Share on other sites

Signal = Sound? Now there is the very heart of the confusion. Sound is what one hears in one's mind; signal is what one measures from transducers. These are not the same. If one thinks one is measuring sound, look at the measuring device and one will find it is the signal from the transducer that is measured, not the sound. The same applies to Signal = Sight, Taste, Touch, Pain, Hot, Cold, Pressure...

Oh well, guess it's time for a dog picture. This is Lucy, half Border Collie, half Lab...



Signal = Sound? Now there is the very heart of the confusion. Sound is what one hears in one's mind; signal is what one measures from transducers...

Oh...you are on your own regarding the voices in your head.

But if you have questions about speakers, we would be glad to hear about them for a change.

