
so, i created a 3ms time-delay convolution (did this at 400Hz for the La Scala)


tofu


Dr. Who, you too seem to have savvy, coupled with the right equipment to do a simple test for upload. Putting aside the issue of calculating the speed of sound, let's focus on audibility of the "two-tap" phenomenon. Can you put together single-tone LF tracks (at the upper, not lower, end of the KHorn woofer's handling ability) with increasing delays until you think you've found an audible echo? If so, upload it and tell us what the delay factor was. I think this would accurately tell us whether the relative pathways for woofers vs. mid-ranges in this model have any audible significance.



Hmmm, seems I'm talking about yet another thing (more along the lines of the beats)...

Let me try an example:

A = 440Hz

B = 540Hz

When A and B are playing at the same time, they sum to a complex waveform, C (one that doesn't look like a sine wave). If we perform a Fourier transform on the sum C, we will notice measured frequencies at 100Hz (the beat or difference tone) and 980Hz (the combination tone)...the cool thing is we will also hear these two extra tones (so it's not just something that we measure). The point I was trying to make is that the 100Hz and 980Hz "combination tones" will change based on the relative phase and amplitude of Tone A and Tone B. All of the tones will remain the same, but the relative amplitudes of each at any point in time will be different.
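Whether those extra spectral lines actually show up in a purely linear sum of A and B is easy to check numerically. A minimal sketch (assuming 1-second tones at a 44.1kHz sample rate, values chosen only for illustration):

```python
import numpy as np

# Two pure tones, as in the example above (assumed: 1 s at 44.1 kHz).
fs = 44100
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440 * t)  # Tone A
b = np.sin(2 * np.pi * 540 * t)  # Tone B

# Spectrum of the linear sum, normalized so a full-scale tone reads ~1.0.
spectrum = np.abs(np.fft.rfft(a + b)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, d=1 / fs)

def level(f):
    # Magnitude at the bin nearest f (bins are 1 Hz apart here).
    return spectrum[np.argmin(np.abs(freqs - f))]

print(level(440), level(540))  # the two original tones: ~1.0 each
print(level(100), level(980))  # difference/sum bins: ~0.0 for a linear sum
```

For a strictly linear sum the spectrum contains only 440Hz and 540Hz; the 100Hz beat is an amplitude envelope rather than a spectral line. Whether that distinction matters here is the nonlinearity question Al raises later in the thread.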

Whenever A and B are not playing at the same time, we no longer have the "combination tones" or the summed complex frequency, C. We have A playing and then B playing.

Let's say we are playing a sound over a time period where our ears wouldn't be able to distinguish between two different sounds. I think it is obvious that A and B playing at the same time over that time period will sound different than if the first half was A and the second half was B, even though both cases will be perceived as a single pulse of sound. The difference is in the combination tones.

If our song is supposed to play two frequencies at the same time, then our khorn is going to change the single note into a different sound. The higher frequency will arrive first by itself for 7ms (no combination tones even though they should be there), then for a short segment of time we will hear both at the same time (but the phase between A and B will be different, and that implies a new phase and amplitude for the combination tones), and then finally we will hear the lower frequency by itself for 7ms (again, no combination tones that should be there). By the time our pulse is finished, the khorn will also have stretched the note out an extra 7ms! I have no problem saying that we will still perceive this as a single pulse of sound, but the timbre of the note will certainly be different.

Now if we were to increase the time-delay difference, there is a point where the tap-dancing phenomenon starts to occur, where we then perceive two separate tone bursts instead of a single one.

The comb-filtering that occurs because two drivers are playing the same frequency is yet another factor to worry about...but one that gets minimized by Al's super-steep crossovers. The slope of the crossover, however, is not going to directly affect the combination-tone difference due to the time-delay. And as Al mentioned, two drivers playing different frequencies do not exhibit any comb-filtering effects. To differentiate the effects between the two, we can limit measurements to frequencies outside the crossover passband (where comb-filtering isn't an issue).
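For reference, two equal, coherent sources offset by a delay tau combine to a comb filter with magnitude |1 + e^(-j*2*pi*f*tau)| = 2|cos(pi*f*tau)|, with nulls at odd multiples of 1/(2*tau). A quick sketch, assuming the 7ms figure discussed in this thread:

```python
import numpy as np

tau = 0.007  # assumed 7 ms offset between the two sources

def comb_mag(f):
    # Two equal, coherent sources, one delayed by tau: |1 + e^{-j 2 pi f tau}|
    return np.abs(1 + np.exp(-2j * np.pi * f * tau))

first_null = 1 / (2 * tau)   # ~71.4 Hz; further nulls at odd multiples
print(round(first_null, 1))  # 71.4
print(comb_mag(first_null))  # ~0.0 (complete cancellation)
print(comb_mag(1 / tau))     # ~2.0 (in-phase reinforcement at ~142.9 Hz)
```

A steep crossover narrows the band over which both drivers contribute at once, which is why the comb nulls are confined to a small overlap region.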


Dr. Who, you too seem to have savvy, coupled with the right equipment to do a simple test for upload. Putting aside the issue of calculating the speed of sound, let's focus on audibility of the "two-tap" phenomenon. Can you put together single-tone LF tracks (at the upper, not lower, end of the KHorn woofer's handling ability) with increasing delays until you think you've found an audible echo? If so, upload it and tell us what the delay factor was. I think this would accurately tell us whether the relative pathways for woofers vs. mid-ranges in this model have any audible significance.

My software at home only allows 9ms intervals so I'm not sure how feasible that would be. Besides, the khorn doesn't have the "two-tap" phenomenon.

There is a minimum time-delay where the "two tap" occurs. For time periods shorter than this interval, the perceived intermodulation distortion is still going to change (as a function of the time-delay). I have always hesitated to refer to it as "intermodulation distortion" because distortion is always implied to be a bad thing, but the intermodulation distortion is vital to the sound of live instruments. I've always heard IMD used when measuring the performance of a particular driver (every driver has some form of harmonic distortion, which will in turn produce intermodulation distortion).

http://www.audiovideo101.com/dictionary/im-distortion.asp

Hopefully the analogy (A, then A+B, then B) makes it clear that the "average" intermodulation distortion over the time period will be different than the original (A+B only). We still hear it as one note, but our ears are going to hear the "average". How audible is it? Just listen to the demo [;)] (every drum hit sounds like one hit, but the timbre changes)


Who,

I don't understand why you are pursuing the sum-of-two-tones-through-the-Khorn-woofer idea. A horn-loaded speaker has very low intermod distortion. It's under 1% even at high volume levels. The IM distortion of a direct radiator is a lot worse. Intermod will NOT generate an echo. It just sounds bad! I have done intermod tests at midrange frequencies and I will post the plots to show that there is very little distortion generated by a horn speaker if you care to see them. The diaphragm motion is also very low compared to a direct radiator. The large diaphragm motion COULD cause intermod in the form of Doppler shift, as a high-frequency source (the cone) is moved toward you and away from you by the lower frequency of the two. Again, the horn reduces the diaphragm motion and therefore the Doppler distortion and all the other types of distortion. I really think you are barking up the wrong tree!

BTW: The distortion caused by Doppler would be frequency modulation, not IM distortion. I believe that, plus IM distortion, is what PWK called "total modulation distortion" in his papers.

AL K.


It's not the intermodulation distortion of the bass bin that is the problem! The IMD ratings for speakers are measuring the amounts of IMD present that are directly associated with the harmonic distortion of the driver. In the issue of time-delay I'm not even talking about the drivers. The link I posted explains this better.

Any source of sound where more than one frequency is present exhibits intermodulation distortion. It's the intermodulation distortion between the delayed frequencies and undelayed frequencies that is changing as a function of the time-delay. (In other words, the combination tones between the delayed frequencies and undelayed frequencies change as a function of the amount of time-delay.)

I tried so hard not to refer to it as intermodulation distortion because it instantly gets associated with a driver spec, but it's not the driver specs that I'm talking about.


Who,

You are looking for intermod products generated by two sources that are out of time with each other, but IM distortion is caused by nonlinear systems like amplifiers and speakers. Where is the nonlinearity? Are you assuming the air between you and the speakers is nonlinear? Are your ears nonlinear? If your ears are nonlinear, everything you ever heard in your life must be distorted! Your ears are the "given" in this discussion. The echo is caused by two different path lengths, just like the echo of a sound bouncing off a far wall. It is purely propagation time, nothing more!

Al K.


Here I go, again being an unseasoned "audiophile" trying to intelligently digest this theory. But..........

Dr Who, you gave the A, A+B, B approach. I get it. However, if we are saying that a time-delay (phase shift?) has no effect on the frequency or tone, then where does your approach come into play?

If, for example, a 400Hz note is projected through the speaker: A is the mid-range blowing 400Hz, A+B is the mid-range and woofer blowing 400Hz, and B is the woofer blowing 400Hz.

Now, let's separate the phase shift for a moment and focus only on the note. Doesn't a 400Hz note sound the SAME, whether it is projected through a mid-range or woofer? I would think so. 400Hz is 400Hz is 400Hz.

So, bearing in mind that we are only dealing with a single, fixed note for this purpose, it seems A is the mid blowing 400Hz, A+B means the woofer just joined in at the same tone, and B is the woofer dragging behind at the same tone at the end.

So, it should sound like one continuous note, and the only difference would be that its total length is 7ms longer.

That said, I can see where the phase-shift would cause a set of repeated gaps and bleedovers through a song, but only in theory and not to our audible reality. Let's say the guitar and bass are supposed to come in together. We hear the guitar come in through the mid. Next, we hear the bass. Let's assume now that the guitar and bass are supposed to stop at the same time. The bass would then bleed over. Of course, this assumes NO common playing of the same frequency within the single instrument through BOTH the mid and the woofer (i.e., the bass only comes in through the woofer). If there is commonality, it seems the gaps and bleeds are reduced.

Am I right so far?

If so, I can't imagine you'd hear any difference because in the real world, I don't know of a guitarist who can keep up with a bassist to within 7ms, and vice versa. That goes for all instruments. That also goes for changes in tunes within the same instrument that could THEORETICALLY traverse from mid to woofer IN A MUTUALLY-EXCLUSIVE manner. Once the tone played by a single instrument crosses into the overlap and is played by both the mid and woofer, there would be no phase shift.

So, it would appear the "real" issue is the delayed starts and stops WITHIN WHAT SHOULD BE ONLY A SINGLE HARMONY. Like I said, I know of no musician who can keep up with other musicians to within 7ms.

Right or wrong?


Here's the time delay with a drum segment. Keep in mind that it's the same 2000Hz crossover. It's interesting to note how much different the low end sounds, yet it's so far away from the crossover point of delay.

And at 9ms you can just hear the kick drum doubling up too.

I don't hear it. I'm a drummer, and I can't hear it. I'm not trying to go into credentials by saying I am a drummer, but I generally tend to focus on drums in a tune.

Those echoes are in both versions. They are part of the song.


Who,

You are looking for intermod products generated by two sources that are out of time with each other, but IM distortion is caused by nonlinear systems like amplifiers and speakers. Where is the nonlinearity? Are you assuming the air between you and the speakers is nonlinear? Are your ears nonlinear? If your ears are nonlinear, everything you ever heard in your life must be distorted! Your ears are the "given" in this discussion. The echo is caused by two different path lengths, just like the echo of a sound bouncing off a far wall. It is purely propagation time, nothing more!

I feel like we're struggling with terminology here...I am using intermod distortion interchangeably with combination tones. You are referring to intermod distortion measurements as they pertain to harmonic distortion.

Any two frequencies, when played at the same time, also create sum and difference tones. The set of all these tones can be referred to as intermodulation distortion:

"Intermodulation distortion begins with additional frequencies being produced by the sum and difference (addition and subtraction) of two original frequencies. For instance, two frequencies of 2 kHz and 8 kHz may have been produced by an audio system. Intermodulation distortion would create two additional signals at 10 kHz (the sum or addition of 2 and 8 kHz) and 6 kHz (the difference between the numbers or subtraction of 2 from 8 kHz)."

http://www.audiovideo101.com/dictionary/im-distortion.asp

The measurements you cite stem from the harmonic distortion of the nonlinear devices you mention:

"Each of these new signals, the first parts of intermodulation distortion, would then develop harmonics of their own (creating harmonics from 6 kHz of 12 kHz, 18 kHz and so on, while producing harmonics of 10 kHz at 20 kHz, 30 kHz and so on). To further confuse the issue and add even more distortion, these harmonics (the harmonics created from the sum and difference frequencies of the original two frequencies) would then create their own sum and difference distortions (for example, the first harmonics located at 12 kHz and 20 kHz would create a sum frequency of 32 kHz and a difference frequency of 8 kHz)."

http://www.audiovideo101.com/dictionary/im-distortion.asp
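The dictionary's 2 kHz + 8 kHz example can be reproduced numerically: the sum and difference products only appear once a nonlinearity is present. A minimal sketch (the quadratic term and its 0.1 coefficient are illustrative assumptions, not a model of any particular driver):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # 1 s of samples
x = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 8000 * t)

# A mildly nonlinear system (assumed quadratic term) generates the
# sum and difference products; a purely linear system (y = x) would not.
y = x + 0.1 * x**2

spectrum = np.abs(np.fft.rfft(y)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
peak = lambda f0: spectrum[np.argmin(np.abs(freqs - f0))]

print(peak(2000), peak(8000))   # original tones: ~1.0 each
print(peak(6000), peak(10000))  # difference and sum products: ~0.1 each
```

With the nonlinear term removed, the 6 kHz and 10 kHz bins drop to zero, which is the crux of Al's objection that a linear time delay by itself cannot create intermod products.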

If I'm incorrectly using the term "IMD" then by all means interchange it with the concept of combination tones...which are always there (they don't require a nonlinear system). The point is that the combination tones are changing with time (A, A+B, B), and even though it happens in a timespan that is recognized as a single instance, the perceived timbre still changes.



The point is that the combination tones are changing with time (A, A+B, B), and even though it happens in a timespan that is recognized as a single instance, the perceived timbre still changes.


Okay, read my prior post about 400Hz is 400Hz is 400Hz, and no musician being capable of keeping up to a 7ms degree of accuracy. If this is the difference you refer to as a difference in "perceived timbre," I still think it's imperceptible.


Here I go, again being an unseasoned "audiophile" trying to intelligently digest this theory. But..........

Dr Who, you gave the A, A+B, B approach. I get it. However, if we are saying that a time-delay (phase shift?) has no effect on the frequency or tone, then where does your approach come into play?

If, for example, a 400Hz note is projected through the speaker: A is the mid-range blowing 400Hz, A+B is the mid-range and woofer blowing 400Hz, and B is the woofer blowing 400Hz.

Now, let's separate the phase shift for a moment and focus only on the note. Doesn't a 400Hz note sound the SAME, whether it is projected through a mid-range or woofer? I would think so. 400Hz is 400Hz is 400Hz.

So, bearing in mind that we are only dealing with a single, fixed note for this purpose, it seems A is the mid blowing 400Hz, A+B means the woofer just joined in at the same tone, and B is the woofer dragging behind at the same tone at the end.

So, it should sound like one continuous note, and the only difference would be that its total length is 7ms longer.

That said, I can see where the phase-shift would cause a set of repeated gaps and bleedovers through a song, but only in theory and not to our audible reality. Let's say the guitar and bass are supposed to come in together. We hear the guitar come in through the mid. Next, we hear the bass. Let's assume now that the guitar and bass are supposed to stop at the same time. The bass would then bleed over. Of course, this assumes NO common playing of the same frequency within the single instrument through BOTH the mid and the woofer (i.e., the bass only comes in through the woofer). If there is commonality, it seems the gaps and bleeds are reduced.

Am I right so far?

If so, I can't imagine you'd hear any difference because in the real world, I don't know of a guitarist who can keep up with a bassist to within 7ms, and vice versa. That goes for all instruments. That also goes for changes in tunes within the same instrument that could THEORETICALLY traverse from mid to woofer IN A MUTUALLY-EXCLUSIVE manner. Once the tone played by a single instrument crosses into the overlap and is played by both the mid and woofer, there would be no phase shift.

So, it would appear the "real" issue is the delayed starts and stops WITHIN WHAT SHOULD BE ONLY A SINGLE HARMONY. Like I said, I know of no musician who can keep up with other musicians to within 7ms.

Right or wrong?

The problem comes into play when we have a 60Hz frequency on the woofer and a 4kHz frequency on the squawker, not when a single tone is being played. In other words, every frequency below 400Hz is going to be delayed relative to any frequency above 400Hz (ignoring the passband region of the crossover). All of these frequencies are going to NOT interact for the first 7ms and last 7ms of the sound. In the middle, the phase between all the frequencies will be different than if both sounds arrived at the same time...so in the middle section the combination tones are going to have a different amplitude structure.
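That arrival structure (high band alone, then overlap, then low band alone) can be sketched directly. The 7ms figure and the 60Hz/4kHz tone choices below are just the illustrative values from this discussion:

```python
import numpy as np

fs = 44100
delay = int(0.007 * fs)  # assumed ~7 ms path-length difference, in samples
n = int(0.1 * fs)        # a 100 ms burst
t = np.arange(n) / fs

high = np.sin(2 * np.pi * 4000 * t)  # squawker band
low = np.sin(2 * np.pi * 60 * t)     # woofer band

# The woofer's longer path delays the low band: shift it `delay` samples
# later and extend the total length accordingly.
out = np.zeros(n + delay)
out[:n] += high     # high band arrives first
out[delay:] += low  # low band arrives ~7 ms later

# First `delay` samples: high band alone. Last `delay` samples: low band
# alone. Only the middle region contains both bands interacting.
print(delay / fs)   # ~0.007 s of non-overlap at each end
```

The burst also ends up `delay` samples longer than either band on its own, matching the "stretched an extra 7ms" point made earlier in the thread.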

Btw, here's another version of the drum delay without all the rain sticks in the front (so it starts right on the tribal drum hit). This should make it easier to A/B between the two, because you can just hit next right away and hear the difference quicker.

Drum Time Delay short.zip

