
so, I created a 3ms time delay convolution (did this at 400Hz for the La Scala)


tofu


Show me a loudspeaker that is minimum phase. I would like to see it. Then I will show you the insertion phase of a Belle Klipsch and the insertion phase of an extreme-slope filter, which comes to a very high group delay peak. You will not even be able to find the contribution of the filter to the total phase of the two. Only the phase linearity errors of the filter represent group delay that is not "flat". The TIME delay (and the excess phase it contributes) has perfectly flat group delay, because the derivative of the phase shift due to the time delay is constant. It does NOT distort the complex waveform at all.

BTW: The derivative of phase with frequency IS group delay. This is the definition of it! This is a measure of phase linearity. It is NOT time delay.

Al K.
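A quick numpy sketch of that definition (the 3 ms delay and the frequency grid here are arbitrary, for illustration): the phase of a pure time delay is linear in frequency, so its group delay comes out perfectly flat and equal to the delay itself.

```python
import numpy as np

tau = 0.003                        # an assumed 3 ms pure time delay
f = np.linspace(1, 20000, 2000)    # arbitrary analysis frequencies (Hz)
w = 2 * np.pi * f                  # angular frequency (rad/s)

phase = -w * tau                   # phase of a pure delay is linear in frequency
group_delay = -np.gradient(phase, w)   # definition: tau_g = -d(phase)/d(omega)

# Every value comes out ~0.003 s: flat group delay, equal to the delay itself.
print(group_delay.min(), group_delay.max())
```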



Since the thread got deleted I will go ahead and reupload the audio sample that portrays the audible effect of a very short time delay. Sadly, the software I'm using limits me to a 2000Hz crossover point and 9ms of delay (so everything below 2000Hz is delayed 9ms), so I can't model a transfer function similar to the khorn (which has a delay of 7ms below 400Hz).

A = Original

B = Delayed

Just so it's clear, I took a stereo file and converted it to mono (because I only have a stereo editor here at home). I then converted this mono file back to stereo so that the L and R tracks were identical. To the mono stereo track I applied a HPF on the left track and a LPF on the right track. I then converted to a mono audio file and then converted back to stereo, yielding "A." To the original mono stereo track I also applied the same HPF and LPF, but before converting to mono I delayed the left track by 10ms, effectively delaying every signal below 2000Hz. I converted this mono track back to stereo, yielding "B." Both files were then faded out after a minute and converted to mp3 so that the file size wasn't ridiculous.

Time Delay.zip
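For anyone who wants to reproduce the A/B files without that software: a rough Python/scipy sketch of the procedure described above, following its stated intent (the band below 2000Hz delayed by 10ms). The 8th-order Butterworth filters and 44.1kHz sample rate are my assumptions; the post doesn't say what the software uses.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100                        # assumed sample rate
fc = 2000                         # crossover point, per the post
d = int(round(0.010 * fs))        # 10 ms delay applied to the low band

# Assumed 8th-order Butterworth high/low split; the actual software's
# filters are unknown.
sos_lp = butter(8, fc, btype='low', fs=fs, output='sos')
sos_hp = butter(8, fc, btype='high', fs=fs, output='sos')

def make_ab(x):
    low, high = sosfilt(sos_lp, x), sosfilt(sos_hp, x)
    a = low + high                                        # "A": bands recombined
    low_late = np.concatenate([np.zeros(d), low])[:len(low)]
    b = low_late + high                                   # "B": low band 10 ms late
    return a, b
```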


I'm not nearly as inclined to research this topic and make myself a sound engineer at the end of the day.... but let me just try to throw some common-sense on this topic.

You are talking 2 and 3ms. That's .002 and .003 seconds. I doubt seriously the ear could ever discern such a difference.

Even if it was possible to discern it, making it "audible," it still takes a HUGE jump to say this audible difference is significant enough to translate into the ability to affect perceived "quality."

I would bet there is nobody that could listen to two tracks (one we would use as a reference, and the other we would use as the "delayed" subject), and sit there and honestly tell me one sounds "worse" than the other. I think given what folks are trying to push in this theory, a more "pure" test might be to have it relate to a single note. With perfectly calibrated instruments, record an "A" for 5 seconds or so. Then, place a 2 or 3 ms gap at the beginning of a second "A" and dub it right over the old "A" for the full 5 seconds or so.

What you would have in the end is what we are all discussing. You have the frequencies coming in at different intervals. BUT these are the same frequency, so you've removed all the other variables and got right down to delay ONLY. If you did this, could you not measure the pitch with a pitch meter?

I would bet without a pitch meter (just using the plain old ears), there is nobody that could distinguish any change in the pitch (or any so-called "noise" this delay might create). I would also bet a pitch meter (I hope I'm using the right name for the instrument - those things that help people tune their guitars, etc.), would also register a perfect "A."

Just because you can "picture" the wave "noise" that results from time delay in your head on a graph does not mean you can hear it.

I would even bet you could play the single "A" note by itself and insert 2 - 3 ms blank intervals. For example, the note sustains for 1 second, then is shut off for 3 ms, then back on for 1 second, etc. I bet everyone would agree once played back that there is absolutely no discerning any gap anywhere. It would sound like a constant note.

Imagine a woodpecker sitting on a tree and pecking it 500 times in 1 second (that would be 2 ms intervals) or 333 times in 1 second (which would be 3 ms intervals). It would have to sound like 1 "smack" to everyone.

Now, imagine the woodpecker had 2 differently-tuned pieces of wood to peck, and could peck each 1 at a time, at the same 3 ms rate of speed. There, you might hear a different note. But you will not hear both notes. This is what somebody referred to as delay affecting timbre (not timber[;)]). But remember, there ARE 2 different notes being played when this occurs; the deal is that they are so close together, they sound like 1 - but only a different 1. That's the same thing that would occur if a guitarist could play different notes that quickly back-and-forth.

However, the music we listen to is nowhere close to this example. Nobody listens to 2 notes and calls them a song (except Ted Nugent). Really, though, when you have such a limited number of notes in the mix, it "could" affect timbre, but it would still sound so "pure" there would be no quality issue.

When you get a bunch of notes moving around in harmony, there is NO WAY you could ever process this immensely minute difference in timbre. The brain is too busy hearing the song.

So, my belief is this delay phenomenon is interesting in terms of picturing graphs and pondering how the "obvious picture" might register in terms of the quality of the audible sound. But I think our brains process sound based upon what the focus of attention is. Ever been watching a news story, and your wife says something, but it doesn't register clearly? Ever been talking to your wife while a news story is on, and the news story did not register clearly? These are the EXACT same sounds, only you heard them qualitatively differently.

If you are being a "good" audiophile - not thinking about anything else but that song - there are still so many notes flying around and so much harmony put together that you could never discern any qualitative difference between frequencies that came in delayed by 3 ms.

By the way, it is only when these delayed pitches mix that we encounter a "noise" experience. Noise is just a perception of one or more sounds that do not "fit in."

What is the frequency of an "A?" Obviously it is in waves/sec. Graph the "A" and graph another "A" on top of the old "A" but adjusted to the right by 3/1000 of a second. The intersections create a new tone. You have a new amplitude and wavelength. What are they? I'll bet the new amplitude and wavelength are so utterly close to the original "A" it's not funny. Compare (1) the variation in frequency of the "new" note from the original "A" and (2) the variation in frequency of the "new note" from the "B flat" that follows up the line from our old "A." I bet the variance shows the "new" note is real snug up against the old "A" and miles apart (relatively) from the "B flat."

It would be like taking a trip to the grocery store versus trying to travel outside our galaxy.

Y'up. I'm sceptical of this theory. If you want to talk 50ms delay... that's different.
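As an aside, the "what are they?" question above has an exact answer from the sum-to-product identity. For two equal-amplitude copies of the same sine offset by a delay τ:

$$\sin(\omega t) + \sin\big(\omega (t - \tau)\big) = 2\cos\!\left(\frac{\omega\tau}{2}\right)\sin\!\left(\omega t - \frac{\omega\tau}{2}\right)$$

The frequency is unchanged; only the amplitude changes, by the factor 2cos(ωτ/2) - anywhere from complete cancellation to doubling (+6 dB), depending on the delay. For a 440 Hz "A" and τ = 3 ms, the factor works out to about −1.07, so the sum is still a 440 Hz "A," at roughly 7% higher amplitude (and inverted).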


*snip*

To the original mono stereo track I also applied the same HPF and LPF, but before converting to mono I delayed the left track by 10ms, effectively delaying every signal below 2000Hz. I converted this mono track back to stereo, yielding "B."

you delayed everything below 2000hz? i thought the bass bin was already delayed... why delay it more? wouldn't the point be to delay the high frequencies in order to make them aligned with the low?


*snip*

the fact that i hear a very obvious echo with a 3ms delay is enough proof for me that it can make a difference. although i think that echo may be due to the fact that a 180dB slope isn't steep enough to completely rid my convolution of overlapping.

Here is an example of a speaker where the phase doesn't even wrap!!!!!

"As for technical features, the CS7.2 places the same emphasis on improved time, phase, and step response as its predecessor. Phase response is specified at
±10°, Thiels step-response graph is exceptionally smooth, and time error at frequencies above 300 Hz is claimed to be less than 0.5 millisecond. This may help explain the CS7.2s combination of apparent speed, integration, and driver-to driver coherence. Certainly, the CS7.2 rivals the best electrostatics and ribbons I have heard in these respects, and it is coherent over a much wider listening area than any electrostatic (Anthony Cordesman, AUDIO Mag, December1998, Thiel CS7.2 speaker) http://thielaudio.com/THIEL_Site05/PDF_files/PDF_reviews/CS7_2R_pdf/CS7_2R_Audio.pdf

Also, refer to the links in my last post!!!! And read John Murphy regarding his point that most speakers are minimum phase. I am not simply going to keep posting links for folks who want to maintain that Heyser and Davis don't know what they are talking about - but who are too lazy to read the sources. Oh, and John Murphy addresses the subject as well!

You really need to look at the definition of minimum phase. And please, since "no speaker is minimum phase" (sic), what purpose does an EQ serve, as you cannot EQ a non-minimum phase signal! And even I, who find the common uses of EQs comical at best, will acknowledge their use for direct signal EQ - as minimum phase speakers do indeed exist. But the goal is to build a minimum phase speaker - or at least to bring it into a minimum phase configuration!!!!

But you need to go back and read the definition of minimum phase.
Most speakers are minimum phase - and no, a simple shift in phase does not necessarily mean they are not minimum phase! Both Heyser and Davis explain this in depth. And it is EASILY observable on the TEF Nyquist display, as the complex impedance spiral is a rotating phasor displayed along the frequency &/or time z-axis (depending on whether you desire the Nyquist (frequency) or Heyser (time) spiral). In the Heyser spiral, the projection onto the real plane is the impulse response, while the projection onto the imaginary plane is the doublet response; they correlate to the system kinetic and potential energy at each point. The projection onto the plane of the origin is the 'normal' 2D polar view one usually sees in electronics. If you follow this description, the imaginary realm is simply the plane along the z-axis containing the y-axis!!!! Hardly imaginary!! And magnitude and phase are simple displays of this curve as well - with group delay just being a different way of looking at the same phase!!! The Heyser spiral, with the rotating phasor displayed along the time z-axis, features projections of the coincident and quadrature response. There is much more, but I do not have the time to address this further right now. Besides, I seriously doubt anyone has followed this or visualized it anyway...

And one wonders why the Nyquist plot is so important! It shows the COMPLETE model, of which every view and graph and measurement is simply a particular viewpoint/frame of reference of the system response! And to think that some believe the nominal impedance - read: resistance - is sufficient compared to this view! That is akin to saying that if one wants to understand your complete life, instead of looking at the characteristics of your entire life, they will focus only on the day of your 8th birthday while ignoring all the rest!! The Nyquist and Heyser spirals are the entire response universe, each view simply being a differing point of view - a differing frame of reference - for the exact same entity! And a system in minimum phase simply allows the rotating phasor of the Nyquist response to be mapped locally through the use of the 90-degree offset orthogonal Hilbert response.
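For the curious, the magnitude-phase tie described above can be demonstrated in a few lines: for a minimum-phase system, the phase is recoverable from the magnitude alone (the Hilbert relationship; computed here via the equivalent real-cepstrum construction). The one-pole discrete filter and FFT size are just illustrative assumptions.

```python
import numpy as np

n = 8192
a = 0.9                                    # pole of a simple one-pole IIR, |a| < 1
w = 2 * np.pi * np.arange(n // 2 + 1) / n  # DFT frequencies (rad/sample)
H = 1 / (1 - a * np.exp(-1j * w))          # one-pole lowpass: minimum phase

# Reconstruct the phase from the MAGNITUDE ALONE via the real cepstrum
# (the homomorphic method; the Hilbert relationship in action):
cep = np.fft.irfft(np.log(np.abs(H)), n)   # real cepstrum of log|H|
cep[1:n // 2] *= 2                         # fold onto the causal side ...
cep[n // 2 + 1:] = 0                       # ... discard the anticausal side
H_min = np.exp(np.fft.rfft(cep, n))

# Difference is ~0 (machine precision): the magnitude fully pins down the
# phase, precisely because the system is minimum phase.
print(np.max(np.abs(np.angle(H_min) - np.angle(H))))
```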

For those who wish to understand this further, you need to read Heyser and Davis, as I refuse to get into a further silly debate here. If you disagree, then debate Heyser's work directly. And while I enjoy discussing the subject, I have no time nor desire to debate its basic validity. That is a given!


Again, have fun...and may I suggest folks get busy reading Heyser and Davis for meaning!


*snip*

the fact that i hear a very obvious echo with a 3ms delay is enough proof for me that it can make a difference. although i think that echo may be due to the fact that a 180dB slope isn't steep enough to completely rid my convolution of overlapping.

Overlap of the passband - the superposition of identical frequencies - results in comb filtering and polar response anomalies.

And for you others, don't get hung up with pitch and frequency here!
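A tiny numpy sketch of that comb: a signal summed with a delayed copy of itself notches the response at odd multiples of 1/(2τ). The 3 ms delay, sample rate, and FFT size are assumptions for illustration.

```python
import numpy as np

fs, tau = 48000, 0.003                  # assumed sample rate and 3 ms delay
d = int(round(tau * fs))                # 144 samples

h = np.zeros(d + 1)
h[0] = h[d] = 1                         # impulse + delayed copy: y = x + x_delayed

H = np.abs(np.fft.rfft(h, 8192))        # magnitude response of the sum
f = np.fft.rfftfreq(8192, 1 / fs)

# |H(f)| = 2|cos(pi*f*tau)|: nulls at odd multiples of 1/(2*tau),
# i.e. ~167 Hz, 500 Hz, 833 Hz, ... with peaks in between.
null = np.argmin(np.abs(f - 1 / (2 * tau)))   # bin nearest the first null
peak = np.argmin(np.abs(f - 1 / tau))         # bin nearest the first peak
print(H[null], H[peak])                       # ~0 and ~2
```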


Dr Who,

Your experiment applying different delays to the right and left channels of a stereo "program" is not a fair test. You must mix the initial and delayed sound into the SAME channel and listen in mono. You are now using the brain's ability to perceive phase differences between the two ears if you use two channels. Your ear/brain "team" definitely CAN hear that difference - that is how it computes a stereo image. What the ear can NOT hear is phase differences between two separate sounds within the same ear; that is, the phase relationships of the components (harmonics) of a complex waveform. Remember too that there is a point where phase differences become so large that they actually become two different sounds in time. At that point, you definitely can again hear the difference. This is the point where the classic tap dancer case makes the audible echo. When the phase is on the order of a few hundred degrees (I expect) or less, you can't hear it. PWK did testing to demonstrate that the path difference between his woofer and midrange was below this point where the delay difference could be heard. I personally have done tests with 3 separate signal generators to simulate the harmonics of a square wave. The generators, not being phase locked to each other, created a continuously changing waveform. I looked at the waveform on an oscilloscope and listened to it (in mono) on a set of headphones at the same time. Even when looking at the scope for visual clues I could hear no changes in the sound. It was a continuous raspy tone. I know from this that I (at least) can NOT hear the phase relationships of a complex (square) waveform.

Al K.
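Al's three-generator experiment is easy to replicate digitally. A minimal numpy sketch (fundamental, duration, and choice of partials are my assumptions): build the first square-wave harmonics twice, once phase-aligned and once with random phases. The waveforms look completely different on a scope, yet the magnitude spectra are identical.

```python
import numpy as np

fs, dur, f0 = 48000, 1.0, 220          # assumed rate, length, fundamental
t = np.arange(int(fs * dur)) / fs
partials = [1, 3, 5]                   # first three square-wave harmonics

def tone(phases):
    # 1/k amplitude weighting, as in a square wave's Fourier series
    return sum(np.sin(2 * np.pi * k * f0 * t + p) / k
               for k, p in zip(partials, phases))

aligned = tone([0.0, 0.0, 0.0])                          # square-ish waveform
scrambled = tone(np.random.uniform(0, 2 * np.pi, 3))     # drifting-generator case

# Same magnitude spectrum, radically different waveform shape:
diff = np.abs(np.fft.rfft(aligned)) - np.abs(np.fft.rfft(scrambled))
print(np.max(np.abs(diff)))            # tiny (numerical noise only)
```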


Wait a second! If you read all my long crap, you would notice a flaw in my math pertaining to the graphing of overlaid "A" notes 3 ms apart. My math was wrong, but I think this proves my point to a "T."

If you overlaid the two "A" notes, but separated them by 3ms, you would get a "net" result of a new amplitude, but the FREQUENCY would be exactly the same. Thus, NO CHANGE in tone at all, just volume.

Therefore, I think the so-called Eleanor Powell two-tap phenomenon posted by TBrennan is the limit the discussion could have. I think the delay could only result in an echo - not a change in tone or timbre at all. So, the only "qualitative" effect might be volume (due to the new resultant amplitude) and echo (due to the delay). I think we could all agree volume is a non-factor. That leaves us with the only qualitative distinction being whether you can perceive an echo.

So, can you hear any echoes on the delayed track? Nope!


Dragonfyr,

Overlap of the passband - the superposition of identical frequencies - results in comb filtering and polar response anomalies.

This is why I advocate extreme-slope networks. If only a single driver is making each sound there will be no interference between them and no comb anomalies. Believe me, this you CAN hear! The stereo image improves and the sweet spot widens.

Al K.


Dragonfyr,

Overlap of the passband - the superposition of identical frequencies - results in comb filtering and polar response anomalies.

This is why I advocate extreme-slope networks. If only a single driver is making each sound there will be no interference between them and no comb anomalies. Believe me, this you CAN hear! The stereo image improves and the sweet spot widens.

Al K.

I agree completely - I was just commenting on the manner in which anomalies would present themselves regarding Tofu's concern about the overlap of the passbands with a lower order crossover.

Edit - oh, and by "identical frequencies" I did not mean the individual signals had to be identical! I meant the passband frequencies - the common or "same" frequencies in the region of overlap...


Dragonfyr,

This is why I advocate extreme-slope networks. If only a single driver is making each sound there will be no interference between them and no comb anomalies. Believe me, this you CAN hear! The stereo image improves and the sweet spot widens.

Al K.

you know what? i'm damn tempted to build a poor man's ES network with a cheap computer and digital filters.
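That is very doable. A minimal Python/scipy sketch of such a steep digital crossover - the sample rate and order are my assumptions, with a 30th-order Butterworth giving roughly the 180dB-per-octave slope mentioned above, and the 400Hz point taken from the thread title:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs, fc = 48000, 400            # assumed sample rate; 400 Hz point per the thread
order = 30                     # ~6 dB/oct per order -> ~180 dB/oct ultimate slope

# Second-order sections keep a filter this steep numerically stable.
sos_lo = butter(order, fc, btype='low', fs=fs, output='sos')
sos_hi = butter(order, fc, btype='high', fs=fs, output='sos')

def crossover(x):
    """Split a signal into bass-bin and top-section feeds."""
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)
```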


*snip*

you are talking 2 and 3ms. That's .002 and .003 seconds. I doubt seriously the ear could ever discern such a difference. ... Y'up. I'm sceptical of this theory. If you want to talk 50ms delay... that's different.

You ........

are Soooooooo wrong ......

3 Hundredths of a second is Clearly audible

50 ms ...is a huge amount

....i will leave the Spankin' .. for Dragonfyr to do .....[:)]


Jeff,

Here is what I think is an example of what you are talking about. It is a short and a long duration tone burst. The long burst is longer in time than the delay between them. At the point they overlap, the amplitude gets a boost. If you move the mike a bit you will find a spot where the two cancel out! This is the comb anomaly thing. There is no change in pitch, just amplitude. You will hear the peak if you are standing in that spot and the null if you move to another spot. I think you may also perceive a smearing effect. If the two bursts were of different frequencies this would not happen. The phase between the two would then be random and inaudible. The two bursts would have to be vastly separated in time before you could perceive that they were two separate bursts (an echo).

Al K.

(attached image: the short and long tone-burst example)
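A minimal numpy sketch of that burst overlap (the 1 kHz burst frequency and timings are assumed): delay the copy by a full period and the overlap region doubles; delay it by half a period and the overlap region cancels. The pitch is unchanged in both cases.

```python
import numpy as np

fs, f = 48000, 1000                     # assumed rate and burst frequency
n = int(0.020 * fs)                     # 20 ms window
t = np.arange(n) / fs
burst = np.sin(2 * np.pi * f * t) * (t < 0.010)   # 10 ms tone burst

def overlap(delay_samples):
    late = np.concatenate([np.zeros(delay_samples), burst])[:n]
    return burst + late

boosted = overlap(int(fs / f))          # one period late: in phase, boost
nulled = overlap(int(fs / f) // 2)      # half a period late: cancellation
print(np.abs(boosted).max(), np.abs(nulled).max())   # ~2.0 vs ~1.0
```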


*snip*

You are Soooooooo wrong ...... 3 Hundredths of a second is Clearly audible

3 Hundredths is 30ms. We're talking 3/1000 or 3ms.


Jeff,

At the point they overlap, the amplitude gets a boost... There is no change in pitch, just amplitude.

You will hear the peak if you are standing in that spot and the null if you move to another spot. I think you may also perceive a smearing effect. If the two bursts were of different frequencies this would not happen. The phase between the two would then be random and inaudible. The two bursts would have to be vastly separated in time before you could perceive that they were two separate bursts (an echo).

Al K.

I am not sure, but it looks like you are proving my point. Amplitude is volume. It changes. Volume is a non-factor for sound quality for this purpose. You say "[t]here is no change in pitch..." My point exactly, pitch=frequency=tone. No change in tone. An "A" is an "A" with or without the delay, or echo. So, if I do not physically discern something that makes me say "Did you hear that echo?", there is no qualitative difference. Again, an "A" is an "A" until the pitch, or frequency, is CHANGED. A uniform delay will not change pitch.


I'm not nearly as inclined to research this topic and make myself a sound engineer at the end of the day.... but let me just try to throw some common-sense on this topic.

You are talking 2 and 3ms. That's .002 and .003 seconds. I doubt seriously the ear could ever discern such a difference.

Even if it was possible to discern it, making it "audible," it still takes a HUGE jump to say this audible difference is significant enough to translate into the ability to affect perceived "quality."

How do you define quality? Last I checked we are striving for the most accurate playback system possible (so as to avoid changing the artist's vision). I think we would all agree that a time-aligned system is the ideal situation, so any audible difference will be in the direction of less accurate, aka lower quality.

I would bet there is nobody that could listen to two tracks (one we would use as a reference, and the other we would use as the "delayed" subject), and sit there and honestly tell me one sounds "worse" than the other.

If you knew what the original sound source sounded like, then it makes a huge difference. Since we're not in the studio, we have to trust the sound guy to make it sound as good as possible. If you disagree with the decisions of the sound guy, then we have a totally different issue.

I think given what folks are trying to push in this theory, a more "pure" test might be to have it relate to a single note. With perfectly calibrated instruments, record an "A" for 5 seconds or so. Then, place a 2 or 3 ms gap at the beginning of a second "A" and dub it right over the old "A" for the full 5 seconds or so.

What you would have in the end is what we are all discussing. You have the frequencies coming in at different intervals. BUT these are the same frequency, so you've removed all the other variables and got right down to delay ONLY. If you did this, could you not measure the pitch with a pitch meter?

With only one tone you don't have a system of combination tones, so you will only perceive a difference in amplitude and no difference in timbre. Now if we were to record an "A" from an actual instrument, then yes, there will be changes in timbre...though I'm not sure which part of the waveform you're going to choose to start the delay with (because the attack, sustain, and decay of the same instrument all sound very different).

I would bet without a pitch meter (just using the plain old ears), there is nobody that could distinguish any change in the pitch (or any so-called "noise" this delay might create). I would also bet a pitch meter (I hope I'm using the right name for the instrument - those things that help people tune their guitars, etc.), would also register a perfect "A."

But what the pitch meter won't show you is that the amplitude of the resultant waveform will be different (assuming a sine-wave "A").

Just because you can "picture" the wave "noise" that results from time delay in your head on a graph does not mean you can hear it.

But you already said you can hear it, so it doesn't matter how it changes on the graph.

I would even bet you could play the single "A" note by itself and insert 2 - 3 ms blank intervals. For example, the note sustains for 1 second, then is shut off for 3 ms, then back on for 1 second, etc. I bet everyone would agree once played back that there is absolutely no discerning any gap anywhere. It would sound like a constant note.

IF it sounded like a constant smooth note (which I'm not sure it would), it certainly would not be perceived to be the same volume.

Imagine a woodpecker sitting on a tree and pecking it 500 times in 1 second (that would be 2 ms intervals) or 333 times in 1 second (which would be 3 ms intervals). It would have to sound like 1 "smack" to everyone.

But it would sound different than if the woodpecker were pecking just once every second. Again, the whole attack, sustain, and decay.

Now, imagine the woodpecker had 2 differently-tuned pieces of wood to peck, and could peck each 1 at a time, at the same 3 ms rate of speed. There, you might hear a different note. But you will not hear both notes. This is what somebody referred to as delay affecting timbre (not timber[;)]). But remember, there ARE 2 different notes being played when this occurs; the deal is that they are so close together, they sound like 1 - but only a different 1. That's the same thing that would occur if a guitarist could play different notes that quickly back-and-forth.

Exactly - it sounds different.

However, the music we listen to is nowhere close to this example. Nobody listens to 2 notes and calls them a song (except Ted Nugent). Really, though, when you have such a limited number of notes in the mix, it "could" affect timbre, but it would still sound so "pure" there would be no quality issue.

Exactly - we listen to millions of frequencies that are all interrelated based on their relative phase. The more notes we have, the more compounded the problem becomes.

When you get a bunch of notes moving around in harmony, there is NO WAY you could ever process this immensely minute difference in timbre. The brain is too busy hearing the song.

But you already stated that you could hear a difference between the two audio samples.

So, my belief is this delay phenomenon is interesting in terms of picturing graphs and pondering how the "obvious picture" might register in terms of the quality of the audible sound. But I think our brains process sound based upon what the focus of attention is. Ever been watching a news story, and your wife says something, but it doesn't register clearly? Ever been talking to your wife while a news story is on, and the news story did not register clearly? These are the EXACT same sounds, only you heard them qualitatively differently.

If you are being a "good" audiophile - not thinking about anything else but that song - there are still so many notes flying around and so much harmony put together that you could never discern any qualitative difference between frequencies that came in delayed by 3 ms.

Using this logic you are claiming that we should all be listening to Bose, because we should be so attuned to the music that we can ignore all other distortions.

By the way, it is only when these delayed pitches mix that we encounter a "noise" experience. Noise is just a perception of one or more sounds that do not "fit in."

What is the frequency of an "A?" Obviously it is in waves/sec. Graph the "A" and graph another "A" on top of the old "A" but adjusted to the right by 3/1000 of a second. The intersections create a new tone. (No, they create the same tone, but a new amplitude.)

You have a new amplitude and wavelength. What are they? I'll bet the new amplitude and wavelength are so utterly close to the original "A" it's not funny. (The amplitude will change anywhere between complete cancellation and +6 dB, and it is a function of the delay only! If you take a 20,000Hz tone playing at 10000000 dB and shift it right by any odd multiple of 0.000025 seconds, you will achieve a net output of nothing at all. If you were to shift it right by a multiple of 0.00005 seconds, then you will achieve an increase of 6dB.)

Compare (1) the variation in frequency of the "new" note from the original "A" and (2) the variation in frequency of the "new note" from the "B flat" that follows up the line from our old "A." I bet the variance shows the "new" note is real snug up against the old "A" and miles apart (relatively) from the "B flat."

It would be like taking a trip to the grocery store versus trying to travel outside our galaxy.

Y'up. I'm sceptical of this theory. If you want to talk 50ms delay... that's different.

Sorry to be blunt, but it's apparent that you don't even understand the most fundamental concepts of wave mechanics...something everyone in Illinois is supposed to learn in high school physics. So these misconceptions are hardly those of "common sense," considering most people are beyond a high school education.

I am curious though as to why you are so adamantly opposed to the concept of time-delay. I can only imagine that it stems from someone's personal enjoyment of khorns...and the conclusions about time-delay aren't sufficient to declare the khorn an awful-sounding speaker! In fact, I attribute the time-delay as being the number one reason why the khorn sounds so good with drums. The conclusions are, however, sufficient to claim that a compromise is being made when one chooses dynamics over time-delay (ie khorn versus cornwall)......

But time-delay doesn't even have to be a compromise, because we can easily account for it with active processing and bi-amping. And since so many people are already bi-amping and noticing huge improvements, it baffles me that more aren't doing it. Heck, even Roy Delgado (head engineer dude at Klipsch) is time-correcting his khorns (and the Jubilee), and he even suggested to the people visiting the plant that they would notice marked improvements if they did the same.
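Doing that correction digitally is almost trivial. A minimal numpy sketch, assuming a 48kHz sample rate and the ~7ms bass-bin path delay mentioned earlier in the thread: delay the high-frequency feed of a bi-amped system so it arrives in step with the acoustically late bass bin.

```python
import numpy as np

fs = 48000                 # assumed sample rate
path_delay = 0.007         # ~7 ms bass-bin path delay, per the thread

def time_align_hf(high_band):
    """Delay the HF amp feed so it lines up with the late LF arrival."""
    d = int(round(path_delay * fs))
    return np.concatenate([np.zeros(d), high_band])[:len(high_band)]
```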

