
Digital signal cables and "bits is bits"



I will update this thread later to explain what I'm trying to show here...

Okay. Sorry to have posted this in two pieces, but the first post dropped the text I'd typed and I didn't have time to rekey until now. Figured since the images were there I'd go ahead and show them.

Note - I'm opening this thread with a very non-technical post. Digital signal engineers, please don't stomp all over me for oversimplification and glossing over details. Feel free to correct any errors, of course.

Anyway... in the "Another speaker cable question" thread in this forum, and in several other instances, we've had discussions about whether or not a digital cable can impact the sound of a system. One argument is that "bits are bits" - the cable is carrying a series of 1's and 0's, and as long as the 1's and 0's get where they're supposed to go without being altered (bits dropped, or bits flipped, or whatever), then the signal has been transmitted with 100% accuracy. It is a binary condition - either the signal is transmitted accurately, or it is not. And by this measure, a cable would have to be so badly designed as to be unusable for it not to work. The cheapest network cable, FireWire cable, USB cable, etc. carry signals all day long without corrupting them, and some of these signals are much higher bandwidth than a simple S/PDIF or AES/EBU digital signal.

I don't believe this to be the case. I think looking at the data this way is only looking at half the equation. In A/D and D/A conversion of music signals, the *TIMING* of the data is every bit (NPI) as important as the value of the data. If I send you a Word file attached to an email, as long as the data packets get into your mailbox uncorrupted, you can open that document and read it. It does not matter what route the packets took to get there, nor the relative timing of their arrival. If I am reading or writing data to a hard disk or recordable media, as long as the timing with which the bits are encoded is within the allowable operating parameters of the computer, it will successfully read and write the data. When I was selling Apple II computers years ago, it wasn't unusual for a floppy drive diagnostic to show a +/- 15% variance in rotational speed, but the drive would still work. Applewriter files were stored and retrieved error free.

This is not true with a music signal. The exact timing of conversion from analog to digital and back again is as critically important to the fidelity of the signal as accurately capturing the amplitude. The clock signals that control this timing are embedded in the data carried on both the S/PDIF and AES/EBU interfaces used to connect CD transports to D/A converters. Any inaccuracy in the clock signal is, in effect, a distortion of the signal. The right value, decoded at the wrong time, is the wrong value. This clock signal is, unfortunately, quite fragile (at least the way the Sony/Philips engineers designed the stupid system, it is). Any variance away from the ideal clock frequency is called "jitter". An improperly designed transmitter or receiver can introduce jitter. A cable with limited high-frequency bandwidth (like plastic TOSLINK cables) can introduce jitter. Impedance mismatches between cable and component can introduce jitter.

If you take a system in which the signal passes through a digital cable en route from transport to converter, measure the jitter, then swap the cable and re-measure the jitter, you may find it has a significantly different value. (See the graphs posted in the other thread I mentioned for an example of this.)
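
The effect of clock jitter on sampled audio can be sketched in a few lines of Python. This is a minimal illustration, not a model of any real converter; the 10 kHz tone, 44.1 kHz rate, and 2 ns jitter figure are arbitrary choices for demonstration:

```python
import math
import random

def sample_sine(freq_hz, fs_hz, n, jitter_std_s=0.0, seed=0):
    """Sample a unit sine wave at nominally uniform instants.

    Each sample instant is perturbed by Gaussian clock jitter with the
    given standard deviation (in seconds)."""
    rng = random.Random(seed)
    out = []
    for i in range(n):
        t = i / fs_hz + rng.gauss(0.0, jitter_std_s)
        out.append(math.sin(2 * math.pi * freq_hz * t))
    return out

# A 10 kHz tone at 44.1 kHz, with and without 2 ns of clock jitter.
clean = sample_sine(10_000, 44_100, 1000)
jittered = sample_sine(10_000, 44_100, 1000, jitter_std_s=2e-9)

# The worst-case error: the "right value, decoded at the wrong time".
err = max(abs(a - b) for a, b in zip(clean, jittered))
print(f"max sample error: {err:.2e}")
```

Every sample value is still a legal 16-bit-representable number; the distortion comes entirely from *when* the values were taken.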

Jitter is a bit hard to visualize, but I think I have a reasonable analogy. Those of you who have been in computers for some time may remember big, loud clattering machines called line printers. Back before laser printers, they were the fastest output devices available. Output was measured in lines per minute, with 1100 lines per minute being the dividing line between "fast" and "not fast". They worked thusly: tractor fed paper was moved, one line at a time, up through the printer. There were 132 print heads (actually, little hammers) in a line, one hammer for each character position across the page. There was a looped chain that whirled around inside the printer. On the chain were multiple instances of each printable character. The printer would advance the paper one line, and as the correct character moved in front of each hammer, the hammer would impact the chain, printing the character on the paper. As there were multiple copies of each character on the chain, the chain would have to rotate at most about 1/4 of the way around before every character had been printed on that single line. Then the printer would advance the paper one line, and the next line would print. Made a really cool sound.

One problem with these printers was that the timing of the impact of the hammers on the chain was not exact. Instead of each character being printed exactly in its correct location on the page (and this was, of course, monospaced text, not proportional), the positioning varied from a bit early to a bit late, or from a little left of center to a little right of center. That is, the character spacing was "jittered". Here's what it looks like:

No jitter, accurately positioned characters:

nojitter.gif

With Jitter:

withjitter.gif

The important thing to remember here is that, in both instances, the text is identical. However, you can see by looking that the appearance of the two documents is different. If you performed any number of tests (statistical percentage of each character, stroke weight, one for one character by character comparison) the two documents would be pronounced identical. However, anyone can tell just by looking that that is simply not true. They look different.

Likewise, you can recover the digital datastream from a CD with 100% bit accuracy, transmit that signal any number of ways, and again recover it with 100% bit accuracy. However, if you take two identical datastreams, one recovered from a high jitter source and one from a low jitter source, they can *SOUND* quite different.



It's interesting to note that both examples are displayed in an essentially analog format (image). You and I recognize it as text but the computer recognizes it as a picture. It'll take text-reading software to convert it back into text from image format - which is essentially the same kind of step as A/D or D/A conversion.

When the text is jumbled like this then it makes it much harder for the text recognition to work (A/D conversion). That's why everything should be kept digital. :)


And the fact that you know we'll all be seeing the images exactly as you created them confirms that you believe a digital signal (in this case an image) can be transmitted over long distances with very cheap wires and will come out exactly perfect on the other end, without any coloring or alteration based on the wire I'm connected with. (very big grin)

If all the 1's and 0's arrive then you have a PERFECT copy of the original. YMMnotV.


And in a computer world, that is true, because you have CRC, checksums, packet numbers, etc... all being accounted for and checked to be sure nothing is missing, and if it is, it is resent and reassembled in the proper order before being presented to the end user.

However, such is not the case in the audio world - no checks and balances from source to destination, and it's real time. We can't wait for a retransmit or a mis-sequenced packet to be inserted in the right place.

Doug


Exactly. (well, except that part about "real time")

A checksum tells you that all the 1's and 0's arrived or didn't arrive. It also tells you if someone changed a bunch of 1's to 0's, or vice versa.
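
The checksum point can be made concrete with a toy example. This is an illustrative 16-bit additive checksum, not the CRC any real interface uses:

```python
def checksum16(data: bytes) -> int:
    """Toy 16-bit additive checksum (illustrative only; real links and
    file formats use CRCs, which catch many more error patterns)."""
    total = 0
    for b in data:
        total = (total + b) & 0xFFFF
    return total

original = bytes([0x9D, 0xF9, 0x12, 0x34])
corrupted = bytes([original[0] ^ 0x01]) + original[1:]  # one flipped bit

# The flipped bit changes the checksum, so the corruption is caught...
print(checksum16(original) != checksum16(corrupted))  # True
# ...but identical bytes delivered with jittered *timing* produce the
# same checksum, so a checksum alone says nothing about jitter.
```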

And CD players typically do have an error-checking process as they read the data off the disc to see if it skipped. Trivial stuff. Cheapest players.

Then you move the signal in its digital form from the player to your DAC (perhaps the DAC in your receiver). To do this it goes across a cable in digital format. Trivial stuff. Cheapest cables.

Why? Because there aren't 40 million other users jumping back and forth between those two devices over a shared cable. It's a dedicated single-purpose system. Trivial stuff. Cheapest systems.

It's the same reason why you can buy a dedicated dual-tray CD player/copier and make exact copies of discs all day long. The wires inside those things are cheap. Every copy will be perfect.

It's the DAC where things get interesting. That's the money side.


You've really added a lot of information to your original post. Good discussion. To touch on your last paragraph...

----------------

Likewise, you can recover the digital datastream from a CD with 100% bit accuracy, transmit that signal any number of ways, and again recover it with 100% bit accuracy. However, if you take two identical datastreams, one recovered from a high jitter source and one from a low jitter source, they can *SOUND* quite different.

----------------

Jitter would not affect the sound quality.

The reason is because CDs are designed to be decoded at a specific rate. You can't change that by pushing the 1's and 0's at the DAC faster. You can't crowd them together either. It's either a 1 or a 0.

If you slow them down the DAC doesn't care - as long as you don't drop below playback speed. If you drop below playback speed then the whole thing goes BLURP. Not subtle.

When copying a CD, a computer can read the entire disc and write a new disc in just a couple of minutes. Obviously we don't listen that fast. And the DAC won't decode it faster, like playing a 33 1/3 LP at 78 rpm.

The really fabulous thing about digital is that it is so perfectly defined. There is no chance for ambiguity in the "value".

The ambiguity comes when we convert the digital value into analog. To use your printer analogy we could use an old dot matrix printer. The dots weren't very tiny so you could see each little dot that was printed to create the letter A. Each dot is a digital bit that is used to create an analog image of the letter A. Today we have laser printers that have 1200 dots per inch. These dots are so tiny that our eyes perceive the analog image of A as perfect and without any jagged edges.

The argument with CD technology is that the dots are too big and as a result there are a number of people who can hear the rough edges (just like seeing the dots on old printers). That's why they're now releasing DVD-A and SACD which have much higher resolution (very tiny dots).


As Artto just said in another thread:

----------------

SACD is a ONE bit system. No decimation filters. No Delta Sigma modulator. No Interpolation Digital Filter. No 'brick wall' filters. Sample rate is 2.822MHz (2,822,400 samples per second)

----------------

Very, very tiny dots indeed.


----------------

Jitter would not affect the sound quality.

The reason is because CDs are designed to be decoded at a specific rate. You can't change that by pushing the 1's and 0's at the DAC faster. You can't crowd them together either. It's either a 1 or a 0.

If you slow them down the DAC doesn't care - as long as you don't drop below playback speed. If you drop below playback speed then the whole thing goes BLURP. Not subtle.

----------------

Not true. You're completely misunderstanding the effect jitter has on a signal. When you introduce jitter to a digital signal, it tends to interpolate the 1's and the 0's, not just speed them up. A more accurate description of jitter in the text example above would be

This is a sgnial wthi lsto of jirtte

----------------

The really fabulous thing about digital is that it is so perfectly defined. There is no chance for ambiguity in the "value".

----------------

No, once again, this is totally incorrect. You don't seem to understand that a digital PCM bitstream is not perfectly defined, and it has all manner of opportunity for ambiguity prior to being decoded and DAC'ed.

What you don't seem to realize is the vast amount of data that is being transferred over that optical/coaxial/AES-EBU connection (well, you guys don't use AES-EBU, but we do in digital studio environments)

Think about it for a moment:

A CD's bandwidth is a "mere" 16 bits at 44.1 kHz, right? Let's break that down.

44.1 kHz is 44,100 samples per second. Each sample consists of a 16-bit word. This breaks down to 705,600 bits per second per channel - a total of 1,411,200 bps for stereo. Standard CAT-3 or CAT-4 phone line can't handle that much data - CAN'T HANDLE IT. You need a minimum of CAT-5 to deal with that much information.
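
The CD arithmetic is easy to verify in a couple of lines:

```python
# Check the CD bit-rate figures: 16-bit samples at 44.1 kHz, stereo.
bits_per_sample = 16
samples_per_second = 44_100
channels = 2

per_channel_bps = bits_per_sample * samples_per_second
total_bps = per_channel_bps * channels

print(per_channel_bps)  # 705600
print(total_bps)        # 1411200
```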

Now let's get into DVD audio signals: 24-bit word length at 96,000 samples per second (96 kHz). Now we're up to 2,304,000 bps per channel, or over 16,000,000 bps for a 6.1-channel stream.

Still think audio signal transfer is a small amount of data? Here's the rub. Contrary to what you said earlier, there is no buffering system or packet system on either end of the cable - that's why it's referred to as bitstream audio - it's a constant flow of 1's and 0's - at upwards of 2 Mbps.

For added fun, you start interpolating 1's and 0's (clock jitter) and the damage gets done extremely fast - because you're introducing noise into your signal - and the worst part about clock jitter? If it's caused by a poor quality cable, it's constant. Whatever handful of bits are getting interpolated, they're happening constantly. And with PCM data, it's so much worse because the interpolated bits can be something as simple as one sample word being changed from 1001 1101 1111 1001 to 1010 1101 1111 1001 (a change in amplitude at the 4th bit - affecting the amplitude of that sample by 6dB) or a full interpolation of an entire sample - changing the waveform entirely. When you're dealing with jitter, we're talking about something that's happening multiple times per second. Do you really think those errors don't add up?
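
How much one interpolated bit moves a sample value is easy to see numerically. A sketch using the 16-bit word from the example above (the bit position flipped here is an arbitrary illustration):

```python
def flip_bit(sample: int, bit: int) -> int:
    """Flip a single bit of a 16-bit sample word."""
    return sample ^ (1 << bit)

sample = 0b1001_1101_1111_1001        # the 16-bit sample from the text
corrupted = flip_bit(sample, 12)      # flip one bit near the MSB

print(f"{sample:016b} -> {corrupted:016b}")
print(abs(corrupted - sample))        # 4096 - a large step out of 65536
```

The closer the flipped bit sits to the most significant end of the word, the larger the amplitude error in that one sample.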

----------------

The ambiguity comes when we convert the digital value into analog. To use your printer analogy we could use an old dot matrix printer. The dots weren't very tiny so you could see each little dot that was printed to create the letter A. Each dot is a digital bit that is used to create an analog image of the letter A. Today we have laser printers that have 1200 dots per inch. These dots are so tiny that our eyes perceive the analog image of A as perfect and without any jagged edges.

----------------

Nope. You totally misunderstand the function of a DAC. When you feed a digital bitstream into a DAC, it does standardized math functions to draw a curve out of the bits. There are no "jagged edges" left - only a smooth curve based on the bits fed in. The difference between DACs is basically in the quality of the filters used to roll off the high frequencies from the signal - where most of the sampling errors live. This is why most people report better sound quality from playing CDs in their DVD player - not because it's necessarily a better D/A converter, but rather that it's a higher-frequency converter, one that can successfully push the sampling errors far above your hearing range.
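
The "draw a curve out of the bits" idea is the sampling theorem's reconstruction step. A sketch of Whittaker-Shannon (sinc) interpolation, the idealized version of what a DAC's output filter approximates (the 1 kHz tone and 200-sample window are arbitrary, and a real filter only approximates this sum):

```python
import math

def sinc(x: float) -> float:
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs_hz, t):
    """Whittaker-Shannon interpolation: the band-limited curve through
    uniformly spaced samples - the idealized job of a DAC's
    reconstruction filter."""
    return sum(s * sinc(fs_hz * t - n) for n, s in enumerate(samples))

fs = 44_100
samples = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(200)]

# Evaluate the curve *between* two sample instants: it tracks the
# original sine smoothly rather than stepping between sample values.
t = 100.5 / fs
error = abs(reconstruct(samples, fs, t) - math.sin(2 * math.pi * 1000 * t))
print(f"mid-sample reconstruction error: {error:.4f}")
```

The small residual error here comes only from truncating the sinc sum to a finite window, not from any "stair steps" in the output.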

----------------

The argument with CD technology is that the dots are too big and as a result there are a number of people who can hear the rough edges (just like seeing the dots on old printers). That's why they're now releasing DVD-A and SACD which have much higher resolution (very tiny dots).

----------------

See above. The people who claim to hear the rough edges are completely misunderstanding the way a DAC works. It's just not the same as digital video rendering.

Hang on a minute and I'll get you a link to a book that a friend of mine who is a digital audio master just published. It'll clear up a lot of misconceptions.

Cheers!

Griff


----------------

On 7/8/2003 7:19:00 PM toddc wrote:

As Artto just said in another thread:

----------------

SACD is a ONE bit system. No decimation filters. No Delta Sigma modulator. No Interpolation Digital Filter. No 'brick wall' filters. Sample rate is 2.822MHz (2,822,400 samples per second)

----------------

Very, very tiny dots indeed.

----------------

Artto is only partially right. The Delta-Sigma modulator is the very foundation for DSD (Direct Stream Digital), which is the format for SACD. He is correct that in the DSD format there are no filters, and thus no variance. Near-perfect digital reproduction.


Doesn't matter. If you really want to ramp up your system quality, get an external DAC - Apogee, Lucid, Swissonic are good places to start looking. If you've got the big bucks, dCS and Genex are some of the finest in the industry - but you pay big $$$$ for them.

Rest assured, you'll hear the diff even on the cheapest players.


RE: "And in a computer world, that is true, because you have CRC, checksums, packet numbers, etc... all being accounted for and checked to be sure nothing is missing, and if it is, it is resent and reassembled in the proper order before being presented to the end user.

However, such is not the case in the audio world - no checks and balances from source to destination, and it's real time. We can't wait for a retransmit or a missequenced packet to be inserted in the right place."

Doug.....wrong. That's what "error correction" is all about. CD players have plenty of it. Why do you think there is 'over-sampling' (decimation filters) and 'interpolation filters' (reassembly and fill-in of the missing or broken pieces) in the PCM (CD) recording and playback process? My old Revox B225 (circa '84/'85) CD player even has CIRC (Cross-Interleaved Reed-Solomon Code) error correction (which I believe is what you meant by CRC), just like computers do. That's one of the very reasons CD can, to a certain degree, 'tolerate' scratches and surface defects without creating much in the way of additional noise or skipping as in analogue LP. In fact, the right and left channels of early (still?) CD players were (are) out of phase by 180 degrees and have to be delayed to bring them in phase with the other channel. The digital signal can easily be delayed and 'interpolated' to re-construct damaged or 'upsampled' parts of the signal. In fact, if it weren't for this error correction capability, CD (DVD, SACD, etc.) digital would never even work (at least not in the practical sense)!
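
The interpolation mentioned above can be sketched very simply. This is a toy version of last-resort error concealment, where a sample that even CIRC decoding could not recover is estimated from its neighbours (the sample values are made up for illustration):

```python
def conceal(samples, bad_index):
    """Replace an unrecoverable sample with the average of its
    neighbours - a toy version of last-resort error concealment."""
    fixed = list(samples)
    fixed[bad_index] = (samples[bad_index - 1] + samples[bad_index + 1]) // 2
    return fixed

stream = [100, 140, 0, 210, 240]     # the sample at index 2 was unreadable
print(conceal(stream, 2))            # [100, 140, 175, 210, 240]
```

The estimate is usually inaudible for an isolated sample, which is why scratched discs can still play cleanly.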


----------------

Doug.....wrong. That's what "error correction" is all about. CD players have plenty of it. Why do you think there is 'over-sampling' (decimation filters) and 'interpolation filters' (reassembly and fill-in of the missing or broken pieces) in the PCM (CD) recording and playback process? My old Revox B225 (circa '84/'85) CD player even has CIRC (Cross-Interleaved Reed-Solomon Code) error correction (which I believe is what you meant by CRC), just like computers do. That's one of the very reasons CD can, to a certain degree, 'tolerate' scratches and surface defects without creating much in the way of additional noise or skipping as in analogue LP. In fact, the right and left channels of early (still?) CD players were (are) out of phase by 180 degrees and have to be delayed to bring them in phase with the other channel. The digital signal can easily be delayed and 'interpolated' to re-construct damaged or 'upsampled' parts of the signal. In fact, if it weren't for this error correction capability, CD (DVD, SACD, etc.) digital would never even work (at least not in the practical sense)!

----------------

This error correction is only applied to reading discs. When a digital signal is transferred between devices, the receiving device does not checksum the incoming signal. It merely converts it.

Let's not blur the issue by mixing disc readers with digital signal transfers between devices - we're talking about the effect cables have on digital audio transfers, not how well a CD or DVD player can read a disc.

Sure, even the cheapest portable CD players have buffers to handle misreads. Doesn't change the fact that their crap DACs absolutely destroy the high-frequency information output to the headphone amp.


RE: "Artto is only partially right. The Delta Sigma modulator is the very foundation for DSD (Direct Streaming Digital) which is the format for SACD."

Griff,.... as in conventional PCM systems, the analog signal is first converted to digital by 64x-oversampling delta-sigma modulation. Where conventional systems immediately decimate (decimation filter) the 1-bit signal into a PCM code, Direct Stream Digital records the 1-bit pulses directly. In the PCM process there is an additional Delta-Sigma modulator stage, after the interpolation filter.

http://www.superaudio-cd.com/technology3.html
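
The delta-sigma idea behind DSD can be sketched with a first-order modulator in a few lines. This is only the shape of the idea - real SACD modulators are much higher order, and the 1 kHz-style test tone here is an arbitrary choice:

```python
import math

def delta_sigma_1bit(signal):
    """First-order delta-sigma modulator: turns a sequence of values in
    [-1, 1] into a 1-bit stream whose local average tracks the input -
    the idea behind DSD's 1-bit format."""
    integrator = 0.0
    bits = []
    for x in signal:
        out = 1 if integrator >= 0 else -1   # the 1-bit quantizer
        integrator += x - out                # feed the error back
        bits.append(out)
    return bits

# A slow sine, heavily oversampled: the density of +1 bits follows it.
n = 64
sig = [0.5 * math.sin(2 * math.pi * i / n) for i in range(n * 4)]
bits = delta_sigma_1bit(sig)

avg = sum(bits) / len(bits)
print(f"mean of bit stream: {avg:+.4f}")  # near 0, like the sine's mean
```

The output is nothing but +1/-1 pulses, yet a simple low-pass average recovers the original waveform - which is why DSD needs no PCM decimation stage.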


Griff - Thanks for clarifying that the discussion has to do with the transfer of audio data between devices, not the accurate reading of it from the storage media. And as far as I know, there are no ACKs going back and forth between my CD player and my receiver.

Doug

