  #41   Laurence Payne
Is CD burning at slower speeds better?

On Sun, 27 Jun 2004 10:01:37 +0200, Peter Larsen wrote:

note: examples from thin air, not
verified except that I know that a "problem CD player" is more likely to
play things that are burned at 2x on my no-longer-new Plextor than things
that are burned at max speed, 8x.


Whereas on my newer Plextor, using "high speed" media, I get failures
at speeds both higher and lower than the (apparently) optimal 8X.

We've all got to find our own "sweet spot", it appears.
  #44   Ethan Winer
CD burning at slower speeds (long)

Dave,

the "data arriving" rate varies as needed to keep the buffer's fill level within acceptable bounds.

I agree with Murray, that really was a great post.

Apparently it's a very common misconception that errors in the data spacing
on a CD surface create jitter; I see this stated incorrectly all the time.
All modern CD drives can read fast enough to keep the buffer full, and once
the data is in the buffer, the player's clock can send it out as accurately
as it's capable of.
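
As a rough illustration of that decoupling, here is a toy sketch in Python
(made-up numbers, not any real player's firmware): the transport's read rate
is servo-adjusted to keep a FIFO near a target fill level, while the output
side drains it at a fixed, crystal-controlled rate, so read-side timing never
reaches the output.

import random
from collections import deque

OUTPUT_RATE = 44100        # samples per second, fixed by the player's crystal
BUFFER_TARGET = 2048       # desired FIFO fill level (samples)

fifo = deque([0] * BUFFER_TARGET)
read_rate = OUTPUT_RATE    # transport starts at nominal speed

for second in range(5):
    # Transport side: the read rate wanders a little (spindle speed,
    # disc eccentricity, and so on); fill the FIFO with dummy samples.
    fifo.extend([0] * int(read_rate * random.uniform(0.98, 1.02)))

    # Output side: the DAC clock removes exactly OUTPUT_RATE samples, always.
    for _ in range(min(OUTPUT_RATE, len(fifo))):
        fifo.popleft()

    # Servo: nudge the transport speed to pull the fill level back to target.
    read_rate -= (len(fifo) - BUFFER_TARGET) // 10

    print(f"t={second}s  fill={len(fifo):6d}  next read rate={read_rate}")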

--Ethan


  #47   Bubba
CD burning at slower speeds (long)


"Jeff Wiseman" wrote in message
...



I might be really missing the mark here but this statement of
yours seems to indicate that you don't have a clue what clock
recovery and jitter issues involve. Yes, the clock signal is
generated by the CD player's circuitry, but it is NOT free
running, otherwise you would be constantly overrunning or
underrunning your data buffers. The term "clock recovery" is the
standard term for this process. In reality, the clock is
"derived" from the flow of data bits being read. It is
semi-locked in a way with the data bits coming off of the CD.
That's how it knows how fast to run. If the average rate of music
data samples coming off of the disk is a bit slow or fast, the
clock must adjust to compensate. The clock is constantly
adjusting by design based on the timing of the leading edges of
the pits. If the edge spacing varies a lot, the clock also adjusts a lot
(jitter).


Let me ask a stupid question based on your thesis that the clock speed
is derived from the data flow. Can I burn some tracks of an audio CD at speed X,
not finalize the CD, burn more tracks at speed Y, and then finalize the CD?
Would it play correctly in a CD player?


- Jeff



  #50   s.stef
Is CD burning at slower speeds better?


That agrees with my experience.

Want to carry on? Does your burner/software/media offer slower burn
speeds? If you CAN burn at 2X (maybe even 1X) are the results good?


I think not.

4x to 8x are good speeds.
Probably 8x is the best choice.

bye




  #53   Norbert Hahn
CD burning at slower speeds (long)

On Sat, 26 Jun 2004 20:36:23 GMT, Murray Peterson wrote:

[snip]
A clock signal is recovered from the bits coming off of the CD


This is false -- there are no clock signals contained in the CD data. The
clock signal is generated by the CD player's circuitry. The CD itself
contains encoded data and correction information only, and even that
information is intentionally scattered across the disk.


The CD contains an embedded clock which needs to be recovered in
the receiver circuit; it is used to clock the input circuit of the
de-interleave buffer. Failing to sync to this clock causes misreads
of the CD, and jitter in the pit spacing will increase the read
error rate (C1 errors).

The embedded clock signal is decoupled from the player's other clocks,
and thus from the D/A converter clock, by the de-interleave buffer.
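
For a rough picture of what "recovering" that embedded clock means, here is a
toy software PLL in Python (purely illustrative; real players do this in
hardware): the local clock predicts where each pit/land edge should fall and
nudges its own period toward the observed timing, so jitter on any single
edge is largely averaged out.

import random

NOMINAL_PERIOD = 1.0    # channel-bit period, arbitrary units
LOOP_GAIN = 0.05        # how strongly each observed edge corrects the clock

recovered_period = 1.1  # start deliberately off-frequency
edge_time = 0.0         # when edges actually arrive from the disc
local_time = 0.0        # where the recovered clock expects them

for _ in range(200):
    # Each edge arrives 3..11 channel bits after the previous one (the EFM
    # run-length rule), plus a little timing jitter.
    run = random.randint(3, 11)
    edge_time += run * NOMINAL_PERIOD + random.gauss(0, 0.02)

    # Predict the edge with the local clock, measure the error, then correct
    # both the phase and the period estimate by a fraction of that error.
    local_time += run * recovered_period
    phase_error = edge_time - local_time
    recovered_period += LOOP_GAIN * phase_error / run
    local_time += LOOP_GAIN * phase_error

print(f"recovered period: {recovered_period:.4f} (nominal {NOMINAL_PERIOD})")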

Norbert

  #56   Jeff Wiseman
CD burning at slower speeds (long)



Dave Platt wrote:

You are mistaken in at least some of your understanding. I'd suggest
referring to a basic text on digital audio, such as one of Ken
Pohlmann's, for the details.



Thanks for that. I think I'm getting more of a handle on this...


The clock recovered from the raw pit/land data is used only to control
the feeding of the data _into_ the error-correction chip - when the
data comes out of the chip, it's under the control of a much more
stable, fixed-speed, as-low-in-jitter-as-you-care-to-engineer-it
oscillator



OK, so data being passed to the D/A should in general always
convert on an accurate timing basis. The clock running the
converter also controls the transport speed to prevent
overflow/underflow problems. This is where I missed the point,
since most of my experience is with telecom systems and separate
DACs, which don't have this clock feedback mechanism (i.e., the
rate of reception cannot be controlled). There, the D/A clock was
recovered from the incoming data stream, as you've described for
the separate D/A scenario. One clock ran it all.


Timing jitter in the incoming data is simply
stripped out by the de-interleaving process.



And hopefully, the read errors introduced by jitter in the clock
recovered from the bit reads of the disc have all been corrected
in the de-interleaving and error correction circuits prior to the D/A.


The _first_ clock, used to transfer the data into the Reed-Solomon
error correction chip, is derived from the pit spacing. However, this
clock is _not_ used in the actual digital-to-analog conversion
process... it plays no further role once the data has started its way
into the error corrector / de-interleaver.



OK, so jitter in the clock recovered from pit spacing does not
affect the D/A conversion process itself. However, I know that
read errors due to jitter, misalignment/laser focus, etc., can
become great enough that even after "read error correction", the
read data may still have errors in it when passed off to the D/A
converter. Error correction algorithms are likely designed to
minimize these "bad data samples", but nonetheless bad values
can occasionally be sent to the DAC.

So uneven pit spacing causes jitter which causes read errors...

Now this is an area that I'm not really familiar with. How bad
does jitter have to be before read errors start affecting the
data stream to the DACs? How often do data samples with minor
faults occur? Unlike a non-real-time arrangement like reading a
data file off a hard disk, where a signal from the decoding
circuitry can indicate a failed read so another attempt can be
made, when you get data off an audio disc you only have a fixed
amount of time to take care of this. Even with "skip protection"
you only have so many reads for majority voting before you run
out of time. With a bad piece of sample data that can't be
totally corrected, you have to eventually send something to the DAC.
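
For illustration only (a sketch, not how any particular drive implements it),
"majority voting" over repeated reads of the same frame amounts to something
like this in Python:

from collections import Counter

def vote(reads):
    """reads: several attempts at the same frame (lists of byte values)."""
    return [Counter(column).most_common(1)[0][0] for column in zip(*reads)]

read_1 = [10, 20, 30, 44, 50]   # one misread byte (44 instead of 40)
read_2 = [10, 20, 99, 40, 50]   # a different misread byte
read_3 = [10, 20, 30, 40, 50]
print(vote([read_1, read_2, read_3]))   # -> [10, 20, 30, 40, 50]

A real-time player gets no such second or third pass; it has to correct or
conceal with whatever the single read delivered.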

I've seen two distinct sets of opinions on this but not a lot of
evidence. One group says that once the data is read,
de-interleaved, error corrected, etc., it will never, never,
ever, ever be a wrong value because it has been "error
corrected". Error correction algorithms are limited in many ways,
and when you are limited in time and read passes, I find it
difficult to believe that every sample will be error-free at all
times. Also, I've listened to systems that seemed to have a
slight noise problem and occasionally skipped before being fixed
with a realignment and readjustment of the laser head tracking
to reduce read errors. They can sound different. Sometimes,
when errors exist, some can get through to affect the sound. I've
heard it.

Again, we're talking about typical CD players here, not the type
that can read at 12x and do real time majority voting to ensure
accurate reads before even starting the error-correction process.

So am I missing something here? Does anyone have the experience
to show that worst-case jitter due to pit timing for a standard
Red Book CD can NEVER reach the point of showing up as false data
in the output stream? Or is my original statement true, that
significant pit spacing unevenness CAN show up in the output in
some form or another?

- Jeff
  #59   Murray Peterson
CD burning at slower speeds (long)

Jeff Wiseman wrote:

OK, so jitter in the clock recovered from pit spacing does not
affect the D/A conversion process itself. However, I know that
read errors due to jitter, misalignment/laser focus, etc., can
become great enough that even after "read error correction", the
read data may still have errors in it when passed off to the D/A
converter. Error correction algorithms are likely designed to
minimize these "bad data samples", but nonetheless bad values
can occasionally be sent to the DAC.


Yes.

So uneven pit spacing causes jitter which causes read errors...


It's possible, but unlikely, at least in the way you are describing it.
Each pit or land on a CD doesn't represent a single bit. The data on a CD
is in EFM (eight-to-fourteen modulation), which is used to minimize the
number of 1-to-0 or 0-to-1 transitions. Using EFM ensures that each pit
comes in a length of 3 to 11 bits -- you should think of the pits as
something more like "stripes". Spacing errors can therefore only occur
every 3 to 11 bits; the remainder of the data within one of these stripes
has no leading edge.

Here is a quick description of EFM:
http://www.ee.washington.edu/consele...5x7/efmmod.htm
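
And as a small side illustration of that run-length property (my own sketch,
not taken from the page above): in the channel-bit stream, every pit/land
edge sits 3 to 11 channel bits after the previous one, so a spacing error
can only ever happen at an edge, never "inside" a stripe.

def run_lengths(channel_bits):
    """channel_bits: a '0'/'1' string where each '1' marks a pit/land edge."""
    edges = [i for i, b in enumerate(channel_bits) if b == "1"]
    return [b - a for a, b in zip(edges, edges[1:])]

def obeys_efm_run_length(channel_bits):
    # Valid EFM: every edge-to-edge distance is 3 to 11 channel bits, i.e.
    # each "stripe" (pit or land) spans at least 3 and at most 11 bit cells.
    return all(3 <= r <= 11 for r in run_lengths(channel_bits))

print(obeys_efm_run_length("1001000100000010010001000010001"))  # True
print(obeys_efm_run_length("1101000000000000100"))              # False: runs too short and too long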


Now this is an area that I'm not really familiar with. How bad
does jitter have to be before read errors start affecting the
data stream to the DACs?


Bad enough to cause the reading circuitry to completely misread the value
of a bit, and even then, the error correction circuitry can easily correct
most of these errors. To have an effect on the data being fed to the DAC,
the error rate has to be very bad indeed.

If you want a detailed analysis of the error correction capabilities of a
CD, google for "cross-interleaved Reed-Solomon" (CIRC for short).

How often do data samples with minor faults occur?


Errors occur quite frequently, but they are almost always correctable.
Here are some tests done for various CD burners:
http://www.cdrinfo.com/Sections/Arti...e=LiteOn+SOHW-
832S&index=9

I've seen two distinct sets of opinions on this but not a lot of
evidence. One group says that once the data is read,
de-interleaved, error corrected, etc., it will never, never,
ever, ever be a wrong value because it has been "error
corrected".


CDs are very good at correcting errors, but anyone can take an X-Acto knife
and force it to have uncorrectable errors. However, if the errors were
corrected, then the data is definitely perfect -- that's what error
correction is for.

So am I missing something here? Does anyone have the experience
to show that worst-case jitter due to pit timing for a standard
Red Book CD can NEVER reach the point of showing up as false data
in the output stream? Or is my original statement true, that
significant pit spacing unevenness CAN show up in the output in
some form or another?


"never" is too absolute. However, the error detection and correction
abilities of a CD are amazing; for example, the CIRC code can correct burst
errors of up to 3500 bits. That's more than a few simple bit errors, and
for pit spacing to cause uncorrectable errors, the entire CD would have to
be considered as faulty. I would also expect it to sound positively
horrible -- clicks, interpolated sections, and muted sections don't make
for a good listening experience.
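
To see why interleaving buys that kind of burst tolerance, here is a
deliberately simplified sketch (a plain block interleaver, not the real CIRC
structure): a contiguous burst on the "disc" turns into a couple of isolated
symbol errors per de-interleaved block, and each block's Reed-Solomon code
can then correct its own small share.

DEPTH = 8    # interleave depth (the real CIRC structure is more elaborate)
BLOCK = 24   # symbols per block before interleaving

def interleave(blocks):
    # Write the blocks row by row, read the array out column by column.
    return [blocks[r][c] for c in range(BLOCK) for r in range(DEPTH)]

def deinterleave(stream):
    blocks = [[None] * BLOCK for _ in range(DEPTH)]
    i = 0
    for c in range(BLOCK):
        for r in range(DEPTH):
            blocks[r][c] = stream[i]
            i += 1
    return blocks

blocks = [[(r * BLOCK + c) % 256 for c in range(BLOCK)] for r in range(DEPTH)]
stream = interleave(blocks)

# Simulate a scratch: wipe out 16 consecutive symbols on the "disc".
for i in range(40, 56):
    stream[i] = None

damaged = deinterleave(stream)
print([sum(sym is None for sym in row) for row in damaged])  # -> [2, 2, 2, 2, 2, 2, 2, 2]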
  #62   Dave Platt
CD burning at slower speeds (long)

In article, Jeff Wiseman wrote:
OK, so data being passed to the D/A should in general always
convert on an accurate timing basis. The clock running the
converter also controls the transport speed to prevent
overflow/underflow problems. This is where I missed the point,
since most of my experience is with telecom systems and separate
DACs, which don't have this clock feedback mechanism (i.e., the
rate of reception cannot be controlled). There, the D/A clock was
recovered from the incoming data stream, as you've described for
the separate D/A scenario. One clock ran it all.


Gotcha.

CD players simply couldn't be designed to deliver acceptable
performance, at acceptable cost, with this sort of downstream-only
single-clock design. A cheap $.50 brushless DC motor can't be
speed-controlled reliably enough to deliver adequate clock stability.

OK, so jitter in the clock recovered from pit spacing does not
affect the D/A conversion process itself. However, I know that
read errors due to jitter, misalignment/laser focus, etc., can
become great enough that even after "read error correction", the
read data may still have errors in it when passed off to the D/A
converter. Error correction algorithms are likely designed to
minimize these "bad data samples", but nonetheless bad values
can occasionally be sent to the DAC.

So uneven pit spacing causes jitter which causes read errors...


I believe that errors in the pre-corrected bitstream are far more
often due to things like pressing defects, scratches,
vibration/bumping, dust particles, etc. than they are due to jitter
per se.

Now this is an area that I'm not really familiar with. How bad
does jitter have to be before read errors start affecting the
data stream to the DACs? How often do data samples with minor
faults occur?


Errors in the incoming raw bitstream are common and numerous...
hundreds to thousands per second, if I recall properly. A lot of
these are corrected by the first stage of the Reed-Solomon error
correction coding, and almost all of the rest are corrected by the
second (C2) phase.

Errors which are severe enough to overcome the C1/C2 coding, and which
result in a single lost sample of audio data, aren't all that
uncommon. On a well-manufactured CD in pristine condition, I believe
it's possible to play through the whole disc without having any C1/C2
failures at all, but it's not unusual to hit several. A CD player
will mask single-sample errors by interpolating between samples on
either side.
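
A minimal sketch of that concealment step (illustrative only, not any
player's actual firmware): when one sample is flagged uncorrectable,
interpolate across it rather than passing garbage to the DAC.

def conceal(samples, bad):
    """samples: decoded sample values; bad: flags marking uncorrectable samples."""
    out = list(samples)
    for i, flagged in enumerate(bad):
        if not flagged:
            continue
        left = out[i - 1] if i > 0 else 0
        right = samples[i + 1] if i + 1 < len(samples) else 0
        if i + 1 < len(bad) and bad[i + 1]:
            out[i] = left            # longer gap: hold (a real player may mute instead)
        else:
            out[i] = (left + right) // 2
    return out

print(conceal([100, 120, 9999, 160, 180], [False, False, True, False, False]))
# -> [100, 120, 140, 160, 180]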

More severe error bursts can result in the loss of several samples of
data, and a CD player may have to briefly mute the audio to prevent a
nasty SPLAT from occurring. These are usually quite uncommon on CDs
in good condition.

Mis-corrections (failed error correction) which result in a bad
sample being fed to the DAC without muting are, fortunately, very rare
indeed.

Unlike a non-real-time arrangement like reading a
data file off a hard disk, where a signal from the decoding
circuitry can indicate a failed read so another attempt can be
made, when you get data off an audio disc you only have a fixed
amount of time to take care of this.


Traditional 1x CD players don't re-read at all. They correct (or
conceal) errors on the fly. It's only more recent players, with skip
correction or high-speed CD-ROM-based mechanisms, that have the luxury
of re-reading.

I've seen two distinct sets of opinions on this but not a lot of
evidence. One group says that once the data is read,
de-interleaved, error corrected, etc., it will never, never,
ever, ever be a wrong value because it has been "error
corrected". Error correction algorithms are limited in many ways,
and when you are limited in time and read passes, I find it
difficult to believe that every sample will be error-free at all
times.


You're right. Claims of never-ever-a-bad-value are overstating the
truth.

So am I missing something here? Does anyone have the experience
to show that worst-case jitter due to pit timing for a standard
Red Book CD can NEVER reach the point of showing up as false data
in the output stream? Or is my original statement true, that
significant pit spacing unevenness CAN show up in the output in
some form or another?


Really crufty pits-and-lands can indeed lead to the data being so bad
that the player must interpolate, or mute, or just shut itself down.
However, I believe that the errors in this sort of case aren't so much
due to jitter (basically correct data with subtly bad timings) but are
due to the manufacture or burn quality being so bad that the raw
bitstream being read from the disc is just grossly _wrong_.

The C1/C2 error correction process is very robust. Hundreds of
single-bit errors, or multi-bit error bursts, can be corrected per
second. It's pretty amazing.

--
Dave Platt AE6EO
Hosting the Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will
boycott any company which has the gall to send me such ads!
  #65   Andrzej Popowski
Is CD burning at slower speeds better?

Sat, 26 Jun 2004 09:44:27 -0600, "Mark A" writes:

I have seen tests of various burners and CDs. They usually record with
fewer errors at a speed less than the maximum, but higher than the minimum
for a particular recorder. I think these tests are on the web somewhere, but
don't recall where.


http://www.cdrinfo.com/Sections/Hardware/All.asp


--
Regards,

Andrzej Popowski


  #68   Laurence Payne
Is CD burning at slower speeds better?

On Sun, 27 Jun 2004 16:59:06 GMT, "s.stef" wrote:

Want to carry on? Does your burner/software/media offer slower burn
speeds? If you CAN burn at 2X (maybe even 1X) are the results good?


I think not.

4x to 8x are good speeds.
Probably 8x is the best choice.



You think? Have you tried?
  #71   Geoff Wood
Is CD burning at slower speeds better?

Laurence Payne wrote:
On Sun, 27 Jun 2004 16:59:06 GMT, "s.stef" wrote:

Want to carry on? Does your burner/software/media offer slower
burn speeds? If you CAN burn at 2X (maybe even 1X) are the results
good?


I think not.

4x to 8x are good speeds.
Probably 8x is the best choice.



You think? Have you tried?


I have, and I do.

geoff


  #74   Steven Sullivan
CD burning at slower speeds (long)

Dave Platt wrote:


It's possible for an external DAC box to do a good job of clock
recovery (or re-creation), but many do not.


I've read this for years now...but I wonder if it needs
updating. The last few years have seen an explosion of
'two box' systems in people's homes, as home theater
setups often rely on a DVD player (doubling as a CD player)
TOSlinked to the optical input of an HT receiver.

Has anyone surveyed the clock recovery capabilities of
such modern 'outboard' DAC setups *recently*?


--

-S.
Why don't you just admit that you hate music and leave people alone. --
spiffy



  #77   Jeff Wiseman
CD burning at slower speeds (long)



Steven Sullivan wrote:

Dave Platt wrote:

It's possible for an external DAC box to do a good job of clock
recovery (or re-creation), but many do not.


I've read this for years now...but I wonder if it needs
updating. The last few years have seen an explosion of
'two box' systems in people's homes, as home theater
setups often rely on a DVD player (doubling as a CD player)
TOSlinked to the optical input of a HT receiver.

Has anyone surveyed the clock recovery capabilities of
such modern 'outboard' DAC setups *recently*?



What Dave says really appears to be very true even in many
"high-end" (read "expensive") products. I've listened to some
high-end systems while experimenting with different coaxial cables
between the transport and DACs, and unfortunately you can
frequently hear differences between different cables. The only
two things that should ever affect this are ground loop noise and
jitter (with the main by-product of ground loop noise being
added jitter). Given a set of digital cable geometries
that use a shell-to-shell shield, ground loop issues would be
equivalent, and yet there can be subtle audible differences even
between these cables.

If clock recovery were done very well, the effects of jitter
would be minimized, eliminating any audible differences. However,
although we have long had the technology in the
telecommunications arena to do really solid clock recovery even
at speeds far above that of a transport/DAC interconnect, it
seems to be taking a long time for audio equipment designers to
catch up. Either they don't want to put in the little extra
hardware to really stabilize their clock recovery, or the DAC
component was designed by someone who has spent their life
designing high-end analog gear and is still trying to catch on
to digital design.

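To put a rough number on what good clock recovery buys you, here
is a minimal sketch (Python) of a first-order PLL model: the
recovered DAC clock tracks the incoming S/PDIF phase through a
low-pass loop filter, so interface jitter above the loop
bandwidth is attenuated while jitter below it passes straight
through. The 2 ns input figure and the loop bandwidths are
illustrative assumptions, not measurements of any real product.

import numpy as np

# Toy model: the recovered clock's phase follows the incoming
# S/PDIF phase through a first-order low-pass (the PLL loop
# filter), so jitter above the loop bandwidth is attenuated while
# jitter below it passes through to the DAC clock.

fs = 44100.0                    # one phase sample per audio frame
rng = np.random.default_rng(0)
incoming = rng.normal(0.0, 2e-9, 100000)  # 2 ns RMS white jitter

def recovered(jitter_in, loop_bw_hz, fs):
    """First-order tracking: y[k] = y[k-1] + a*(x[k] - y[k-1])."""
    a = 1.0 - np.exp(-2.0 * np.pi * loop_bw_hz / fs)
    out = np.empty_like(jitter_in)
    acc = 0.0
    for k, x in enumerate(jitter_in):
        acc += a * (x - acc)
        out[k] = acc
    return out

for bw in (10.0, 100.0, 1000.0, 10000.0):
    out = recovered(incoming, bw, fs)
    print("loop bw %7.0f Hz: in %.2f ns RMS -> out %.3f ns RMS"
          % (bw, incoming.std() * 1e9, out.std() * 1e9))
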
I used to work with some fellow audiophiles who were also
high-speed digital design folks, well versed in designing digital
audio gear operating in the 192 kHz region. They were constantly
complaining about how "dumb" the clock recovery and front-end
circuits of some very expensive DACs were.

I've seen some really interesting feuds start in other audio
forums when someone suggested that different digital cables sound
different. I suspect it's not so much the cables as the fact that
many DAC components just don't do clock recovery as well as they
could. If they did, then in general changing cables wouldn't
produce audible differences, subtle or otherwise.

I have a Yamaha RX-V995 receiver. I had its main pre-outs
connected to a much better amp than the ones in the receiver
itself. I was actually very impressed with the quality of the
built-in DAC of this receiver, unexpected in a relatively
economy-priced unit. However, with my better main amp I could
detect differences when using different cables between the
external DVD/CD player and the receiver. Very subtle ones, but
nonetheless detectable. When running off the internal amps
instead of my outboard one, I couldn't discern those differences.
My suspicion is that ground loop noise may be contributing a lot
to this (further increasing jitter), because many folks with
these receivers have found that when using the coax link they
occasionally get dropouts when the receiver loses frame (usually
from someone turning a light or fan on or off in the room),
whereas that doesn't happen when the TOSlink is used with no coax
digital inputs connected.

Anyway, the bottom line is that having the transport in the same
box as the DAC, so that the clock can be shared, can be better,
but it isn't necessarily the best arrangement for your whole
system. In my case, using the cheap DAC built into the DVD/CD
player for CD listening wasn't as good as using the Dolby
5.1/DTS-capable DAC in the receiver, because the latter was
simply a much better DAC, even though it has to cope with
potentially higher jitter over the digital link.
- Jeff
  #80   Report Post  
Steven Sullivan
 
Posts: n/a
Default CD burning at slower speeds (long)

Jeff Wiseman wrote:


Steven Sullivan wrote:

Dave Platt wrote:

It's possible for an external DAC box to do a good job of clock
recovery (or re-creation), but many do not.


I've read this for years now...but I wonder if it needs
updating. The last few years have seen an explosion of
'two box' systems in people's homes, as home theater
setups often rely on a DVD player (doubling as a CD player)
TOSlinked to the optical input of a HT receiver.

Has anyone surveyed the clock recovery capabilities of
such modern 'outboard' DAC setups *recently*?



What Dave says appears to hold even in many "high-end" (read
"expensive") products. I've listened to some high-end systems
while experimenting with different coaxial cables between the
transport and DAC, and unfortunately you can frequently hear
differences between cables. The only two things that should ever
affect this are ground loop noise and jitter (and the main
by-product of ground loop noise is, again, added jitter). Given a
set of digital cables whose geometries all use a shell-to-shell
shield, the ground loop behavior should be equivalent, and yet
there can still be subtle audible differences between these
cables.


If clock recovery were done very well, the effects of jitter
would be minimized, eliminating any audible differences. However,
although the telecommunications world has long had the technology
to do really solid clock recovery, even at speeds far above those
of a transport/DAC interconnect, it seems to be taking a long
time for audio equipment designers to catch up. Either they don't
want to add the little bit of extra hardware needed to really
stabilize their clock recovery, or the DAC component was designed
by someone who has spent their life designing high-end analog
gear and is still trying to catch on to digital design.


I used to work with some fellow audiophiles who were also
high-speed digital design folks, well versed in designing digital
audio gear operating in the 192 kHz region. They were constantly
complaining about how "dumb" the clock recovery and front-end
circuits of some very expensive DACs were.


I've seen some really interesting feuds start in other audio
forums when someone suggested that different digital cables sound
different. I suspect it's not so much the cables as the fact that
many DAC components just don't do clock recovery as well as they
could. If they did, then in general changing cables wouldn't
produce audible differences, subtle or otherwise.


sigh
I suspect it's more likely just bad comparison techniques.
If they improved those, changing cables probably wouldn't
yield much in the way of statistically significant perceived
differences.

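By improved techniques I mean things like level-matched,
double-blind ABX trials scored against chance. A minimal sketch
of the scoring arithmetic (Python), with made-up trial counts:

from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: chance of getting at least `correct`
    hits out of `trials` if the listener is purely guessing."""
    return sum(comb(trials, k)
               for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # 12/16 right -> p ~ 0.04, significant
print(abx_p_value(9, 16))   # 9/16 right  -> p ~ 0.40, consistent
                            # with guessing
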
But I would still like to know whether any *measurements*, at
least, have been done to verify that many two-box systems still
don't do clock recovery well. At what point their measurable
performance translates into an audible difference is another
issue.

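For what it's worth, reducing such a measurement to a number is
straightforward once the edge times of the recovered clock have
been captured (with a timing analyzer or a fast scope). A minimal
sketch (Python); the capture here is fabricated purely for
illustration, not a measurement of any actual DAC or receiver:

import numpy as np

def period_jitter(edge_times_s, nominal_period_s):
    """RMS and peak-to-peak period jitter, in seconds, from the
    timestamps of successive rising edges of a recovered clock."""
    periods = np.diff(np.asarray(edge_times_s, dtype=float))
    err = periods - nominal_period_s
    return err.std(), err.max() - err.min()

# Fabricated capture: a 44.1 kHz word clock whose edges carry
# about 1 ns of Gaussian timing error (illustrative numbers only).
rng = np.random.default_rng(1)
nominal = 1.0 / 44100.0
edges = np.arange(10000) * nominal + rng.normal(0.0, 1e-9, 10000)

rms, pkpk = period_jitter(edges, nominal)
print("RMS period jitter:   %.2f ns" % (rms * 1e9))
print("Pk-pk period jitter: %.2f ns" % (pkpk * 1e9))
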
snip

I'm sorry, Jeff, but as you might have realized by now,
you can throw sighted comparison anecdotes at me until
doomsday, but unless there's some good *independent*
reason to believe such reports, they aren't of much use
to me.



--

-S.
"We started to see evidence of the professional groupie in the early 80's.
Alarmingly, these girls bore a striking resemblance to Motley Crue." --
David Lee Roth

