Thread: Heaven!
Posted to rec.audio.high-end
chung
 

Steven Sullivan wrote:
chung wrote:
Steven Sullivan wrote:
Chung wrote:
Steven Sullivan wrote:
Steven Sullivan wrote:
Chung wrote:

So why do you think SACD players have to have 6 dB higher output levels?

Where does this claim come from?

I ask because I have observed on two universal players now, when
digitizing the two-channel analog output, that the 'default' channel
level outputs a clipped signal. To fix it I have to lower the player's
output levels in the channel level menu. However, I'm not sure this is
confined to SACD; I'm still investigating. Would love to hear more
about it.

I've actually since realized that it's more an issue of voltage mismatch
between my soundcard's input and the player's output... it's a
limitation of the soundcard (M-Audio 2496).

I took a look at the Audiophile 2496's manual, and it says that the
maximum input level is 2 dBV. That is 1.26V, less than the customary 2V
that most CD/SACD players deliver. You may be better off going through
a preamp first and using the preamp's volume control to attenuate the
level that goes into the 2496.
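
For reference, the conversion behind that 1.26V figure, as a quick
Python check (0 dBV is 1 Vrms; the 2 dBV spec is the one quoted from
the manual above):

    # Convert the 2496's rated max input from dBV to volts (0 dBV = 1 Vrms).
    def dbv_to_volts(dbv):
        return 10 ** (dbv / 20.0)

    print(f"{dbv_to_volts(2.0):.3f} Vrms")   # ~1.259 V, vs. ~2 V from the player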

Is that better than using the DVD player's internal level controls?

The answer is: it depends.

The advantage of using your preamp as the volume control is that you
don't lose any bits of resolution (assuming it's an analog volume
control). The disadvantage is that there might be small L/R tracking errors.
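
If you want to put a number on the tracking error, one way is to
capture a mono test tone through the preamp and compare the channel
levels. A small Python sketch, using the numpy and soundfile packages
(the filename is just a placeholder):

    # Estimate preamp L/R tracking error from a stereo capture of a mono tone.
    import numpy as np
    import soundfile as sf

    data, rate = sf.read("tone_capture.wav")   # shape (samples, 2), floats in [-1, 1)
    rms = np.sqrt(np.mean(data ** 2, axis=0))  # per-channel RMS level
    print(f"L/R imbalance: {20 * np.log10(rms[0] / rms[1]):+.2f} dB")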

The advantage of using the DVD's internal level controls is that there
are no L/R tracking errors, since the DACs are very well matched. Also,
the level control is very repeatable. The disadvantage is that,
depending on how the level control is implemented, there might be some
minor loss of resolution, although in this case the attenuation is
small and probably a non-issue.
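
As a toy illustration of how a simple digital level control can cost
resolution, assuming it just scales the integer samples (real players
may dither or use wider internal words):

    # Scale a 24-bit sample down by ~4 dB (2.0V -> 1.26V) in integer math,
    # then conceptually undo it: the low-order bits don't come back.
    sample = 0x7FFF2B                       # a near-full-scale 24-bit sample
    attenuated = sample * 1260 // 2000      # the level control's multiply
    restored = attenuated * 2000 // 1260    # what the original "should" be
    print(f"error after round trip: {sample - restored} LSBs")   # nonzero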

I have a preamp handy, but if I use the DVD's controls, I could still
make the effort to find the loudest signal on the disc beforehand, and
trim the player's output just enough so that it doesn't clip the input
yet still uses up most of the headroom in the resulting capture (e.g.,
the unclipped peak is at, say, -1 dB). Wouldn't this keep any
resolution loss to a minimum?
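
That peak-hunting step is easy to script against a trial capture.
A sketch, again assuming numpy and soundfile (any tool that reports
peak dBFS does the same job):

    # Report the peak level of a trial capture and count full-scale samples
    # (runs of samples pinned at full scale are the usual sign of clipping).
    import numpy as np
    import soundfile as sf

    data, rate = sf.read("trial_capture.wav")    # floats in [-1.0, 1.0)
    peak = np.max(np.abs(data))
    pinned = np.sum(np.abs(data) >= 0.9999)
    print(f"peak: {20 * np.log10(peak):+.2f} dBFS, full-scale samples: {pinned}")
    # Trim the player's output until the loudest passage lands near -1 dBFS.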

Yes. In your case, it's 2V vs. a little over a volt, so worst case you
lose about 1 bit, and you probably start out with 24 bits, so the
resolution loss is pretty insignificant.
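
For the record, the arithmetic behind that worst case, using the usual
~6 dB-per-bit rule of thumb:

    # Trim needed to fit a 2 Vrms output into a 1.26 Vrms input, in bits.
    import math
    trim_db = 20 * math.log10(2.0 / 1.26)
    print(f"{trim_db:.1f} dB ~ {trim_db / 6.02:.2f} bits of 24")   # ~4.0 dB, ~0.67 bits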