#1
A perennial favorite here on rahe, and, as it turns out, discussed in this rather interesting article by Ian Dennis and the late Julian Dunn, "The Numerically-Identical CD Mystery: A Study in Perception versus Measurement".

The 'mystery' in question is the music-biz/audiophile folklore that bit-perfect copies can sound different -- specifically, pre-masters versus the final CD product. In essence, the authors tested this perceptually, via blind comparison (by 'mass' listeners and by 'golden ears'), and via measurements of carefully selected and prepared test tracks. The results are rather amusing, and speak to, in addition to the titular matter:

- the effects of jitter (something Mr. Dunn in particular had written about a lot before this)
- the ability of test subjects to follow directions
- the validity of the 'golden ear' concept
- one-box vs. two-box CD player designs
- the utility of controls in comparisons -- note especially the results for the comparison of same to same

Required reading, I think, for RAHEistas.

http://www.prismsound.com/downloads/cdinvest.pdf

--
-S.

"They've got God on their side. All we've got is science and reason."
-- Dawn Hulsey, Talent Director
#2
"Steven Sullivan" wrote in message
... A perrennial favorite here on rahe, and as it turns out, discussed in this rather interesting article by Ian Dennis and the late Julian Dunn "The Numerically-IDentical CD Mystery: A Study in Perception versus Measurement" The 'mystery' in question is the music biz/audiophile folklore that bit-perfect copies can sound different -- specifically, pre-masters versus the final CD product. In esssence, the authors tested this perceptually via blind comparison (by 'mass' listeners and by 'golden ears') and via measurements of carefully-selected and prepared test tracks. The results are rather amusing, and speak to, in addition to the titular matter: - the effects of jitter (something Mr. Dunn in particular had written about a lot before this) - the ability of test subjects to follow directions, - the validity of the 'golden ear' concept, - one-box vs. two-box CD player designs - the utility of controls in comparisons -- note especially the results for the comparison of same to same Required reading, I think , for RAHEistas http://www.prismsound.com/downloads/cdinvest.pdf All I can say is they never would have held a job as market research test designer at any company I worked for. This test design is so bad in terms of what the evaluators are asked to do, that once they found out the evaluators couldn't handle the instructions, they should have thrown out the existing test and started over. There is so much noise in this test (poor design, too few degrees of freedom) that gaining meaningful results is almost doomed from the start. |
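A rough way to put numbers on the "too much noise" objection: the sketch below uses a simple forced-choice framing -- not the rating scheme the paper actually used -- and the standard normal-approximation sample-size formula to show how many trials are needed before a modest real effect can be separated from guessing. The effect sizes, significance level, and power are illustrative assumptions only.

```python
import math

def trials_needed(p_alt, p_null=0.5, alpha_z=1.645, power_z=0.842):
    """Approximate number of forced-choice trials needed to distinguish a
    listener who is correct with probability p_alt from chance (p_null),
    using the standard one-sided normal-approximation formula
    (alpha_z = 1.645 for 5% one-sided, power_z = 0.842 for 80% power)."""
    num = (alpha_z * math.sqrt(p_null * (1 - p_null)) +
           power_z * math.sqrt(p_alt * (1 - p_alt)))
    return math.ceil((num / (p_alt - p_null)) ** 2)

# Illustrative effect sizes: a listener who would identify the 'different'
# disc 60%, 70%, or 90% of the time if a real audible difference existed.
for p in (0.6, 0.7, 0.9):
    print(f"p = {p:.1f}: about {trials_needed(p)} trials per listener")
```

With these illustrative numbers, a subtle effect (a listener correct 60% of the time) needs on the order of 150 trials per listener before it reliably separates from chance, while only a gross effect (90% correct) gets by with a handful -- which is one way of reading the "too few degrees of freedom" complaint.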
#3
Harry Lavo wrote:
"Steven Sullivan" wrote in message ... A perrennial favorite here on rahe, and as it turns out, discussed in this rather interesting article by Ian Dennis and the late Julian Dunn "The Numerically-IDentical CD Mystery: A Study in Perception versus Measurement" The 'mystery' in question is the music biz/audiophile folklore that bit-perfect copies can sound different -- specifically, pre-masters versus the final CD product. In esssence, the authors tested this perceptually via blind comparison (by 'mass' listeners and by 'golden ears') and via measurements of carefully-selected and prepared test tracks. The results are rather amusing, and speak to, in addition to the titular matter: - the effects of jitter (something Mr. Dunn in particular had written about a lot before this) - the ability of test subjects to follow directions, - the validity of the 'golden ear' concept, - one-box vs. two-box CD player designs - the utility of controls in comparisons -- note especially the results for the comparison of same to same Required reading, I think , for RAHEistas http://www.prismsound.com/downloads/cdinvest.pdf All I can say is they never would have held a job as market research test designer at any company I worked for. This test design is so bad in terms of what the evaluators are asked to do, that once they found out the evaluators couldn't handle the instructions, they should have thrown out the existing test and started over. There is so much noise in this test (poor design, too few degrees of freedom) that gaining meaningful results is almost doomed from the start. I'd say the worst part of the perceptual test design -- the measurement portion strikes me as *extremely* methodical and thorough, though I'm no engineer -- was that it allowed too *much* freedom to the participants (and yes, I know 'degrees of freedom' doesn't refer to that sort of freedom). But what about the smaller test of 'golden eared' subjects, which appears to have been carried out with more oversight than the 'mail in' part of the test? And even if the instructions were inconsistently followed, what are the chances of a result where the same-to-same comparison produced among the HIGHEST 'difference' scores? Of course, if you give no credence to these results because of sloppiness or poor design, surely you have to maintain at least the same level of skepticism towards the widespread anecdotal reports of audible pressing differences that spawned the project. -- -S. "They've got God on their side. All we've got is science and reason." -- Dawn Hulsey, Talent Director |
#4
On 22 Jan 2004 18:04:17 GMT, Steven Sullivan wrote:
> A perennial favorite here on rahe, and, as it turns out, discussed in
> this rather interesting article by Ian Dennis and the late Julian Dunn,
> "The Numerically-Identical CD Mystery: A Study in Perception versus
> Measurement". The 'mystery' in question is the music-biz/audiophile
> folklore that bit-perfect copies can sound different -- specifically,
> pre-masters versus the final CD product.
> [snip]
> http://www.prismsound.com/downloads/cdinvest.pdf

Somewhere in the nineties, some artists, pop and classical musicians alike, began to complain that the final CD sounded different, even worse (more compressed, sharper), than the master CD-R/tape they had approved. For a time nobody believed them, but eventually people started to investigate likely causes. It turned out that something had probably gone wrong at the pressing plant, and that some plants seemed to produce better-sounding CDs than others.

Is it really true that the pressing makes a difference for CDs? In a hearing test it is very difficult to come to conclusions, because, as has been said many times before, (a) our hearing system is easily influenced by non-auditory information, and (b) our hearing system DEPENDS on non-auditory information to function well. So it is very easy to lead a test person in one direction or the other. Furthermore, at one moment the listener may concentrate on one aspect of the sound/music, and at another moment on quite a different aspect, which leads him to hear different things at different times from the same sound source. We all know this: if you listen to Beethoven's opus 131 for the 20th time, you hear different things than the first time. So what you hear changes, while the audio data remain the same. This is the fundamental problem with hearing tests, whether blind or sighted, done in one day or over many days.

With digital audio the problem is aggravated by the fact that what is audible with one piece of equipment may not be audible with another. The naive idea is that the gear that reveals the most audible differences is the "most sensitive" and hence the "best". With analogue gear this may be true (turntables), but in digital audio the idea is more often false than true.

A good example is the audibility of differences between digital interlinks in two-box CD systems. A box with not-so-good jitter rejection (e.g. too simple a PLL) depends very much on the quality of the interlink: slightly off 75 ohm and the signal starts to suffer reflections, resulting in audible jitter. So the simple system needs a very good interlink. A box with superb jitter rejection, however, will not suffer as much from a less-than-perfect interlink. Differences that are audible with the simple box are therefore inaudible with the very good box, so the good piece of gear is "less sensitive".

As far as these things have been tested, the same holds true of CD players and different pressings: the better the CD player, the less influential the pressing; the worse the player, the more it depends on good pressings.
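To illustrate the jitter-rejection point above in rough numbers: the recovered clock of a PLL-based receiver tracks incoming interface jitter below its loop bandwidth and attenuates it above, approximately like a first-order low-pass filter. The loop bandwidths below are made-up illustrative figures, not measurements of any actual player.

```python
import math

def jitter_attenuation_db(f_jitter_hz, loop_bw_hz):
    """First-order low-pass model of a PLL's jitter transfer: jitter
    components above the loop bandwidth are attenuated, those below pass."""
    h = 1.0 / math.sqrt(1.0 + (f_jitter_hz / loop_bw_hz) ** 2)
    return 20.0 * math.log10(h)

# Hypothetical receivers: a simple wide-band PLL vs. a narrow (well-filtered) one.
receivers = {"simple PLL, 50 kHz loop": 50e3, "good PLL, 100 Hz loop": 100.0}
jitter_freqs = [100.0, 1e3, 10e3]  # Hz, audio-band jitter components

for name, bw in receivers.items():
    print(name)
    for f in jitter_freqs:
        print(f"  {f:>7.0f} Hz jitter: {jitter_attenuation_db(f, bw):6.1f} dB")
```

With these made-up bandwidths, the narrow-loop receiver attenuates audio-band interface jitter by roughly 20 to 40 dB, while the wide-loop one passes it essentially unchanged -- one way to read "the good piece of gear is less sensitive".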
----------------

In 1994 Bob Katz wrote an article in Stereophile on the supposedly audible effects of bad pressings: http://www.stereophile.com/printarchives.cgi?55

In his book "Mastering Audio" (Focal Press, 2002, ISBN 0-240-80545-3), a must-read for everybody interested in audio, how it sounds, and what makes it sound that way, he writes the following on page 237, in his excellent chapter on jitter:

"As I said, there is no jitter on a storage medium, but there is some (controversial) evidence that CDs cut at high speeds sound inferior to CDs cut at low speeds and that CDs cut with a jittery clock sound worse than those cut with a clean clock."

In a note he adds:

"We theorize that irregular pit spacing or inadequate pit depth on the CDs themselves is affecting the player's servo mechanism. The servo mechanism and sample clock share a common power supply, so with poor power supply bypass in the player, simple power or ground leakage may affect the stability of the clock. It doesn't take much leakage to change a few picoseconds."

Furthermore, back on page 237, Bob states: "It only takes a few picoseconds to make an audible difference."

We may presume that the better the power supply of the CD player, the less audible this detrimental effect of the servo will be.

----------------

In the past I have written about the audibility of the display in my TEAC VRDS-8 CD player. With the light on, it would add some high frequencies to the sound, most noticeable in some lute music, not noticeable in some symphonic music. Generally I preferred the sound with the lights ON. This player does not have the best of power supplies. Since then I have installed new clocks, together with a completely separate power supply -- even a transformer of their own. It turned out that, soundwise, this extra transformer was very important. Also, it has become extremely difficult, or should I say generally impossible, to hear any influence of the display. This corroborates Bob Katz's theory: the display lights would influence the power supply and the circuit board, and through these the clocks, leading to jitter. Now that the clocks are powered completely independently of the main supply, the display no longer influences them.

----------------

A friend of mine has worked in a record shop for years, and he claims that American pressings of a CD consistently sound better than German pressings. If possible, he always buys his CDs over the internet, to get an American pressing. He says that German pressings sound sharp, hollow, etc. I cannot corroborate his story, as I have not compared American to German pressings on a regular basis.

----------------

Many people report that CD copies burned at home sound worse when burned at high speed and better when burned at low speed (4x is low speed, 20x to 40x is high speed). Again, with a good player this may be less audible or even inaudible.

----------------

Regarding the article by Dennis & Dunn, it must be said that they try to measure a lot of things together at the same time, which is usually not a good idea if you want to achieve valuable results.

Ernesto.

"You don't have to learn science if you don't feel like it. So you can forget the whole business if it is too much mental strain, which it usually is."
Richard Feynman
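The "few picoseconds" figure quoted from Bob Katz above can be put in rough perspective with the textbook worst-case estimate for sampling-timing error: for a full-scale sine at frequency f and a timing error t_j, the peak error is roughly 2*pi*f*t_j of full scale. The sketch below simply evaluates that estimate for a 20 kHz tone and a few illustrative jitter values; the numbers are not taken from the paper or from Katz.

```python
import math

def jitter_error_db(signal_freq_hz, jitter_s):
    """Worst-case error level (dB relative to full scale) caused by a timing
    error jitter_s when sampling a full-scale sine at signal_freq_hz:
    error ~ 2*pi*f*t_j for small t_j."""
    return 20.0 * math.log10(2.0 * math.pi * signal_freq_hz * jitter_s)

f = 20e3  # worst-case audio frequency
for tj in (10e-12, 100e-12, 1e-9):  # 10 ps, 100 ps, 1 ns of timing error
    print(f"{tj * 1e12:7.0f} ps jitter -> error about {jitter_error_db(f, tj):6.1f} dBFS")

# For comparison, the 16-bit quantisation noise floor sits near -98 dBFS
# (6.02 * 16 + 1.76 dB below full scale).
print(f"16-bit noise floor ~ {-(6.02 * 16 + 1.76):.1f} dBFS")
```

At 20 kHz, about 100 ps of timing error produces an error comparable to the 16-bit quantisation floor, while a few picoseconds sits some 20 dB below it; whether errors at those levels are audible is, of course, exactly what the thread is arguing about. This is a single-sample worst-case estimate, not a spectrum of sustained jitter, so treat it only as an order-of-magnitude guide.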
#5
Steven Sullivan wrote:
> Harry Lavo wrote:
> > [snip]
>
> I'd say the worst part of the perceptual test design -- the measurement
> portion strikes me as *extremely* methodical and thorough, though I'm no
> engineer -- was that it allowed too *much* freedom to the participants
> (and yes, I know 'degrees of freedom' doesn't refer to that sort of
> freedom).

I find the measurements, where they revealed power-supply-related jitter and AM in a single-box player, very interesting. I just wish someone would include that kind of measurement in reviews of disc players. One plot is worth more than 1,000 words of purple prose to me.

> But what about the smaller test of 'golden-eared' subjects, which appears
> to have been carried out with more oversight than the 'mail-in' part of
> the test? And even if the instructions were inconsistently followed, what
> are the chances of a result where the same-to-same comparison produced
> among the HIGHEST 'difference' scores?
>
> Of course, if you give no credence to these results because of sloppiness
> or poor design, surely you have to maintain at least the same level of
> skepticism towards the widespread anecdotal reports of audible pressing
> differences that spawned the project.
#6
"Steven Sullivan" wrote in message
...

> [snip]
>
> But what about the smaller test of 'golden-eared' subjects, which appears
> to have been carried out with more oversight than the 'mail-in' part of
> the test? And even if the instructions were inconsistently followed, what
> are the chances of a result where the same-to-same comparison produced
> among the HIGHEST 'difference' scores?
>
> Of course, if you give no credence to these results because of sloppiness
> or poor design, surely you have to maintain at least the same level of
> skepticism towards the widespread anecdotal reports of audible pressing
> differences that spawned the project.

The fact that the "no difference" disc ended up "high" is prima facie evidence that the test was sloppy. If all biases in the test design had been properly nulled and the degrees of freedom were adequate, the result would have been "average" due to random selection if all discs were the same, or lower if some discs truly were audibly different and the test design allowed this to be detected.

As far as the issue itself, I am an agnostic. I have no first-hand experience, theoretical knowledge, or even enough interest in the subject to have an opinion.
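Harry's expectation that the identical pair should come out "average", and Steven's "what are the chances" question, can both be checked with a quick null-hypothesis simulation: if every pairing, identical or not, receives purely random difference ratings, how often does the control pair land among the highest-scoring? The number of pairs, listeners, and the rating scale below are made-up illustrative values, not the study's actual design.

```python
import random

def control_rank_simulation(n_pairs=6, n_listeners=30, n_trials=20000, top_k=2):
    """Under the null hypothesis (no audible differences anywhere), every pair
    gets independent random ratings. Count how often the control pair (index 0)
    ends up among the top_k highest mean 'difference' scores."""
    hits = 0
    for _ in range(n_trials):
        means = [sum(random.uniform(0, 10) for _ in range(n_listeners)) / n_listeners
                 for _ in range(n_pairs)]
        rank = sorted(means, reverse=True).index(means[0])  # 0 = highest score
        if rank < top_k:
            hits += 1
    return hits / n_trials

print(f"P(control among top 2 of 6 under the null) ~ {control_rank_simulation():.2f}")
```

Under pure chance the control's expected rank is indeed "average", but it still lands among the top two of six roughly a third of the time, so a single high-scoring control is, by itself, a weak statistic in either direction.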