#41
Posted to rec.audio.high-end
In article , "Buster Mudd" wrote:

> Steven Sullivan wrote:
>
>> Quick-switching simply means the time it takes for the *switching* is
>> short. It does not refer to the time you spend listening to the samples.
>
> Wait, and the "subjectivists" have a problem with *that* ?!?!?! Why would
> anyone even for a moment think that "slow switching" is preferable? I admit
> I've never heard anyone specifically endorse "slow switching", but I keep
> hearing folks griping about problems with "quick switching" comparisons, as
> if the "quick switching" aspect was germane to the problems they have with
> the comparison. But it seems like what they really mean when they complain
> about "quick switching" ABX tests is actually *ANY* blind comparison,
> period. Yes?

No. The confusion comes from the fact that there are those who thought
(think) that quick switching means short musical examples in the test, i.e.
listen to A for 2 seconds, then B for 2 seconds, etc.
#42
Posted to rec.audio.high-end
"Stewart Pinkerton" wrote in message
... On 7 Mar 2006 00:49:37 GMT, wrote: snip One such 'fact' would likely be that, given level-matching to +/- 0.1dB across the audio band, and no gross distortion products, all of human knowledge suggests that the items being compared will show no audible differences. That is not an extraordinary claim, it is the outcome of literally *thousands* of such comparisons, which is why level-matched DBTs are the 'gold standard' in the audio industry, including 'high end' brands like Revel. Just out of curiosity, Stewart, where are the "thousands" of tests you cite for reference purposes? In otherwords, wherein did you manage to confabulate this statistic? snip |
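For scale, the +/- 0.1 dB tolerance quoted above corresponds to an amplitude mismatch of roughly 1.2%, using the standard definition dB = 20 * log10(V1/V2). A short worked conversion (the list of dB values is just for illustration):

```python
def db_to_voltage_ratio(db):
    """Convert a level difference in dB to a voltage (amplitude) ratio,
    using dB = 20 * log10(V1 / V2)."""
    return 10 ** (db / 20.0)

if __name__ == "__main__":
    for db in (0.1, 0.2, 0.5, 1.0):
        r = db_to_voltage_ratio(db)
        print(f"{db:4.1f} dB  ->  ratio {r:.4f}  ({(r - 1) * 100:.2f}% level difference)")
```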
#43
Posted to rec.audio.high-end
On 9 Mar 2006 00:37:21 GMT, "Harry Lavo" wrote:

> "Stewart Pinkerton" wrote in message ...
>
>> On 7 Mar 2006 00:49:37 GMT, wrote:
>>
>>> [snip]
>>
>> One such 'fact' would likely be that, given level-matching to +/- 0.1dB
>> across the audio band, and no gross distortion products, all of human
>> knowledge suggests that the items being compared will show no audible
>> differences. That is not an extraordinary claim, it is the outcome of
>> literally *thousands* of such comparisons, which is why level-matched DBTs
>> are the 'gold standard' in the audio industry, including 'high end' brands
>> like Revel.
>
> Just out of curiosity, Stewart, where are the "thousands" of tests you cite
> for reference purposes? In other words, wherein did you manage to
> confabulate this statistic?

DBTs are used by Harman International, KEF, B&W and other major manufacturers
every working day. Given that they've been doing this for decades, you do the
math.

DBTs are the gold standard in audio comparison for the very simple reason
that they're the most sensitive known test for *real* sonic differences. They
won't of course tell *you* what you want to hear, wherein lies your problem
with them.

Where is your single solitary shred of *evidence* in rebuttal?
--
Stewart Pinkerton | Music is Art - Audio is Engineering
#44
Posted to rec.audio.high-end
Stewart Pinkerton wrote:

> On 9 Mar 2006 00:37:21 GMT, "Harry Lavo" wrote:
>
>> "Stewart Pinkerton" wrote in message ...
>>
>>> On 7 Mar 2006 00:49:37 GMT, wrote:
>>>
>>>> [snip]
>>>
>>> One such 'fact' would likely be that, given level-matching to +/- 0.1dB
>>> across the audio band, and no gross distortion products, all of human
>>> knowledge suggests that the items being compared will show no audible
>>> differences. That is not an extraordinary claim, it is the outcome of
>>> literally *thousands* of such comparisons, which is why level-matched
>>> DBTs are the 'gold standard' in the audio industry, including 'high end'
>>> brands like Revel.
>>
>> Just out of curiosity, Stewart, where are the "thousands" of tests you
>> cite for reference purposes? In other words, wherein did you manage to
>> confabulate this statistic?
>
> DBTs are used by Harman International, KEF, B&W and other major
> manufacturers every working day. Given that they've been doing this for
> decades, you do the math.

That's nice. Do you have access to the data from their tests?

> DBTs are the gold standard in audio comparison for the very simple reason
> that they're the most sensitive known test for *real* sonic differences.

That's great. Now produce the peer-reviewed data from DBTs that tell us all
amps, CD players and cables sound the same, with all the usual conditions,
aside from this nonsense about exempting designs *you* happen to not like.

> They won't of course tell *you* what you want to hear, wherein lies your
> problem with them.

You know this how?

> Where is your single solitary shred of *evidence* in rebuttal?

It isn't scientifically valid, but it might ring a bell since it is your
test.

http://groups.google.com/group/rec.a...d94e22c8f458bf

"I positively identified several amps, the Yamaha was closest to
indistinguishable from the top runners (Krell, Hafler and Audiolab in this
case). The C370 was compared at a later date, to the same Krell transfer
standard. Interestingly, a Mark Levinson 333 also provided a positive result
against the Krell, showing similar treble sharpness. I did not compare Yamaha
and Levinson directly, but that would have been *very* interesting! :-)"

It is just plain weird that you would argue that all amps that are
"competently designed" sound the same when you, of all people, heard
differences under blind conditions. Were you wrong? Is there an amp in this
group that a consumer should have known was incompetently designed? You are
asking for evidence, do you accept your own tests?

Scott
#45
Posted to rec.audio.high-end
Stewart Pinkerton wrote:

> On 9 Mar 2006 00:37:21 GMT, "Harry Lavo" wrote:
>
>> "Stewart Pinkerton" wrote in message ...
>>
>>> On 7 Mar 2006 00:49:37 GMT, wrote:
>>>
>>>> [snip]
>>>
>>> One such 'fact' would likely be that, given level-matching to +/- 0.1dB
>>> across the audio band, and no gross distortion products, all of human
>>> knowledge suggests that the items being compared will show no audible
>>> differences. That is not an extraordinary claim, it is the outcome of
>>> literally *thousands* of such comparisons, which is why level-matched
>>> DBTs are the 'gold standard' in the audio industry, including 'high end'
>>> brands like Revel.
>>
>> Just out of curiosity, Stewart, where are the "thousands" of tests you
>> cite for reference purposes? In other words, wherein did you manage to
>> confabulate this statistic?
>
> DBTs are used by Harman International

IIRC Sean Olive's speaker preference paper involved more than 250
subjects...for just one paper.

--
-S
"If men were angels, no government would be necessary." - James Madison (1788)
#46
Posted to rec.audio.high-end
"Stewart Pinkerton" wrote in message
... On 9 Mar 2006 00:37:21 GMT, "Harry Lavo" wrote: "Stewart Pinkerton" wrote in message ... On 7 Mar 2006 00:49:37 GMT, wrote: snip One such 'fact' would likely be that, given level-matching to +/- 0.1dB across the audio band, and no gross distortion products, all of human knowledge suggests that the items being compared will show no audible differences. That is not an extraordinary claim, it is the outcome of literally *thousands* of such comparisons, which is why level-matched DBTs are the 'gold standard' in the audio industry, including 'high end' brands like Revel. Just out of curiosity, Stewart, where are the "thousands" of tests you cite for reference purposes? In otherwords, wherein did you manage to confabulate this statistic? DBTs are used by Harman International, KEF, B&W and other major manufacturers every working day. Given that they've been doing this for decades, you do the math. DBTs are the gold standard ij audio comparison for the very simple reason that they're the most sensitive known test for *real* sonic differences. They won't of course tell *you* what you want to hear, wherein lies your problem with them. Where is your single solitary shred of *evidence* in rebuttal? Notice that I did not ask if the DBT's were done. I asked where the "evidence" was that thousands had been done that showed no level-matched differences, which is what you claimed. I have your answer. You are guessing at the number done, and you have absolutely no knowledge of what the outcome of the tests is....since they are "insider" and routine tests, and you have no particular access to such. |