#1
Posted to rec.audio.high-end
I was talking to a young audiophile friend of mine on the phone the other day, and it occurred to me that many of his attitudes and misconceptions are the products of a lifetime of reading high-end audio rags and drinking the Kool-Aid that is the high-end manufacturers' endless advertising hype.

The biggest myth of all, and one that is almost universally accepted by the non-technical audiophile community, is the notion that audio, as an electrical signal, is somehow "special". In other words, it's apparently OK for the wire carrying the electrical signals that keep the airliner we're on in the air, and thus keep us alive, to be garden-variety copper wire, terminated with garden-variety connectors and held together with ordinary tin solder, but the wire that carries our music must be single-crystal, oxygen-free copper (or perhaps silver), sheathed in special dielectrics, terminated with platinum connectors fixed with special "audio-quality" silver solder, and costing thousands of dollars per foot!

Add to that the fact that acceptable electronics must cost so much that one would think they were made from Mil-Spec parts, which they aren't (but even then, the parts and other manufacturing costs couldn't begin to justify the selling price of some of this equipment). Even if they were made from Mil-Spec parts, that, in and of itself, would not be any guarantee of better sonic performance. Military and aerospace specifications are aimed at enhanced reliability and repeatability, not at better performance than the industrial-grade specimens of the same parts.

Sure, good engineering will result in better sound, better reliability, and greater longevity in hi-fi gear as in any other manufactured goods, but is there really anything in an MSB Diamond Platinum DAC IV Plus, for instance, to justify its $40,000+ price tag? I doubt it. In fact, in a recent review of Marantz's latest computer/internet audio "appliance" (NA-11S1), the reviewer observed that the Marantz's built-in DAC was virtually indistinguishable, sound-wise, from the MSB Diamond Platinum DAC that he had on hand at the time. The Marantz, BTW, was priced at $3,499 - less than 1/10th the price of the MSB - and not only contained a very good sounding DAC but was also a computer and Internet music server to boot!

The idea that components have to cost an arm and a leg in order to perform at "state-of-the-art" sonic levels is definitely a result of manufacturer greed coupled with the willing compliance of the audiophile press, who compound the hype by parroting the notion that this stuff is sonically superior to cheaper equipment even though there is usually little or nothing in the equipment's design (other than several thousand dollars' worth of custom metalwork in the component's case) to indicate that it uses any better-quality components or design criteria than much similar, but cheaper, gear.

Now, certainly, pride of ownership is a factor in this stuff, and expensive components and "boutique" cables certainly do LOOK the business, and if one has unlimited financial resources and wants to purchase a DAC that costs as much as a new Corvette C7, or a pair of speakers* that cost as much as a new Aston Martin or Maserati, then by all means, be my guest. The tragedy here is not that such expensive equipment exists, but that so many audiophiles are daily frustrated by their heartfelt belief that one must pay these kinds of prices for great sound. It just ain't so, gentlemen.

* I'll cut speaker manufacturers a bit of slack here. This is one area where spending more CAN get you more. A pair of Wilson Alexandria XLF speakers or Magico Q7s are state-of-the-art speaker systems at $200K and $165K respectively, but again, a pair of Martin-Logan CLXs and a pair of Descent i subwoofers at under $30K for the lot still probably represent the most accurate and transparent speaker sound money can buy these days.
#2
Posted to rec.audio.high-end
Audio_Empire wrote:
> I was talking to a young audiophile friend of mine on the phone the other day, and it occurred to me that many of his attitudes and misconceptions are the products of a lifetime of reading high-end audio rags and drinking the Kool-Aid that is the high-end manufacturers' endless advertising hype.

Yes, yes, we know all that, George, and you keep saying it over and over. The main problem is that most of them don't have a clue as to what causes the sound that we hear in a room. They have been talked into the "accuracy" theory of reproduction, so they sit 6 ft away from some monstrosity speaker aimed at their faces, and when it doesn't sound right they are told it is because they haven't spent enough yet on cables made to get all of the frequencies to the other end at the same time, or speakers that need to be phase aligned, or drivers that aren't light enough, or there isn't enough damping in the room, or any of a litany of misguided ideas about sound that make me cringe.

This will continue until they learn something about the differences between live sound and "hi-fi" - how live music puts sound into a room vs. how "hi-fi" does it. I have spoken about this many times here, but no one quite gets it. See me after class.

Gary Eickmeier
#3
Posted to rec.audio.high-end
I cut the whole message because I would have had to quote too much. I have a couple of observations. First, so-called "pro" gear is really good stuff at 1/10th or even 1/100th or 1/1000th of the price of "audiophile" products. Second, Hsu's system of two subs and a pair of bookshelf speakers costs way less than $30K and sounds fantastic.

I spent 20 years with a pair of Apogee Divas driven by a Classe DR-6 preamp and two DR-9 amps. I finally settled on a DVD player to play CDs and a VPI Super Scout Master with a Shelter MM cartridge for vinyl. I could hear differences in various TTs. Not so much in various CD players. The DR-6 had an excellent MM phono input.

For the last couple of years I have been using a TC Electronics Impact Twin driven from a Mac Mini using iTunes, along with Pure Music and Pure Vinyl, feeding the amps of a pair of Hsu ULS-15s and one Classe DR-9 driving the Hsu bookshelf speakers. I am using cheap balanced lines in place of expensive RCA interconnects. I use the mic inputs on the Impact Twin along with Pure Vinyl to digitize vinyl at very high bit rates, probably more than I need. Pure Vinyl/Pure Music also allow for the use of computer-based filters to tune the room. Try that in analog.

I haven't added it up, but I probably have much less than $10K in this system, not counting the VPI TT, which I still own and which cost more than the rest of the system. Okay, I'm old and my ears aren't what they used to be, but I can't say the old system really sounded that much better than the new, and the old system cost five times what I paid for the new - and that was in 1990 dollars versus 2011!
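[Editor's note: the "computer-based filters to tune the room" mentioned above are, at bottom, ordinary digital EQ applied in software. As a rough, hypothetical sketch only - none of this code comes from Pure Vinyl or Pure Music, and the 55 Hz "room mode", the -6 dB cut, and the Q of 4 are invented for illustration (NumPy and SciPy assumed installed) - here is one parametric-EQ band built from the standard RBJ peaking-filter formulas in Python.]

# Minimal sketch (not from the original post): a single "room correction"
# band implemented as an RBJ peaking-EQ biquad, applied to a PCM signal.
# The center frequency, gain, and Q below are made-up illustration values.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Return (b, a) biquad coefficients for a peaking EQ (RBJ cookbook)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 96000                                 # sample rate of the digitized vinyl
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 55 * t)             # stand-in for a recorded signal
b, a = peaking_eq(fs, f0=55, gain_db=-6.0, q=4.0)  # cut a hypothetical 55 Hz room mode
y = lfilter(b, a, x)                       # corrected signal
print(20 * np.log10(np.abs(y[fs // 2:]).max()))    # roughly -6 dB once the filter settles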
#4
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
...

> I was talking to a young audiophile friend of mine on the phone the other day, and it occurred to me that many of his attitudes and misconceptions are the products of a lifetime of reading high-end audio rags and drinking the Kool-Aid that is the high-end manufacturers' endless advertising hype.

If people look and listen, there are saner heads to be read. The internet means that you are no longer lost in any competition you are in with someone who buys ink by the barrel. Furthermore, one can put together a decent education in audio and electronics through the university level with a little googling.

> The biggest myth of all, and one that is almost universally accepted by the non-technical audiophile community, is the notion that audio, as an electrical signal, is somehow "special". In other words, it's apparently OK for the wire carrying the electrical signals that keep the airliner we're on in the air, and thus keep us alive, to be garden-variety copper wire, terminated with garden-variety connectors and held together with ordinary tin solder, but the wire that carries our music must be single-crystal, oxygen-free copper (or perhaps silver), sheathed in special dielectrics, terminated with platinum connectors fixed with special "audio-quality" silver solder, and costing thousands of dollars per foot!

And the corollaries that amplifiers, preamps, DACs, loudspeakers, etc., are "special".

> Add to that the fact that acceptable electronics must cost so much that one would think they were made from Mil-Spec parts, which they aren't (but even then, the parts and other manufacturing costs couldn't begin to justify the selling price of some of this equipment). Even if they were made from Mil-Spec parts, that, in and of itself, would not be any guarantee of better sonic performance. Military and aerospace specifications are aimed at enhanced reliability and repeatability, not at better performance than the industrial-grade specimens of the same parts.

For example, high-end audiophiles turn their noses up at electrolytic capacitors, while there are such things as mil-spec electrolytic capacitors and I've seen them in use.

> Sure, good engineering will result in better sound, better reliability, and greater longevity in hi-fi gear as in any other manufactured goods, but is there really anything in an MSB Diamond Platinum DAC IV Plus, for instance, to justify its $40,000+ price tag?

Or a DAC costing $4,000 or $400, or even more than some that cost $40.

> The idea that components have to cost an arm and a leg in order to perform at "state-of-the-art" sonic levels is definitely a result of manufacturer greed coupled with the willing compliance of the audiophile press, who compound the hype by parroting the notion that this stuff is sonically superior to cheaper equipment even though there is usually little or nothing in the equipment's design (other than several thousand dollars' worth of custom metalwork in the component's case) to indicate that it uses any better-quality components or design criteria than much similar, but cheaper, gear.

> * I'll cut speaker manufacturers a bit of slack here. This is one area where spending more CAN get you more. A pair of Wilson Alexandria XLF speakers or Magico Q7s are state-of-the-art speaker systems at $200K and $165K respectively, but again, a pair of Martin-Logan CLXs and a pair of Descent i subwoofers at under $30K for the lot still probably represent the most accurate and transparent speaker sound money can buy these days.

I dunno about that. There is good evidence to suggest that there has been considerable progress in the price/performance of speakers, and that high-end speakers are just as overpriced as high-end speaker cables.
#5
Posted to rec.audio.high-end
In article ,
"Arny Krueger" wrote: "Audio_Empire" wrote in message ... I was talking to a young audiophile friend of mine on the phone the other day, and it occurred to me that many of his attitudes and misconceptions are the products of a lifetime of reading high-end audio rags and drinking the cool-aid that is the high-end manufacturers' endless advertising hype. If people look and listen, there are saner heads to be read. The internet means that you are no lnger lost in any competition you are in with someone who buys ink by the barrel. Furthermore, one can put together a decent education in audio and electronics through the university level with a little googling. The biggest myth of all, and one that is almost universally accepted by the non-technical audiophile community is this notion that as an electrical signal, audio is somehow "special". In other words, it's apparently OK for the wire carrying the electrical signals that keep the airliner we're on in the air, and thus keeps us alive to be garden-variety copper wire, terminated with garden-variety connectors and held together with ordinary tin solder, but the wire that carries our music must be single-crystal, oxygen-free copper (or perhaps silver) sheathed in special dielectrics, terminated with platinum connectors fixed with special "audio-quality" silver solder and costing thousands of dollars per foot! And the corolaries that amplifiers, preamps, DACs, loudspeakers, etc., are "special". Add to that the fact that acceptable electronics must cost so much that one would think that they were made from Mil-Spec parts, which they aren't. (but even then, the parts and other manufacturing costs couldn't begin to justify the selling price of some of this equipment). Even if they were made from Mil-Spec parts, that, in and of itself, would not be any guarantee of better sonic performance. Military and Aerospace specifications are aimed at enhanced reliability and repeatability, not at better performance than the industrial grade specimens of the same parts. For example, high end audiophiles turn their noses up at electrolytic capacitors, while there are such things as mil-spec electrolytic capacitors and I've seen them in use. Of course there are. Electrolytic capacitors are de riguer for power supplies and the like. Couldn't do without them. But a mil-spec 100 microFared, 50 Volt electrolytic capacitor is the same 100 microFared, 50 Volt electrolytic capacitor as it's commercial equivalent. It just has been tested (and guaranteed) over a wider temperature range and under rigorous vibration and other other environmental tests that the commercial version of the cap hasn't been subjected too. Sure, good engineering will result in better sound, better reliability, and greater longevity in hi-fi gear as in any other manufactured goods, but is there really anything in an MSB Diamond Platinum DAC IV plus, for instance, to justify its $40,000+ price tag? Or a DAC costing $4,000 or $400, or even more than some costs $40. Well, now you're going too far. Many DACs DO sound different (and some better) than others - even in a bias controlled test. However, my point is that these differences are not necessarily tied to the unit's cost. I.E. a $4000 DAC doesn't, by virtue of its cost, necessarily sound better than a $400 DAC. 
In my experience, however, DACs utilizing stereo D/A chips generally "sound" better (and by that I mean that there are aspects of their audio performance, such as soundstage or bass presentation that they do do differently than other designs) than do DACs that "time-share" a single D/A converter chip and those utilizing dual-differential D/As can sound better yet, but I've found no hard-and-fast rules there, either. The idea that components have to cost an arm and a leg in order to perform at "state-of-the-art" sonic levels is definitely a result of manufacturer greed coupled with the willing compliance of the audiophile press who compound the hype by parroting the notion that this stuff is sonically superior to cheaper equipment even though there is usually little or nothing in this equipment's design (other than several thousands of dollars worth of custom metalwork in the component's case) to indicate that it uses any better quality components or design criteria than does much similar, but cheaper, gear. * I'll cut speaker manufacturers a bit of slack here. This is one area where spending more CAN get you more. A pair of Wilson Alexandra XLF speakers or Magico Q7s are state-of-the-art speaker systems at $200K and $165K respectively, but again, a pair of Martin-Logan CLX's and a pair Descent i subwoofers at under $30K for the lot still probably represents the most accurate and transparent speaker sound money can buy these days. I dunno about that. There is good evidence to suggest that there has been considerable progress in the price/performance of speakers, and that high end speakers are just as overpriced as high end speaker cables. I do know about "that" and I can tell you that if you can find a pair of speakers that are as transparent and as accurate as the M-L CLXs for less money or if you can find another pair of speakers that can produce the sheer volume of a symphony orchestra in full song, or pressurize a room with low end like the Wilson Alexandra XLFs AT ANY PRICE, I'll eat my hat! Because as many high-end speakers as I have heard, I haven't come across such a puppy! Sure, the proliferation of computer modeling has narrowed the gap in performance of a lot of mid-priced speakers where many of them sound as good as or perhaps better than so-called state-of-the-art designs of just a few years ago. But the really high-end speakers do things that lesser speakers simply can't do, and you don't need a DBT to hear it either. It is immediately apparent when one is in the presence of such designs. |
#6
Posted to rec.audio.high-end
> I cut the whole message because I would have had to quote too much. I have a couple of observations. First, so-called "pro" gear is really good stuff at 1/10th or even 1/100th or 1/1000th of the price of "audiophile" products. Second, Hsu's system of two subs and a pair of bookshelf speakers costs way less than $30K and sounds fantastic.

I agree with you about pro gear. One can get some excellent-performing amplifiers (especially) from vendors like Peavey, Crown, and Behringer at really bargain prices. Makes one wonder why the high-end amps cost so much. I've heard the Hsu horn speakers with an Hsu sub and I agree that they sound fantastic - especially for the money. BUT (and this is a big but) they are not in the same ballpark as the transparent and hyper-accurate Martin Logan CLX (with suitable subwoofers), or the sheer amount of sound produced by the Wilson Alexandria XLF. (I think a speaker system that could combine the transparency, accuracy, and low distortion of the M-L CLX with the dynamic range and the ability to load a room with bottom end like the Wilson Alexandria XLF would be pretty close to the ideal loudspeaker.)

> I spent 20 years with a pair of Apogee Divas driven by a Classe DR-6 preamp and two DR-9 amps. I finally settled on a DVD player to play CDs and a VPI Super Scout Master with a Shelter MM cartridge for vinyl. I could hear differences in various TTs. Not so much in various CD players. The DR-6 had an excellent MM phono input.

VPI turntables are VERY good, I agree. I used to have a pair of Apogee Signatures, and aside from the fragility of the "cabinets" (the speakers were VERY heavy, and if you tried to move them by grabbing the MDF cosmetic surrounds, those surrounds would break), I thought they sounded excellent (great bass). Alas, a piece of magnet came loose inside and one of them started to rattle - so I sold them.

> For the last couple of years I have been using a TC Electronics Impact Twin driven from a Mac Mini using iTunes, along with Pure Music and Pure Vinyl, feeding the amps of a pair of Hsu ULS-15s and one Classe DR-9 driving the Hsu bookshelf speakers. I am using cheap balanced lines in place of expensive RCA interconnects.

Boutique cables are the biggest rip-off in audio. The companies selling these useless money-pits should be prosecuted for fraud!

> I use the mic inputs on the Impact Twin along with Pure Vinyl to digitize vinyl at very high bit rates, probably more than I need. Pure Vinyl/Pure Music also allow for the use of computer-based filters to tune the room. Try that in analog.

Can't be done too easily.
#7
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
...

> Of course there are. Electrolytic capacitors are de rigueur for power supplies and the like; couldn't do without them. But a mil-spec 100 microfarad, 50 volt electrolytic capacitor is the same 100 microfarad, 50 volt electrolytic capacitor as its commercial equivalent. It has just been tested (and guaranteed) over a wider temperature range and under rigorous vibration and other environmental tests that the commercial version of the cap hasn't been subjected to.

Not always. In the equipment I worked with, "electrolytic capacitor" usually meant something with tantalum in it. While small tantalum capacitors are common, finding parts of 1,000 uF and up is not very common.

>> Sure, good engineering will result in better sound, better reliability, and greater longevity in hi-fi gear as in any other manufactured goods, but is there really anything in an MSB Diamond Platinum DAC IV Plus, for instance, to justify its $40,000+ price tag?

>> Or a DAC costing $4,000 or $400, or even more than some that cost $40.

> Well, now you're going too far. Many DACs DO sound different (and some better) than others - even in a bias-controlled test.

Yes. At this point in time, the point where DAC chips become sonically transparent lies at about a dollar a channel or less. For example, I ended up with a motherboard sound facility that produced no output, so an external card was the easiest solution. For less than $30 I obtained an audio interface that, per independent tests, was the equal of an audio interface that cost me $399 in 2001.

> However, my point is that these differences are not necessarily tied to the unit's cost. I.e., a $4,000 DAC doesn't, by virtue of its cost, necessarily sound better than a $400 DAC. In my experience, however, DACs utilizing stereo D/A chips generally "sound" better (and by that I mean that there are aspects of their audio performance, such as soundstage or bass presentation, that they do differently than other designs) than DACs that "time-share" a single D/A converter chip, and those utilizing dual-differential D/As can sound better yet, but I've found no hard-and-fast rules there, either.

Your distaste for the kind of DBT that professionals use is well known.

>> I dunno about that. There is good evidence to suggest that there has been considerable progress in the price/performance of speakers, and that high-end speakers are just as overpriced as high-end speaker cables.

> I do know about "that," and I can tell you that if you can find a pair of speakers that are as transparent and as accurate as the M-L CLXs for less money, or if you can find another pair of speakers that can produce the sheer volume of a symphony orchestra in full song, or pressurize a room with low end like the Wilson Alexandria XLFs, AT ANY PRICE, I'll eat my hat! Because as many high-end speakers as I have heard, I haven't come across such a puppy!

Given your track record with blind, level-matched tests, how would you know?

> Sure, the proliferation of computer modeling has narrowed the gap in performance of a lot of mid-priced speakers, where many of them sound as good as or perhaps better than so-called state-of-the-art designs of just a few years ago. But the really high-end speakers do things that lesser speakers simply can't do, and you don't need a DBT to hear it either. It is immediately apparent when one is in the presence of such designs.

True for subwoofers, but even there the price/performance has migrated downward. In the 1980s there simply were no subwoofer drivers with 30 mm Xmax. Today, one can obtain such a thing and get change from $400. Of course, with the usual markups and accessories such as a built-in power amp, the street price of the installable system is still $2,000. But that is chump change by high-end audio standards.
#8
Posted to rec.audio.high-end
In article ,
"Arny Krueger" wrote: "Audio_Empire" wrote in message ... Of course there are. Electrolytic capacitors are de riguer for power supplies and the like. Couldn't do without them. But a mil-spec 100 microFared, 50 Volt electrolytic capacitor is the same 100 microFared, 50 Volt electrolytic capacitor as it's commercial equivalent. It just has been tested (and guaranteed) over a wider temperature range and under rigorous vibration and other other environmental tests that the commercial version of the cap hasn't been subjected too. Not always. In the equipment I worked with electrolytic capacitor usually meant something with tantalium in it. While small tantalium capacitors are common, finding parts of 1,000 uF and up are not very common. Many electrolytics are tantalum, but certainly not all. many are aluminum, especially the bigger ones. But tantalum or aluminum have nothing to do with Mil-Spec vs commercial spec or their prices. Sure, good engineering will result in better sound, better reliability, and greater longevity in hi-fi gear as in any other manufactured goods, but is there really anything in an MSB Diamond Platinum DAC IV plus, for instance, to justify its $40,000+ price tag? Or a DAC costing $4,000 or $400, or even more than some costs $40. Well, now you're going too far. Many DACs DO sound different (and some better) than others - even in a bias controlled test. Yes. In this point in life, the point where DAC chips are sonically transparent lies about a dollar a channel or less. For example, I ended up with a motherboard sound facility that produced no output, so an external card was the easiest solution. For less than $30 I obtained an audio interface that per independent tests was the equal of an audio interfact that cost me $399 in 2001. However, my point is that these differences are not necessarily tied to the unit's cost. I.E. a $4000 DAC doesn't, by virtue of its cost, necessarily sound better than a $400 DAC. In my experience, however, DACs utilizing stereo D/A chips generally "sound" better (and by that I mean that there are aspects of their audio performance, such as soundstage or bass presentation that they do do differently than other designs) than do DACs that "time-share" a single D/A converter chip and those utilizing dual-differential D/As can sound better yet, but I've found no hard-and-fast rules there, either. Your distaste for the kind of DBT that professionals use is well known. That's only because it seems to often yield little useful information. For instance, two DAC units that sound identical in a bias controlled DBT, when connected to another system with more resolving power clearly showed that one had much better and tighter bass than the other. The DBT didn't show that because the system used didn't have great bass itself. In another similar test of DAC units, two otherwise identical sounding DACs yielded very different soundstage and imaging results when connected to a system that imaged well. I agree that all modern DACs sound acceptable, but differences in imaging, bottom-end performance and even top-end performance do exist and will only show up on a DBT when the system used for the DBT is of sufficient resolving power to highlight these aspects of performance. Otherwise, they go by unnoticed and leave the test participants with the incorrect conclusion that everything sounds the same. I dunno about that. 
There is good evidence to suggest that there has been considerable progress in the price/performance of speakers, and that high end speakers are just as overpriced as high end speaker cables. I do know about "that" and I can tell you that if you can find a pair of speakers that are as transparent and as accurate as the M-L CLXs for less money or if you can find another pair of speakers that can produce the sheer volume of a symphony orchestra in full song, or pressurize a room with low end like the Wilson Alexandra XLFs AT ANY PRICE, I'll eat my hat! Because as many high-end speakers as I have heard, I haven't come across such a puppy! Given your track record with blind, level matched tests, how would you know? Given your track record of not recognizing good audio performance when you hear it, how would you? Sure, the proliferation of computer modeling has narrowed the gap in performance of a lot of mid-priced speakers where many of them sound as good as or perhaps better than so-called state-of-the-art designs of just a few years ago. But the really high-end speakers do things that lesser speakers simply can't do, and you don't need a DBT to hear it either. It is immediately apparent when one is in the presence of such designs. True for subwoofers, True for all types of speakers. but even there the price performance has migrated downward. Like I said. Modern modest-priced speakers can perform at levels of performance undreamed of 20 years ago. In the 1980s there simply were no subwoofer drivers with 30 mm Xmax. Today, one can obtain such a thing and get change from $400. Of course with the usual markups and accessories such as built-in power amp, the street price of the installable system is still $2,000. But that is chump change by high end audio standards. By $400, I take it you are talking about the raw drivers? If so, I concur. --- news://freenews.netfront.net/ - complaints: --- |
#9
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
...

>> Your distaste for the kind of DBT that professionals use is well known.

> That's only because it seems to often yield little useful information.

That would probably be due to differences in one's definition of "useful information."

> For instance, two DAC units that sounded identical in a bias-controlled DBT, when connected to another system with more resolving power, clearly showed that one had much better and tighter bass than the other.

Perfect example of faulting DBT procedures when the problem might have been at a higher level - the choice of system used for the evaluation. Apparently someone has been reading a book where it is written in stone that DBTs can only be done in inadequate systems, and more specifically never the system at hand. ;-) Furthermore, there is no evidence that the test in the second system was bias-controlled, so we appear to have a surreptitious raising of the perceived merit of a sighted evaluation over the DBT. In short, I see evidence of overwhelming false logic, bias, and denial in the above comments.

> The DBT didn't show that because the system used didn't have great bass itself.

Not at all the fault of the DBT, yet it is apparently being presented here as a global limitation of DBTs.

> In another similar test of DAC units, two otherwise identical-sounding DACs yielded very different soundstage and imaging results when connected to a system that imaged well.

Same mistake, just different systems and a different related system parameter.
#10
Posted to rec.audio.high-end
Audio_Empire wrote:
> I agree that all modern DACs sound acceptable, but differences in imaging, bottom-end performance, and even top-end performance do exist and will only show up in a DBT when the system used for the DBT is of sufficient resolving power to highlight these aspects of performance. Otherwise, they go by unnoticed and leave the test participants with the incorrect conclusion that everything sounds the same.

Of course you have to partner the DAC with high-quality components for any test. But without the DBT, the test participants will be left with the incorrect conclusion that the units sound different. Better still, your claim that some system has better "resolving power" can be tested with a DBT. (With proper blinding, statistical controls, etc. This should go without saying, but often needs to be repeated.)

Andrew.
#11
Posted to rec.audio.high-end
On Wednesday, September 25, 2013 9:50:39 AM UTC-7, Andrew Haley wrote:
> Audio_Empire wrote:
>> I agree that all modern DACs sound acceptable, but differences in imaging, bottom-end performance, and even top-end performance do exist and will only show up in a DBT when the system used for the DBT is of sufficient resolving power to highlight these aspects of performance. Otherwise, they go by unnoticed and leave the test participants with the incorrect conclusion that everything sounds the same.
>
> Of course you have to partner the DAC with high-quality components for any test. But without the DBT, the test participants will be left with the incorrect conclusion that the units sound different. Better still, your claim that some system has better "resolving power" can be tested with a DBT. (With proper blinding, statistical controls, etc. This should go without saying, but often needs to be repeated.)
>
> Andrew.

With any test of audibility one has to implement controls for false positives and false negatives. Bias controls against a false positive are the easy part. Bias controls against false negatives can be a bit trickier. Also, one has to test the test for sensitivity. That is not so easy either.
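[Editor's note: to make the point about controls concrete - in a listening DBT of the ABX sort, the false-positive control is usually just an exact binomial test on the trial outcomes, and the false-negative side ("testing the test") is a question of statistical power. The Python sketch below is only an illustration with invented numbers (a 14-out-of-20 score and a hypothetical listener who is right 70% of the time); it does not describe any test mentioned in this thread.]

# Hypothetical ABX example (numbers invented): a listener gets 14 of 20
# trials right. Is that evidence of an audible difference, and how many
# trials would a test need to reliably catch a listener who is right
# only 70% of the time?
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value of an ABX run."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# False-positive control: chance of scoring 14/20 or better by pure guessing.
print("p-value for 14/20:", round(binom_tail(14, 20), 4))  # ~0.058, not quite significant

# Sensitivity ("test the test"): power to detect a listener who is right 70%
# of the time, using the usual p < 0.05 criterion, for various trial counts.
for n in (16, 20, 40, 80):
    # smallest score that would be declared significant at p < 0.05
    k_crit = next(k for k in range(n + 1) if binom_tail(k, n) < 0.05)
    power = binom_tail(k_crit, n, p=0.7)
    print(f"n={n}: need {k_crit}+ correct, power ~= {power:.2f}")

# The short runs (16 or 20 trials) only catch such a listener less than half
# the time - a null result from a short test says much less than a positive one.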
#12
Posted to rec.audio.high-end
In article ,
Andrew Haley wrote:

> Audio_Empire wrote:
>> I agree that all modern DACs sound acceptable, but differences in imaging, bottom-end performance, and even top-end performance do exist and will only show up in a DBT when the system used for the DBT is of sufficient resolving power to highlight these aspects of performance. Otherwise, they go by unnoticed and leave the test participants with the incorrect conclusion that everything sounds the same.
>
> Of course you have to partner the DAC with high-quality components for any test. But without the DBT, the test participants will be left with the incorrect conclusion that the units sound different. Better still, your claim that some system has better "resolving power" can be tested with a DBT. (With proper blinding, statistical controls, etc. This should go without saying, but often needs to be repeated.)

Of course it can. But often it is not. My point was that if a system doesn't show the differences between two components, then those differences go unnoticed in a DBT. Many here seem to think that it's all down to the specs of the DAC chips (or modules) themselves. The power supplies and the analog stages probably have just as much to do with a DAC's sound as does the D/A converter itself.
#13
Posted to rec.audio.high-end
On Thursday, September 26, 2013 11:39:58 AM UTC-7, ScottW wrote:
> On Wednesday, September 25, 2013 10:22:42 AM UTC-7, Scott wrote:
>> On Wednesday, September 25, 2013 9:50:39 AM UTC-7, Andrew Haley wrote:
>> With any test of audibility one has to implement controls for false positives and false negatives. Bias controls against a false positive are the easy part. Bias controls against false negatives can be a bit trickier. Also, one has to test the test for sensitivity. That is not so easy either.
>
> I never understood this point. If one doesn't think they hear a difference... what is the point of proving that they do? I would suggest that a person who doesn't think they can hear a difference is simply not a good subject for such a test. Even if you don't believe that they can't hear a difference, what is the point in trying to prove they do? All you could prove with great effort is that they're being deceptive, which would only disqualify them from the test... something you should have concluded when they said they can't hear a difference.
>
> Also, consider the example of AE's claim of audible artifacts on all MP3s, regardless of bit rate. I don't hear them on high quality VBR files. I might with training be able to discern what he's talking about... but I'd rather not.
>
> ScottW

It is most often the people who feel that no differences exist who are the most vocal about the need for DBTs. So if these folks want to conduct tests, that is their prerogative. But if they want to do a good job of it, they need to control for same-sound biases and test the test for sensitivity. Otherwise it's just a show.
#14
Posted to rec.audio.high-end
In article ,
ScottW wrote:

> On Wednesday, September 25, 2013 10:22:42 AM UTC-7, Scott wrote:
>> On Wednesday, September 25, 2013 9:50:39 AM UTC-7, Andrew Haley wrote:
>> With any test of audibility one has to implement controls for false positives and false negatives. Bias controls against a false positive are the easy part. Bias controls against false negatives can be a bit trickier. Also, one has to test the test for sensitivity. That is not so easy either.
>
> I never understood this point. If one doesn't think they hear a difference... what is the point of proving that they do?

For the individual, there would be no such point, but the purpose of a bias-controlled test is to obtain a statistically significant result as to whether there actually exist audible differences or not. Humans are very subjective creatures. We can talk ourselves (either consciously or subconsciously) into hearing - or not hearing - a myriad of things just because we WANT to hear or not hear them. For instance, if a guy spends $4K on a pair of interconnects to go between his preamp and his amp, believe me, they ARE going to be the biggest improvement he ever experienced with his system. Even if in a subsequent DBT he can't tell the difference between his new $4K babies and a set of Radio Shack $5 specials, he will swear that the high-priced cables improved his system's sound. There are people who post here regularly who have convinced themselves that everything pretty much sounds the same - even when a DBT shows otherwise. But they have painted themselves so deeply into that corner that they REFUSE to hear differences even when those differences are very easy for anyone to hear because they are so gross (like with speakers).

> I would suggest that a person who doesn't think they can hear a difference is simply not a good subject for such a test.

Yes. That is true, but still, that's something that is hard to determine beforehand.

> Even if you don't believe that they can't hear a difference, what is the point in trying to prove they do? All you could prove with great effort is that they're being deceptive, which would only disqualify them from the test... something you should have concluded when they said they can't hear a difference.

Yes, but if I understand your question correctly, the way that properly executed DBTs are designed (or at least the way I understand it), one or two tin-eared listeners out of many, over many tries, won't appreciably alter the results. Also, most people who take part in these tests probably don't know beforehand whether or not they can hear a difference. Some, of course, will go into such a test determined to NOT hear a difference even when one exists. Statistically, their results won't make any difference to the outcome either. The general wisdom with DBTs seems to be that one either hears a difference between the devices under test or one doesn't. I'm convinced that in most cases a statistically positive result can probably be very accurate; I'm less sure about a statistically negative result.

> Also, consider the example of AE's claim of audible artifacts on all MP3s, regardless of bit rate. I don't hear them on high quality VBR files. I might with training be able to discern what he's talking about... but I'd rather not.

I don't blame you there. I wish I didn't hear them; it would make things much easier, but I do hear them. I have trained myself to be able to listen to Internet radio as long as the data rate is 128 kbps or greater - on SPEAKERS. I still can't stand to listen on headphones. I convinced myself that Internet radio is basically no more flawed than FM. It's just that the flaws are different. I.R. has compression artifacts, and FM has multipath, dynamic range compression, and hard limiting; plus, even full quieting on FM isn't all that silent. I.R., OTOH, is very quiet (at least), and the better feeds aren't compressed or hard limited. That makes the digital compression artifacts more palatable.

> ScottW
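[Editor's note: Audio_Empire's claim above - that one or two tin-eared (or agenda-driven) listeners out of many won't appreciably alter a panel's results - is easy to sanity-check with a toy simulation. All of the numbers below are invented for illustration (a 16-listener panel, 20 trials each, and a hypothetical 75% per-trial hit rate for listeners who genuinely hear the difference); this models no actual test from the thread.]

# Sketch: how much do a few random-guessing panelists dilute a group ABX score?
# All numbers are hypothetical. Listeners who hear the difference are modeled
# as being right 75% of the time; "guessers" are right 50% of the time.
import random

random.seed(1)

def panel_score(n_hearers, n_guessers, trials=20, p_hear=0.75):
    """Total correct answers for one simulated panel run."""
    correct = 0
    for _ in range(n_hearers):
        correct += sum(random.random() < p_hear for _ in range(trials))
    for _ in range(n_guessers):
        correct += sum(random.random() < 0.5 for _ in range(trials))
    return correct

runs = 2000
for guessers in (0, 2, 4):
    hearers = 16 - guessers
    total_trials = 16 * 20
    avg = sum(panel_score(hearers, guessers) for _ in range(runs)) / runs
    print(f"{guessers} guessers: mean {avg:.0f}/{total_trials} correct "
          f"({avg / total_trials:.0%})")

# Expected output (approximately): 75%, 72%, 69% correct. The pooled score
# drifts toward chance as guessers are added, so a strong effect still stands
# out, but a subtle effect can be washed out - which is the flip side of the
# argument about null results.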
#15
Posted to rec.audio.high-end
On Thursday, September 26, 2013 12:33:29 PM UTC-7, Scott wrote:
> On Thursday, September 26, 2013 11:39:58 AM UTC-7, ScottW wrote:
>> On Wednesday, September 25, 2013 10:22:42 AM UTC-7, Scott wrote:
>> On Wednesday, September 25, 2013 9:50:39 AM UTC-7, Andrew Haley wrote:
>> With any test of audibility one has to implement controls for false positives and false negatives. Bias controls against a false positive are the easy part. Bias controls against false negatives can be a bit trickier. Also, one has to test the test for sensitivity. That is not so easy either.
#16
Posted to rec.audio.high-end
Audio_Empire wrote:
> That's true, and to my mind it makes DBT null results more than a little suspect. This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science.

What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this?

> OTOH, if the premise of the test is simple enough (like listening to wires), I think they are useful when they return an (inevitable) null result, but for more complex things such as D-to-A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable.

Why should it be? The same tests apply to a DAC (which should be perfectly transparent in a bypass test) and a wire (which should also be perfectly transparent). It's not about "hard science", it's about honesty:

"As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..." - J. Gordon Holt, Stereophile, posted Nov 10, 2007

Andrew.
#17
Posted to rec.audio.high-end
On Friday, September 27, 2013 7:04:32 AM UTC-7, Andrew Haley wrote:
> Audio_Empire wrote:
>> That's true, and to my mind it makes DBT null results more than a little suspect. This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science.

> What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this?

I am sure he isn't. But weekend warrior science isn't real science. So if one wants to wave the science flag, they need to have some legitimate science. That means peer-reviewed published tests.

>> OTOH, if the premise of the test is simple enough (like listening to wires), I think they are useful when they return an (inevitable) null result, but for more complex things such as D-to-A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable.

> Why should it be? The same tests apply to a DAC (which should be perfectly transparent in a bypass test) and a wire (which should also be perfectly transparent). It's not about "hard science", it's about honesty:
>
> "As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..." - J. Gordon Holt, Stereophile, posted Nov 10, 2007

The high-end audio community doesn't have a say in submitting to real scientific scrutiny. If real scientists want to test claims in a scientific manner and publish the results in a peer-reviewed scientific journal, there ain't nothin the high-end audio community can do about it. Likewise, it is not on the high-end audio community to try to be what they are not: legitimate scientific researchers. By the way, the man you quote, J. Gordon Holt, was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. He also reported hearing differences between cables and digital playback devices. Go figure....
#18
Posted to rec.audio.high-end
"Scott" wrote in message
...

> On Friday, September 27, 2013 7:04:32 AM UTC-7, Andrew Haley wrote:
>> Audio_Empire wrote:
>> That's true, and to my mind it makes DBT null results more than a little suspect.

As compared to sighted evaluations, where all results are totally suspect.

>> This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science.

Tell that to the Acoustical Society of America! Their motto is "Acoustics is the science of sound," which of course includes audibility.

>> What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this?

So it would seem, but I suspect this is based more on a lack of familiarity with science.

> I'm sure he isn't. But weekend warrior science isn't real science.

Except it is. Here are 5 amateur scientists and their discoveries:

Michael Faraday - discovered diamagnetism, electrolysis, and electromagnetic induction.
Gregor Mendel - discovered genetics while his day job was in organized religion.
Robert Evans - various significant astronomical discoveries while his day job was also in organized religion.
Albert Einstein - discovered relativity when his day job was being a low-level clerk in a government office.
Thomas Edison - various inventions related to the telegraph while his day job was selling newspapers on a train in Michigan.

> So if one wants to wave the science flag, they need to have some legitimate science. That means peer-reviewed published tests.

Excluded middle argument. It's like saying that in order to call yourself an automobile racer you have to win the Indy 500.

>> OTOH, if the premise of the test is simple enough (like listening to wires), I think they are useful when they return an (inevitable) null result, but for more complex things such as D-to-A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable.

Actually, with modern DACs null results are all you get. That's very reliable, no?

>> "As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..." - J. Gordon Holt, Stereophile, posted Nov 10, 2007

> The high-end audio community doesn't have a say in submitting to real scientific scrutiny. If real scientists want to test claims in a scientific manner and publish the results in a peer-reviewed scientific journal, there ain't nothin the high-end audio community can do about it.

This has of course happened on many occasions, with embarrassing results for the high-enders.

> Likewise, it is not on the high-end audio community to try to be what they are not: legitimate scientific researchers.

Does this give them a pass to use the results of testing procedures known to be highly inaccurate as justification for making purchase decisions?
#19
Posted to rec.audio.high-end
Scott wrote:
> The high-end audio community doesn't have a say in submitting to real scientific scrutiny.

What does this mean? That high-end audio enthusiasts can't do things scientifically? Because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments? No: the whole point of science is that if an experiment is done properly, the results will be valid no matter who does the experiment. You don't even have to own a lab coat. All you have to do is not mess it up.

> By the way, the man you quote, J. Gordon Holt, was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. He also reported hearing differences between cables and digital playback devices. Go figure....

And he saw the light. Good for him.

Andrew.
#20
Posted to rec.audio.high-end
On Friday, September 27, 2013 11:58:58 AM UTC-7, Arny Krueger wrote:
"Scott" wrote in message=20 =20 ... =20 On Friday, September 27, 2013 7:04:32 AM UTC-7, Andrew Haley wrote: =20 Audio_Empire wrote: =20 =20 =20 That's true and to my mind it makes DBT null results more than a =20 little suspect. =20 =20 =20 As compared to sighted evaluations where all results are totally suspect. =20 =20 =20 This kind of testing [the double-blind test] seems =20 to have been "borrowed" from the hard sciences (drug testing, =20 hypothesis testing, etc.) and I don't consider listening a hard =20 science. =20 =20 =20 Tell that to the Acoustical Society of America! Their motto is "Acoustics= is=20 =20 the science of sound.", which of course includes audibility. =20 =20 =20 What does this even mean? The question of audibility is a scientific =20 one, and can be verified scientifically. Are you denying this? =20 =20 =20 So it would seem but I suspect this is more based in a lack of familiarit= y=20 =20 with science. =20 =20 =20 I'm sure he isn't. But weekend warrior science isn't real science. =20 =20 =20 Except it is. Here are 5 amateur scientists and their discoveries: =20 =20 =20 Michael Faraday - discovered diamagnetism, electrolysis, and electromagne= tic=20 =20 induction. =20 Gregor Mendel - discovered genetics while his day job was in organized=20 =20 religion =20 Robert Evans - various significant astronomical discoveries while his day= =20 =20 job was also in organized religion =20 Albert Einstein - discovered relativity when his day job was being a low= =20 =20 level clerk in a government office. =20 Thomas Edison - various inventions related to the telegraph while his day= =20 =20 job was selling newspapers on a train in Michigan. Anyone can discover stuff. It's nice, it's cool, it's unusual and most of a= ll it doesn't fall under the umbrella of legitimate science until it has en= dured the rigors that all science has to endure. So they really aren't exce= ptions in the end. =20 =20 =20 So if one wants to wave the science flag they need to have some=20 =20 legitimate science. that means peer reviewed published tests. =20 =20 =20 Excluded middle argument. Its like saying that in order to call yourself = an=20 =20 automobile racer you have to win the Indy 500. There is no middle argument in real science. It's either gone through the r= igors of proper scientific protocols or it is junk. There is no middle grou= nd. With middle ground you end up with cold fusion. =20 =20 =20 OTOH, if the premise of the test is simple enough, (like listening =20 to wires) I think they are useful when they return a (inevitable) =20 null result, but for more complex things such as D to A conversion, =20 amplifier or preamplifier sound, etc., the return of a null result =20 is far less reliable. =20 =20 =20 Actually, with modern DACs null results are all you get. That's very=20 =20 reliable, no? Depends on the methodology. But the fact is that it is not all you get unle= ss you cherry pick. Cherry picking is very unscientific. "modern DACs?" wha= t is your cut off date for "modern?" =20 =20 =20 "As far as the real world is concerned, high-end audio lost its =20 credibility during the 1980s, when it flatly refused to submit to the =20 kind of basic honesty controls (double-blind testing, for example) =20 that had legitimized every other serious scientific endeavor since =20 Pascal. [This refusal] is a source of endless derisive amusement among =20 rational people and of perpetual embarrassment for me..." =20 J. 
Gordon Holt, Stereophile Posted: Nov 10, 2007 =20 =20 =20 High end audio community doesn't have a say so in submitting to real=20 =20 scientific scrutiny. If real scientists want to test claims in a=20 =20 scientific manner and publish the results in a peer reviewed scientific= =20 =20 journal there ain't nothin the high end audio community can do about it= .. =20 =20 =20 This has of course happened on many occasions, with embarassing results f= or=20 =20 the high-enders. Many occasions? really? can you cite the scientifically peer reviewed resul= ts and provide some sort of quotes of these "embarrassing results?" My unde= rstanding is that science for he most part just doesn't waste precious time= and resources on audiophilia. And the folks in the business of audio that = are doing the sort of testing that would pass peer review tend to keep thei= r work under wraps. I do not know of any peer reviewed scientific studies o= n things like amplifier sound or cable sound or the sound of commercial dig= ital players. But if there are, as you claim, many of them please fill us i= n on the details. And please don't ask me to go buy some article from the A= ESJ that may or may not be relevant. I don't want to spend 25 bucks just to= find out you cited a completely irrelevant article.=20 =20 =20 =20 =20 =20 Likewise it is not on the high end audio community to try to be what the= y=20 =20 are not, legitimate scientific researchers. =20 =20 =20 Does this give them a pass to the results of testing procedures known to = be=20 =20 highly inaccurate as justification for making purchase decisions? Purchasing decisions are on the consumer. If any given manufacturer is lyin= g about the content or objective performance of their gear that is a proble= m. If they are claiming it is subjectively better than the competition then= it's on the consumer to audition the gear for themselves and decide for th= emselves.=20 |
#21
Posted to rec.audio.high-end
On Friday, September 27, 2013 12:00:48 PM UTC-7, Andrew Haley wrote:
> Scott wrote:
>> The high-end audio community doesn't have a say in submitting to real scientific scrutiny.
>
> What does this mean?

It means that makers of high-end equipment have no control over what scientists choose to test.

> That high-end audio enthusiasts can't do things scientifically? Because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments?

The bottom line is that legitimate science has clear-cut standards, and if the weekend warrior doesn't meet those standards then their tests are considered anecdotal in nature, are junk in the world of real science, and will never be added to the collective body of scientifically valid research. That is the reality of the situation. High-end enthusiasts can do all the DBTs they want, but until they are subjected to peer review they are junk in the eyes of real science.

> No: the whole point of science is that if an experiment is done properly, the results will be valid no matter who does the experiment.

Yeah, if it is done "properly." And science has a protocol for determining this. It's called peer review, and if an experiment hasn't endured the peer review process it remains anecdotal and junk in the eyes of real science.

> You don't even have to own a lab coat. All you have to do is not mess it up.

And then actually subject it to peer review. Otherwise we don't know you didn't mess up.

>> By the way, the man you quote, J. Gordon Holt, was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. He also reported hearing differences between cables and digital playback devices. Go figure....

> And he saw the light. Good for him.

What light? He never recanted any of *his* subjective reviewing.
#22
Posted to rec.audio.high-end
In article ,
ScottW wrote: On Thursday, September 26, 2013 4:48:40 PM UTC-7, Audio_Empire wrote: In article , ScottW wrote: I never understood this point. If one doesn't think they hear a difference...what is point of proving that they do? For the individual, there would be no such point, but the purpose of a bias controlled test is to obtain a statistically significant result as to whether there actually exists audible differences or not. Humans are very subjective creatures. We can talk ourselves (either consciously, or subconsciously) into hearing - or not hearing - a myriad of things just because we WANT to hear or not hear them. Exactly. So how do you expect to get a rational response from a person who hears no difference in identifying one (same to him/her) sound from another? It's not a reasonable request. Well, theoretically, if the listener does not know what he is listening to, then he leaves his expectational biases at the door. For instance, if he knows that one cable he is auditioning costs thousands of dollars and the other costs $5, then he is going to hear that the expensive one sounds better - every time. But when he doesn't know what he's listening to, and just knows that he is comparing two cables, then it comes down to whether or not he can detect when the two samples are switched in and out of the test system. If he can't over a large number of tries, then it is assumed that no difference between the two sample exists, and if he can tell the difference over a statistically high number of tries, then it is assumed that some difference does exist. But all of this depends so much on how the tests are set-up and run, and the environment in which they are held, and who is participating (what are the p articipant's personal agendas, if any? There are people who post here who I wouldn't let participate in a DBT, because their personal agenda is to find NO difference between any two of anything). All of this makes DBTs of audio gear somewhat suspect in my mind. For instance, if a guy spends $4K on a pair of interconnects to go between his preamp and his amp, believe me, they ARE going to be the biggest improvement he ever experienced with his system. Even if in a subsequent DBT he can't tell the difference between his new $4K babies and a set of Radio-Shack $5 specials, he will swear that the high-priced cables improved his system's sound. There are people who post here regularly who have convinced themselves that everything pretty much sounds the same - even when a DBT shows otherwise. But they have painted themselves so deeply into that corner, that they REFUSE to hear differences even when those differences are very easy for anyone to hear because they are so gross (like with speakers). And how does a test overcome such a refusal? You can't make a dishonest subject honest when all they need do is fabricate random responses. Yes that's true. As far as I'm concerned, it makes DBTs for audio somewhat suspect as we're not dealing with concrete results. In medicine, DBTs are used routinely to test new drugs. There are usually two groups, one of which gets the real drug, and the other (called the control group) gets a placebo. The people taking part in the test don't know which group they are in and the people dispensing the drugs don't know which participants are getting the placebo and which are getting the real drug. Someone, way up the line knows which is which, but even they just know the participants by number - not by name. 
The results of these tests compare results with the control group to see if the new drug is statistically effective. IOW, either the drug-taking group has a change in symptoms compared with the control group, or they don't. The results are pretty unambiguous, as there is simply no way to fake a result. Audio relies on people's impressions, and there is no way to really weed out those people who have an agenda or who are basically dishonest in their approach to hearing audio components.

Plus...when they produce a null and you produce a positive...you get to claim the golden ear with test results to prove it.

Oh yeah, and it's done all the time, I'm sure of it. Those who believe firmly in the Julian Hirsch philosophy that everything sounds the same seem to put more stock in DBTs than do those who believe that all audio equipment sounds different. This dichotomy is further confused by the fact that, in some cases, the DBT results and physics agree - like with cables and interconnects. All DBTs with which I'm familiar always return a null result. Physics says that a wire is a conductor and in lengths used in domestic audio situations can have NO effect on the signal passing through that conductor. Measurements confirm this. But that doesn't mean that because DBTs accurately show that cable "sound" is bogus, they are equally accurate when they give a null result with more complex systems.

I would suggest that a person who doesn't think they can hear a difference is simply not a good subject for such a test.

Yes. That is true, but still, that's something that is hard to determine beforehand. It's hard to ask a person...do you think you hear a difference sighted?

Exactly. Even if you don't believe that they can't hear a difference, what is the point in trying to prove they do? All you could prove with great effort is that they're being deceptive, which would only disqualify them from the test....something you should have concluded when they said they can't hear a difference.

Yes, but if I understand your question correctly, the way that properly executed DBTs are designed (or at least the way I understand it), one or two tin-eared listeners out of many and over many tries won't appreciably alter the results. Also, most people who take part in these tests probably don't know beforehand whether or not they can hear a difference. Some, of course, will go into such a test determined to NOT hear a difference even when one exists. Statistically, their results won't make any difference to the outcome either. The general wisdom with DBTs seems to be that one either hears a difference between the devices under test or one doesn't. I'm convinced that in most cases a statistically positive result can probably be very accurate; I'm less sure about a statistically negative result.

I think you're mixing apples and oranges in tests. I'm thinking of the question, can one individual provide reasonable statistical evidence that they can hear a difference? A person who doesn't believe they can hear one sighted is not a candidate for such a test. Large numbers of trials with one subject produce only a result relevant to that subject, which is still interesting to me. I'm actually less interested in general population results, which makes the challenge much less daunting. Large numbers of trials with many subjects (which, IMO, need to be conducted one subject at a time, if only to control for listening position...mass subject trials are for show) are pretty rare and are for extrapolating results to the general population.
Given the lack of general interest in high end audio...who really cares what the general population thinks? It's a significantly different question and one far more difficult to answer, so I ask...why bother?

Well, there is that....

ScottW
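The "statistically high number of tries" being argued over here is just binomial arithmetic. Below is a minimal sketch in Python, assuming a forced-choice ABX-style protocol in which pure guessing scores 50%; the trial counts are made up for illustration and are not from any test reported in this thread.

    # Exact one-sided binomial test: how likely is a score at least this
    # good if the listener is purely guessing (chance = 0.5)?
    from math import comb

    def p_value(correct, trials, chance=0.5):
        return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                   for k in range(correct, trials + 1))

    for correct, trials in [(9, 16), (12, 16), (60, 100)]:
        print(f"{correct}/{trials} correct: p = {p_value(correct, trials):.3f}")

With these assumptions, 12 of 16 already clears the conventional 0.05 significance level, while 9 of 16 settles nothing either way - which is why a short run by a single listener proves little, and why one or two indifferent or obstinate participants in a large pooled sample shift the overall numbers very little.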
#23
Posted to rec.audio.high-end
In article ,
Andrew Haley wrote: Audio_Empire wrote:

That's true and to my mind it makes DBT null results more than a little suspect. This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science.

What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this?

What I'm disputing is the accepted notion that methodologies (such as DBTs) that work in the hard sciences (such as drug testing), where the results do not rely on people's abilities to discern something or upon their opinions, are wholly applicable to testing audio gear. OTOH, if the premise of the test is simple enough (like listening to wires), I think they are useful when they return an (inevitable) null result, but for more complex things such as D-to-A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable.

Why should it be? The same tests apply to a DAC (which should be perfectly transparent in a bypass test)

There's the problem. You say that DACs should be "perfectly transparent" in a bypass test, yet there is much evidence that says that they aren't.

and a wire (which should also be perfectly transparent).

Well, wire IS perfectly transparent (as long as it's just a simple conductor - if the cable in question has boxes and bulges in it containing external components such as resistors, capacitors, and inductors, then, of course, all bets are off). The physics tells us that. There is nothing going on in a cable or interconnect in the lengths commonly used for home audio to keep it from being, for all practical purposes, a "perfect" conductor.

It's not about "hard science", it's about honesty: "As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..." J. Gordon Holt, Stereophile, Posted: Nov 10, 2007

Yet Gordon, who was one of my closest friends, BTW, was extremely skeptical of DBTs (as applied in the audio world) and was convinced that most active components had a signature sound.
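The wire claim is easy to put numbers on. Here is a back-of-the-envelope sketch, treating a 3 m run of ordinary 16 AWG zip cord as a series resistance and inductance feeding a purely resistive 8 ohm load; the per-metre values are typical textbook figures rather than measurements of any particular cable, and a real loudspeaker's impedance of course varies with frequency.

    # Insertion loss of a short speaker cable modelled as series R and L
    # into a resistive load. The values below are assumptions, not measurements.
    import math

    length_m = 3.0
    r_per_m  = 0.026      # loop resistance, ohms per metre (16 AWG pair, approx.)
    l_per_m  = 0.6e-6     # loop inductance, henries per metre (approx.)
    z_load   = 8.0        # nominal speaker impedance, ohms

    r = r_per_m * length_m
    l = l_per_m * length_m

    for f in (20, 1_000, 20_000):
        w = 2 * math.pi * f
        mag = z_load / abs(complex(z_load + r, w * l))
        print(f"{f:>6} Hz: insertion loss = {20 * math.log10(mag):6.3f} dB")

The loss comes out well under a tenth of a dB and is essentially flat from 20 Hz to 20 kHz, which is the sense in which a short cable is a "perfect" conductor for this job; much longer runs or very low-impedance loads change the numbers, but not the order of magnitude.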
#24
Posted to rec.audio.high-end
In article ,
"Arny Krueger" wrote: OTOH, if the premise of the test is simple enough, (like listening to wires) I think they are useful when they return a (inevitable) null result, but for more complex things such as D to A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable. Actually, with modern DACs null results are all you get. That's very reliable, no? The results are reliable, I'm not convinced the methodology used to get those results is reliable. |
#25
Posted to rec.audio.high-end
Scott wrote:
On Friday, September 27, 2013 12:00:48 PM UTC-7, Andrew Haley wrote: Scott wrote: High end audio community doesn't have a say so in submitting to real scientific scrutiny. What does this mean? It means that makers of high end equipment have no control over what scientists choose to test. That high-end audio enthusiats can't do things scientficially? because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments? The bottom line is that legitimate science has clear cut standards and if the weekend warrior doesn't meet those standards then their tests are considered anecdotal By whom? in nature and are junk in the world of real science and will never be added to the collective body of scientifically valid research. That is the reality of the situation. High end enthusiasts can do all the DBTs they want but until they are subjected to peer review they are junk in the eyes of real science. I don't see why, as long as they don't mess it up. It seems to me that you have a very old-fashioned idea about science. It's not something only available to "real scientists": those with special qualifications. Anyone can do it, as long as they're careful enough. The goal I'm talking about isn't to impress the priesthood, it's to find out what is true. Besides that, more rigorous testing conditions aren't going to make reviews any worse, even if they're imperfect. A little bit of experimental control would inject a little reality. No: the whole point of science is that if an experiment is done properly the results will be valid no matter who does the experiment. Yeah , if it is done "properly." And science has a protocol for determining this. It's called peer review and if an experiment hasn't endured the peer review process it remains anecdotal and junk in the eyes of real science. Why does "the eyes of real science" matter? The goal I'm talking about is to inject a bit of honesty into audio reviewing. The Nobel Prize committee can wait. You don't even have to own a lab coat. All you have to do is not mess it up. And then actually subject it to peer review. Otherwise we don't know you didn't mess up. Again, it seems to me that you have a terribly old-fashioned attitude: that unless you can get published in the Proceedings of the Royal Society, it's not worth using a scientific approach. But people use science all the time when measuring things and making things and repairing things, and they don't expect to be peer-reviewed or published. They just want to know the truth. By the way, the man you quote, J Gordon Holt was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. he also reported hearing differences between cables and digital playback devices. Go figure.... And he saw the light. Good for him. What light? See the quote... Finally, let me remark: if some of the claims that are made in the audio press are true, there is a real scientific breakthrough to be announced: the thresholds of hearing of certain kinds of distortion must be far lower than anyone thought. Who could resist the opportunity to make a famous scientific discovery? Which manufacturer would not be delighted to publish ground-breaking results? Andrew. |
#26
Posted to rec.audio.high-end
On Saturday, September 28, 2013 8:29:37 AM UTC-7, Andrew Haley wrote:
Scott wrote:

On Friday, September 27, 2013 12:00:48 PM UTC-7, Andrew Haley wrote:

Scott wrote:

High end audio community doesn't have a say so in submitting to real scientific scrutiny.

What does this mean?

It means that makers of high end equipment have no control over what scientists choose to test.

That high-end audio enthusiasts can't do things scientifically? Because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments?

The bottom line is that legitimate science has clear-cut standards and if the weekend warrior doesn't meet those standards then their tests are considered anecdotal

By whom?

By actual scientists. Don't believe me? Ask one. One of my best friends happens to be one. He has a PhD in molecular genetic biology and worked in the research field for many years as a research scientist. When I state that peer review is the standard by which research is considered scientifically valid or anecdotal and junk in the world of science, I am pretty much quoting him.

in nature and are junk in the world of real science and will never be added to the collective body of scientifically valid research. That is the reality of the situation. High end enthusiasts can do all the DBTs they want but until they are subjected to peer review they are junk in the eyes of real science.

I don't see why, as long as they don't mess it up.

It doesn't matter if *you* don't see why. Sorry, but I am going to side with the actual research scientists with whom I have discussed this topic extensively on the topic of what science actually thinks of home-brewed tests that have not gone through the peer review process.

It seems to me that you have a very old-fashioned idea about science. It's not something only available to "real scientists": those with special qualifications. Anyone can do it, as long as they're careful enough. The goal I'm talking about isn't to impress the priesthood, it's to find out what is true.

Again, I am going to take the word of actual real-world scientists on this subject over yours. I have yet to find any scientist that considers tests that have not gone through the peer review process to be anything more than anecdotal.

Besides that, more rigorous testing conditions aren't going to make reviews any worse, even if they're imperfect. A little bit of experimental control would inject a little reality.

History has shown us otherwise. Audio magazines have a really poor track record when it comes to doing quality bias-controlled tests.

No: the whole point of science is that if an experiment is done properly the results will be valid no matter who does the experiment.

Yeah, if it is done "properly." And science has a protocol for determining this. It's called peer review, and if an experiment hasn't endured the peer review process it remains anecdotal and junk in the eyes of real science.

Why does "the eyes of real science" matter?

It only matters if one wants to make claims in the name of science. If you want to do that then ya gots ta have the actual science to back it.

The goal I'm talking about is to inject a bit of honesty into audio reviewing. The Nobel Prize committee can wait.

I suggest you take a quick look at Howard Ferstler's attempt at doing so and then tell me if you think it worked out.

You don't even have to own a lab coat. All you have to do is not mess it up.

And then actually subject it to peer review. Otherwise we don't know you didn't mess up.

Again, it seems to me that you have a terribly old-fashioned attitude: that unless you can get published in the Proceedings of the Royal Society, it's not worth using a scientific approach. But people use science all the time when measuring things and making things and repairing things, and they don't expect to be peer-reviewed or published. They just want to know the truth.

No, it is not old-fashioned. It is science. You can't have real science without the rigors required by it. Without that you end up with things like cold fusion and homeopathic medicine.

By the way, the man you quote, J. Gordon Holt, was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. He also reported hearing differences between cables and digital playback devices. Go figure....

And he saw the light. Good for him.

What light?

See the quote...

I did. So he saw the light that cables do in fact sound different, as well as various digital playback gear, as he reported over the years and never recanted? OK, if you say so.

Finally, let me remark: if some of the claims that are made in the audio press are true, there is a real scientific breakthrough to be announced: the thresholds of hearing of certain kinds of distortion must be far lower than anyone thought. Who could resist the opportunity to make a famous scientific discovery? Which manufacturer would not be delighted to publish ground-breaking results?

There was a time when objectivists back in the 60s declared the new SS amps to be transparent due to their very low THD. But they were really, really wrong. Could be that the world of science really isn't terribly concerned with such matters. Discovering that there were distortions in these SS amps that were not being measured didn't really make news in the scientific community. I don't think it is any different today.

And let's not forget, most people in audio who are doing research that would pass peer review are mostly keeping their work under wraps. Seems in the world of commercial audio there is more money in using research for an advantage in the marketplace than there is in getting the research published.
#27
Posted to rec.audio.high-end
In article ,
Andrew Haley wrote: Scott wrote: High end audio community doesn't have a say so in submitting to real scientific scrutiny. What does this mean? That high-end audio enthusiats can't do things scientficially? because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments? It means that rigorous scientific controls are rarely, if ever applied to the audiophile community and often when such things are tried, it is usually in search of some agenda, and not in search of some truth which can often be inconvenient at best. No: the whole point of science is that if an experiment is done properly the results will be valid no matter who does the experiment. You don't even have to own a lab coat. All you have to do is not mess it up. In the case of audio, that's more difficult than it might seem on the surface of it. By the way, the man you quote, J Gordon Holt was pretty much the inventor of subjective audio reviewing and never used DBTs in his protocols. he also reported hearing differences between cables and digital playback devices. Go figure.... And he saw the light. Good for him. Now that is one absurd piece of jactitation. You have no idea of the context of that comment, nor do you know what motivated it. To assume that the man saw some "light" by making a comment that agrees with your preconceived notions is most dishonest. You sound like a religious zealot here! Andrew. |
#28
Posted to rec.audio.high-end
Audio_Empire wrote:
In article , Andrew Haley wrote: Audio_Empire wrote:

That's true and to my mind it makes DBT null results more than a little suspect. This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science.

What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this?

What I'm disputing is the accepted notion that methodologies (such as DBTs) that work in the hard sciences (such as drug testing), where the results do not rely on people's abilities to discern something or upon their opinions, are wholly applicable to testing audio gear.

Double-blind testing works for everything else, as far as I know. I'm not going to accept any special pleading (sans really good evidence) that it may not be applicable to audio. How would you prove such a thing, anyway?

OTOH, if the premise of the test is simple enough (like listening to wires), I think they are useful when they return an (inevitable) null result, but for more complex things such as D-to-A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable.

Why should it be? The same tests apply to a DAC (which should be perfectly transparent in a bypass test)

There's the problem. You say that DACs should be "perfectly transparent" in a bypass test,

..... given known thresholds of hearing ...

yet there is much evidence that says that they aren't.

I don't believe that, at least not for DACs without faults. (It's always possible to mess something up, of course.) All I want to see is someone repeatably distinguish a good pair of converters in a straight wire test. The only problem that I know of is that there will be an inevitable time delay. You can eliminate that, though, by introducing a converter pair with double the time delay and comparing to see if the delay is audible.

It's not about "hard science", it's about honesty: "As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..." J. Gordon Holt, Stereophile, Posted: Nov 10, 2007

Yet Gordon, who was one of my closest friends, BTW, was extremely skeptical of DBTs (as applied in the audio world) and was convinced that most active components had a signature sound.

Sure. I don't know that I agree with any statement of his except this one. I posted it not because I'm a Holt fanboy but because it's very well expressed.

Andrew.
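The "straight wire" bypass test Andrew describes has a measurement counterpart: loop a signal out through the converter pair and back in, line the capture up with the source, and see how far down the residual sits. A minimal sketch, assuming source and loop-back are already available as NumPy arrays at the same sample rate; the brute-force lag search and the synthetic demo signal are only for illustration, not a description of anyone's actual test rig.

    import numpy as np

    def null_depth_db(source, loopback, max_lag=100):
        """Align over integer lags, gain-match, subtract, and report the
        residual level in dB relative to the source."""
        rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = source[:len(source) - lag], loopback[lag:]
            else:
                a, b = source[-lag:], loopback[:len(loopback) + lag]
            n = min(len(a), len(b))
            a, b = a[:n], b[:n]
            g = float(np.dot(a, b) / np.dot(b, b))   # least-squares gain match
            depth = 20 * np.log10(max(rms(a - g * b), 1e-12) / rms(a))
            best = min(best, depth)
        return best

    # Synthetic demo: a delayed, slightly attenuated copy plus a little noise.
    rng = np.random.default_rng(0)
    t = np.arange(48_000) / 48_000.0
    src = np.sin(2 * np.pi * 1_000 * t)
    loop = 0.98 * np.roll(src, 37) + 1e-5 * rng.standard_normal(src.size)
    print(f"null depth: {null_depth_db(src, loop):.1f} dB")

A residual some 90 dB or more below the programme is far smaller than any level difference listeners reliably detect, which is why a deep null is usually taken to predict an inaudible difference; the listening half of the comparison still has to be done blind.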
#29
Posted to rec.audio.high-end
In article ,
Andrew Haley wrote: Audio_Empire wrote: In article , Andrew Haley wrote: Audio_Empire wrote: That's true and to my mind it makes DBT null results more than a little suspect. This kind of testing [the double-blind test] seems to have been "borrowed" from the hard sciences (drug testing, hypothesis testing, etc.) and I don't consider listening a hard science. What does this even mean? The question of audibility is a scientific one, and can be verified scientifically. Are you denying this? What I'm disputing is the accepted notion that methodologies (such as DBTs) that work in the hard sciences (such as drug testing) where the results do not rely on people's abilities to discern something or upon their opinions are wholly applicable to testing audio gear. Double-blind testing works for everything else, as far as I know. I'm not going to accept any special pleading (sans really good evidence) that it may not be applicable to audio. How would you prove such a thing, anyway? I don't pretend to know. How do you prove that it DOES work for audio? Since it usually returns a null result, I'd say such overwhelmingly one-sided results indicates one of two things: either everything does sound the same (which my experience tells me is extremely unlikely), or DBTs aren't good at uncovering differences in audio gear unless they are extremely gross differences. We certainly know which of those two outcomes the "strict objectivists" believe in, but how do we prove which is the real answer? OTOH, if the premise of the test is simple enough, (like listening to wires) I think they are useful when they return a (inevitable) null result, but for more complex things such as D to A conversion, amplifier or preamplifier sound, etc., the return of a null result is far less reliable. Why should it be? The same tests apply to a DAC (which should be perfectly transparent in a bypass test) There's the problem. You say that DACs should be "perfectly transparent" in a bypass test, .... given known thresholds of hearing ... yet there is much evidence that says that they aren't. I don't believe that, at least not for DACs without faults. (It's always possible to mess something up, of course.) Here's the thing. I suspect that you could build a specific DAC decoder box, and swap out the D/A chips in the circuit all day (Burr-Brown for Audio Devices, for SaberDACs, for Wolfson, etc.) and all of them would sound, essentially, the same. But there are so many different ways to design the circuit, even the D/A converter part - Single DACs in switching mode, separate stereo D/A chips, differential D/A chips, even dual differential chips and even custom designs like dCS ring-DACs and MSB Ladder DACs, that there are BOUND to be differences between the various schemes. Also , there are other parts of the circuits that are probably just as important to the audio performance of a DAC as is the method used for for the D/A conversion. Power supply performance, digital filter design, analog output design etc. [quoted text deleted -- deb] |
#30
Posted to rec.audio.high-end
On 9/28/2013 1:30 PM, Audio_Empire wrote:
In article , Andrew Haley wrote: Audio_Empire wrote: In article , Andrew Haley wrote:

snip

Double-blind testing works for everything else, as far as I know. I'm not going to accept any special pleading (sans really good evidence) that it may not be applicable to audio. How would you prove such a thing, anyway?

I don't pretend to know. How do you prove that it DOES work for audio? Since it usually returns a null result, I'd say such overwhelmingly one-sided results indicate one of two things: either everything does sound the same (which my experience tells me is extremely unlikely), or DBTs aren't good at uncovering differences in audio gear unless they are extremely gross differences. We certainly know which of those two outcomes the "strict objectivists" believe in, but how do we prove which is the real answer?

Well, you start out by demonstrating that there are objectively verifiable differences between units - through measurement - that at least approach the demonstrated lower threshold of human hearing. If all measured artifacts or differences are, say, -110 dB down, then a null DBT *is* expected, and a subjectively "verified" audible difference is clearly suspect.

Keith
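For scale, here is the arithmetic behind a figure like "-110 dB down", set beside the quantization-noise floor of 16-bit audio; the 6.02 x N + 1.76 dB expression is the standard full-scale-sine-to-quantization-noise ratio for an ideal N-bit channel, and the -110 dB figure is simply the one quoted above.

    artifact_db = -110.0
    ratio = 10 ** (artifact_db / 20)        # dB to linear amplitude ratio
    floor_16bit = -(6.02 * 16 + 1.76)       # ideal 16-bit quantization noise, dBFS

    print(f"-110 dB as a linear amplitude ratio: {ratio:.2e}")
    print(f"16-bit quantization-noise floor: {floor_16bit:.1f} dBFS")

An artifact 110 dB below the signal is a few parts per million of its amplitude and lies below the noise floor of the CD format itself, which is the sense in which a null listening result is the expected outcome when the measured differences are that small.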
#31
Posted to rec.audio.high-end
Scott wrote:
On Saturday, September 28, 2013 8:29:37 AM UTC-7, Andrew Haley wrote: Scott wrote: On Friday, September 27, 2013 12:00:48 PM UTC-7, Andrew Haley wrote: Scott wrote: That high-end audio enthusiats can't do things scientficially? because the priesthood will come and get them? Or for some other reason? That only real scientists can perform experiments? The bottom line is that legitimate science has clear cut standards and if the weekend warrior doesn't meet those standards then their tests are considered anecdotal By whom? By actual scientists. Don't believe me? ask one. One of my best friends happens to be one. He has a PhD in molecular genetic biology and worked in the research field for many years as a research scientist. When I state that peer review is the standard by which research is considered scientifically valid or anecdotal and junk in the world of science I am pretty much quoting him. Sure. That's how the process of science works, but that's not really relevant to the point I'm making. The science of hearing is quite well-established, after all, and I don't expect it to be overturned. Extraordinary claims are being made by audio reviewers that are in conflict with some of what is known about thresholds of hearing. The reviewers seem to believe these claims, but with no supporting evidence. That is anecdotal and junk. How is it any less anecdotal and junk than the exact same listening test, but with some bias controls? There is a much simpler explanation: we tend to "hear" differences that aren't there. The only way to find out whether people actually can hear such differences is a blind test. This is true whatever unnamed PhDs in molecular biology say. Besides that, more rigorous testing conditions aren't going to make reviews any worse, even if they're imperfect. A little bit of experimental control would inject a little reality. History has shown us otherwise. Audio magazines have a really poor track record when it comes to doing quality bias controlled tests. Sure, but that's not an excuse for not trying. They've never been much more than half-hearted, I suspect. No: the whole point of science is that if an experiment is done properly the results will be valid no matter who does the experiment. Yeah , if it is done "properly." And science has a protocol for determining this. It's called peer review and if an experiment hasn't endured the peer review process it remains anecdotal and junk in the eyes of real science. Why does "the eyes of real science" matter? It only matters if one wants to make claims in the name of science. All claims about audibility are scientific claims. The question is whether those claims are justified or not. If you want to do that then ya gots ta have the actual science to back it. Well, yes, you do. The goal I'm talking about is to inject a bit of honesty into audio reviewing. The Nobel Prize committee can wait. I suggest you take a quick look at Howard Ferstler's attempt at doing so and then tell me if you think it worked out. References? He did that sort of thing quite a bit, IIRC. Again, it seems to me that you have a terribly old-fashioned attitude: that unless you can get published in the Proceedings of the Royal Society, it's not worth using a scientific approach. But people use science all the time when measuring things and making things and repairing things, and they don't expect to be peer-reviewed or published. They just want to know the truth. No, it is not old fashioned. It is science. 
You can't have real science without the rigors required by it. Without that you end up with things like cold fusion and homeopathic medicine.

Indeed you do. The anecdotes used by homeopaths are essentially indistinguishable from the anecdotes used by many audio reviewers.

Finally, let me remark: if some of the claims that are made in the audio press are true, there is a real scientific breakthrough to be announced: the thresholds of hearing of certain kinds of distortion must be far lower than anyone thought. Who could resist the opportunity to make a famous scientific discovery? Which manufacturer would not be delighted to publish ground-breaking results?

There was a time when objectivists back in the 60s declared the new SS amps to be transparent due to their very low THD. But they were really, really wrong. Could be that the world of science really isn't terribly concerned with such matters.

Maybe not: if some of the claims of audio reviewers are true, at least a couple of chapters of Zwicker and Fastl would have to be rewritten, and models of the way the auditory system works would have to be revisited too.

Discovering that there were distortions in these SS amps that were not being measured didn't really make news in the scientific community. I don't think it is any different today.

So, you think that there may be mysterious forms of distortion that are audible only to people who know which equipment is playing and that no measurements detect. I suppose it may be so, but there's no reason to believe it without, y'know, evidence.

And let's not forget, most people in audio who are doing research that would pass peer review are mostly keeping their work under wraps. Seems in the world of commercial audio there is more money in using research for an advantage in the marketplace than there is in getting the research published.

Hmm. So there may be secret methods known only to commercial audio designers.

Andrew.
#32
Posted to rec.audio.high-end
On 9/27/2013 4:04 PM, Audio_Empire wrote:
In article , ScottW wrote:

snip

But all of this depends so much on how the tests are set up and run, the environment in which they are held, and who is participating (what are the participants' personal agendas, if any? There are people who post here who I wouldn't let participate in a DBT, because their personal agenda is to find NO difference between any two of anything). All of this makes DBTs of audio gear somewhat suspect in my mind.

As has been stated often before, all that is required are "subjectivist" audiophiles who *know* they hear a difference between two components, sighted, to participate in DBTs of those two components. If they can, with proper controls, reliably identify these, typically, obvious differences, then voila! Done. If not, the DBT protocol has successfully removed the subject bias.

snip

And how does a test overcome such a refusal? You can't make a dishonest subject honest when all they need do is fabricate random responses.

Yes, that's true. As far as I'm concerned, it makes DBTs for audio somewhat suspect, as we're not dealing with concrete results. In medicine, DBTs are used routinely to test new drugs. There are usually two groups, one of which gets the real drug, and the other (called the control group) gets a placebo.

Actually, much more often the control group gets a currently marketed drug, not a placebo.

The people taking part in the test don't know which group they are in and the people dispensing the drugs don't know which participants are getting the placebo and which are getting the real drug. Someone way up the line knows which is which, but even they just know the participants by number - not by name. The results of these tests compare results with the control group to see if the new drug is statistically effective. IOW, either the drug-taking group has a change in symptoms compared with the control group, or they don't. The results are pretty unambiguous

That, unfortunately, is very often *not* the case. Even when a placebo is used. It's a function of the "power" of the sample size, and often requires a very large population (hence the extremely high costs associated with them) to demonstrate statistical differences. Most results are not binary (i.e. responds/does not respond). In this respect, an audio DBT is actually much easier than a drug trial. As stated above, the control and test groups are the same. Subject-to-subject variability in response is completely eliminated, since you start with subjects already demonstrated to respond to the difference in "tests" under sighted conditions. The statistical power of the test population becomes moot.

as there is simply no way to fake a result.

Assumes a simple binary response. Accurate for audio, not for drugs.

I would suggest that a person who doesn't think they can hear a difference is simply not a good subject for such a test. Yes. That is true, but still, that's something that is hard to determine beforehand. It's hard to ask a person...do you think you hear a difference sighted? Exactly.

Why is that hard? You either pick one of the ubiquitous audiophiles who claim differences between *everything* they hear, sighted, and pick two components and do the DBT. Why is that hard? Or, you perform the "DBT" methodology, sans blinding, and use only the subjects that clearly hear a difference in two components. In most cases, that will be all of them. Then blind the study and repeat.

As I've posted here before, I know the physics involved in wires, and *know* that wire is wire in audio situations.
I have also been in sighted wire demonstrations by AQ in which I clearly heard a difference between zip cord and AQ wire. Single blind at home - zilch, as expected. The brain is a wonderful pattern processor and difference engine, and will always try to discern a difference between stimuli. Sighted testing simply doesn't work to discriminate subtle differences. Keith |
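Keith's point that "the statistical power of the test population becomes moot" shifts the statistical burden onto the number of trials given to the one listener who claims to hear the difference. A sketch of that calculation, with illustrative assumptions only: a forced-choice test where guessing scores 50%, a listener whose true hit rate is 70% if the difference is real, and the usual p < 0.05 criterion with 80% power.

    from math import comb

    def tail(n, k, p):
        """P(at least k successes in n trials, success probability p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    alpha, power_target, true_p = 0.05, 0.80, 0.70
    for n in range(5, 201):
        # Smallest score that would be significant under pure guessing.
        k_crit = next(k for k in range(n + 1) if tail(n, k, 0.5) <= alpha)
        if tail(n, k_crit, true_p) >= power_target:
            print(f"{n} trials, {k_crit} or more correct needed "
                  f"(power {tail(n, k_crit, true_p):.2f})")
            break

Under those assumptions the answer comes out at a few dozen trials per listener - far more switching back and forth than most informal comparisons, sighted or blind, ever involve.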
#33
Posted to rec.audio.high-end
Audio_Empire wrote:
In article , Andrew Haley wrote: Double-blind testing works for everything else, as far as I know. I'm not going to accept any special pleading (sans really good evidence) that it may not be applicable to audio. How would you prove such a thing, anyway? I don't pretend to know. How do you prove that it DOES work for audio? Since it usually returns a null result, I'd say such overwhelmingly one-sided results indicates one of two things: either everything does sound the same (which my experience tells me is extremely unlikely), or DBTs aren't good at uncovering differences in audio gear unless they are extremely gross differences. We certainly know which of those two outcomes the "strict objectivists" believe in, but how do we prove which is the real answer? We can't. It's one of the basic assumptions of the scientific method that any truth about nature can be discovered by means of systematic observation and experimentation. (This is the assumption that, for example, bacteria don't behave differently when they are being observed in the laboratory from the rest of the time.) Without this assumption there can be no science. Here's the thing. I suspect that you could build a specific DAC decoder box, and swap out the D/A chips in the circuit all day (Burr-Brown for Audio Devices, for SaberDACs, for Wolfson, etc.) and all of them would sound, essentially, the same. But there are so many different ways to design the circuit, even the D/A converter part - Single DACs in switching mode, separate stereo D/A chips, differential D/A chips, even dual differential chips and even custom designs like dCS ring-DACs and MSB Ladder DACs, that there are BOUND to be differences between the various schemes. Sure, but audible ones? There's the rub. There's only one way to find out... Andrew. |
#34
Posted to rec.audio.high-end
In article ,
Andrew Haley wrote: Audio_Empire wrote: In article , Andrew Haley wrote: Double-blind testing works for everything else, as far as I know. I'm not going to accept any special pleading (sans really good evidence) that it may not be applicable to audio. How would you prove such a thing, anyway? I don't pretend to know. How do you prove that it DOES work for audio? Since it usually returns a null result, I'd say such overwhelmingly one-sided results indicates one of two things: either everything does sound the same (which my experience tells me is extremely unlikely), or DBTs aren't good at uncovering differences in audio gear unless they are extremely gross differences. We certainly know which of those two outcomes the "strict objectivists" believe in, but how do we prove which is the real answer? We can't. It's one of the basic assumptions of the scientific method that any truth about nature can be discovered by means of systematic observation and experimentation. (This is the assumption that, for example, bacteria don't behave differently when they are being observed in the laboratory from the rest of the time.) Without this assumption there can be no science. While that's obvious, the assumption that this methodology extends to audio where we are dealing with people's perceptions rather than on hard, repeatable results (such as mixing zinc with hydrochloric acid releases hydrogen - every time!), is, in my opinion, perhaps an assumption too far. Here's the thing. I suspect that you could build a specific DAC decoder box, and swap out the D/A chips in the circuit all day (Burr-Brown for Audio Devices, for SaberDACs, for Wolfson, etc.) and all of them would sound, essentially, the same. But there are so many different ways to design the circuit, even the D/A converter part - Single DACs in switching mode, separate stereo D/A chips, differential D/A chips, even dual differential chips and even custom designs like dCS ring-DACs and MSB Ladder DACs, that there are BOUND to be differences between the various schemes. Sure, but audible ones? There's the rub. There's only one way to find out... yes, audible ones. Bass performance and soundstage performance are two performance parameters that I contend are (a) easy to hear in prolonged listening tests and (b) are greatly affected by design decisions about DAC configuration, filter design, power supply design, and analog stage design. And I agree that there is only one way to find out, and I have my doubts that it's traditional DBT. Make no mistake here. I've not heard a DAC that sounds "bad" since the early days of CD (Sony CD-101, anybody?) In fact, sitting here in my home office listening to streaming radio through my desktop audio system, I'm using a no-name Chinese 24-bit/192KHz USB DAC that sold for less than $50 on E-bay, and even though it doesn't sound anywhere near as good as my DragonFly, it's certainly good enough for the task at hand. Through my current desktop speakers (Napa Acoustic NA-208s) the music sounds FINE. BTW, through this system, both the DragonFly and the $50 Chinese DACare indistinguishable, one from another. But on my main stereo system in my living room, there are vast differences in imaging and bass performance. The cheap spread doesn't have the depth and it is a lightweight in the bass performance, but neither of these is very important in a computer desktop system which has little bass below 55 Hz and to which pinpoint imaging is just not a priority. |
#35
Posted to rec.audio.high-end
In article , KH
wrote: On 9/27/2013 4:04 PM, Audio_Empire wrote: In article , ScottW wrote: snip But all of this depends so much on how the tests are set-up and run, and the environment in which they are held, and who is participating (what are the p articipant's personal agendas, if any? There are people who post here who I wouldn't let participate in a DBT, because their personal agenda is to find NO difference between any two of anything). All of this makes DBTs of audio gear somewhat suspect in my mind. As has been stated often before, all that is required are "subjectivist" audiophiles who *know* they hear a difference between two components, sighted, to participate in DBT's of those two components. If they can, with proper controls, reliably identify these, typically, obvious differences, then voila! Done. If not, the DBT protocol has successfully removed the subject bias. Obviously, it easier to prove that subjectivists can hear no differences in a properly set-up DBT than it is to prove that objectivists CAN hear differences when said objectivists DON'T WANT TO hear differences. In the former case, the DBT should be able to eliminate biases. You can't pretend to hear differences when you don't know what you are listening to. The results should prove you wrong every time (if you are merely determined to hear differences whether they exist or not). OTOH, a subjectivist whose agenda is to not hear differences can fool the system by merely reporting that they heard no differences on any try and there is no way for the statistics to trip them up. You can't prove a negative, after all. And how does a test overcome such a refusal? You can't make a dishonest subject honest when all they need do is fabricate random responses. Yes that's true. As far as I'm concerned, it makes DBTs for audio somewhat suspect as we're not dealing with concrete results. In medicine, DBTs are used routinely to test new drugs. There are usually two groups, one of which gets the real drug, and the other (called the control group) gets a placebo. Actually, much more often the control group gets a currently marketed drug, not a placebo. The people taking part in the test don't know which group they are in and the people dispensing the drugs don't know which participants are getting the placebo and which are getting the real drug. Someone, way up the line knows which is which, but even they just know the participants by number - not by name. The results of these tests compare results with the control group to see if the new drug is statistically effective. IOW, either the drug taking group has a change in symptoms compared with the control group, or they don't. The results are pretty unambiguous That, unfortunately, is very often *not* the case. Even when a placebo is used. It's a function of the "power" of the ample size, and often requires a very large population (hence the extremely high costs associated with them) to demonstrate statistical differences. Most results are not binary (i.e. responds/does not respond). There are exceptions to everything. Obviously I was generalizing here. In this respect, an audio DBT is much easier, actually than a drug trial. As stated above, the control and tests groups are the same. Subject to subject variability in response is completely eliminated since you start with subjects already demonstrated to respond to the difference in "tests" under sighted conditions. The statistical power of the test population becomes moot. 
But since the results are nowhere as clear or well defined, I'd say that audio DBTs are FAR less reliable than drug tests. as there is simply no way to fake a result. Assumes a simple binary response. Accurate for audio, not for drugs. I would suggest that a person who doesn't think they can hear a difference is simply not a good subject for such a test. Yes. That is true, but still, that's something that is hard to determine beforehand. It's hard to ask a person...do you think you hear a difference sighted? Exactly. Why is that hard? You either pick one of the ubiquitous audiophiles who claim differences between *everything* they hear, sighted, and pick to components and do the DBT. Why is that hard? Or, you perform the "DBT" methodology, sans blinding, and use only the subjects that clearly hear a difference in two components. In most cases, that will be all of them. Then blind the study and repeat. As I've posted here before, I know the physics involved in wires, and *know* that wire is wire in audio situations. I have also been in sighted wire demonstrations by AQ in which I clearly heard a difference between zip cord and AQ wire. Single blind at home - zilch, as expected. The brain is a wonderful pattern processor and difference engine, and will always try to discern a difference between stimuli. Sighted testing simply doesn't work to discriminate subtle differences. And DBT is unreliable because the bias controls are only effective for a positive result, not for a null result. So, what does one do? |
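One thing a run of trials can do with a null result is bound it: "you can't prove a negative", but you can say how large a real detection ability is still consistent with a chance score. A sketch using a one-sided 95% Clopper-Pearson-style bound found by simple scanning; the scores and trial counts are made up for illustration.

    from math import comb

    def lower_tail(n, k, p):
        """P(at most k successes in n trials, success probability p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

    def upper_bound(correct, trials, alpha=0.05, step=0.001):
        p = correct / trials
        while p < 1.0 and lower_tail(trials, correct, p) > alpha:
            p += step
        return p

    for correct, trials in [(8, 16), (30, 60), (105, 200)]:
        print(f"{correct}/{trials}: true hit rate could still be as high as "
              f"~{upper_bound(correct, trials):.2f}")

With only 16 trials, a chance score still leaves room for a listener who is right roughly seven times out of ten, whereas a couple of hundred trials brings the bound much closer to chance - so the honest reading of a null result depends heavily on how many trials were run.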
#36
Posted to rec.audio.high-end
On 9/29/2013 7:40 PM, Audio_Empire wrote:
In article , KH wrote: On 9/27/2013 4:04 PM, Audio_Empire wrote: In article , ScottW wrote: snip As has been stated often before, all that is required are "subjectivist" audiophiles who *know* they hear a difference between two components, sighted, to participate in DBT's of those two components. If they can, with proper controls, reliably identify these, typically, obvious differences, then voila! Done. If not, the DBT protocol has successfully removed the subject bias. Obviously, it easier to prove that subjectivists can hear no differences in a properly set-up DBT than it is to prove that objectivists CAN hear differences when said objectivists DON'T WANT TO hear differences. In the former case, the DBT should be able to eliminate biases. You can't pretend to hear differences when you don't know what you are listening to. The results should prove you wrong every time (if you are merely determined to hear differences whether they exist or not). OTOH, a subjectivist whose agenda is to not hear differences can fool the system by merely reporting that they heard no differences on any try and there is no way for the statistics to trip them up. You can't prove a negative, after all. Well, that does assume a level of malfeasance on the part of objectivists that is not in evidence. But again, it doesn't matter. You need only test the subjectivists that "clearly" hear a difference to establish efficacy. snip That, unfortunately, is very often *not* the case. Even when a placebo is used. It's a function of the "power" of the ample size, and often requires a very large population (hence the extremely high costs associated with them) to demonstrate statistical differences. Most results are not binary (i.e. responds/does not respond). There are exceptions to everything. Obviously I was generalizing here. Just pointing out that your generalization is misinformed. *Far* more often one drug is compared to another, not a typical placebo control group. snip As I've posted here before, I know the physics involved in wires, and *know* that wire is wire in audio situations. I have also been in sighted wire demonstrations by AQ in which I clearly heard a difference between zip cord and AQ wire. Single blind at home - zilch, as expected. The brain is a wonderful pattern processor and difference engine, and will always try to discern a difference between stimuli. Sighted testing simply doesn't work to discriminate subtle differences. And DBT is unreliable because the bias controls are only effective for a positive result, not for a null result. So, what does one do? Really? What makes you think this? Bias controls are applicable and effective no matter what the test result. You assume that there must be a positive result to verify the efficacy of the controls, but that simply isn't the case. And once again, *all* that is required is to test subjects that, in open evaluations, hear differences that *should* (based on physics and engineering principles) not be there. If they are detectable in bias controlled tests, they are audible. If not, then it's clear that bias is involved. Keith |
#37
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
... the assumption that this methodology extends to audio, where we are dealing with people's perceptions rather than with hard, repeatable results (such as mixing zinc with hydrochloric acid releases hydrogen - every time!), is, in my opinion, perhaps an assumption too far.

Claiming that human perceptions are impossible to measure is quite an exceptional claim!
#38
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
... Obviously, it's easier to prove that subjectivists can hear no differences in a properly set-up DBT than it is to prove that objectivists CAN hear differences when said objectivists DON'T WANT TO hear differences.

This argument fails because, to be true, it demands that no person with good will and basic honesty has ever done an audio DBT. Nothing prevents so-called subjectivists from doing DBTs. If it were just a matter of having a subjectivist do a DBT in order to obtain positive results, then such a thing would have had to happen at least once in the past 30 or more years. In fact many so-called subjectivists have participated in DBTs, and they obtained the same results as so-called objectivists. DBTs have a proven track record of converting so-called subjectivists into objectivists.
#39
Posted to rec.audio.high-end
"Audio_Empire" wrote in message
... All DBTs with which I'm familiar always return a null result.

It is fairly easy to do a DBT that yields a positive result - just compare two things that should sound different! DBTs to confirm the known thresholds of hearing for all sorts of things produce positive results as long as you stay near the thresholds of hearing that have been established by other means. I've done so many times. I've pointed this out before on RAHE. Apparently you have perfected a methodology for not reading RAHE when things that don't conform to your belief system are posted? ;-)
#40
Posted to rec.audio.high-end
On Monday, September 30, 2013 6:05:25 AM UTC-7, KH wrote:
On 9/29/2013 7:40 PM, Audio_Empire wrote: In article , KH wrote: On 9/27/2013 4:04 PM, Audio_Empire wrote:

And DBT is unreliable because the bias controls are only effective for a positive result, not for a null result. So, what does one do?

Really?

It certainly *can* be, and often is; it depends on the test protocols.

What makes you think this? Bias controls are applicable and effective no matter what the test result.

Do tell me what is controlling for a *same sound bias* in an ABX DBT in which the subjects are aware of what A and B actually are? That is one of the most common ABX DBTs I have seen reported by the typical weekend-warrior objectivist.

You assume that there must be a positive result to verify the efficacy of the controls, but that simply isn't the case.

There must be some sort of evidence that the test is capable of showing differences where differences actually do exist; otherwise you don't know whether the test was in some way masking real differences.

And once again, *all* that is required is to test subjects that, in open evaluations, hear differences that *should* (based on physics and engineering principles) not be there. If they are detectable in bias-controlled tests, they are audible. If not, then it's clear that bias is involved.

Again, it depends on the test. A badly designed or badly executed DBT does not bring any sort of clarity to the picture.
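One standard answer to Scott's objection is to salt the session with positive controls: trials in which a difference already known to be audible (a small level offset, say) is present. If the listener cannot pick out the controls either, the run is judged insensitive rather than being counted as evidence of inaudibility. A toy simulation, with made-up detection probabilities, just to show the bookkeeping:

    import random
    from math import comb

    def p_value(correct, trials, chance=0.5):
        return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                   for k in range(correct, trials + 1))

    random.seed(1)
    trials = 32
    p_detect_control = 0.95   # assumed hit rate on the known-audible control
    p_detect_dut     = 0.50   # device under test: listener performing at chance

    control_hits = sum(random.random() < p_detect_control for _ in range(trials))
    dut_hits     = sum(random.random() < p_detect_dut for _ in range(trials))

    print(f"control: {control_hits}/{trials}  p = {p_value(control_hits, trials):.4f}")
    print(f"DUT:     {dut_hits}/{trials}  p = {p_value(dut_hits, trials):.4f}")

If the control trials had also come out at chance, the whole session would be thrown out - which is exactly the "evidence that the test is capable of showing differences" being asked for above.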