#81
Posted to rec.audio.high-end
"Audio Empire" wrote in message
Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so).

Various people including ourselves have done DBTs relating to several audiophile myths, and found that the promised audible benefits become elusive when tested with any amount of rigor.

You can't find evidence if you don't look for it. There are people who have done their homework.

We've looked for the evidence, but it's exceedingly hard to find. I freely admit that we're the wrong people to do this, but science isn't so fragile that only advocates can make something that wants to work, actually work.

Now, if Arny wishes to fund a peer-reviewed university study on Audiophile Mythology, I'm sure he could find someone to step forward and tackle the issue, but I'm equally sure that aside from that eventuality, funding from the usual sources is going to be hard to come by.

There you go - we see once again where people who advocate and have spent the big bucks on audiophile myths want other people to pay money to show them the error of their ways. It's called $200 for a magic HDMI cable but never ever spend $20 for a JAES preprint.
#82
Posted to rec.audio.high-end
"Audio Empire" wrote in message
This type of person is often the type who participates in DBTs as well, rank laymen.

Simply not true. The DBTs I've been involved with involved experienced audiophiles, some youngsters, some who went back to the days of tubes.

People like him and college students who were weaned on MP3s and ear-buds are the average "listener".

Here we go again, another set of self-serving audiophile myths. Where is the peer-reviewed paper that shows that people who listen to MP3 and personal listening devices necessarily have any deficiencies when it comes to reliably detecting audible differences? Fact is that many audible differences are easier to detect with earphones and/or headphones.
#83
Posted to rec.audio.high-end
On Mar 31, 12:41 pm, "Arny Krueger" wrote:
"Audio Empire" wrote in message

Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so).

Various people including ourselves have done DBTs relating to several audiophile myths, and found that the promised audible benefits become elusive when tested with any amount of rigor.

Anecdotal account of non-peer-reviewed junk evidence noted.

You can't find evidence if you don't look for it. There are people who have done their homework.

We've looked for the evidence, but it's exceedingly hard to find.

You've "looked" for the evidence? Huh? This is not evidence you "look for." This isn't archeology. This is evidence that is created by doing tests that stand up to peer review.

I freely admit that we're the wrong people to do this, but science isn't so fragile that only advocates can make something that wants to work, actually work. Now, if Arny wishes to fund a peer-reviewed university study on Audiophile Mythology, I'm sure he could find someone to step forward and tackle the issue, but I'm equally sure that aside from that eventuality, funding from the usual sources is going to be hard to come by.

There you go - we see once again where people who advocate and have spent the big bucks on audiophile myths want other people to pay money to show them the error of their ways.

No, we would rather have you demonstrate rather than posture about the science. That's all. Don't worry, Arny. I don't expect you to come up with the goods. That is the point.

It's called $200 for a magic HDMI cable but never ever spend $20 for a JAES preprint.

And that's called a red herring. I *have* spent $20.00 on AESJ reprints. Never spent $200.00 on any magic HDMI cable. Of course, the AESJ reprints I bought based on claims you made about their content didn't support, much less address, your assertions at the time. IOW it was a waste of money. Fool me once, shame on you; fool me twice...... If you want to cite such papers then you will have to offer quotes in context. You have a bad track record with me when it comes to recommendations on AESJ papers.
#84
Posted to rec.audio.high-end
On Thu, 31 Mar 2011 12:41:08 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

This type of person is often the type who participates in DBTs as well, rank laymen.

Simply not true. The DBTs I've been involved with involved experienced audiophiles, some youngsters, some who went back to the days of tubes.

So, you feel that you can speak for all DBTs?

People like him and college students who were weaned on MP3s and ear-buds are the average "listener".

Here we go again, another set of self-serving audiophile myths. Where is the peer-reviewed paper that shows that people who listen to MP3 and personal listening devices necessarily have any deficiencies when it comes to reliably detecting audible differences?

They can listen to low-data-rate MP3s.

Fact is that many audible differences are easier to detect with earphones and/or headphones.

And it seems that a large majority of the younger generations DON'T CARE about these "differences" AT ALL, or they wouldn't be listening to really low-bit-rate MP3s and would insist on ripping their music at higher bit rates. I have a number of friends with teenaged and college-aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.e., quantity instead of quality.
#85
Posted to rec.audio.high-end
On Thu, 31 Mar 2011 12:40:55 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

On Thu, 31 Mar 2011 06:56:03 -0700, Arny Krueger wrote (in article ):

"Peter Wieck" wrote in message

Either you will hear it or you will not.

Are you sure about that? One of my favorite examples is an early Dynaco ST-120 using the 2N3055 output transistors. Great measurements, sounded like glass-in-a-blender.

Where is the reliable evidence supporting this generally unsupported audiophile myth?

Just about everybody (except, apparently, you) who ever owned one.

People said similar things about the CDP 101, but tests on several samples of them also came up empty. There are subtle audible differences, but nothing that can honestly be called "glass-in-a-blender".

That consensus of opinion is reliable enough to me.

I'm looking for reliable technical evidence, not the results of a public opinion survey. BTW, where is that public opinion survey? ;-)

I know what I heard then, and I know what I hear now. Last year I heard an ST-120 A/B'd against a new Audio Research 220 W/channel tube amp in a DBT (just for laughs). We got the laughs all right. The ST-120 sounded DREADFUL, and more than that, it sounded just like I remember it sounding!

Got any bench tests showing that both power amps met original vendor specs? I know for sure that my ST-120 does.

Yeah, and I'll bet it still sounds terrible compared to a new amp. Of course, you seem to have one of the later ones without the 2N3055s and without the crossover notch, but even those sounded pretty bad - just better than the early ones.
#86
Posted to rec.audio.high-end
On Thu, 31 Mar 2011 12:41:02 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so).

Various people including ourselves have done DBTs relating to several audiophile myths, and found that the promised audible benefits become elusive when tested with any amount of rigor.

That's fine, Arny. It has nothing to do with my comment above, but that you have this conviction is just fine. You asked for peer-reviewed evidence of the validity of what you call "audiophile myths". The insinuation here is that lack of same means that there aren't any because there cannot BE any, when all it really shows is that none have been done - that we are aware of. You can guess at the reason, and your guesses can be used to support your conviction, but the truth is that you don't know (and neither do I). And that has nothing whatsoever to do with DBTs that you have performed, or anyone else's. The topic was peer review of data.

You can't find evidence if you don't look for it. There are people who have done their homework. We've looked for the evidence, but it's exceedingly hard to find. I freely admit that we're the wrong people to do this, but science isn't so fragile that only advocates can make something that wants to work, actually work. Now, if Arny wishes to fund a peer-reviewed university study on Audiophile Mythology, I'm sure he could find someone to step forward and tackle the issue, but I'm equally sure that aside from that eventuality, funding from the usual sources is going to be hard to come by.

There you go - we see once again where people who advocate and have spent the big bucks on audiophile myths want other people to pay money to show them the error of their ways. It's called $200 for a magic HDMI cable but never ever spend $20 for a JAES preprint.

No, you misunderstand me, again. My comment has nothing whatsoever to do with the studies themselves, or magic HDMI cables, or even $20 JAES reprints. You asked for peer-reviewed studies showing that audiophile myths are true. I'm merely pointing out that the only way that's going to happen is for someone to pay to have the studies performed. My invitation to you to step up to the plate was done in jest, of course.
#87
Posted to rec.audio.high-end
Audio Empire wrote:
Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so). You can't find evidence if you don't look for it.

I think you're being grossly unfair. It's a matter of record that Arny did once believe what he calls "audiophile myths", but he wasn't satisfied with that, so he did some experiments himself. To say that his experiments weren't "serious science" because they weren't funded or sanctioned by a research institute is mere prejudice. Surely it's better to have more people doing science, not keep it confined to an ivory tower.

Andrew.
#88
Posted to rec.audio.high-end
"Audio Empire" wrote in message
You asked for peer-reviewed evidence of the validity of what you call "audiophile myths". The insinuation here is that lack of same means that there aren't any because there cannot BE any, when all it really shows is that none have been done - that we are aware of.

Not at all. My point is that while the peer-reviewed support for a scientific approach to audio may not satisfy every dedicated true believer in anti-science, the peer-reviewed evidence that supports their viewpoint is non-existent. They would like to ignore the fact that the original Clark JAES article introducing ABX was peer-reviewed.

A friend of mine likes to say "People hear what they want to hear and read what they want to read". It is very clear to me that people who have invested $10,000's, perhaps $100,000's, and most of their adult lives on anti-technology like tubes, vinyl, Mpingo discs and Bedini Clarifiers, and believe that digital can't sound right because of the empty space between the samples, aren't going to read a few peer-reviewed papers and suddenly have a major change of heart.

The current round of posts blaming problems that afflict *any* listening test on just DBTs shows that biases run deep, and that some critics simply do not feel constrained by the actual facts or reason in their blind rush to preserve the status quo.
#89
Posted to rec.audio.high-end
"Audio Empire" wrote in message
On Thu, 31 Mar 2011 12:41:08 -0700, Arny Krueger wrote (in article ):

"Audio Empire" wrote in message

This type of person is often the type who participates in DBTs as well, rank laymen.

Simply not true. The DBTs I've been involved with involved experienced audiophiles, some youngsters, some who went back to the days of tubes.

So, you feel that you can speak for all DBTs?

That's not what I wrote. I feel no need to respond to made-up statements.

People like him and college students who were weaned on MP3s and ear-buds are the average "listener".

Here we go again, another set of self-serving audiophile myths. Where is the peer-reviewed paper that shows that people who listen to MP3 and personal listening devices necessarily have any deficiencies when it comes to reliably detecting audible differences?

They can listen to low-data-rate MP3s.

They could. Heck, I listen to low-bitrate files frequently because that is how most spoken word recordings are distributed. It doesn't sound lifelike or even good, but the goal is communicating information, not tickling the inner ear.

Fact is that many audible differences are easier to detect with earphones and/or headphones.

And it seems that a large majority of the younger generations DON'T CARE about these "differences" AT ALL, or they wouldn't be listening to really low-bit-rate MP3s and would insist on ripping their music at higher bit rates.

Straw man argument, because it has already been generally agreed upon that the vast majority of music listeners aren't audiophiles and never will be. OTOH, there is a rapidly emerging market for music encoded in high-bitrate compressed files, uncompressed and lossless-compressed files, and even music files with 24-bit data words and sample rates up to 192 kHz. There has been a major explosion in sales of high-priced and in some cases high-quality earphones and headphones. Traditional vendors like Sennheiser and Etymotic are bringing out new, extremely expensive, high-performance headphones and earphones. Non-traditional vendors are doing similar things in even greater volumes. If not for the young, mobile music listener, then who?

I have a number of friends with teenaged and college-aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.e., quantity instead of quality.

These are choices that they get to make. This is also just the mass market, not the already large and rapidly emerging market for high-quality mobile listening experiences. Remember that most of our parents were happy listening to AM radios when they were young, and as a rule they had no viable alternatives until the 1950s. On balance, the low and rapidly falling prices for flash memory make crushing music in order to store huge amounts of it in portable devices more nonsensical than ever.
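[Editor's aside: the quantity-versus-quality tradeoff both posters describe is simple arithmetic. A rough sketch of the capacity math, where the bitrates and the 4-minute track length are illustrative round numbers, not measurements of any particular encoder:]

```python
# Back-of-envelope capacity math: how many 4-minute tracks fit in a
# given amount of flash storage at different encoding rates.
def tracks_per_gb(kbps: float, minutes: float = 4.0, gb: float = 1.0) -> int:
    # kilobits/s -> bytes per track: kbps * 1000 / 8 bytes per second,
    # times the track length in seconds.
    bytes_per_track = kbps * 1000 / 8 * minutes * 60
    return int(gb * 1e9 // bytes_per_track)

# Low-rate MP3, typical MP3, high-rate MP3, and a rough lossless figure:
for kbps in (64, 128, 256, 900):
    print(kbps, "kbps:", tracks_per_gb(kbps), "tracks/GB")
```

The roughly 7:1 spread between a low-rate MP3 and a lossless rip is the whole "most songs in the available space" incentive under discussion.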
#90
Posted to rec.audio.high-end
On Fri, 1 Apr 2011 07:29:10 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

You asked for peer-reviewed evidence of the validity of what you call "audiophile myths". The insinuation here is that lack of same means that there aren't any because there cannot BE any, when all it really shows is that none have been done - that we are aware of.

Not at all. My point is that while the peer-reviewed support for a scientific approach to audio may not satisfy every dedicated true believer in anti-science, the peer-reviewed evidence that supports their viewpoint is non-existent.

Yes, I know that's your point. But my point is this: the evidence is not non-existent because it cannot exist; it's non-existent probably because the research has not been done, or at least hasn't been done in a manner that would lead to the results being peer-reviewed. This is an entirely different thing from what you want the answer to be.

They would like to ignore the fact that the original Clark JAES article introducing ABX was peer-reviewed.

I'm not sure that anyone would like to do that. Just because a paper on the methodology has been peer-reviewed doesn't mean that the results have.

A friend of mine likes to say "People hear what they want to hear and read what they want to read".

Very true. I'm not advocating that test methodologies which eliminate expectational and sighted bias aren't necessary; I'm just wondering if the tests we now employ satisfy the other side of the equation. IOW, we've got the bias-neutral part right, but are the results of those DBTs either accurate or reliable? One side of the equation doesn't necessarily guarantee the efficacy of the other. In fact, they have little to do with each other.

It is very clear to me that people who have invested $10,000's, perhaps $100,000's, and most of their adult lives on anti-technology like tubes, vinyl, Mpingo discs and Bedini Clarifiers, and believe that digital can't sound right because of the empty space between the samples, aren't going to read a few peer-reviewed papers and suddenly have a major change of heart.

Nor are they going to believe a bunch of DBTs that tell them that they are wrong. OTOH, I think that you are wrong (and incredibly biased) when you group tubes and vinyl in with Mpingo discs, Bedini Clarifiers and other REAL anti-technology. I can show you any number of tube amps, for example, that you couldn't tell were tubed on the basis of their sound, or for that matter differentiate from a modern SS amp in any DBT or ABX test that you'd like to name. I will give you that since modern, quality tube amps from the likes of Audio Research and VTL, etc., sound so much like a modern SS amp, it's difficult to justify putting up with the downside of tubes just to have one. There was a time when tube amps sounded much better than transistor amps, but those days are done and gone. There might be some romance associated with a set of KT-88s glowing softly in the dark while great music fills the room, but at that juncture we have reduced tubes to an electronic fireplace, and I think I'd rather have a nice, reliable SS amp and a REAL fireplace, thank you. The darkened room, filled with great music, of course, is always welcome from any source, including vinyl.

The current round of posts blaming problems that afflict *any* listening test on just DBTs shows that biases run deep, and that some critics simply do not feel constrained by the actual facts or reason in their blind rush to preserve the status quo.

Your opinions are not necessarily facts, and I've seen no DIRECT proof that these null results from DBTs actually PROVE anything. Sure, they satisfy those who believe that DBT is the final arbiter of component differences, but that, in itself, is a form of circular reasoning. A self-fulfilling prophecy, as it were.
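[Editor's aside: the question of what an ABX or DBT result "proves" is normally settled statistically rather than rhetorically. A result is compared against what pure guessing would produce; a minimal sketch of the standard exact binomial calculation, with hypothetical trial counts:]

```python
# Exact binomial test for an ABX listening session: the probability of
# scoring at least `correct` hits out of `trials` by guessing alone
# (each trial is a 50/50 call under the null hypothesis).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided probability of >= `correct` hits under pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical sessions of 16 trials: 12/16 clears the conventional
# 5% significance level (p ~ 0.038), while 10/16 does not (p ~ 0.227).
print(round(abx_p_value(12, 16), 3))
print(round(abx_p_value(10, 16), 3))
```

Note the asymmetry this implies: a significant score is evidence of an audible difference, while a null result only fails to demonstrate one at that sample size; it is not, by itself, proof of inaudibility.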
#91
Posted to rec.audio.high-end
On Fri, 1 Apr 2011 07:30:11 -0700, Arny Krueger wrote
(in article ):

"Audio Empire" wrote in message

On Thu, 31 Mar 2011 12:41:08 -0700, Arny Krueger wrote (in article ):

"Audio Empire" wrote in message

This type of person is often the type who participates in DBTs as well, rank laymen.

Simply not true. The DBTs I've been involved with involved experienced audiophiles, some youngsters, some who went back to the days of tubes.

So, you feel that you can speak for all DBTs?

That's not what I wrote. I feel no need to respond to made-up statements.

People like him and college students who were weaned on MP3s and ear-buds are the average "listener".

Here we go again, another set of self-serving audiophile myths. Where is the peer-reviewed paper that shows that people who listen to MP3 and personal listening devices necessarily have any deficiencies when it comes to reliably detecting audible differences?

They can listen to low-data-rate MP3s.

They could. Heck, I listen to low-bitrate files frequently because that is how most spoken word recordings are distributed. It doesn't sound lifelike or even good, but the goal is communicating information, not tickling the inner ear.

But that's a totally irrelevant side issue on your part, which, I believe, is designed to obfuscate the debate.

Fact is that many audible differences are easier to detect with earphones and/or headphones.

And it seems that a large majority of the younger generations DON'T CARE about these "differences" AT ALL, or they wouldn't be listening to really low-bit-rate MP3s and would insist on ripping their music at higher bit rates.

Straw man argument, because it has already been generally agreed upon that the vast majority of music listeners aren't audiophiles and never will be.

Again with the deliberate obfuscation. We are TALKING about the fact that the average listener is NOT an audiophile. That's the whole point of my bringing up the fact that most young people don't care about sound. If they did, they wouldn't be satisfied listening to low-bit-rate MP3s. When this type of "listener" is pressed into service to participate in a listening DBT, I don't wonder that they return a null result. They likely don't even understand what they are supposed to be listening FOR, and probably wouldn't recognize these differences even if they existed. THAT'S THE POINT.

OTOH, there is a rapidly emerging market for music encoded in high-bitrate compressed files, uncompressed and lossless-compressed files, and even music files with 24-bit data words and sample rates up to 192 kHz.

But again, that's NOT the discussion.

There has been a major explosion in sales of high-priced and in some cases high-quality earphones and headphones. Traditional vendors like Sennheiser and Etymotic are bringing out new, extremely expensive, high-performance headphones and earphones. Non-traditional vendors are doing similar things in even greater volumes. If not for the young, mobile music listener, then who?

You are assuming that these expensive headphones are bought by people who encode their ripped music at the lowest possible data rate (thereby expanding their iPod-like device's capacity). And that is simply not in evidence. Every audiophile I know has an iPod or similar device. They DO NOT use MP3; they use FLAC or ALAC and trade ultimate storage capacity for quality. They also tend to listen with expensive headphones, and many have outboard headphone amplifiers which accompany their iPod devices.

I have a number of friends with teenaged and college-aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.e., quantity instead of quality.

These are choices that they get to make. This is also just the mass market, not the already large and rapidly emerging market for high-quality mobile listening experiences. Remember that most of our parents were happy listening to AM radios when they were young, and as a rule they had no viable alternatives until the 1950s.

This just reinforces my point about the quality of listeners that take part in these university-level DBT studies, such as the Meyer/Moran paper that you are so fond of.

On balance, the low and rapidly falling prices for flash memory make crushing music in order to store huge amounts of it in portable devices more nonsensical than ever.

While that might be true for those of us interested in sound quality, to the average teen, larger memory means MORE low-quality music files on their players. I know kids with libraries that include thousands of "songs", far more than they will ever listen to, but to hear them tell it, that's not the point. The point is to have everything. They trade songs, buy songs, rip songs and steal songs from the internet. The game is MORE, not BETTER.
#92
Posted to rec.audio.high-end
On Apr 1, 4:59 am, Andrew Haley wrote:

Audio Empire wrote:

Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so). You can't find evidence if you don't look for it.

I think you're being grossly unfair. It's a matter of record that Arny did once believe what he calls "audiophile myths", but he wasn't satisfied with that, so he did some experiments himself. To say that his experiments weren't "serious science" because they weren't funded or sanctioned by a research institute is mere prejudice. Surely it's better to have more people doing science, not keep it confined to an ivory tower.

It's not prejudice. It's how science works. I had exactly the opposite experience. I was a hard-nosed objectivist who scoffed at the notion that a tube amp could sound better than a modern SS amp and mocked audiophiles for thinking one could get better sound than digital audio by "dragging a rock over a piece of plastic." Yep, that is what I would say. So I did some blind comparisons. Wow, was I wrong!

Neither Arny's nor my blind tests are anything other than anecdotal evidence in the eyes of real science. So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag. It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community. More junk wrongly presented as science is not a good thing in any way. If one considers the standards set by the scientific community as some sort of ivory tower, we have nothing more to discuss on the merits of evidence.
#93
Posted to rec.audio.high-end
On Apr 1, 7:29 am, "Arny Krueger" wrote:
"Audio Empire" wrote in message

You asked for peer-reviewed evidence of the validity of what you call "audiophile myths". The insinuation here is that lack of same means that there aren't any because there cannot BE any, when all it really shows is that none have been done - that we are aware of.

Not at all. My point is that while the peer-reviewed support for a scientific approach to audio may not satisfy every dedicated true believer in anti-science, the peer-reviewed evidence that supports their viewpoint is non-existent. They would like to ignore the fact that the original Clark JAES article introducing ABX was peer-reviewed.

Your points are built on prejudicial axioms about other peoples' beliefs. We already are aware of those prejudices. You give them away every time you use loaded language such as "true believers in anti-science" to describe people who merely have a different opinion about the meaning of your anecdotal evidence. The real anti-science here is the idea that your anecdotal evidence is actually scientifically valid, and the inference that the ideas you prejudicially brand as "audio myths" are invalid due to a lack of evidential support, even though you know there is no evidence at all either in support of or in conflict with those ideas. That is very anti-scientific.

A friend of mine likes to say "People hear what they want to hear and read what they want to read".

Maybe he was including you when he was saying this to you. Or did you assume you were the exception?

It is very clear to me that people who have invested $10,000's, perhaps $100,000's, and most of their adult lives on anti-technology like tubes, vinyl, Mpingo discs and Bedini Clarifiers, and believe that digital can't sound right because of the empty space between the samples, aren't going to read a few peer-reviewed papers and suddenly have a major change of heart.

This is just rhetoric built on prejudices. Since when are tubed electronics "anti-technology"? And associating things such as tubed electronics - which clearly actually work, and have been demonstrated to have objectively measurable characteristics which give them a sonic signature that many find preferable - with things like Mpingo discs is simply a logical fallacy of guilt by association. Then to make assumptions about what others, whom you have prejudicially mischaracterized, would and would not read is really ridiculous, especially given the fact that I have actually bought and read such papers based on your misrepresentations of their content! You are burning straw men left and right here.

The current round of posts blaming problems that afflict *any* listening test on just DBTs shows that biases run deep, and that some critics simply do not feel constrained by the actual facts or reason in their blind rush to preserve the status quo.

And yet no one has actually done *that.* Another straw man goes up in flames. When you have the goods - when you wave the science flag and you actually have the science to do so - there is no need to pollute the web with the ashes of so much burnt straw.
#94
Posted to rec.audio.high-end
Scott wrote:
On Apr 1, 4:59 am, Andrew Haley wrote:

Audio Empire wrote:

Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so). You can't find evidence if you don't look for it.

I think you're being grossly unfair. It's a matter of record that Arny did once believe what he calls "audiophile myths", but he wasn't satisfied with that, so he did some experiments himself. To say that his experiments weren't "serious science" because they weren't funded or sanctioned by a research institute is mere prejudice. Surely it's better to have more people doing science, not keep it confined to an ivory tower.

It's not prejudice. It's how science works. I had exactly the opposite experience. I was a hard-nosed objectivist who scoffed at the notion that a tube amp could sound better than a modern SS amp and mocked audiophiles for thinking one could get better sound than digital audio by "dragging a rock over a piece of plastic." Yep, that is what I would say. So I did some blind comparisons. Wow, was I wrong!

Right, so you're not absolutely opposed to the idea of non-scientists doing experiments.

Neither Arny's nor my blind tests are anything other than anecdotal evidence in the eyes of real science.

Think about how negative this sounds. You're implying that there is never any point to anyone who is not an official scientist doing a careful experiment. They might as well guess, because their results won't be valid anyway. Care and diligence are a waste of time.

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag. It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

Andrew.
#95
Posted to rec.audio.high-end
On Apr 1, 12:10 pm, Andrew Haley wrote:

Scott wrote: On Apr 1, 4:59 am, Andrew Haley wrote: Audio Empire wrote: Another way to put this, I think, is that while Arny believes that since there is no evidence of peer-reviewed support for what he calls "audiophile myths", it means that no evidence HAS or CAN be found supporting those propositions, while many of the rest of us take that lack of evidence to mean simply that serious science hasn't "tackled" the issue (nor are they likely to do so). You can't find evidence if you don't look for it.

I think you're being grossly unfair. It's a matter of record that Arny did once believe what he calls "audiophile myths", but he wasn't satisfied with that, so he did some experiments himself. To say that his experiments weren't "serious science" because they weren't funded or sanctioned by a research institute is mere prejudice. Surely it's better to have more people doing science, not keep it confined to an ivory tower.

It's not prejudice. It's how science works. I had exactly the opposite experience. I was a hard-nosed objectivist who scoffed at the notion that a tube amp could sound better than a modern SS amp and mocked audiophiles for thinking one could get better sound than digital audio by "dragging a rock over a piece of plastic." Yep, that is what I would say. So I did some blind comparisons. Wow, was I wrong!

Right, so you're not absolutely opposed to the idea of non-scientists doing experiments.

Of course not. I am opposed to misrepresentations of their merit in the eyes of real science, whether that misrepresentation comes from "creationist scientists", Bigfoot hunters, UFOlogists or rabid audio objectivists. And yes, you can throw in the radical audio subjectivists like the Peter Beltians, who advocate things like freezing pictures of your dog and many other things that could not possibly affect the performance of an audio system.

Neither Arny's nor my blind tests are anything other than anecdotal evidence in the eyes of real science.
Think about how negative this sounds. You're implying that there is never any point to anyone who is not an official scientist doing a careful experiment.

Not at all. Again, it's not about people doing experiments; it's about misrepresenting real science. Weekend scientists don't get a special pass that allows them to bypass the rigors of accepted scientific methodologies. Do all the experiments you want, just don't pretend it is something the actual scientific community considers to be real science.

They might as well guess, because their results won't be valid anyway. Care and diligence is a waste of time.

Look, validity means different things in different contexts. They are as valid as one wants to think they are on a personal level, just as your opinions on your favorite flavor of ice cream are as valid as you want them to be on a personal level. But scientific validity is a different thing and demands very different standards. It's the bait and switch that I take issue with.

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag.

It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

I agree with your agreement. ;-) I am going to go out on a limb and guess you would prefer that people don't peddle junk science and anecdotes as real science as well.
#96
Posted to rec.audio.high-end
"Arny Krueger" wrote in message
... "Audio Empire" wrote in message On Thu, 31 Mar 2011 12:41:08 -0700, Arny Krueger wrote (in article ):

"Audio Empire" wrote in message This type of person is often the type who participate in DBTs as well, rank laymen.

Simply not true. The DBTs I've been involved with involved experienced audiophiles, some youngsters, some who went back to the days of tubes.

So, you feel that you can speak for all DBTs?

That's not what I wrote. I feel no need to respond to made-up statements.

He's simply saying that the groups you've been involved in are not (or may not be) representative of the group tests that have been done (which often use university students, from what I've seen).

People like him and college students who were weaned on MP3s and ear-buds are the average "listener".

Here we go again, another set of self-serving audiophile myths. Where are the peer-reviewed papers that show that people who listen to MP3 and personal listening devices necessarily have any deficiencies when it comes to reliably detecting audible differences?

They can listen to low-data rate MP3s

They could. Heck, I listen to low-bitrate files frequently because that is how most spoken word recordings are distributed. It doesn't sound lifelike or even good, but the goal is communicating information, not tickling the inner ear.

I'm not sure ANYBODY listens to music for the "information content" rather than enjoyment. This would seem to negate your point.

Fact is that many audible differences are easier to detect with earphones and/or headphones.

And it seems that a large majority of the younger generations DON'T CARE about these "differences" AT ALL or they wouldn't be listening to really low-bit rate MP3s and would insist on ripping their music at higher bit rates.

Straw man argument, because it has already been generally agreed upon that the vast majority of music listeners aren't audiophiles and never will be.
OTOH, there is a rapidly emerging market for music encoded in high-bitrate compressed files, uncompressed and lossless-compressed files, and even music files with 24 bit data words and sample rates up to 192KHz. There has been a major explosion in sales of high priced and in some cases high quality earphones and headphones. Traditional vendors like Sennheiser and Etymotics are bringing out new extremely expensive high performance headphones and earphones. Non-traditional vendors are doing similar things in even greater volumes. If not for the young, mobile music listener, then who? The well-heeled audiophile who wants to be "with it"? I have a number of friends with teenaged and college aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit-rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.E. quantity instead of quality. These are choices that they get to make. This is also just the mass market, not the already large and rapidly emerging market for high quality mobile listening experiences. Remember that most of our parents were happy listening to AM radios when they were young, and as a rule they had no viable alternatives until the 1950s. On balance the low and rapidly falling prices for flash memory make crushing music in order to store huge amounts of it in portable devices more nonsensical than ever. And your point is....? |
#97
Posted to rec.audio.high-end
"Andrew Haley" wrote in message
... Scott wrote: On Apr 1, 4:59 am, Andrew Haley wrote: snip

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag.

It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

Not necessarily. If the controls that aren't there are crucial to the validity of the test, or the design of the test itself is not valid (stimulus, measurements, intervals, training, intervening technology, etc.), then partial controls are no better than none. Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at. Yet ABX is the test that Arny developed a computerized version of, and has relied on.

If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results. One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.
#98
Posted to rec.audio.high-end
On Apr 1, 4:40 pm, "Harry Lavo" wrote:

"Andrew Haley" wrote in message ... Scott wrote: On Apr 1, 4:59 am, Andrew Haley wrote: snip

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag.

It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

Not necessarily. If the controls that aren't there are crucial to the validity of the test, or the design of the test itself is not valid (stimulus, measurements, intervals, training, intervening technology, etc.)

If the controls "aren't there" then you have "none" by definition.

Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at. Yet ABX is the test that Arny developed a computerized version of, and has relied on. If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results. One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

How does ABX interfere in a way that ABC/hr does not? Neither methodology is particularly more or less like the "normal evaluative process", if there is such a singular thing. I can't go there with you, Harry. If done well, ABX should do the trick. Sure, any given ABX test may miss an audible difference that is present and not specifically being listened for. But I have to side with the DBT advocates that when used to test claims of audibility, those making the claims should already know what specifically to listen for.

ABX done right does not make audible differences go away. I think "done right" is the issue, not ABX per se.
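[Editorial note: the statistics behind "done right" can be made concrete. The sketch below assumes a 16-trial ABX session and the conventional 0.05 significance criterion; neither number comes from the posts above, and real test protocols vary.]

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# For an assumed 16-trial session, find the minimum score that clears
# the conventional 0.05 significance threshold.
threshold = next(k for k in range(17) if abx_p_value(k, 16) < 0.05)
print(threshold, round(abx_p_value(threshold, 16), 4))
```

By this arithmetic, 12 of 16 correct is the first score that rules out guessing at the 0.05 level, which is why short, casual sessions are so easy to read as false nulls.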
#99
Posted to rec.audio.high-end
"Harry Lavo" wrote in message
Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at.

You seem to be very confused, Harry. Of course the ABX test has been shown to be valid in evaluating differences in sound quality related to the reproduction of music and voice. The Oohashi test has never been confirmed and was only published in a journal that makes the AESJ look like a major bastion of Science. There's no controversy between ABC/hr and ABX. They are two different tests with two different purposes. Many people use both, depending on the question at hand.

Yet ABX is the test that Arny developed a computerized version of, and has relied on.

What you can't say truthfully, Harry, is all that matters, which is whether I rely on ABX to the exclusion of all others, which everybody knows is false. It's all about the right tool for the job. I also use and recognize other double-blind testing methodologies, as they fit the work at hand.

If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results.

Sighted evaluations would be the world's best example of that. There is only speculation and no peer-reviewed scientific opinion that ABX interferes with the normal evaluative process any more than any of the alternatives. Of course, doing an evaluation is not identically the same as just listening to music for pleasure. But nobody has figured out how to reduce that difference to zero.

One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

Exactly. And that is exactly the path we followed while developing ABX. That you would mention that concept and the Oohashi test, with all of the technical gyrations it imposes on the normal listening experience, in the same post is a true wonder!
#100
Posted to rec.audio.high-end
"Audio Empire" wrote in message
We are TALKING about the fact that the average listener is NOT an audiophile. That's the whole point of my bringing up the fact that most young people don't care about sound. If they did, they wouldn't be satisfied listening to low bit-rate MP3s.

I agree.

When this type of "listener" is pressed into service to participate in a listening DBT, I don't wonder that they return a null result.

Who is silly enough to do that?

They likely don't even understand what they are supposed to be listening FOR, and probably wouldn't recognize these differences even if they existed.

Who actually wastes their time doing that?

THAT'S THE POINT.

My point is that we never used people like that in our ABX tests, and AFAIK neither does anybody else if sensitive results are the goal. Looks like a straw man argument to me!

There has been a major explosion in sales of high-priced and in some cases high-quality earphones and headphones. Traditional vendors like Sennheiser and Etymotics are bringing out new extremely expensive high-performance headphones and earphones. Non-traditional vendors are doing similar things in even greater volumes. If not for the young, mobile music listener, then who?

You are assuming that these expensive headphones are bought by people who encode their ripped music at the lowest possible data rate (thereby expanding their iPod-like device's capacity).

Not at all. I'm saying that people who go to all that trouble and expense are often far more demanding of their program material. The fact of the matter is that even a minimal 2 GB Sansa Clip (a device with 24 GB max capacity today) can hold enough lossless FLAC files in 2 GB to be a very enjoyable listening tool.

And that is simply not in evidence. Every audiophile I know has an iPod or similar device. They DO NOT use MP3; they use FLAC or ALAC and trade ultimate storage capacity for quality. They also tend to listen with expensive headphones, and many have outboard headphone amplifiers which accompany their iPod devices.

Then we agree.

I have a number of friends with teenaged and college-aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit-rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.e., quantity instead of quality.

These are choices that they get to make. This is also just the mass market, not the already large and rapidly emerging market for high-quality mobile listening experiences. Remember that most of our parents were happy listening to AM radios when they were young, and as a rule they had no viable alternatives until the 1950s.

This just reinforces my point about the quality of listeners that take part in these university-level DBT studies such as the Meyer/Moran paper that you are so fond of.

The Meyer/Moran tests were done "With the help of about 60 members of the Boston Audio Society and many other interested parties..." (quote from page one of the Meyer JAES peer-reviewed paper). Your claim is totally falsified. BTW, the rest of the sentence I quoted said: "a series of double-blind (A/B/X) listening tests were held over a period of about a year". Thus we have recent confirmation of the validity of ABX testing in a peer-reviewed paper.
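[Editorial note: the "lossless on a 2 GB player" claim is easy to sanity-check with rough arithmetic. The figures below are assumptions, not from the thread: CD audio streams at 1411.2 kbps, and FLAC typically compresses it to roughly 55-65% of that.]

```python
# Rough capacity check for the "lossless on a 2 GB player" claim above.
CD_KBPS = 1411.2                   # 44.1 kHz * 16 bit * 2 channels
FLAC_RATIO = 0.6                   # assumed average FLAC compression ratio
capacity_bits = 2 * 1000**3 * 8    # 2 GB (decimal gigabytes) in bits

seconds = capacity_bits / (CD_KBPS * 1000 * FLAC_RATIO)
print(f"~{seconds / 3600:.1f} hours of CD-quality FLAC in 2 GB")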
#101
Posted to rec.audio.high-end
On Sat, 2 Apr 2011 06:29:48 -0700, Arny Krueger wrote
(in article ): "Audio Empire" wrote in message We are TALKING about the fact that the average listener is NOT an audiophile. That's the whole point of my bringing up the fact that most young people don't care about sound. If they did, they wouldn't be satisfied listening to low bit-rate MP3s. I agree. When this type of "listener" is pressed into service to participate in a listening DBT, I don't wonder that they return a null result. Who is silly enough to do that? They likely don't even understand what they are supposed to be listening FOR, and probably wouldn't recognize these differences even if they existed. Who actually wastes their time doing that? THAT'S THE POINT. My point is that we never used people like that in our ABX tests, and AFAIK neither does anybody else if sensitive results are the goal. Looks like a straw man argument to me! Tell that to Meyer/Moran. Many of their participants were just university students (although most were Boston Audio Society members, and that's to the good). The paper made no differentiation between experienced listeners and non-experienced except to say that in their tests, it didn't seem to matter. There has been a major explosion in sales of high priced and in some cases high quality earphones and headphones. Traditional vendors like Sennheiser and Etymotics are bringing out new extremely expensive high performance headphones and earphones. Non-traditional vendors are doing similar things in even greater volumes. If not for the young, mobile music listener, then who? You are assuming that these expensive headphones are bought by people who encode their ripped music at the lowest possible data rate (thereby expanding their iPod-like device's capacity). Not at all. I'm saying that people who go to all that trouble and expense are often far more demanding of their program material. 
The fact of the matter is that even a minimal 2 GB Sansa Clip (a device with 24 GB max capacity today) can hold enough lossless FLAC files in 2 GB to be a very enjoyable listening tool.

And that is simply not in evidence. Every audiophile I know has an iPod or similar device. They DO NOT use MP3; they use FLAC or ALAC and trade ultimate storage capacity for quality. They also tend to listen with expensive headphones, and many have outboard headphone amplifiers which accompany their iPod devices.

Then we agree.

Only if you concede that the average iPod-toting teen wouldn't know decent sound if it came up and bit them in the arse!

I have a number of friends with teenaged and college-aged kids with iPod-like devices. They listen to them constantly. When I ask them what bit-rate they use, the answer is always the same: "The one that allows me to put the most songs in the available space". I.e., quantity instead of quality.

These are choices that they get to make. This is also just the mass market, not the already large and rapidly emerging market for high-quality mobile listening experiences. Remember that most of our parents were happy listening to AM radios when they were young, and as a rule they had no viable alternatives until the 1950s.

This just reinforces my point about the quality of listeners that take part in these university-level DBT studies such as the Meyer/Moran paper that you are so fond of.

The Meyer/Moran tests were done "With the help of about 60 members of the Boston Audio Society and many other interested parties..." (quote from page one of the Meyer JAES peer-reviewed paper). Your claim is totally falsified.

The paper also says that they used over one hundred participants, "of widely varying ages, activities, and levels of musical and audio experience."

BTW, the rest of the sentence I quoted said: "a series of double-blind (A/B/X) listening tests were held over a period of about a year"

Yep.
Thus we have recent confirmation of the validity of ABX testing in a peer-reviewed paper. I didn't see the peer-review info noted in that paper. |
#102
Posted to rec.audio.high-end
On Apr 1, 12:02 pm, Audio Empire wrote:

Your opinions are not necessarily facts, and I've seen no DIRECT proof that these null results from DBTs actually PROVE anything. Sure they satisfy those who believe that DBT is the final arbiter of component differences, but that, in itself, is a form of circular reasoning. A self-fulfilling prophecy as it were.

If DBTs don't prove anything, why are they accepted by peer-reviewed psychoacoustics journals? Could it be that the real scientists have a different standard for what constitutes proof than you do? And whose standard should we trust, in that case?

bob
#103
Posted to rec.audio.high-end
"Scott" wrote in message
Not at all. Again it's not about people doing experiments it's about misrepresenting real science. Who does that? Weekend scientists don't get a special pass that allows them to bypass the rigors of accepted scientific methodologies. Who does that? Do all the experiments you want just don't pretend it is something the actual scientific community considers to be real science. Remember that ABX and its procedures were fully described in a peer-reviewed paper that was printed in the JAES. |
#104
Posted to rec.audio.high-end
"Scott" wrote in message
How does ABX interfere in a way that ABC/hr does not?

Good point, Scott. But let's step back even further and see the big picture. How does ABX interfere in a way that any test that demands the listener express an opinion does not?

Neither methodology is particularly more or less like the "normal evaluative process" if there is such a singular thing.

Additional good points. Is an ABX test less intrusive than listening in a stereo salon with a commissioned salesman hovering?

I can't go there with you Harry. If done well ABX should do the trick. Sure any given ABX test may miss an audible difference that is present and not specifically being listened for. But I have to side with the DBT advocates that when used to test claims of audibility those making the claims should already know what specifically to listen for. ABX done right does not make audible differences go away. I think "done right" is the issue not ABX per se.

What makes this all a giant joke is the fact that so many people take sighted, non-level-matched, non-time-synched listening evaluations as their definitive standard for evaluating audio gear. If that isn't invalid, then is anything invalid?
#105
Posted to rec.audio.high-end
On Fri, 1 Apr 2011 16:40:13 -0700, Harry Lavo wrote
(in article ):

"Andrew Haley" wrote in message ... Scott wrote: On Apr 1, 4:59 am, Andrew Haley wrote: snip

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag.

It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

Not necessarily. If the controls that aren't there are crucial to the validity of the test, or the design of the test itself is not valid (stimulus, measurements, intervals, training, intervening technology, etc.) Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at. Yet ABX is the test that Arny developed a computerized version of, and has relied on. If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results. One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

Well put. These are some of the things that bother me about the body of conclusions that many of these tests produce. As I have indicated before, I have participated in many DBT tests where we have worked hard to set up correctly, with level matching to less than a quarter of a dB both electrical and acoustical, set switch times, long samples, the switch operator in another room, all indications of a switch taking place masked (input lights, etc.), the AB box (where used) in an insulation-filled box so we can't hear the relays, etc., and we have returned statistically positive results for amps and DACs.
I have also been involved in DBTs where null results have been returned. In those tests where a positive result occurred, I found the differences to be so trivial that only a very anal retentive audiophile could possibly not be happy with any of the units under test! While they all sounded a little different in some respect, they all sounded good. The only time we got a gross difference was when, for fun, we pulled out our host's old Dynaco ST-120 and ran it against a new, and very expensive Audio Research Hybrid HD220 amp. The results made us all laugh. The ST-120 sounded dreadful while the AR was very neutral sounding. |
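[Editorial note: the quarter-dB matching tolerance mentioned above corresponds to a surprisingly small voltage difference. The 0.25 dB figure comes from the post; the conversion below is just the standard 20·log10 voltage-ratio definition.]

```python
from math import log10

def db_difference(v1: float, v2: float) -> float:
    """Level difference in dB between two voltages (20 * log10 of the ratio)."""
    return 20 * log10(v1 / v2)

# Invert the definition: a 0.25 dB tolerance as a voltage ratio.
ratio = 10 ** (0.25 / 20)
print(f"0.25 dB = voltage ratio {ratio:.4f} ({(ratio - 1) * 100:.1f}% difference)")
```

That works out to about a 2.9% voltage difference, which is why matching to this tolerance requires a voltmeter rather than ear-balling the volume controls.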
#106
Posted to rec.audio.high-end
On Apr 2, 9:28 am, "Arny Krueger" wrote:

"Harry Lavo" wrote in message

One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

Exactly. And that is exactly the path we followed while developing ABX. That you would mention that concept and the Oohashi test and all of the technical gyrations that it imposes on the normal listening experience in the same post, is a true wonder!

Yeah, the standard seems to be that a negative ABX test conducted in someone's living room is too unfamiliar to be reliable. But a positive listening test conducted in an MRI tube, well, that's the gold standard!

bob
#107
Posted to rec.audio.high-end
On Apr 2, 12:25 pm, bob wrote:

On Apr 1, 12:02 pm, Audio Empire wrote:

Your opinions are not necessarily facts, and I've seen no DIRECT proof that these null results from DBTs actually PROVE anything. Sure they satisfy those who believe that DBT is the final arbiter of component differences, but that, in itself, is a form of circular reasoning. A self-fulfilling prophecy as it were.

If DBTs don't prove anything, why are they accepted by peer-reviewed psychoacoustics journals? Could it be that the real scientists have a different standard for what constitutes proof than you do? And whose standard should we trust, in that case?

You are right. We should trust the scientists and the peer review process for giving us the highest standards for proof. One should also note that one paper, one reported set of tests, etc. does not constitute a meaningful *body* of evidence.
#108
Posted to rec.audio.high-end
On Sat, 2 Apr 2011 12:25:27 -0700, bob wrote
(in article ):

On Apr 1, 12:02 pm, Audio Empire wrote:

Your opinions are not necessarily facts, and I've seen no DIRECT proof that these null results from DBTs actually PROVE anything. Sure they satisfy those who believe that DBT is the final arbiter of component differences, but that, in itself, is a form of circular reasoning. A self-fulfilling prophecy as it were.

If DBTs don't prove anything, why are they accepted by peer-reviewed psychoacoustics journals?

Are they? Where, then, are these peer reviews? And do psychoacoustic journals test audio gear?

Could it be that the real scientists have a different standard for what constitutes proof than you do?

I doubt it, because certainly Arny has not satisfied my standards for proof yet. Remember, I'm not anti-DBT, I just have a few niggling doubts about its efficacy for testing audio equipment.

And whose standard should we trust, in that case?

Only those who prove their case beyond a reasonable doubt.
#109
Posted to rec.audio.high-end
On 26/03/2011 3:24 PM, Audio Empire wrote:
The hardest part was replacing the multi-section electrolytic capacitor in the power supply (these are no longer available) I fitted two brand-new multi-section electrolytic capacitors into a vintage valve amplifier just a few weeks ago. They are still produced in reasonable variety for the guitar amplifier market. |
#110
Posted to rec.audio.high-end
On Apr 2, 5:30 pm, Audio Empire wrote:

On Sat, 2 Apr 2011 12:25:27 -0700, bob wrote (in article ):

If DBTs don't prove anything, why are they accepted by peer-reviewed psychoacoustics journals?

Are they? Where, then, are these peer reviews?

In peer-reviewed journals. Duh. You can find others, if you care to look. But even a single accepted article suffices to prove that they are accepted. What you won't find in any psychoacoustics journal is any comparative listening test that ISN'T double-blind.

And do psychoacoustic journals test audio gear?

Of course not. As I said earlier in this thread, scientific journals don't waste space on old news. And the fact that several categories of audio gear are audibly transparent is very old news in the psychoacoustics field.

Could it be that the real scientists have a different standard for what constitutes proof than you do?

I doubt it, because certainly Arny has not satisfied my standards for proof yet.

Oh yeah, that follows logically. ;-)

Remember, I'm not anti-DBT, I just have a few niggling doubts about its efficacy for testing audio equipment.

What you call niggling doubts, I call pseudoscientific rationalization. It's like saying, "I agree that naturally occurring carbon dioxide traps heat in the lower atmosphere, but I have a few niggling doubts about whether man-made carbon dioxide does so." You're grasping at straws.

And whose standard should we trust, in that case?

Only those who prove their case beyond a reasonable doubt

This from a man who can't present even one iota of plausibly scientific evidence in favor of his position.

bob
#111
Posted to rec.audio.high-end
On Apr 2, 4:38 pm, Scott wrote:

You are right. We should trust the scientists and the peer review process for giving us the highest standards for proof. One should also note that one paper, one reported set of tests, etc. does not constitute a meaningful *body* of evidence.

Nor did anyone claim it was. That article was cited as an example of the evidence, not its totality. In addition, its publication demonstrates, contrary to assertions made here, that ABX and similar tests ARE recognized as valid by the people who actually understand the underlying science. And, to beat a dead horse, where is there even one single published listening test that supports the other side of this "debate"? Nowhere, mon frere.

bob
#112
Posted to rec.audio.high-end
On Apr 1, 7:40 pm, "Harry Lavo" wrote:

Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at. Yet ABX is the test that Arny developed a computerized version of, and has relied on.

This is a good example of subjectivists' penchant for inventing science. (There have been plenty of others in this thread.) Harry takes it upon himself to declare something to be true--that our hearing perception is somehow different for music than for other sounds--without a shred of evidence. In fact, DBTs have been accepted as valid by the field of psychoacoustics (of which Harry is not a part and in which he has no training), to the point where no peer-reviewed journal will accept reports of listening tests that are NOT double-blind. The claim that human hearing perception is more acute when listening to music is not only unproven but false. Music, because of its dynamic changes and the phenomenon of masking, makes for a very poor medium for objective listening tests of any kind.

If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results. One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

Again, Harry takes it upon himself to invent science. There is no evidence that ABX tests are less sensitive to anything than other double-blind tests. Quite the contrary--it's pretty easy to design a test that's less sensitive than an ABX test.

bob
#113
Posted to rec.audio.high-end
"Scott" wrote in message ...

On Apr 1, 4:40 pm, "Harry Lavo" wrote:

"Andrew Haley" wrote in message ...

Scott wrote:

On Apr 1, 4:59 am, Andrew Haley wrote:

snip

So it is not unfair, much less grossly unfair, to make this characterization when Arny pulls out the science flag. It's only better to have more people doing "science" so long as they are doing it up to the standards set by the scientific community.

There, I agree totally. What matters is how well the experiment is done. But it's a matter of degree: some experimental controls are surely better than none, even if the experiment isn't perfect.

Not necessarily. If the controls that aren't there are crucial to the validity of the test, or the design of the test itself is not valid (stimulus, measurements, intervals, training, intervening technology, etc.)

If the controls "aren't there" then you have "none" by definition.

No, then the controls are inadequate. There is a difference. Sometimes "inadequate" controls can slip by the designer, as can validity-destroying intervening variables. That's why careful peer review is important.

Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at. Yet ABX is the test that Arny developed a computerized version of, and has relied on. If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results. One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

How does ABX interfere in a way that ABC/hr does not? Neither methodology is particularly more or less like the "normal evaluative process" if there is such a singular thing. I can't go there with you Harry. If done well, ABX should do the trick. Sure, any given ABX test may miss an audible difference that is present and not specifically being listened for. But I have to side with the DBT advocates that when used to test claims of audibility, those making the claims should already know what specifically to listen for. ABX done right does not make audible differences go away. I think "done right" is the issue, not ABX per se.

I don't like either, although ABC/hr takes a timid step in the direction of musical evaluation.
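Scoring an ABX run of the kind debated above comes down to binomial arithmetic: count correct identifications of X and ask how likely that score would be under pure guessing. A minimal sketch follows; the 16-trial run and the two scores are illustrative assumptions, not results reported anywhere in this thread:

```python
# Sketch of ABX scoring: one-sided binomial p-value, i.e. the probability
# of getting at least `correct` right answers out of `trials` by chance.
from math import comb

def abx_p_value(correct, trials):
    """P(at least `correct` successes in `trials` fair coin flips)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative scores on an assumed 16-trial run:
print(round(abx_p_value(12, 16), 4))  # 0.0384 -- below the usual 0.05 criterion
print(round(abx_p_value(8, 16), 4))   # 0.5982 -- indistinguishable from guessing
```

The p-value only says how surprising the score is under guessing; whether a null result means "inaudible" or "not heard this time" is exactly the argument the posters are having.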
#114
Posted to rec.audio.high-end
On Apr 2, 1:35 pm, "Arny Krueger" wrote:

"Scott" wrote in message

Not at all. Again it's not about people doing experiments, it's about misrepresenting real science.

Who does that?

Since you asked, I think you do. Weekend scientists don't get a special pass that allows them to bypass the rigors of accepted scientific methodologies.

Who does that?

*does* what? I didn't say anybody *does* anything in the above quote.

Do all the experiments you want, just don't pretend it is something the actual scientific community considers to be real science.

Remember that ABX and its procedures were fully described in a peer-reviewed paper that was printed in the JAES.

Yep. I have it. Remember when I asked for any peer-reviewed papers with results of such tests that show amplifiers sound the same? The JAES clearly shows that it would publish such papers should they stand up to scrutiny, having published the paper you are referencing above, which cites the need for such tests in audio.
#115
Posted to rec.audio.high-end
On Apr 2, 1:38 pm, "Arny Krueger" wrote:

"Scott" wrote in message

How does ABX interfere in a way that ABC/hr does not?

Good point, Scott. But let's step back even further and see the big picture. How does ABX interfere in a way that any test that demands the listener express an opinion does not?

I don't think it does. Nor does ABC/hr.

Neither methodology is particularly more or less like the "normal evaluative process" if there is such a singular thing.

Additional good points. Is an ABX test less intrusive than listening in a stereo salon with a commissioned salesman hovering?

I think the intrusiveness of ABX is simply a function of the physical imposition of such a test. If we are talking an ABX box and amps, this is trivial. If we are talking about other things it can be cumbersome. Try doing an ABX test of power conditioners, for example. Not a simple test to design or execute.

I can't go there with you Harry. If done well, ABX should do the trick. Sure, any given ABX test may miss an audible difference that is present and not specifically being listened for. But I have to side with the DBT advocates that when used to test claims of audibility, those making the claims should already know what specifically to listen for. ABX done right does not make audible differences go away. I think "done right" is the issue, not ABX per se.

What makes this all a giant joke is the fact that so many people take sighted, non-level-matched, non-time-synched listening evaluations as their definitive standard for evaluating audio gear. If that isn't invalid, then is anything invalid?

Personal evaluation only requires personal validation. The last set of blind comparisons I did (not ABX, since they were preference comparisons and there was NO question of sameness) was between several *performances* of Rachmaninoff's 2nd piano concerto. It was an arduous task, to say the least. You really can't time-sync, nor do you want to. The pieces have to be heard in sections and as a whole. Level matching is impossible, so we level "optimized" for each version.

As different and as recognizable as one would expect the different interpretations to be, the blind comparisons were really an eye, or ear, opener. A lot of the presumptions about the artists' technical and artistic talents were exposed as questionable in these blind comparisons. But it was a lot of work. Luckily it was also a lot of fun. It was quite a learning experience in regards to the concerto itself and a learning experience in my personal tastes. One of the lessons was that despite the obvious and, in many cases, recognizable differences between these performances, the bias controls made a significant impact on the results and preferences formed.
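The level-matching arithmetic that runs through this thread can be sketched briefly: compute each signal's RMS level, express the mismatch in dB, and apply a corrective gain. The 440 Hz tone, 48 kHz rate, and 0.9 gain factor below are assumptions for illustration, not measurements from any test described here:

```python
# Sketch of RMS level matching between two signal chains.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_difference_db(a, b):
    """Level of signal `a` relative to signal `b`, in dB."""
    return 20 * math.log10(rms(a) / rms(b))

rate = 48000  # assumed sample rate
ref = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
dut = [0.9 * s for s in ref]  # second chain assumed to run 0.9x in voltage

diff = level_difference_db(dut, ref)
print(round(diff, 3))  # -0.915 dB mismatch
matched = [s * 10 ** (-diff / 20) for s in dut]
print(abs(level_difference_db(matched, ref)) < 0.25)  # True: within a quarter dB
```

A 0.9x voltage error is nearly a full dB, which listeners can mistake for a quality difference; this is why the thread keeps returning to level matching as a precondition for a fair comparison.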
#116
Posted to rec.audio.high-end
On Apr 2, 6:49 pm, bob wrote:

On Apr 2, 5:30 pm, Audio Empire wrote:

On Sat, 2 Apr 2011 12:25:27 -0700, bob wrote (in article ):

If DBTs don't prove anything, why are they accepted by peer-reviewed psychoacoustics journals?

Are they? Where, then, are these peer reviews?

In peer-reviewed journals. Duh. You can find others, if you care to look. But even a single accepted article suffices to prove that they are accepted. What you won't find in any psychoacoustics journal is any comparative listening test that ISN'T double-blind.

Actually you will find a few of those here and there. Sometimes bias just isn't an issue.

And do psychoacoustic journals test audio gear?

Of course not. As I said earlier in this thread, scientific journals don't waste space on old news. And the fact that several categories of audio gear are audibly transparent is very old news in the psychoacoustics field.

Well, OK... if the reason is that it's been covered in the past so extensively that it is old news/established conclusions based on a substantial body of evidence, that would explain why it isn't being covered *now*. But audio is a relatively new technology in the grand scheme of things, so there must have been a time when it wasn't old news. So what about the peer-reviewed research that was done back when it was news, not old news? Can you cite the old news/body of peer-reviewed research from the past that supports your assertions of transparency?
#117
Posted to rec.audio.high-end
"Audio Empire" wrote in message

On Sat, 2 Apr 2011 06:29:48 -0700, Arny Krueger wrote (in article ):

"Audio Empire" wrote in message

This just reinforces my point about the quality of listeners that take part in these university-level DBT studies such as the Meyer/Moran paper that you are so fond of.

The Meyer/Moran tests were done "With the help of about 60 members of the Boston Audio Society and many other interested parties..." (The above is a quote from page one of the Meyer JAES peer-reviewed paper.) Your claim is totally falsified.

The paper also says that they used over one hundred participants, "of widely varying ages, activities, and levels of musical and audio experience."

Thank you for presenting more evidence that is contrary to your previous statements about the listening panels being composed of just university students. While there may have been *some* university students in the listening panels, it is abundantly clear that the listeners were people "of widely varying ages, activities, and levels of musical and audio experience."

BTW the rest of the sentence I quoted said: "a series of double-blind (A/B/X) listening tests were held over a period of about a year"

Yep. Thus we have recent confirmation of the validity of ABX testing in a peer-reviewed paper.

I didn't see the peer-review info noted in that paper.

I'm sorry that you are so unfamiliar with the protocols that are used to qualify papers that are published in the JAES.
#118
Posted to rec.audio.high-end
On Apr 3, 10:33 am, Scott wrote:

Well, OK... if the reason is that it's been covered in the past so extensively that it is old news/established conclusions based on a substantial body of evidence, that would explain why it isn't being covered *now*. But audio is a relatively new technology in the grand scheme of things, so there must have been a time when it wasn't old news. So what about the peer-reviewed research that was done back when it was news, not old news? Can you cite the old news/body of peer-reviewed research from the past that supports your assertions of transparency?

The invention of high-fidelity home audio reproduction was not a revolutionary event in the field of psychoacoustics. It's not a study of equipment; it's a study of human perception. And human perception did not suddenly change when Avery Fisher started making amps. So, no, the issue of the audibility of consumer audio products was never of great interest to the field.

There's also a fallacy at work here about the centrality of peer-reviewed journals. The vast majority of what we know in any academic field never appeared in a peer-reviewed journal. In most fields other than medicine, peer review was not really formalized until well into the 20th century. Even since then, a lot of good, hard science never makes it into one of the few top journals in any given field. And just because something makes it into a peer-reviewed journal doesn't make it right.

A better picture of the state of knowledge in the field can be found in textbooks, which are not only peer-reviewed but must also stand up in the marketplace. You aren't going to sell many textbooks if your colleagues think you got a lot of stuff wrong, after all. I know of only one psychoacoustics textbook that discusses audio gear directly. I'll bet you can guess what it says. :-)

As I said in an earlier post, the real scientific case here rests on the well-documented limits of human hearing perception, mapped against the measured performance of audio gear. The DBTs that have been done, either by scientists or amateurs, serve largely to confirm that science.

bob
#119
Posted to rec.audio.high-end
On Sat, 2 Apr 2011 17:49:24 -0700, Esmond Pitt wrote (in article ):

On 26/03/2011 3:24 PM, Audio Empire wrote:

The hardest part was replacing the multi-section electrolytic capacitor in the power supply (these are no longer available)

I fitted two brand-new multi-section electrolytic capacitors into a vintage valve amplifier just a few weeks ago. They are still produced in reasonable variety for the guitar amplifier market.

Hmmm. I've tried to restore a number of pieces of vintage gear, including a Citation One preamp. I was told by everybody that these multi-section caps aren't available any more. Do you have a source?
#120
Posted to rec.audio.high-end
"Audio Empire" wrote in message

On Fri, 1 Apr 2011 16:40:13 -0700, Harry Lavo wrote (in article ):

Conventional ABX'ng has never been shown to be valid in evaluating MUSIC differences that other approaches (the aforementioned Oohashi test) and even the ABC/hr test have proven better at.

I find it ironic that Harry continues to idolize the Oohashi tests, when in fact they are among the listening tests most different from "just listening to music" of any that I know of. ABX is not about hooking wires up to people's heads or putting them into large-scale diagnostic machines that make loud clanking sounds when they run.

Yet ABX is the test that Arny developed a computerized version of, and has relied on.

Yes, I developed ABX, but no, I don't rely on it exclusively.

If the construct of the test itself interferes with the normal evaluative process, you can almost be guaranteed that it will not produce valid results.

This sentence is ludicrous coming from a proponent of highly mechanistic tests such as those used by Oohashi.

One of the principles of testing in any field of human endeavor is to try to emulate as much as possible the conventional context of the variable under test.

That's what ABX does. Most of the ABX tests that we did in the early days were done using proponents of the audible difference, using the proponents' home systems.

Well put.

No, straw man.

These are some of the things that bother me about the body of conclusions that many of these tests produce.

We're aware of that. The real problem is that ABX tests don't support your cherished beliefs about audio, such as the audible performance of certain power amps for which you have *never* provided any technical support. Ditto for your cherished beliefs about high sample rates and magic DACs.

As I have indicated before, I have participated in many DBT tests where we have worked hard to set up correctly, with level matching to less than a quarter of a dB both electrical and acoustical, set switch times, long samples,

Well, there you go. It is well known that long samples are an enemy of sensitive results.

the switch operator in another room, all indications of a switch taking place masked (input lights, etc.), the AB box (where used) in an insulation-filled box so we can't hear the relays, etc., and we have returned statistically positive results for amps and DACs. I have also been involved in DBTs where null results have been returned.

But you didn't say that the samples were time-synched within a few milliseconds. I can ace any ABX test where the music is not accurately time-synched, even if the equipment being compared is in fact the very same equipment.

In those tests where a positive result occurred, I found the differences to be so trivial that only a very anal-retentive audiophile could possibly not be happy with any of the units under test! While they all sounded a little different in some respect, they all sounded good. The only time we got a gross difference was when, for fun, we pulled out our host's old Dynaco ST-120 and ran it against a new, and very expensive, Audio Research Hybrid HD220 amp. The results made us all laugh. The ST-120 sounded dreadful while the AR was very neutral sounding.

Obviously the ST-120 was broken, and you have no technical tests to confirm that it wasn't. If you ever did proper bench tests you'd know that the audiophile myth about this amplifier is vastly overstated and subject to immense hyperbole.
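The time-sync concern raised in this exchange (samples aligned within a few milliseconds) can be illustrated with a brute-force cross-correlation: find the sample offset that best aligns two clips, then express it in milliseconds. The noise signal, 96-sample offset, and 48 kHz rate below are assumptions for illustration:

```python
# Sketch of aligning two clips by exhaustive cross-correlation search.
import math
import random

def best_offset(a, b, max_lag):
    """Lag (in samples) of `b` relative to `a` that maximizes correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

rate = 48000  # assumed sample rate
random.seed(0)
src = [random.uniform(-1.0, 1.0) for _ in range(2000)]
delayed = [0.0] * 96 + src  # same clip, assumed delayed by 96 samples

lag = best_offset(src, delayed, 200)
print(lag, 1000 * lag / rate)  # 96 samples = 2.0 ms
```

Even a 2 ms slip is 96 samples at 48 kHz, which is the kind of extraneous cue the post warns a listener can exploit to "ace" a comparison of identical equipment.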