#281
Comments regarding: Cables, Hearing, Stuff!!
"The amps sounded different and consistently so, in a way that I am
completely incapable of producing. I would have had to create and keep track of 7 different sonic signaures, which is preposterous. Your claim to be an exception to listening tests showing results near the level of guessing first need to be established. I claim no 'exception' or special ability. I take my time and listen carefully, and I learned over the years how components differ. You'll not 'argue' me out of my senses." Your claim to hear anything in the above context must first be valididated to see if your reports exclude you from the tests showing a level similar to guessing. That you make such claims also makes a claim of exception because they contrast with the tests. I'm not concerned with your mental state after presentation with the test results, only in your claim to be an exception to them. Do the test and you remain free to accept them or not. Only the test will confirm if continued interest in your reports merit further interest. Your senses are not at issue, we must conclude they are of the common sort which produce the test results, or otherwise demonstrated by testing. We must conclude that your senses are subject to the same perception process that produces all manner of end states which have no physical reality, to which we all are subject. |
#282
Comments regarding: Cables, Hearing, Stuff!!
"normanstrong" wrote in message
... As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.

No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

There's almost no chance of any blind cable test giving the same results as a sighted one. What is at issue here is what conclusions can be drawn from that fact. My guess is that Harry will claim that such results show that blind testing is useless, since it gives null results. On the other hand, null results are exactly what I would expect, and what all blind cable tests have produced--so far. It would be a lot more fun to compare 2 speakers whose differences are great enough to give a reasonable expectation of interesting results when tested blind. I would suggest speakers of about the same size and type of design, but wildly different prices and presumed quality. The comparison should be done blind first, then sighted. Evaluations should be written, using language understandable by the public at large, and with no communication between different listeners. Along with the evaluation, there should be an opportunity to guess the MSRP; usually good for a laugh. By eliminating quick switching we make the test simpler to run and more satisfactory to the subjective audiophiles in the group.

Thanks for your support trying to resolve the issue, Norm. Problem with speakers is...almost nobody (even objectivists) would argue that the blind tests will give null. Cables are the reverse: there is no large disagreement between the camps there, since even the minority of subjectivists who feel cables may have a sound acknowledge that it is sometimes difficult to hear. We need to test something where there is a fairly clear difference between the camps...that's why I nominated inexpensive CD players, or a SACD versus CD test. Most subjectivists feel there are audible differences in these comparisons (at least among some CD players) and most objectivists (based on the sample on this forum) feel that there are no differences in these cases, given identical source material (difficult but not impossible to find for the SACD vs. CD comparison).
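A note on the arithmetic behind a two-answer blind test like the preference test discussed above: with only two possible responses per trial, chance performance is easy to quantify. A minimal sketch in Python (the 16-trial run length mentioned later in the thread is used here; everything else is illustrative):

    # One-sided probability of k or more correct calls in n blind trials,
    # assuming independent trials and p = 0.5 under the null hypothesis
    # (i.e., the listener is guessing).
    from math import comb

    def binomial_p_value(k: int, n: int, p: float = 0.5) -> float:
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(binomial_p_value(12, 16))  # ~0.038 -- meets the usual 0.05 criterion
    print(binomial_p_value(11, 16))  # ~0.105 -- does not

This is why a null result only has statistical meaning over a stated number of trials: 11 of 16 correct sounds impressive but is still compatible with guessing.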
#283
Comments regarding: Cables, Hearing, Stuff!!
"Steven Sullivan" wrote in message
... Harry Lavo wrote: Then you haven't been paying attention. Here is what I have said (repeatedly) in a nutshell. 1) The main problem I have with double-blind is its tremendous impracticality in actual use in the home. 2) The main problem I have with Tom's DBT a-b and a-b-x tests is that they force the ear-brain into a short-term comparative mode, versus the long-term evaluative mode used by audiophiles (listen at several times under several conditions to each unit, sometimes compare rapidly to listen to a specific effect, then go back to evaluative listening, etc.). I posit that this allows the right brain as well as the left brain to "weigh in". I believe DBT comparisons are okay *if you know what specific sonic artifact you are listening for*. Not for open-ended testing where you don't know going in what you are listening for. This, I posit, is when confusion sets in and all you can hear are obvious volume or frequency response differences.

But this purely hypothetical 'problem', which has no *positive* evidence in its support, is in any case of NO CONSEQUENCE if one, such as yourself, has *already* identified two components as being different, under *your* preferred, sighted, comparative mode -- which, as you say, is the *typical* comparative mode for audiophiles. In this case, one has already identified, and described to oneself, the characteristic 'sound' of each component. One 'knows' what to listen for. *All* that is required at this point, therefore, is to present the two components under blinded conditions, to that person. If they have 'memorized' a real difference, then there should be no problem whatever in identifying it under such conditions. If you insist that the blind comparison be 'long term' and involve 'ratings' or whatever, fine. Just make sure it's blind. Tests like the ones Tom Nousaine conducted on Steve Zipser involved a listener who *already* claimed to 'know' the difference between two components, from sighted experience. He 'knew' what to listen for. He 'knew' what his preferred amplifier sounded like. Or so he thought. Given your dogged advocacy of a so far entirely speculative set of psychological/cognitive problems with 'forced' comparison, I propose again that you offer *yourself*, and a pair of components you ALREADY believe sound different, from your experience with them, as a test case for YOUR hypotheses. From your posts it appears there must be at least two cables or amps you already have evaluated, and believe to sound different.

I make no claims that I could do it in a quick-switch environment with Tom standing beside me, or even having coffee in the next room. And I don't think Tom would like to be my apartment mate for a couple of weeks while I reached a decision. And even then, since I don't know whether blinding or quick switching causes the null, I would not predict the outcome. Although, as I have pointed out in my control test proposal, the only way I could determine this / choose to believe that blinding is the culprit would be after spending a long and equal time doing the evaluation blinded as well as sighted.
#284
Comments regarding: Cables, Hearing, Stuff!!
#286
Comments regarding: Cables, Hearing, Stuff!!
#287
Comments regarding: Cables, Hearing, Stuff!!
#288
Comments regarding: Cables, Hearing, Stuff!!
"Bob Marcus" wrote in message
... Harry Lavo wrote: "Bob Marcus" wrote in message news:KmVfc.4829$aM4.16670@attbi_s53... Harry Lavo wrote: Why don't you contribute constructively rather than destructively. Why don't you point out exactly how "this is not one of them" and how this "does not have much to do with good test design" and then propose alternative ways to test my theory. Since I posited the test we have heard nothing but negatives from you.

I've criticized your "test" at length before, but here are the highlights: 1) You have no coherent, testable hypothesis.

Sure I do. It is that blinding per se, when done on a relaxed, longer-term, evaluative basis, is not likely to change the results of sighted listening done under the same conditions. But that the switch to blind a-b testing, or a-b-x testing, will tend the results toward null because of ear-brain confusion. The control test is set up *exactly* to separate the two things.

Well, no, it's far too complex to do this job. If you want to compare two tests, with only sightedness as the variable, then you certainly don't need THREE tests. A bigger problem is that there is no way statistically to compare the multiplicity of results you would get using the evaluation approach you propose. That's the virtue of the preference test I proposed--there are only two possible answers. Whereas, if you ask audiophiles to "evaluate" components based on, say, ten either-or criteria (a la Oohashi, who I believe is your model here), each subject has 1,024 possible answers. How do you tell whether his sighted answers match his blind answers? There's no meaningful statistical standard, nor is there any way of determining--without a huge amount of research--whether the criteria are themselves independent, which would be another requirement.

Sorry, doesn't fly. If the comparative test itself is the problem, blind vs. not blind will show no difference. Since there are two variables, there have to be two tests, controlling one variable in each matched pair. As to statistical complexity of evaluative testing, it is not all that complex. If the pair shows a statistical difference on one or more characteristics sighted, then that is the standard (and presumably it will if the test components are well chosen). Then if the blind test shows comparable statistical significances on one or more of these variables, it shows that blinding per se does not invalidate the sighted test (whether on some or all variables would be interesting in and of itself). Then, whether or not the comparative, iterative test (blind) supported the evaluative test (blind) would answer the question of test technique.

2) You ask your subjects to do the impossible--namely, to conduct two independent subjective evaluations of the same equipment. Can't be done. There's no way the first can't affect the second.

Absolutely not, one evaluative sighted test and one evaluative blind test per subject...that's why several dozen subjects are required.

Can't be done. Subjects will recall their sighted evaluations when they do their blind ones, so instead of the latter being an independent evaluation, all they'll be doing is trying to match their previous evaluations to the two components they are listening to now.

What's wrong with that? It is a necessary part of the test. If after a few weeks' time, and this time blind, subjects can still identify the components under test by accurately recording the same subjective evaluation (as measured by statistical significance), then the blinding has not nulled the sighted evaluation. That is *exactly* what this stage of the testing is designed to determine. There is no prejudgement involved...simply the results (whatever they are) of the first-stage sighted test as a benchmark.

The other advantage of my proposed preference test is that it leaves the subject free to listen however he wants, just as your theory ought to demand. Whereas you want to impose an artificial "scorecard evaluation," which may be nothing like that subject's actual practice.

Agreed in principle, although I think most audiophiles at least keep a scorecard in their head ("bass more defined, dynamic", "broader soundstage", etc.). I would make it explicit here simply to allow statistical analysis. The subject can still take or spend most of his time in completely subjective listening, and only do the "rating" at the end. Of course, lots of care must be put into the rating factors to make sure that all here agree nothing significant has been left out and that there is no undue redundancy.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X test.

As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary.

Absolutely not. This is another main objective of the test...dividing the "blind" effect from the "comparative test" effect.

As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.

No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means.

I agree your proposal is similar, but also potentially misleading, since it relies on lots of dissimilar comparisons of dissimilar equipment that may / may not actually have differences (a null comparison of units that show no difference sighted does not mean much). Moreover, doing away with evaluative ratings is wrong IMO, because this is what *led* audiophiles to their choices, and it is important to understand which of these evaluative factors (if any) make the transition from sighted to blind.
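Bob's 1,024 figure above is simply 2^10 for ten either-or criteria. A minimal sketch of the combinatorics (Python; the criterion counts are illustrative):

    # With k independent either-or criteria, a single evaluation can take
    # 2**k distinct forms, so "the sighted and blind answers match" needs
    # a defined matching standard before any statistics can be applied.
    for k in (1, 3, 10):
        print(k, "criteria ->", 2**k, "possible answer patterns")
    # 10 criteria -> 1024 patterns, the figure cited in the post

If responses were uniform and independent, two unrelated 10-criterion scorecards would agree exactly with probability 1/1024, which is why "same answers" has to be defined statistically rather than as exact agreement.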
#289
Comments regarding: Cables, Hearing, Stuff!!
"Michael Scarpitti" wrote in message
news:Ruqgc.8483$hw5.7851@attbi_s53... chung wrote in message news:hO2gc.148131$gA5.1797802@attbi_s03... You still have not answered this. If the differences are so obvious, why not do a DBT to prove that the differences are real?

There is nothing to be gained, that's why! The differences are so dramatic that it is not worth my time.....

That is precisely when a blind test is most needed--when the differences are "dramatic". Fortunately, that's also the time when good results should be obtained. Norm Strong
#292
Comments regarding: Cables, Hearing, Stuff!!
"Nousaine" wrote in message
news:W4Kgc.157303$gA5.1886001@attbi_s03... Sullivan's comments snipped for clarity, since response is to Tom's points.

Harry's theory also contains the assumption that preferences determined under open conditions carry some kind of scientific authority.

Nope, the "authority" comes from the fact that this is the normal approach used by audiophiles, and thus is the most widespread. It is the technique you are attempting to prove is invalid.

And that if a subject does not come to identical conclusions under bias-controlled conditions, that means that such controls mask 'real' sonic difference, instead of the more logical conclusion that the conclusions were not sound-based in the first place.

Nope, all the control test does is show whether, under identical conditions, blinding decreases or negates differences rated under sighted conditions. If it does, you are correct. If it doesn't (but subsequent quick-switch a-b testing does), then the problem lies in quick-switching and comparing, versus evaluating. That's all. I simply start where most audiophiles are, and move to where you are...but controlling blindness and test design factors as two separate variables that must be isolated to really understand what is happening.

So the experiment was proposed with an eye toward a biased outcome. When subjects form different or statistically non-uniform results (which would be likely when the units actually had identical sound) then Harry would call the controlled tests "invalid" instead of concluding that the open tests were not based on sound but some other mechanism.

You have to prove and validate the test first. Otherwise, your example above is based on faith, not on science.

And I agree that Harry should be the first subject in any experiment because he already has equipment the sound of which he "knows". It seems extremely unlikely that simply putting a blanket over the I/O terminals would stop him from "hearing" his own equipment, no matter what the length of the audition.

The test is not based on one person, since it cannot be statistically validated except over a sample of two dozen or so audiophiles. And before that can happen, we need to agree on what is to be tested, what evaluative factors are to be included, and on a written set of protocols to be used. I'm happy to lead a discussion / provide and modify proposals with the group. And I am perfectly happy to be one of, or even the lead person in, doing the test once all this has been hammered out.

To suggest otherwise would mean that no one would ever be able to enjoy a 30-second or 3-minute recording under any conditions. There wouldn't be time to get into the right listening mode.

Rhetorical hogwash, Tom.

And, I continue to wonder how Steve Zipser, with long-term knowledge of his reference amplifier, could have all those intimate details disappear when nothing more than a blanket was placed over the I/O terminals in comparison to a completely different unit.

He didn't do evaluative ratings. He had to do a comparative choice, with you standing over his shoulder (figuratively if not literally). That changes things, I believe.

I can't understand how a simple cloth could 'mask' differences gleaned under long-term conditions AND that a completely unknown and presumed inferior device could suddenly become sonically equivalent to a well-known device with clearly identifiable sound.

Again, depends on the test technique. Did the subject then have weeks/months to evaluate the two options before having to make an identity choice? I think not.

As for practicality of either technique, Harry's method requires lengthy IN-HOME audition of all possible candidates BEFORE any decision can be made. It seems to me that a double-blind test has no less practicality. Indeed it may even be more practical because it does not require hours/weeks of audition.

Except that it begs the very question that we are attempting to resolve. It only works if you grant the test a priori validity. And I and many others are not willing to grant that...that's the whole concept of a control test.
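The design logic Harry keeps restating can be summarized as a three-condition layout in which each adjacent pair of conditions varies exactly one factor. A minimal sketch (Python; the condition labels are paraphrases of the thread, not an agreed protocol):

    # Three conditions, two factors; each adjacent pair isolates one factor.
    conditions = {
        "A": ("sighted", "long-term evaluative"),
        "B": ("blind",   "long-term evaluative"),
        "C": ("blind",   "quick-switch A-B / A-B-X"),
    }
    # A vs. B: only the blinding changes -> the "blinding" effect.
    # B vs. C: only the technique changes -> the "quick-switch" effect.
    for x, y in (("A", "B"), ("B", "C")):
        changed = [i for i in (0, 1) if conditions[x][i] != conditions[y][i]]
        print(x, "vs.", y, "varies factor index", changed)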
#293
Comments regarding: Cables, Hearing, Stuff!!
Harry Lavo wrote:
"Nousaine" wrote in message news:W4Kgc.157303$gA5.1886001@attbi_s03... Sullivan's commens snipped for clarity, since response is to Tom's points Harry's theory also contains the assumption that preferences determined under open conditions carry some kind of scientific authority. Nope, the "authority" comes from the fact that this is the normal approach used by audiophiles, and thus is the most widespread.Â* It is the technique you are attempting to prove is invalid. WE are attempting to prove nothing. The test Tom uses is recognized by all experts in the field of human hearing perception as an appropriate and reliable test for sonic differences--ANY sonic differences--and is used every day by said experts to do just that both in the audio industry and in academia. It is you who are trying to prove that it is somehow uniquely inadequate for the specific task of comparing high-end components. And that if a subject does not come to identical conclusions under bias-controlled conditions that means that such controls mask 'real' sonic difference instead of the more logical conclusion that the conclusions were not sound-based in the first place. Nope, all the control test does is show whether under identical conditions blinding decreases or negates differences rated under sighted conditions. If it does, you are correct.Â* If it doesn't (but subsequent quick-switch a--b testing does) then the problem lies in quick-switching and comparing, versus evaluating.Â* That's all. I simply start where most audiophiles are, and move to where you are...but controlling blindness and test design factors as two separate variables that must be isolated to really understand what is happening. To be precise, you are controlling test design, as you call it, in order to determine the difference if any between sighted and blind tests. But I agree with the conclusions you would draw from the results. So the experiment was proposed with an eye toward a biased outcome. When subjects form different or statistically non-uniform results (which would be likely when the units actually had identical sound) then Harry would call the controlled testsÂ* "invalid" instead of concluding that the open tests were not based on sound but some other mechanism. You have to prove and validate the test first.Â* Otherwise, your example above is based on faith, not on science. As of right now, it is your theory that is based on faith, not science, because you haven't done a speck of science to back it up. (And because it runs counter to a whole lot of scientific findings, but we'll let that pass.) And I agree that Harry should be the first subject in any experiment because he already has equipment the sound of which he "knows". It seems extremely unlikely that simply putting a blanket over the I/O terminals would stop him from "hearing" his own equipment no matter what the length of the audition. The test is not based on one person, since it cannot be statistically validated except over a sample of two dozen or so audiophiles.Â* Actually, it could, if that subject had sufficient patience to conduct multiple blind trials. But then somebody would complain about listener fatigue! And before that can happen, we need to agree on what is to be tested, what evaluative factors are to be included, and on a written set of protocols to be used. For your test, yes, we would have to agree on those things. But given that it is YOUR test, it is incumbent on you to come up with--and justify--a set of evaluative factors. 
As I have explained elsewhere, this would be an extremely difficult undertaking even for an expert in psychoacoustics--which you ain't. (I'm not sure any regular participant in rahe would be up to the task, frankly.) And given that I, for one, believe that such an exercise is neither possible nor necessary--and would make the test LESS sensitive by imposing a listening protocol on the subject--I don't see the point. I've proposed an alternative approach that--except for the time factor, which will be a problem no matter what test you use--is thoroughly practicable and meets every condition you've posed. bob __________________________________________________ _______________ FREE pop-up blocking with the new MSN Toolbar – get it now! http://toolbar.msn.com/go/onm00200415ave/direct/01/ |
#294
Comments regarding: Cables, Hearing, Stuff!!
Harry Lavo wrote:
"Bob Marcus" wrote in message ... Harry Lavo wrote: "Bob Marcus" wrote in message news:KmVfc.4829$aM4.16670@attbi_s53... Harry Lavo wrote: Why don't you contribute constructively rather than destructively. Why don't you point out exactly how "this is not one of them" and how this "does not have much to do with good test design" and then propose althernative ways to test my theory. Since I posited the test we have heard nothing but negatives from you. I've criticized your "test" at length before, but here are the highlights: 1) You have no coherent, testable hypothesis. Sure I do. It is that blinding per se, when done on an relaxed, longer-term, evaluative basis, is not likely to change the results of sighted listening done under the same conditions. But that the switch to blind a-b testing, or a-b-x testing will tend the results toward null because of ear-brain confusion. The control test is set up *exactly* to separate the two things. Well, no, it's far too complex to do this job. If you want to compare two tests, with only sightedness as the variable, then you certainly don't need THREE tests. A bigger problem is that there is no way statistically to compare the multiplicity of results you would get using the evaluation approach you propose. That's the virtue of the preference test I proposed--there are only two possible answers. Whereas, if you ask audiophiles to "evaluate" components based on, say, ten either-or criteria (a la Oohashi, who I believe is your model here), each subject has 1,024 possible answers. How do you tell whether his sighted answers match his blind answers? There's no meaningful statistical standard, nor is there any way of determining--without a huge amount of research--whether the criteria are themselves independent, which would be another requirement. Sorry, doesn't fly.Â* If the comparative test itself is the problem, blind vs not blind will show no difference.Â* Since their are two variables, there have to be two tests, controlling one variable in each matched pair. If you get the same results in sighted and blind evaluative tests, then you know that it is the comparative nature of ABX tests that causes them to be insensitive. If you get different results, that tells you that the sighted evaluations are flawed because of biases resulting the subjects' knowledge of what they are listening to. But if you really want to do an ABX test, go ahead and waste your time. It won't tell you a thing extra. As to statistical complexity of evaluative testing, it is not all that complex.Â* If the pair shows a statistical difference on one or more characteristics sighted, then that is the standard (and presumably it will if the test components are well chosen).Â* Unlike my proposed preference test, your approach presumes not only that subjects will hear differences, but that they will agree on what those differences are. This greatly complicates the task of findings components to compare. Then if the blind test shows comparable statistical significances on one or more of these variables, it shows that blinding per se does not invalidate the sighted test (whether on some or all variables would be interesting in and of itself). Not at all. It depends on the number of variables--the more you have, the more likely it is that you will get a statistically significant result for at least one of them by chance alone. That's why the statistics is complex--if, that is, all of teh variables are known to be independent. 
If it is not known that all of the variables are independent, then the statistics is well nigh impossible. Then, whether or not the comparative, iterative test (blind) supported the evaluative test (blind) would answer the question test technique. 2) You ask your subjects to do the impossible--namely, to conduct two independent subjective evaluations of the same equipment. Can't be done. Three's no way the first can't affect the second. Absolutely not, one evaluative sighted test and one evaluative blind test per subject...that's why several dozen subjects are required. Can't be done. Subjects will recall their sighted evaluations when they do their blind ones, so instead of the latter being an independent evaluation, all they'll be doing is trying to match their previous evaluations to the two components they are listening to now. What's wrong with that.Â* It is a necessary part of the test.Â* If after a few weeks time and this time blind, subjects can still identify the components under test by accurately recording the same subjective evaluation (as measured by statistical significance) then the blinding has not nulled the sighted evaluation.Â* But now you're comparing/identifying, rather than evaluating, according to your own definitions. If that's what you want to do, fine, but just do it. Do a sighted evaluation, let people fill out a scorecard, then let them consult that scorecard in the blind evaluation and determine which amp matches which set of characteristics. A preference test, by the way, is just a single-variable version of this latter approach. And there is no theoretical reason why you need more than one variable. That is *exactly* what this stage of the testing is designed to determine.Â* Their is no prejudgement involved...simply the results (whatever they are) of the first stage sighted test as a benchmark. The other advantage of my proposed preference test is that it leaves the subject free to listen however he wants, just as your theory ought to demand. Whereas you want to impose an artificial "scorecard evaluation," which may be nothing like that subject's actual practice. Agreed in principle, although I think most audiophiles at least keep a scorecard in their head ("bass more defined, dynamic", "broader soundstage", etc.)Â* I would make it explicit here simply to allow statistical analysis. As I point out above, there is no need for such complex statistical analysis. Also, making it explicit requires you to impose an analytical framework on the subjects, rather than letting them decide what to listen for and what is important to them. If you want to conduct a blind test that's as close to what audiophiles do every day as possible, my preference test has your highly prescriptive and overly complex scorecard evaluation beat hands down. The subject can still take or spend most of his time in completely subjective listening, and only do the "rating" at the end.Â* Of course, lot's of care must be put into the rating factors to make sure that all here agree nothing significant has been left out and that their is no undue redundancy. Actually, years of research will be required to determine that there is no redundancy. Proving that two variables are independent is fairly straightforward. Proving that ten are is a life's undertaking. Then a 16 trial run for each person using Tom's traditional A-B or A-B-X test. As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary. 
Absolutely not.Â* This is another main objective of the test...deviding the "blind" effect from the "comparative test" effect. In other words, despite your previous protestations, you do not accept the necessity of blind testing. If that is the case, why should I take you seriously? As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem. No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique. I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means. I agree your proposal is similar, but also potentially misleading since it relies on lots of dissimilar comparisons of dissimilar equipment that may / may not actually have differences (a null comparison of units that show no difference sighted does not mean much). So all you have to do is find two components that audiophiles are willing to express a preference between. Given all the subjectivist stuff we read here and elsewhere, that can't be too hard, can it? Â* Moreover, doing away with evaluative ratings is wrong IMO because this is what *lead* audiophiles to their choices and it is important to understand if/what of these evaluative factors (if any) make the transition from sighted to blind. But now you've created a hypothesis that's too complex to test. It's one thing to test whether perceptions change from sighted to blind, holding all else equal (which my preference test does). But you're also testing a hypothesis about how audiophiles evaluate components. You may well be right, in some general way. But your test requires you to be right in a very specific way--that you can list a set of attributes that covers what audiophiles actually listen for. You have no real basis (other than anecdote and conjecture) for constructing that list. The only reason I can see for insisting on such an impossibly complex test is that you want to ensure that the test will never be performed, so that you can continue forever insisting against all evidence that we can't know for sure that ABX works because we haven't done YOUR test. And that is what I think you are doing. bob __________________________________________________ _______________ Stop worrying about overloading your inbox - get MSN Hotmail Extra Storage! http://join.msn.com/?pgmarket=en-us&...ave/direct/01/ |
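Bob's warning about chance significance across many variables is standard multiple-comparison arithmetic. A minimal sketch (Python; the alpha level and criterion counts are illustrative):

    # Probability of at least one spurious "significant" result when m
    # independent criteria are each tested at significance level alpha.
    alpha = 0.05
    for m in (1, 5, 10):
        print(m, "criteria ->", round(1 - (1 - alpha)**m, 3))
    # 1 -> 0.05, 5 -> 0.226, 10 -> 0.401

A Bonferroni-style correction (testing each criterion at alpha/m) is one conventional remedy, at the cost of per-criterion sensitivity; and, as the post notes, even that assumes the criteria are independent.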
#295
Comments regarding: Cables, Hearing, Stuff!!
(Nousaine) wrote in message ...
(Michael Scarpitti) Well, anytime you tell someone you have a new amp, and play it a little louder, your friend will say "wow".

That's not the case. You were not there. I repeat and insist that the Sony TA-N88B amp is so much clearer than other amps that anyone who spends more than 3 seconds listening will notice it. Harry Lavo says that it takes long-term listening to make this kind of evaluation and any 3-second listening test is far too short. Not with this amp. That is the point. It was a dramatically clearer amp than anything I have ever heard. It was a digital amp.
#296
Comments regarding: Cables, Hearing, Stuff!!
Walter Bushell wrote in
: Once you begin to understand that your mind plays tricks on you, you move to another level of self-understanding. If I ever have to face a trial while falsely accused, I would hope for a jury that knows this, particularly if the evidence against me is eyewitness. We know that magicians can appear to do impossible things; we don't believe they actually can bend the laws of reality, so why do people believe their ears when presented with the same paradox? Anyway, can we take this discussion to apply in spades to power cords? And whether we should expect a $10,000 power cord to improve a CD player much more than a $1000 one, for example.

You are listening to music. If anything, I mean anything, can enhance your listening pleasure, that's all that counts! No matter if your mind is playing tricks on you, or something like that. I mean, why is it so hard to understand? We don't need scientific evidence to enjoy music, we need our own judgement, our instincts! We believe our ears, our brain, because it is actually what we are hearing! Panzzi
#297
Comments regarding: Cables, Hearing, Stuff!!
"Bob Marcus" wrote in message
news:eC%gc.163161$gA5.1923725@attbi_s03... Harry Lavo wrote: "Bob Marcus" wrote in message ... Harry Lavo wrote: "Bob Marcus" wrote in message news:KmVfc.4829$aM4.16670@attbi_s53... Harry Lavo wrote: Why don't you contribute constructively rather than destructively. Why don't you point out exactly how "this is not one of them" and how this "does not have much to do with good test design" and then propose alternative ways to test my theory. Since I posited the test we have heard nothing but negatives from you.

I've criticized your "test" at length before, but here are the highlights: 1) You have no coherent, testable hypothesis.

Sure I do. It is that blinding per se, when done on a relaxed, longer-term, evaluative basis, is not likely to change the results of sighted listening done under the same conditions. But that the switch to blind a-b testing, or a-b-x testing, will tend the results toward null because of ear-brain confusion. The control test is set up *exactly* to separate the two things.

Well, no, it's far too complex to do this job. If you want to compare two tests, with only sightedness as the variable, then you certainly don't need THREE tests. A bigger problem is that there is no way statistically to compare the multiplicity of results you would get using the evaluation approach you propose. That's the virtue of the preference test I proposed--there are only two possible answers. Whereas, if you ask audiophiles to "evaluate" components based on, say, ten either-or criteria (a la Oohashi, who I believe is your model here), each subject has 1,024 possible answers. How do you tell whether his sighted answers match his blind answers? There's no meaningful statistical standard, nor is there any way of determining--without a huge amount of research--whether the criteria are themselves independent, which would be another requirement.

Sorry, doesn't fly. If the comparative test itself is the problem, blind vs. not blind will show no difference. Since there are two variables, there have to be two tests, controlling one variable in each matched pair.

If you get the same results in sighted and blind evaluative tests, then you know that it is the comparative nature of ABX tests that causes them to be insensitive. If you get different results, that tells you that the sighted evaluations are flawed because of biases resulting from the subjects' knowledge of what they are listening to. But if you really want to do an ABX test, go ahead and waste your time. It won't tell you a thing extra.

It will confirm your first point above. If we didn't do it, there would be another three years of discussion and defense here from the objectivists, just as you fear from the subjectivists. So for the test to be neutral, it has to conclusively close both ends of the loop.

As to statistical complexity of evaluative testing, it is not all that complex. If the pair shows a statistical difference on one or more characteristics sighted, then that is the standard (and presumably it will if the test components are well chosen).

Unlike my proposed preference test, your approach presumes not only that subjects will hear differences, but that they will agree on what those differences are. This greatly complicates the task of finding components to compare.

I not only presume, but it is essential, that the subjectivist audio community *believe* that the units under test sound different for the test to be valid. It is also essential that the large majority of the objectivist camp believe the units under test do not / can not / will not sound different. But I still believe it is worthwhile doing. There does seem to be some broad anecdotal consensus about the sound of certain items within the subjective comments of the audiophile community, and I would use those as a starting point. And then ask the objectivist community for their opinions / comments on the comparison, to make sure they see the two units as supposedly equal in sound / no different.

Then if the blind test shows comparable statistical significances on one or more of these variables, it shows that blinding per se does not invalidate the sighted test (whether on some or all variables would be interesting in and of itself).

Not at all. It depends on the number of variables--the more you have, the more likely it is that you will get a statistically significant result for at least one of them by chance alone. That's why the statistics is complex--if, that is, all of the variables are known to be independent. If it is not known that all of the variables are independent, then the statistics is well nigh impossible.

But it is not at all likely that you would get that same rating significance blind in the follow-up test.

Then, whether or not the comparative, iterative test (blind) supported the evaluative test (blind) would answer the question of test technique.

2) You ask your subjects to do the impossible--namely, to conduct two independent subjective evaluations of the same equipment. Can't be done. There's no way the first can't affect the second.

Absolutely not, one evaluative sighted test and one evaluative blind test per subject...that's why several dozen subjects are required.

Can't be done. Subjects will recall their sighted evaluations when they do their blind ones, so instead of the latter being an independent evaluation, all they'll be doing is trying to match their previous evaluations to the two components they are listening to now.

What's wrong with that? It is a necessary part of the test. If after a few weeks' time, and this time blind, subjects can still identify the components under test by accurately recording the same subjective evaluation (as measured by statistical significance), then the blinding has not nulled the sighted evaluation.

But now you're comparing/identifying, rather than evaluating, according to your own definitions. If that's what you want to do, fine, but just do it. Do a sighted evaluation, let people fill out a scorecard, then let them consult that scorecard in the blind evaluation and determine which amp matches which set of characteristics.

No, let them evaluate the two units under test in depth, just as they did sighted. Statistical analysis, not choice, will determine if the two results are the same or different. I still don't think you grasp the fact that the difference in the techniques is not to have to decide to choose (the left-brain approach), but rather to let the "choice" grow explicitly out of the evaluative experience (the right-brain approach).

A preference test, by the way, is just a single-variable version of this latter approach. And there is no theoretical reason why you need more than one variable.

That is *exactly* what this stage of the testing is designed to determine. There is no prejudgement involved...simply the results (whatever they are) of the first-stage sighted test as a benchmark.

The other advantage of my proposed preference test is that it leaves the subject free to listen however he wants, just as your theory ought to demand. Whereas you want to impose an artificial "scorecard evaluation," which may be nothing like that subject's actual practice.

Agreed in principle, although I think most audiophiles at least keep a scorecard in their head ("bass more defined, dynamic", "broader soundstage", etc.). I would make it explicit here simply to allow statistical analysis.

As I point out above, there is no need for such complex statistical analysis. Also, making it explicit requires you to impose an analytical framework on the subjects, rather than letting them decide what to listen for and what is important to them. If you want to conduct a blind test that's as close to what audiophiles do every day as possible, my preference test has your highly prescriptive and overly complex scorecard evaluation beat hands down.

Nope, they can listen and decide totally subjectively. All they have to do is then translate their "impressions" into ratings on the scale. They are simply recording the conclusions they came to, not making a forced choice. They are evaluating the two units separately (monadically), just as they did sighted, unless they specifically ask for a switch to check out a specific characteristic.

The subject can still take or spend most of his time in completely subjective listening, and only do the "rating" at the end. Of course, lots of care must be put into the rating factors to make sure that all here agree nothing significant has been left out and that there is no undue redundancy.

Actually, years of research will be required to determine that there is no redundancy. Proving that two variables are independent is fairly straightforward. Proving that ten are is a life's undertaking.

Methinks you protest too much. It requires thought, and feedback from the group in iterations, but it is not rocket science. The first thing that must be done, however, is to decide on the units to test.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X test.

As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary.

Absolutely not. This is another main objective of the test...dividing the "blind" effect from the "comparative test" effect.

In other words, despite your previous protestations, you do not accept the necessity of blind testing. If that is the case, why should I take you seriously?

What on earth are you talking about? This is a non sequitur.

As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.

No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means.

I agree your proposal is similar, but also potentially misleading, since it relies on lots of dissimilar comparisons of dissimilar equipment that may / may not actually have differences (a null comparison of units that show no difference sighted does not mean much).

So all you have to do is find two components that audiophiles are willing to express a preference between. Given all the subjectivist stuff we read here and elsewhere, that can't be too hard, can it?

That's only half the equation. The other half is what the objectivists think of those same two components. I nominate a redbook test between the Panasonic S55 or S85 or equivalent later model and the least expensive Sony DVD/SACD player. Both have MSRPs in the $100-200 range, and both feature hi-rez as well as DVD and Redbook reproduction. So they should be roughly equivalent, and if I read the objectivist sentiment here correctly, the Redbook technology is a ten-year-old "settled issue" and "most all players sound alike". Moreover, this is a very practical choice for many people, at least for a second system if not the first.

Moreover, doing away with evaluative ratings is wrong IMO, because this is what *led* audiophiles to their choices, and it is important to understand which of these evaluative factors (if any) make the transition from sighted to blind.

But now you've created a hypothesis that's too complex to test. It's one thing to test whether perceptions change from sighted to blind, holding all else equal (which my preference test does). But you're also testing a hypothesis about how audiophiles evaluate components. You may well be right, in some general way. But your test requires you to be right in a very specific way--that you can list a set of attributes that covers what audiophiles actually listen for. You have no real basis (other than anecdote and conjecture) for constructing that list.

This group can provide the iterative feedback necessary to make the list a good one. This group is uniquely suited to try to undertake/move this test along.

The only reason I can see for insisting on such an impossibly complex test is that you want to ensure that the test will never be performed, so that you can continue forever insisting against all evidence that we can't know for sure that ABX works because we haven't done YOUR test. And that is what I think you are doing.

Thanks for your vote of confidence in my motives. My purpose is to approach, as closely as possible within a tightly controlled, scientific test, the conditions many audiophiles use at home in sighted testing on one end, and Tom's DBT approach on the other, controlling the variables in between. And not incidentally, learning all it is possible to learn along the way about what really happens in sighted versus blind testing as it affects perceptions. And the same for forced comparison versus evaluative comparisons.
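Harry's "statistical analysis, not choice" idea amounts to comparing one listener's sighted ratings with the same listener's blind ratings, attribute by attribute. A minimal sketch of that comparison (Python; the attribute names and numbers are hypothetical, not data from any actual test):

    # Hypothetical monadic ratings (1-10 scale) from one subject.
    sighted = {"bass definition": 8, "soundstage width": 7, "treble clarity": 5}
    blind   = {"bass definition": 7, "soundstage width": 7, "treble clarity": 4}

    # Per-attribute drift between the blind and sighted sessions.
    for attr in sighted:
        print(attr, "drift:", blind[attr] - sighted[attr])

A real analysis along the lines proposed in the thread would aggregate such drifts over the two dozen or so subjects and test whether the sighted differences between the two units survive blinding; Bob's multiple-comparison objection applies at exactly that step.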
#298
Comments regarding: Cables, Hearing, Stuff!!
Stewart Pinkerton wrote in message news:tvWgc.173042$JO3.101084@attbi_s04...
That's not the case. You were not there. I repeat and insist that the Sony TA-N88B amp is so much clearer than other amps that anyone who spends more than 3 seconds listening will notice it.

Absolute nonsense! I have set up several 'bypass' tests to compare a power amp with a straight wire link. In each case, the amplifier contributed nothing to the sound, hence could be considered to be sonically transparent. *If* the Sony sounds different, it is because it *adds* something to the sound, not because it is superior. BTW, that Sony is *known* to have some quite nasty HF artifacts - perhaps that is what you are confusing with 'clarity'?

The nonsense is coming from you. Have you or have you not heard the TA-N88B? If you had, we would not be having this conversation. The amp is staggeringly clearer than any conventional amp. The problem with it was stability. The other amps you have listened to -- ALL of them -- have nothing in common with this digital amp. It reveals levels of detail you never could hear with a conventional amp.
#299
Comments regarding: Cables, Hearing, Stuff!!
Harry Lavo wrote:
Nope, the "authority" comes from the fact that this is the normal approach used by audiophiles, and thus is the most widespread. It is the technique you are attempting to prove is invalid. Harry, there is a *big* difference evaluating a loudspeaker or whatever component and just to find out if there are any differences between two or more setups. Our ears can find out the differences much easier, because we do not need to qualify a source as "better" or "worse". It is just like taking up the phone. You recognize the caller immediately from his voice, you do not need 5min of conversation. Much more important is the instantanous switching without gaps, so the short term memory is not lost. When audiophiles need long evaluation time it is probably, that the brain has to eliminate disturbing sounds, which are covering subtle sonic information. It is like you are living near a busy street, after a few days or even weeks, the noise of the street is not heard any more, the brain has learned to eliminate it. But if there are differences between one sound sample and the next, they jump immediately into the ear, so to say. I think everyone should find this out by himself. Maybe there is a difference between persons, but I doubt it, because all the results of scientific work seem to support my personal experience. -- ciao Ban Bordighera, Italy |
#300
Comments regarding: Cables, Hearing, Stuff!!
Harry Lavo wrote:
"Bob Marcus" wrote in message news:eC%gc.163161$gA5.1923725@attbi_s03... snip If you get the same results in sighted and blind evaluative tests, then you know that it is the comparative nature of ABX tests that causes them to be insensitive. If you get different results, that tells you that the sighted evaluations are flawed because of biases resulting the subjects' knowledge of what they are listening to. But if you really want to do an ABX test, go ahead and waste your time. It won't tell you a thing extra. It will confirm your first point above.* If we didn't do it, there would be another three years of discussion and defense here from the objectivists, just as you fear from the subjectivists. Not if you compare components that we agree in advance are nominally competent. But wouldn't "three years of discussion and defense here from the objectivists" about your evidence be an improvement on the current situation, which is that YOU HAVE NO EVIDENCE? * So for the test to be neutral, it has to conclusively close both ends of the loop. As to statistical complexity of evaluative testing, it is not all that complex. If the pair shows a statistical difference on one or more characteristics sighted, then that is the standard (and presumably it will if the test components are well chosen). Unlike my proposed preference test, your approach presumes not only that subjects will hear differences, but that they will agree on what those differences are. This greatly complicates the task of findings components to compare. I not only presume, but it is essential, that the subjectivist audio community *believe* that the units under test sound different for the test to be valid.* No, it's only essential that you find 20 people who hear a difference sighted. That's all. I don't give two hoots about what the "subjective audiophile community" thinks. It is also essential that the large majority of the objectivist camp believe the units under test do not / can not/ will not sound different. Equally easy to do. Though easier for amps than CD players. * But I still believe it is worthwhile doing.* There does seem to be some broad antecdotal consensus about the sound of certain items within the subjective comments of the audiophile community, Again, no. We can match up each individual's impressions sighted to that same individual's impressions blind. We don't need the various subjects to agree among themselves in order to do a test of whether impressions change from sighted to blind. and I would use those as a starting point.* And then ask the objectivist commenty for their opinions / comments on the comparison to make sure they see the two units as supposedly equal in sound / no different. We don't have opinions on this matter. We have measurements. Then if the blind test shows comparable statistical significances on one or more of these variables, it shows that blinding per se does not invalidate the sighted test (whether on some or all variables would be interesting in and of itself). Not at all. It depends on the number of variables--the more you have, the more likely it is that you will get a statistically significant result for at least one of them by chance alone. That's why the statistics is complex--if, that is, all of teh variables are known to be independent. If it is not known that all of the variables are independent, then the statistics is well nigh impossible. But it is not at all likely that you would get that same rating significance blind in the follow up test. This is a non sequitur. 
Then, whether or not the comparative, iterative test (blind) supported the evaluative test (blind) would answer the question test technique. 2) You ask your subjects to do the impossible--namely, to conduct two independent subjective evaluations of the same equipment. Can't be done. Three's no way the first can't affect the second. Absolutely not, one evaluative sighted test and one evaluative blind test per subject...that's why several dozen subjects are required. Can't be done. Subjects will recall their sighted evaluations when they do their blind ones, so instead of the latter being an independent evaluation, all they'll be doing is trying to match their previous evaluations to the two components they are listening to now. What's wrong with that. It is a necessary part of the test. If after a few weeks time and this time blind, subjects can still identify the components under test by accurately recording the same subjective evaluation (as measured by statistical significance) then the blinding has not nulled the sighted evaluation. But now you're comparing/identifying, rather than evaluating, according to your own definitions. If that's what you want to do, fine, but just do it. Do a sighted evaluation, let people fill out a scorecard, then let them consult that scorecard in the blind evaluation and determine which amp matches which set of characteristics. No, let them evaluate the two units under test in depth, just as they did sighted.* Statistical analysis, not choice, will determine if the two results are the same or different. I still don't think you grasp the fact that the difference in the techniques is not to have to decide to choose (the left brain approach), but rather let the "choice" grow explicitly out of the evaluative experience* (the right brain approach). It's not a fact, it's pseudoscientific speculation on your part. Besides, the SECOND time they do the test (the blind trial), it will become a comparative experience, because they'll have their initial evaluations in mind as they listen, and will be comparing what they are hearing now to what they thought then. (This is assuming your comparative-evaluative conjecture isn't entirely fanciful.) A preference test, by the way, is just a single-variable version of this latter approach. And there is no theoretical reason why you need more than one variable. That is *exactly* what this stage of the testing is designed to determine. Their is no prejudgement involved...simply the results (whatever they are) of the first stage sighted test as a benchmark. The other advantage of my proposed preference test is that it leaves the subject free to listen however he wants, just as your theory ought to demand. Whereas you want to impose an artificial "scorecard evaluation," which may be nothing like that subject's actual practice. Agreed in principle, although I think most audiophiles at least keep a scorecard in their head ("bass more defined, dynamic", "broader soundstage", etc.) I would make it explicit here simply to allow statistical analysis. As I point out above, there is no need for such complex statistical analysis. Also, making it explicit requires you to impose an analytical framework on the subjects, rather than letting them decide what to listen for and what is important to them. If you want to conduct a blind test that's as close to what audiophiles do every day as possible, my preference test has your highly prescriptive and overly complex scorecard evaluation beat hands down. 
Nope, they can listen and decide totally subjectively. All they have to do then is translate their "impressions" into ratings on the scale. They are simply recording the conclusions they came to, not making a forced choice. They are evaluating the two units separately (monadically), just as they did sighted, unless they specifically ask for a switch to check out a specific characteristic.

I'm sorry. I had envisioned a rather simpler approach, in which you asked them things like, which amp is brighter, which amp has clearer highs, that sort of thing. What you seem to be saying here is that you will ask them to rate each amp's brightness, say, on a scale of one to ten. In that case, there would be billions of possible answers, and statistical comparisons would be quite impossible.

The subject can still spend most of his time in completely subjective listening, and only do the "rating" at the end. Of course, lots of care must be put into the rating factors to make sure that all here agree nothing significant has been left out and that there is no undue redundancy.

Actually, years of research will be required to determine that there is no redundancy. Proving that two variables are independent is fairly straightforward. Proving that ten are is a life's undertaking.

Methinks you protest too much. It requires thought, feedback from the group in iterations, but it is not rocket science. The first thing that must be done, however, is to decide on the units to test.

You are being naive. We can speculate all we want about what we think audiophiles listen for, but if you want to do a serious test, then you need some basis for KNOWING what audiophiles listen for. You haven't got one, and it would be a lifetime's work for someone with a far better background in the field than you to get one. Again, audiophiles determine preferences every day. We don't need to know the basis on which they do it to test the robustness of those preferences.

Then a 16-trial run for each person using Tom's traditional A-B or A-B-X test.

As I said above, if your goal is to compare sighted to blind evaluative approaches, this step is unnecessary.

Absolutely not. This is another main objective of the test...dividing the "blind" effect from the "comparative test" effect.

In other words, despite your previous protestations, you do not accept the necessity of blind testing. If that is the case, why should I take you seriously?

What on earth are you talking about? This is a non sequitur.

I thought we had agreed that there was no blind effect. As for the right way to test your theory (that long-term evaluative listening is more sensitive to sonic differences than an ABX test), a simple, double-blind preference test would serve. Wouldn't give you the results you want, but that's not my problem.

No it will not...it presumes the test is already validated. That's the purpose of this whole "control" test...to find out if it is valid and gives the same results...with the effects of "blinding" separated from the change in test technique.

I think you'll see that my longer proposal does exactly what you ask--it compares sighted results to blind results using exactly the same listening method, to see if they give the same results. And, unlike you, I have defined statistically what "same" means.
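For reference on the "16-trial run" mentioned above, here is a minimal sketch of the binomial arithmetic behind A-B-X scoring, assuming the usual null hypothesis that the listener guesses on every trial:

```python
# Minimal sketch: one-sided significance of a 16-trial ABX run under
# the null hypothesis of guessing (p = 0.5 per trial).
from math import comb

def abx_p_value(correct: int, trials: int = 16) -> float:
    """P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct in (9, 10, 11, 12, 13):
    print(f"{correct}/16 correct: p = {abx_p_value(correct):.3f}")
# 12/16 correct gives p of about 0.038, the usual criterion cited for
# calling a 16-trial run significant at the 0.05 level.
```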
I agree your proposal is similar, but also potentially misleading, since it relies on lots of dissimilar comparisons of dissimilar equipment that may or may not actually have differences (a null comparison of units that show no difference sighted does not mean much).

So all you have to do is find two components that audiophiles are willing to express a preference between. Given all the subjectivist stuff we read here and elsewhere, that can't be too hard, can it?

That's only half the equation. The other half is what the objectivists think of those same two components.

I told you what the objectivists think of those same components. If they measure as nominally competent, then we confidently predict that their differences are inaudible.

I nominate a redbook test between the Panasonic S55 or S85 or equivalent later model and the least expensive Sony DVD/SACD player. Both have MSRPs in the $100-200 range, and both feature hi-rez as well as DVD and Redbook reproduction.

By the time you get around to testing anything, Harry, these technologies will be dead.

So they should be roughly equivalent, and if I read the objectivist sentiment here correctly, the Redbook technology is a ten-year-old "settled issue" and "most all players sound alike".

That's not the same as saying that any two specific players are both nominally competent.

Moreover, this is a very practical choice for many people, at least for a second system if not the first. And doing away with evaluative ratings is wrong IMO, because this is what *led* audiophiles to their choices, and it is important to understand which of these evaluative factors (if any) make the transition from sighted to blind.

But now you've created a hypothesis that's too complex to test. It's one thing to test whether perceptions change from sighted to blind, holding all else equal (which my preference test does). But you're also testing a hypothesis about how audiophiles evaluate components. You may well be right, in some general way. But your test requires you to be right in a very specific way--that you can list a set of attributes that covers what audiophiles actually listen for. You have no real basis (other than anecdote and conjecture) for constructing that list.

This group can provide the iterative feedback necessary to make the list a good one. This group is uniquely suited to try to undertake/move this test along.

Again, you are being naive, or you simply don't understand what you are proposing. You don't design scientific experiments by asking people what they think oughta work. The only reason I can see for insisting on such an impossibly complex test is that you want to ensure that the test will never be performed, so that you can continue forever insisting against all evidence that we can't know for sure that ABX works because we haven't done YOUR test. And that is what I think you are doing.

Thanks for your vote of confidence in my motives. My purpose is to approach, as closely as possible within a tightly controlled, scientific test, the conditions many audiophiles use at home in sighted testing on one end, and Tom's DBT approach on the other, controlling the variables in between. And not incidentally, learning all it is possible to learn along the way about what really happens in sighted versus blind testing as it affects perceptions. And the same for forced comparison versus evaluative comparisons.

Then do it, if you think it's worth doing.
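To make the "statistical analysis, not choice" step concrete, here is one possible sketch of the sighted-versus-blind comparison being argued over. Everything in it is assumed for illustration -- the subject count, the attribute count, and the fabricated ratings -- so it shows one way the analysis could be run, not an agreed protocol.

```python
# Hypothetical sketch: each subject rates the same two units on several
# attribute scales, once sighted and once blind; we then test, per
# attribute, whether the sighted A-minus-B difference survives blinding.
# All ratings below are fabricated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_attributes = 24, 6  # assumed sizes, not from the thread

# diff[s, a] = subject s's rating of unit A minus unit B on attribute a
sighted_diff = rng.normal(0.8, 1.0, (n_subjects, n_attributes))
blind_diff = rng.normal(0.0, 1.0, (n_subjects, n_attributes))

for a in range(n_attributes):
    # Paired test: does each subject's A-minus-B rating change between
    # the sighted and blind sessions on this attribute?
    t, p = stats.ttest_rel(sighted_diff[:, a], blind_diff[:, a])
    print(f"attribute {a}: t = {t:+.2f}, p = {p:.3f}")
# Per the multiple-comparisons point above, a corrected threshold of
# alpha / n_attributes would apply across the six tests.
```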
bob |
#301
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
On 20 Apr 2004 02:12:40 GMT, (Michael
Scarpitti) wrote: (Nousaine) wrote in message ... (Michael Scarpitti)

Well, anytime you tell someone you have a new amp, and play it a little louder, your friend will say "wow".

That's not the case. You were not there. I repeat and insist that the Sony TA-N88B amp is so much clearer than other amps that anyone who spends more than 3 seconds listening will notice it. Harry Lavo says that it takes long-term listening to make this kind of evaluation and any 3-second listening test is far too short. Not with this amp. That is the point. It was a dramatically clearer amp than anything I have ever heard.

Since there exist *many* amps which are sonically transparent, this is clearly an unlikely claim.

It was a digital amp.

Actually, it's not a digital amp at all. It's a 'switch mode' PWM analogue amp, otherwise known as Class D, very popular in pro-audio because you can make a powerful amp that doesn't weigh much. However, such amps frequently suffer from HF artifacts which can noticeably colour the sound. Some listeners tend to confuse this with clarity. It's a plain fact that the TAN88, a design from the '70s, simply did not have access to the ultra-fast switching devices now used for such amplifiers, and which have enormously improved their performance.

It's interesting that you should be so vocal - and so intransigent - about this early, and quite flawed, design, when the vastly superior modern Class D pro-audio amps made by Mackie et al tend to be sneered at by so-called 'high enders'. -- Stewart Pinkerton | Music is Art - Audio is Engineering |
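For readers unfamiliar with the 'switch mode' PWM principle described above, here is a minimal numerical sketch. The carrier frequency and filter are illustrative guesses, not the TA-N88B's actual design: the audio is compared against a fast triangle carrier, the output stage switches between two levels, and a low-pass filter recovers the audio; the leftover switching energy is the HF artifact in question.

```python
# Minimal sketch of Class D / PWM operation. Values are illustrative.
import numpy as np

fs = 1_000_000                     # 1 MHz simulation rate
t = np.arange(0, 0.002, 1 / fs)    # 2 ms of signal
audio = 0.8 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz test tone

f_carrier = 50_000                 # assumed 50 kHz triangle carrier
carrier = 2 * np.abs(2 * ((t * f_carrier) % 1) - 1) - 1  # -1..+1 triangle

# The output stage switches hard between the two supply rails
pwm = np.where(audio > carrier, 1.0, -1.0)

# A crude low-pass (50 us moving average) recovers the audio; what it
# fails to remove is the residual HF switching energy discussed above.
recovered = np.convolve(pwm, np.ones(50) / 50, mode="same")
print("peak residual (mostly HF ripple):", np.max(np.abs(recovered - audio)))
```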
#303
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
In article ,
(Michael Scarpitti) wrote: (Nousaine) wrote in message ... (Michael Scarpitti)

Well, anytime you tell someone you have a new amp, and play it a little louder, your friend will say "wow".

That's not the case. You were not there. I repeat and insist that the Sony TA-N88B amp is so much clearer than other amps that anyone who spends more than 3 seconds listening will notice it. Harry Lavo says that it takes long-term listening to make this kind of evaluation and any 3-second listening test is far too short. Not with this amp. That is the point. It was a dramatically clearer amp than anything I have ever heard. It was a digital amp.

Fifty cents says the amp that sounds different from all others is doing *something* bad. I remember talking to audiophiles when solid state first came out for hi-fi and they were raving about how detailed it was, it was _absolutely_ better than vacuum tube sound. What it was, was high-order distortion. |
#304
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
"Bob Marcus" wrote in message ...
It will confirm your first point above. If we didn't do it, there would be another three years of discussion and defense here from the objectivists, just as you fear from the subjectivists.

Not if you compare components that we agree in advance are nominally competent. But wouldn't "three years of discussion and defense here from the objectivists" about your evidence be an improvement on the current situation, which is that YOU HAVE NO EVIDENCE?

The mistake seems to be a confusion between product evaluations and scientific testing. There is no way anyone would claim that sighted auditioning of audio equipment qualifies as a scientific test in the strict sense of the term. Given that there is some variability among people, losses as they age, and between the sexes, it is impossible to be sure that any two individuals can hear exactly the same thing. So, by showing that subject 'A' cannot hear the differences between two amplifiers, you have indeed demonstrated very little. Experience is also involved. |
#305
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
Harry Lavo wrote:
.....large snips .....

1) The main problem I have with double-blind is its tremendous impracticality in actual use in the home.

I'd say they are more practical than bringing home amplifiers singly and taking weeks of long-term listening to evaluate their sound. Remember you'll have to do this with every piece of equipment and every equipment change cycle. And if you think that every amplifier might sound different, and sound different with every speaker, assembling a new system might be practically impossible. As for me, I'd much rather rely on the already written body of evidence on the subject.

.....more long snips....

I said (quote) "Tom's DBT a-b and a-b-x tests" (end quote) and specifically state that it is because they (quote) "force the ear-brain into a short-term comparative mode" (unquote). Why are we arguing?

Here's my proposal for answering the practicalities of amp/cable sound. I offer a solution to the $$$$ of high-end equipment blues. Simply purchase a $600 QSC ABX box, and every time you feel the urge to re-mortgage the house for that $35k power amplifier, just fire up the ABX machine and force yourself into that "short-term comparative mode" where all amplifiers sound the same. |
#306
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
The testing done thus far shows results at a level near guessing; that is a group result. Now the only question is to exclude the possibility that any one individual is an exception to the group, totally regardless of the reasons one might evaluate/test for any reason whatsoever. Therefore it is not one person compared to another, but one compared to the population, to check for exceptions. Previous testing has set a threshold benchmark for individual variability for all reasons; now we are looking for possible exceptions. If one might be found, adjustment to the benchmark can be made accordingly.

"The mistake seems to be a confusion between product evaluations and scientific testing. There is no way anyone would claim that sighted auditioning of audio equipment qualifies as a scientific test in the strict sense of the term. Given that there is some variability among people, losses as they age, and between the sexes, it is impossible to be sure that any two individuals can hear exactly the same thing. So, by showing that subject 'A' cannot hear the differences between two amplifiers, you have indeed demonstrated very little. Experience is also involved." |
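One way to put numbers on the "exception" question: a minimal sketch, assuming a hypothetical listener whose true accuracy is modestly above guessing (the 60 percent figure is an assumption for illustration only), of how many trials a one-sided binomial test needs before such a listener would reliably stand out from the guessing benchmark.

```python
# Minimal sketch: trials needed for an individual with a modest real
# edge over guessing to stand out at the 0.05 level. Illustrative only.
from math import comb

def p_value(correct: int, trials: int) -> float:
    # one-sided P(X >= correct) under guessing, X ~ Binomial(trials, 0.5)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(trials: int, true_acc: float, alpha: float = 0.05) -> float:
    # smallest score that reaches significance at level alpha
    threshold = min(c for c in range(trials + 1) if p_value(c, trials) <= alpha)
    # chance a listener with accuracy true_acc reaches that score
    return sum(comb(trials, k) * true_acc**k * (1 - true_acc) ** (trials - k)
               for k in range(threshold, trials + 1))

for n in (16, 32, 64, 128):
    print(f"{n:3d} trials: power to detect a 60%-accurate listener = "
          f"{power(n, 0.60):.2f}")
```

Even a genuine 60-percent listener needs on the order of a hundred trials before a significant score becomes likely, which bears on how large any individual screening would have to be.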
#308
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
Stewart Pinkerton wrote in message news:MSchc.166942$gA5.1955926@attbi_s03...
On 20 Apr 2004 02:12:40 GMT, (Michael Scarpitti) wrote: (snippity-doo-dah)

Not with this amp. That is the point. It was a dramatically clearer amp than anything I have ever heard.

Since there exist *many* amps which are sonically transparent, this is clearly an unlikely claim.

How so? Did you read the part: 'anything I have ever heard'? I have not heard every major amp, though I have heard the big Levinson stuff (though not in the last couple of years), the big Audio Research tube amps, and even some Krell amps. The TA-N88B was without doubt the most amazingly clear amp I have ever heard before or since.

It was a digital amp.

Actually, it's not a digital amp at all. It's a 'switch mode' PWM analogue amp, otherwise known as Class D, very popular in pro-audio because you can make a powerful amp that doesn't weigh much. However, such amps frequently suffer from HF artifacts which can noticeably colour the sound. Some listeners tend to confuse this with clarity. It's a plain fact that the TAN88, a design from the '70s, simply did not have access to the ultra-fast switching devices now used for such amplifiers, and which have enormously improved their performance.

I'd be interested in something like that. The amp I have now is a Denon POA-1500-2, which has a lot of punch. It was inferior sonically to the TA-N88B, but it had the overwhelming advantage of not blowing up.

It's interesting that you should be so vocal - and so intransigent - about this early, and quite flawed, design, when the vastly superior modern Class D pro-audio amps made by Mackie et al tend to be sneered at by so-called 'high enders'.

I cannot say anything about those, but the TA-N88B was the best amp I have ever heard. |
#309
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
On 4/20/04 7:04 PM, in article r0ihc.7318$GR.869715@attbi_s01, "Nousaine"
wrote:

It seems like you are claiming that if you were to listen to any other product following an evaluation of one product, then you have to "re-learn" the sound of your own gear?

Not to throw gasoline on the fire - I have found that when doing extensive listening to other setups, when you get back to your setup, you hear it with "new ears" to a large degree (for instance, listening to electrostatics and coming home to your cones and domes). It reminds me of when you go traveling and come back to your own house - the smells, sounds and so on are new to your senses to some degree, and it usually takes a day or two to get "used to it" again so it becomes transparent to you again. I suppose this is what happens in this case. |
#311
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
"Bromo" wrote in message
news:Lplhc.37177$yD1.107791@attbi_s54...

Depends - I have found that tube amps sound different than solid state amps. A solid state PA amp sounds different than an amp such as a NAD. A lot of negative feedback seems to change the sound somewhat as well - and much of it should be measurable, though I am sure we haven't learned all there is to know about audio measurement.

..... and then as for those tube amps, we must know which manufacturer and vintage of tube you are talking about, right? Since tubes don't simply sound like tubes and then that's the end of the story? And after that, how many hours were on those tubes, and then of course how long you warmed them up before listening to music. |
#313
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
On 21 Apr 2004 03:07:51 GMT, (Michael
Scarpitti) wrote: Stewart Pinkerton wrote in message news:MSchc.166942$gA5.1955926@attbi_s03... On 20 Apr 2004 02:12:40 GMT, (Michael Scarpitti) wrote: (snippity-doo-dah)

Not with this amp. That is the point. It was a dramatically clearer amp than anything I have ever heard.

Since there exist *many* amps which are sonically transparent, this is clearly an unlikely claim.

How so? Did you read the part: 'anything I have ever heard'? I have not heard every major amp, though I have heard the big Levinson stuff (though not in the last couple of years), the big Audio Research tube amps, and even some Krell amps. The TA-N88B was without doubt the most amazingly clear amp I have ever heard before or since.

Then one might conclude that your listening has been confined to extremely poor amps. This is unlikely (especially since I own a Krell), so my original conclusion (based on personal experience of the TAN88B) stands - the Sony is doing audibly *bad* things that are initially impressive.

It was a digital amp.

Actually, it's not a digital amp at all. It's a 'switch mode' PWM analogue amp, otherwise known as Class D, very popular in pro-audio because you can make a powerful amp that doesn't weigh much. However, such amps frequently suffer from HF artifacts which can noticeably colour the sound. Some listeners tend to confuse this with clarity. It's a plain fact that the TAN88, a design from the '70s, simply did not have access to the ultra-fast switching devices now used for such amplifiers, and which have enormously improved their performance.

I'd be interested in something like that. The amp I have now is a Denon POA-1500-2, which has a lot of punch. It was inferior sonically to the TA-N88B, but it had the overwhelming advantage of not blowing up.

These amps have an advantage for roadies, in that they are advantageous in the kilograms-per-kilowatt stakes. They have shown *zero* advantage in sonic transparency - indeed, how could they? Good amp design has been a done deal for close on two decades, despite the claims of the more imaginative 'high enders'.

It's interesting that you should be so vocal - and so intransigent - about this early, and quite flawed, design, when the vastly superior modern Class D pro-audio amps made by Mackie et al tend to be sneered at by so-called 'high enders'.

I cannot say anything about those, but the TA-N88B was the best amp I have ever heard.

So you keep saying - but have no evidence for. I didn't find it at all exceptional. -- Stewart Pinkerton | Music is Art - Audio is Engineering |
#315
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
Bromo wrote:
On 4/20/04 1:53 PM, in article , "Steven Sullivan" wrote:

From what I understand, Harry can as well switch the two tests and *start* with the blind test and do the sighted test afterwards. This would eliminate the bias completely, as the first time he doesn't know if there are even any changes made, so he really has to find out if there is some *audible* difference.

I agree. I don't see why Harry has to go through the 'sighted evaluative' stage at all at this point -- nor why most audiophiles would. They *already* believe they can distinguish 'their' amps and cables from other amps and cables. All he needs to do is 'blinded evaluative' listening.

I attended a talk given by Paul Barton (PSB founder), and he does A/B double-blind tests - his comment was that anyone was good at this EXCEPT audiophile journalists - most of them were too scared of making an "incorrect" decision.

In my business, one of the greatest lessons I learned about sound from controlled testing was that some things do sound the same and IT'S ALRIGHT. This was a time when the 'everything makes a difference' merchandising and journalism was exploding. |
#316
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
"Nousaine" wrote in message
news:yExhc.40425$ru4.39405@attbi_s52... Bromo wrote: On 4/20/04 7:04 PM, in article r0ihc.7318$GR.869715@attbi_s01, "Nousaine" wrote:

It seems like you are claiming that if you were to listen to any other product following an evaluation of one product, then you have to "re-learn" the sound of your own gear?

Not to throw gasoline on the fire - I have found that when doing extensive listening to other setups, when you get back to your setup, you hear it with "new ears" to a large degree (for instance, listening to electrostatics and coming home to your cones and domes). It reminds me of when you go traveling and come back to your own house - the smells, sounds and so on are new to your senses to some degree, and it usually takes a day or two to get "used to it" again so it becomes transparent to you again. I suppose this is what happens in this case.

Sure, your description of human sensory adaptation exactly describes my point. That phenomenon doesn't take but a few minutes. And you don't even need to leave your house. Just don't listen to your system for 2 weeks and you'll sometimes have a version of that same effect. But you don't have to re-learn anything, as Harry's description would seem to indicate. You simply adapt to the sensory input of the environment as normal and you accept it as not being of interest until something 'changes.'

So your own system means nothing, huh?

One can demonstrate this to himself by turning on a fan. Flip the switch and the fan seems unduly loud. 10 minutes later when you turn it off, the room seems unusually quiet for a few minutes. This also highlights the point that differences in states will seem greatest at the switch. It's like that with sound too. Differences in acoustical sound quality will appear greatest at the point when they are switched.

Frankly, I read that as "differences in acoustical sound quantity".

Which is consistent with what we have learned so far about audible differences with music using this technique. The latter is why the ABX technique was developed.... to make evaluation and comparison at a point where differences that exist will be at the highest level of sensitivity. |
#319
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
On 4/21/04 12:31 PM, in article Elxhc.10834$GR.1339861@attbi_s01, "Stewart
Pinkerton" wrote: These amps have advantage for roadies, in that they are advantageous in the kilograms per kilowatt stakes. They have shown *zero* advantage in sonic transparency - indeed, how could they? Good amp design has been a done deal for close on two devcades, despite the claims of the more imaginative 'high enders'. While it is easy to design a mediocre amplifier, it is difficult, even today, to design a truly excellent one. This does not matter if it is an audio amp, amplifier in a cellular base station or TV transmitter. The advantages of digital amplifiers (switch mode) has barely begun - and it is a matter of will, time and money to make them as good and transparent as the current class A/AB circuits the define the best we can do currently. |
#320
|
|||
|
|||
Comments regarding: Cables, Hearing, Stuff!!
On 4/21/04 1:07 AM, in article Kknhc.8765$GR.1105595@attbi_s01, "Norman
Schwartz" wrote: "Bromo" wrote in message news:Lplhc.37177$yD1.107791@attbi_s54... Depends - I have found that tube amps sound different than solid state amps. A solid state PA amp sounds different than an amp, such as NAD. A lot of negative feedback seems to change the sound somewhat as well - and much of it should be measurable, though I am sure we haven't learned all there is to know about audio measurement. .... and then as for those tube amps, we must know which manufacturer and vintage of tube you are talking about, right? since tubes don't simply sound like tubes and then that's the end of the story? and after that, how many hours were on those tubes, and then of course how long you warmed them up before listening to music. Absolutely - I think the human ear can be much more sensitive to our ability to measure in a lot of cases. Kinda nice, though, since it given EE's like myself a whole career to try to devise means of closing the gap! |