Posted to rec.audio.high-end
From: KH
Subject: A Brief History of CD DBTs

On 12/18/2012 10:18 AM, Scott wrote:
On Dec 18, 4:09 am, KH wrote:
On 12/17/2012 8:46 PM, Scott wrote:


On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message


...
On Dec 14, 8:17 pm, Barkingspyder wrote:


snip

But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing"
that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the
science flag again in this post, and yet here you are suggesting that a
proper analysis of the data would include taking audiophile banter into
account.


In this instance, as Arny presented it, it would not be "banter", but
would, rather, define the null hypothesis. I.e., instead of being
"there are no audible differences", it becomes "there are no major,
unmistakable audible differences". In a "typical" audiophile scenario,
these are the differences described. How many of these claims are
"unmistakable", "not at all subtle", etc.? In constructing the null
hypothesis of any test, these qualifiers cannot be casually ignored.

This is, to me, the heart of the stereotypical subjectivist argument
against DBT or ABX testing: the differences are claimed as obvious when
sighted, but then become obscured by any imposed test rigor. In testing
any such claim, the magnitude of the difference (e.g. "obvious to anyone
with ears") defines the precision and detectability requirements of the
test design.
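The arithmetic behind that last point can be sketched with a standard binomial power calculation. The 16-trial / 12-correct criterion and the discrimination rates below are illustrative assumptions, not figures from any actual test:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k correct answers."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, crit = 16, 12  # a common ABX protocol: 16 trials, 12+ correct rejects the null

alpha = binom_sf(crit, n, 0.5)          # false-positive rate under pure guessing
power_obvious = binom_sf(crit, n, 0.9)  # listener who truly hears an "obvious" difference
power_subtle = binom_sf(crit, n, 0.6)   # listener who hears only a subtle one

print(f"alpha          = {alpha:.3f}")          # ~0.04
print(f"power, p = 0.9 = {power_obvious:.3f}")  # ~0.98: "obvious" is almost never missed
print(f"power, p = 0.6 = {power_subtle:.3f}")   # ~0.17: subtle differences usually yield a null
```

On these assumed numbers, a null result is strong evidence against an "obvious to anyone with ears" difference, but says little about a subtle one, which is exactly why the claimed magnitude belongs in the null hypothesis.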


Well, thank goodness that in real science researchers know better than
to move the goalposts due to trash talking between audiophiles.


Well, some of us *are* engaged in *real* science on a daily basis, and
do understand the precepts.

I would
think that if objectivists were genuinely interested in applying
science to the question of amplifier sound they would not move the
goal posts nor would they use ABX DBTs the way they have when it comes
to amplifier sound.


The thread has nothing to do with "amplifier" sound.

That being, typically, breaking out ABX and failing to ever control for
same-sound bias, or even to calibrate the sensitivity of the test.
Without such calibration, a single null result tells us very little
about what was and was not learned about the sound of the components
under test.


Careful reading would show I clearly stipulated such requirements need
to be defined and accounted for. Arguing in favor of my stated position
isn't much of a refutation.


But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null. And if one thinks that claims from
subjectivists should alter that fact, then one simply doesn't
understand how real science deals with and interprets real scientific
data.


The "dogmatic" claims, as you describe them, were based on physics and
engineering principles, and the fact that listening tests, under
controlled conditions, have not shown results that dispute those
principles. There was no claim, as I read it, that any individual test
was applicable to all conditions. Quite the opposite, in fact: where
are the tests that contradict the physics and engineering principles?

Understanding the true significance of a single null result
does not require consideration of what you or anyone else has been told
by other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.


No, it does not. Real science builds its conclusions on an
accumulation of research.


No, every test has a conclusion, and is dispositive, if executed
accurately, within the limitations of the specific test.

Again, if one understands how science works, one should know the real
standing of a single null result: it is most certainly not something
one can reasonably close the books on and call final proof of no
difference.


The "books" are clearly closed on that test group, under those test
conditions. To think otherwise is to deny the relevance of all tests
under all conditions.
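The two positions are not actually in conflict, and the difference is easy to quantify: each run is dispositive at its own significance level, while accumulation across runs buys sensitivity. A sketch, with illustrative scores rather than data from any real test:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided p-value against guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

runs = [(11, 16), (11, 16), (11, 16)]  # three hypothetical runs of 11/16 correct
for k, n in runs:
    print(f"{k}/{n}: p = {binom_sf(k, n):.3f}")  # ~0.105 each: a null every time

# Pooling the identical trials turns three nulls into a clear rejection.
k_tot = sum(k for k, _ in runs)
n_tot = sum(n for _, n in runs)
print(f"pooled {k_tot}/{n_tot}: p = {binom_sf(k_tot, n_tot):.4f}")  # well under 0.05
```

Pooling is only legitimate when the runs really are repetitions under the same conditions; otherwise each result stands, or falls, on its own.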


For that to affect the weight placed on any single
test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible
hypotheses. Any null result for a discrimination test evaluating
"obvious" differences will be significant, if not dispositive, for that
test and equipment, as long as the test is set up properly.


Sorry, but you are plainly wrong. No scientist would ever put that much
stock in one test. It runs contrary to the very ideas of
falsifiability, peer review, and verification via repetition of
previous tests. Very, very unscientific.


Nonsense. Do one tox study and argue that a 90% rate of severe adverse
effects doesn't mean anything. See how far that gets you. And, in any
event, that has zero to do with falsifiability. The results of any
study stand on their own unless and until they are demonstrated to be
suspect, or wrong. If the test is not designed to be falsifiable, it is
a defective design irrespective of how the data are analyzed or used.
Perhaps you need to brush up on what falsifiability means in test
design.

snip

Sorry, but you seem to be using a rather idiosyncratic definition of
"fact", as "real scientists" make claims of fact for every such study.


Complete nonsense. And you say this after bringing up the null
hypothesis. You might want to read up on the null hypothesis and what
it proves and what it does not prove.
http://en.wikipedia.org/wiki/Null_hypothesis


I suggest you follow your own recommendation.


The results
*are* facts, and are true, and applicable, within the constraints and
confidence interval of the test design. To believe otherwise would
require a refutation of statistics. If you doubt this, then please
explain exactly how many tests are required to result in "facts".
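What "within the confidence interval of the test design" buys you can be made concrete: from a single null score one can compute an exact upper confidence bound on the listener's true discrimination rate. A sketch; the 9-of-16 score is hypothetical, and the bound is a one-sided Clopper-Pearson limit found by bisection:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def upper_bound(k, n, conf=0.95):
    """Largest true discrimination rate p still consistent (at level conf)
    with observing only k correct out of n: solve P(X <= k | p) = 1 - conf."""
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection; binom_cdf is decreasing in p
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > 1 - conf:
            lo = mid  # mid is too small a rate to be ruled out yet
        else:
            hi = mid
    return lo

# A null result: 9 of 16 correct, statistically indistinguishable from guessing.
print(f"95% upper bound on discrimination rate: {upper_bound(9, 16):.2f}")
```

The null is a fact about that session, but the fact comes with a bound: on these numbers a 9/16 score still leaves discrimination rates up to roughly 0.8 on the table, which is precisely why the result is dispositive only within the limits of the test.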


No, the results are not facts; the results are data.


Data *are* objective facts. What do you think they are if not facts?

Often in this kind of research one will find conflicting data. That is
why no one who understands these kinds of things would ever draw a
conclusion of fact from a single test. To say it would be a hasty
conclusion would be an understatement.


Clearly you need to brush up on what constitutes "data", "facts", and
"conclusions". They are neither interchangeable nor fungible, and you
are conflating "facts" with "conclusions". The only relevant conclusion
I saw in the subject post had to do with the lack of data contravening
known physical and engineering principles, not the citing of any single
test as globally applicable.

Keith