  #1
Chelvam

We passed the DBT.

After reading many postings about the limits of human hearing, the claim that
differences between equipment above a certain quality level are inaudible, and
the argument that our current players' specs were already more than enough,
five of us decided to do our own DBT to see if we could tell the difference.

Three players were used; the specs given below were taken from the manuals:

Player A (price about $500):

S/N ratio = 115 dB

Harmonic distortion = 0.003%

Dynamic range = 99 dB

Frequency response = 2 Hz - 20 kHz (+/-0.5 dB)

Jitter = not given, but I think it is a few hundred ps.


Player B (Classe Audio CDP-10):

Frequency response = DC - 20 kHz (+/-0.1 dB)

S/N ratio = 100 dB typical? (I expected it to be higher.)

THD + noise = 0.00003 (wow! That means it's good, right?)

Dynamic range = 16 bit linear (I don't know what that is, but is it something
like 98 dB? -- see the aside just after these specs.)

Jitter = 2 ps (wow, again)
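
(An aside on that "16 bit linear" entry: it usually just means the theoretical
dynamic range of 16-bit linear PCM, roughly 6.02 x N + 1.76 dB, which for 16
bits works out to about 98 dB -- so the 98 dB guess above is right. A minimal
Python sketch of that arithmetic, for illustration only:

import math

def pcm_dynamic_range_db(bits):
    # Theoretical SNR of N-bit linear PCM with a full-scale sine:
    # 20*log10(2**N) + 1.76 dB, i.e. about 6.02*N + 1.76 dB.
    return 20 * math.log10(2 ** bits) + 1.76

print(round(pcm_dynamic_range_db(16), 1))   # -> 98.1

)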

Results

Four of us unanimously agreed that Player B sounded better than Player A. I am
not included because I was doing the switching. But something interesting
happened. After they took a short break in the garden (while I was supposedly
switching to Player C), they were asked to identify which player was playing
when they walked back in. They had to say whether it was C or not; in other
words, whether they had already heard this player, since they did not know
which was Player A, B, or C.

Points to note:

1. They had not heard C yet. Player A was playing, and they were misled into
deciding between Player B and C.

2. When I say Player A or B, they do not know which is which. (They would only
say that this sound is better than the earlier one. In principle I could play
the same player over and over again, deceiving them into thinking I am testing
multiple players.)


The result was not encouraging, and I would say all of them were guessing;
they could not tell for sure which player was playing.

Then I switched to the CDP-10 (Player B). They all agreed "this player" was
more pleasant, but they did not know whether it was Player A, B, or even C.

In the end, Player C was not tested. Between Players A and B, B won. According
to many on RAHE, the differences in the specs above are too small to make an
audible difference, yet four of us correctly identified Player B as superior
to Player A.

So, guys, how do you want to explain this, and what else should I do to make
the test more reliable?

Cheers.

  #2
chung

Chelvam wrote:

....snips.....

THD + noise = 0.00003 (wow! That means it's good, right?)


Have you missed the percentage sign?


....snips.....

Four of us unanimously agreed that Player B sounded better than Player A.


Blind or sighted?

....snips.....

So, guys, how do you want to explain this, and what else should I do to make
the test more reliable?


One extremely important point: you have to carefully match the playback
levels, to within 0.1 dB (roughly 1%). Your results are meaningless if the
levels are not matched.
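
(For a sense of how tight 0.1 dB is: it corresponds to a voltage ratio of
10^(0.1/20), i.e. roughly 1.2%. A small Python sketch of the check, assuming
you measure each player's output voltage on the same test tone; the 2.000 V
and 1.977 V readings are made-up examples:

import math

def level_mismatch_db(v1, v2):
    # Difference between two output levels, expressed in dB.
    return abs(20 * math.log10(v1 / v2))

# Hypothetical readings from the two players on the same 1 kHz test track:
print(round(level_mismatch_db(2.000, 1.977), 3))   # -> about 0.1 dB, at the limit

If the mismatch is much larger than that, the louder player will tend to be
heard as "better" regardless of any real difference.)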

  #3
Don Pearce

On 26 Jun 2004 14:32:23 GMT, "Chelvam" wrote:

....snips.....


I don't actually follow the protocol of the trial. Could you describe
it please?

d
Pearce Consulting
http://www.pearce.uk.com

  #4
Nousaine

"Chelvam" wrote:

After reading many postings about the limits of human hearing, the claim that
differences between equipment above a certain quality level are inaudible, and
the argument that our current players' specs were already more than enough,
five of us decided to do our own DBT to see if we could tell the difference.


You did not mention how this test qualifies as a double-blind test. You, as
proctor, knew which was which, so it could not properly have been double
blind. What bias controls were employed? Were levels matched? How were the
players synched for comparisons?


Three players were used; the specs given below were taken from the manuals:


....snips.....

Results

Four of us unanimously agreed that Player B sounded better than Player A. I am
not included because I was doing the switching. But something interesting
happened.


How were decisions recorded? Were decisions of each participant made in
private? It sounds as though conditions were 'open-session'.

....snips.....

In the end, Player C was not tested. Between Players A and B, B won. According
to many on RAHE, the differences in the specs above are too small to make an
audible difference, yet four of us correctly identified Player B as superior
to Player A.


I thought you weren't included? And because you had already "agreed" on what
the right answer was, it seems fairly obvious that it would be easy to get
that answer again with no bias-control protocols implemented. I hope it is
apparent that when the test was blind to the other subjects (misled about C),
they could no longer tell the players apart.

So, guys, how do you want to explain this, and what else should I do to make
the test more reliable?

Cheers.


Making the test more "reliable" is easy: just keep the bias mechanisms you
already have, or allow a few more in :-) To make the test double blind and
more valid as a listening test, implement more stringent bias controls: blind
the proctor, use private decision methods, match levels, synch player timing,
and include enough trials to get a legitimate number of decisions for
statistical analysis.
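
(A sketch of what one piece of that might look like in practice, assuming a
helper runs the switching from a pre-printed random schedule and each listener
writes answers down privately; the trial count and seed here are arbitrary:

import random

def make_schedule(n_trials=16, seed=2004):
    # Random presentation order for the switcher to follow; the listeners
    # (and ideally the person interacting with them) never see it.
    rng = random.Random(seed)
    return [rng.choice(["A", "B"]) for _ in range(n_trials)]

print(make_schedule())   # e.g. ['B', 'A', 'A', 'B', ...]

The listeners' written answers are only compared against the schedule after
all the trials are finished.)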

  #5
Bromo

On 6/26/04 7:14 PM, in article trnDc.101928$Hg2.29121@attbi_s04, "Nousaine"
wrote:

....snips.....

How were decisions recorded? Were decisions of each participant made in
private? It sounds as though conditions were 'open-session'.


It is important for us as a group to keep *our* biases under control. As
someone who has said that all CD players sound the same, you seem to be
searching for an answer that will confirm your apparent bias. I would
caution you against this.



  #6
Rich.Andrews

"Chelvam" wrote in
:

....snips.....


The audibility test is a nice first step, but what precise measurement could
be made to explain the difference? What is the cause of the better sound of
the unit in question?

Frequency response, etc., are all nice things to measure, but there are many
other things to measure.

Of course, there is also the issue of the D/A converter used, the D/A scheme,
the op-amps, the grade of each, etc. There may be too many variables to take
in at once.

r



--
Nothing beats the bandwidth of a station wagon filled with DLT tapes.

  #9
Chelvam

"Bromo" wrote in message
news:8VnDc.186683$Ly.95137@attbi_s01...
....snips.....

It is important for us as a group to keep *our* biases under control. As
someone who has said that all CD players sound the same, you seem to be
searching for an answer that will confirm your apparent bias. I would
caution you against this.


Actually, all this talk about audible and inaudible, 0.01%, jitter, and other
scientific measurements makes me wonder: would aliens conclude that the
difference between a human and a chimpanzee cannot be perceived, since their
genomic sequences are about 98.7% alike?

  #10
Nousaine

Bromo wrote:


On 6/27/04 10:01 AM, "Nousaine" wrote:

....snips.....


I've never said that all CD players sound the same.... I've only noted that
nobody has ever shown that the good ones don't.

I'm also not "searching" for any answers. The best way to "keep *our* biases
under control" would be to implement bias control in listening tests, such as
level/synch matching at a minimum.

Why not ask for details about any reports? Isn't it useful to know the
conditions of any experiment? How would this do anything except confirm
results?

It seems to me that your questions of me are simply an attempt to validate
results that you consider favorable by rejecting examination.


Not at all - that is not my intent, except that you do seem to have a penchant
for being a skeptic about most things.


Perhaps that's because I began using bias-controlled experiments in the late
'70s and have yet to find a replicable experiment showing that amp/wire/bit
"sound" exists beyond what the known thresholds of human hearing (level,
frequency, and time) would predict.

I've traveled halfway across the country at my own expense on more than one
occasion at the behest of subjects who promised to demonstrate amp/cable
sound but were unable to do so.

I've acquired copies of most of the bias-controlled listening tests that I'm
aware exist (MP3 tests excepted). I've conducted long-term, high-sensitivity,
single- and multiple-subject bias-controlled experiments and have yet to find
a single subject who was able to confirm amp/wire/capacitor sound.

Would you blame me for caution?



  #11
Bromo

On 6/27/04 4:02 PM, in article OJFDc.190029$Ly.4456@attbi_s01, "Nousaine"
wrote:

Would you blame me for caution?


Nope, not at all - but with such an array of experience, it is possible to
get jaded as well, which was my real caution! :-)

  #13
Bob Marcus

Nousaine wrote:

"Bob Marcus" wrote:

Here's what I would do next. You "know" that everybody prefers B to A. So do
a longer test; have each subject do 5 trials, mixing up A and B randomly so
they don't know which is which each time. This will give you 20 trials. If
they agree on B (or A!) at least 15 times out of 20, I'd say there's an
audible difference between them. But:

1) You must level-match to within 0.1 dB.
2) You should get someone other than yourself to do the random switching of
A and B, and keep that person away from the subjects.
3) You must make sure each subject is making his preference on his own,
without consulting the others.

Good luck.

bob


Another issue with CD players is synching of the playback start time.


In an identification test (e.g., ABX), absolutely, because if A and B are
out of synch it will be easy to tell which one X is in synch with. That's
why ABX tests of disk players are difficult for amateurs to pull off.

But what this fellow was doing was an A-B preference test. If you're just
choosing which of two alternatives you prefer, I would think that
time-synching wouldn't be as critical. You don't want the same one leading
every time, but just stopping and starting the players between each trial
might be enough to randomize this. If not, you could intentionally randomize
which player is leading each time.

bob
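
(On the "at least 15 times out of 20" criterion quoted above: under pure
guessing, the chance that one pre-named player is picked at least 15 times in
20 trials is about 2%, and about 4% if either player is allowed to "win". A
minimal Python check of those numbers, for illustration only:

from math import comb

def p_at_least(k, n, p=0.5):
    # Probability of at least k successes in n independent trials,
    # each with success probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(15, 20), 3))       # -> 0.021, one pre-named player
print(round(2 * p_at_least(15, 20), 3))   # -> 0.041, either player may "win"

So 15 of 20 is a reasonably strict threshold for calling a difference
audible.)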
