
Blind Listening Tests are Flawed: An Editorial

Robert Harley -- Wed, 05/28/2008 - 16:18

The following is my editorial from The Absolute Sound Issue 183 (not yet published) on blind listening tests.

The Blind (Mis-) Leading the Blind

Every few years, the results of some blind listening test are announced that purportedly “prove” an absurd conclusion. These tests, ironically, say more about the flaws inherent in blind listening tests than about the phenomena in question.

The latest in this long history is a double-blind test that, the authors conclude, demonstrates that 44.1kHz/16-bit digital audio is indistinguishable from high-resolution digital. Note the word “indistinguishable.” The authors aren’t saying that high-res digital might sound a little different from Red Book CD but is no better. Or that high-res digital is only slightly better and not worth the additional cost. Rather, they reached the startling conclusion that CD-quality audio sounds exactly the same as 96kHz/24-bit PCM and DSD, the encoding scheme used in SACD. That is, under double-blind test conditions, 60 expert listeners over 554 trials couldn’t hear any differences between CD, SACD, and 96/24. The study was published in the September 2007 issue of the Journal of the Audio Engineering Society.

I contend that such tests are an indictment of blind listening tests in general because of the patently absurd conclusions to which they lead. A notable example is the blind listening test conducted by Stereo Review that concluded that a pair of Mark Levinson monoblocks, an output-transformerless tubed amplifier, and a $220 Pioneer receiver were all sonically identical. (“Do All Amplifiers Sound the Same?” published in the January 1987 issue.)

Most such tests, including this new CD vs. high-res comparison, are performed not by disinterested experimenters on a quest for the truth but by partisan hacks on a mission to discredit audiophiles. But blind listening tests lead to the wrong conclusions even when the experimenters’ motives are pure. A good example is the listening tests conducted by Swedish Radio (analogous to the BBC) to decide whether one of the low-bit-rate codecs under consideration by the European Broadcasting Union was good enough to replace FM broadcasting in Europe.

Swedish Radio developed an elaborate listening methodology called “double-blind, triple-stimulus, hidden-reference.” A “subject” (listener) would hear three “objects” (musical presentations); presentation A was always the unprocessed signal, and the listener was required to identify whether presentation B or C had been processed through the codec.
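
For clarity, a single trial of that protocol has the following structure. This is a minimal illustrative sketch in Python of the trial logic as just described, not Swedish Radio’s actual test software; the function and variable names are invented:

```python
import random

def triple_stimulus_trial(reference_clip, coded_clip):
    """One 'double-blind, triple-stimulus, hidden-reference' trial.

    Presentation A is always the unprocessed reference; the codec-processed
    clip is randomly assigned to slot B or C, and the other slot holds a
    hidden copy of the reference. The listener must say whether B or C was
    processed, and the response is scored against the hidden assignment.
    """
    processed_slot = random.choice(["B", "C"])
    presentations = {
        "A": reference_clip,
        "B": coded_clip if processed_slot == "B" else reference_clip,
        "C": coded_clip if processed_slot == "C" else reference_clip,
    }
    return presentations, processed_slot

# A listener who cannot hear the codec picks the processed slot about half
# the time; scores reliably above 50% over many trials are the evidence
# that the codec is audible.
```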

The test involved 60 “expert” listeners and spanned 20,000 evaluations over a period of two years. Swedish Radio announced in 1991 that it had narrowed the field to two codecs, and that “both codecs have now reached a level of performance where they fulfill the EBU requirements for a distribution codec.” In other words, Swedish Radio said the codec was good enough to replace analog FM broadcasts in Europe. This decision was based on data gathered during the 20,000 “double-blind, triple-stimulus, hidden-reference” listening trials. (The listening-test methodology and statistical analysis are documented in detail in “Subjective Assessments on Low Bit-Rate Audio Codecs,” by C. Grewin and T. Rydén, published in the proceedings of the 10th International Audio Engineering Society Conference, “Images of Audio.”)

After announcing its decision, Swedish Radio sent a tape of music processed by the selected codec to the late Bart Locanthi, an acknowledged expert in digital audio and chairman of an ad hoc committee formed to independently evaluate low-bit-rate codecs. Using the same non-blind observational-listening techniques that audiophiles routinely use to evaluate sound quality, Locanthi instantly identified an artifact of the codec. After Locanthi informed Swedish Radio of the artifact (an idle tone at 1.5kHz), listeners at Swedish Radio also instantly heard the distortion. (Locanthi’s account of the episode is documented in an audio recording played at a workshop on low-bit-rate codecs at the 91st AES convention.)

How is it possible that a single listener, using non-blind observational listening techniques, was able to discover—in less than ten minutes—a distortion that escaped the scrutiny of 60 expert listeners, 20,000 trials conducted over a two-year period, an elaborate “double-blind, triple-stimulus, hidden-reference” methodology, and sophisticated statistical analysis?

The answer is that blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon.

As exemplified by yet another reader letter published in this issue, many people naively assume that blind listening tests are somehow more rigorous and honest than the “single-presentation” observational listening protocols practiced in product reviewing. There’s a common misperception that the undeniable value of blind studies of new drugs, for example, automatically confers utility on blind listening tests.

I’ve thought quite a bit about this subject and written what I hope is a fairly reasoned and in-depth analysis of why blind listening tests are flawed. This analysis is part of a larger statement on critical listening and the conflict between audio “subjectivists” and “objectivists,” which I presented in a paper to the Audio Engineering Society entitled “The Role of Critical Listening in Evaluating Audio Equipment Quality.” You can read the entire paper here: http://www.avguide.com/news/2008/05/28/the-role-of-critical-listening-in-evaluating-audio-equipment-quality/. I invite readers to comment on the paper, and to discuss blind listening tests, on a special new Forum on AVguide.com. The Forum, called “Evaluation, Testing, Measurement, and Perception,” will explore how to evaluate products, how to report on that evaluation, and how to link that evaluation to real experience and value. I look forward to hearing your opinions and ideas.

Robert Harley

Jonathan (not verified) -- Wed, 07/01/2009 - 14:03

Dear Robert,
I'm a university professor writing on the MPEG tests.  I found your story quite interesting, but I can't seem to track down a source for that Bart Locanthi tape you mention.  Where might I find a copy?  Or is it written down anywhere else besides your editorial?  How did MPEG people respond to Locanthi's findings?
Thanks.  I assume you can look up my email if you'd prefer to respond privately.
Yours,
An interested reader.

Robert Harley -- Wed, 07/01/2009 - 14:46

Jonathan:
The Bart Locanthi tape was played during a workshop on low-bit-rate coding at an Audio Engineering Society convention. It's been a long time, but if forced to guess as to the time and location, I would say the convention was in Los Angeles between 1993 and 1996. It might be possible to look at the programs of past conventions at www.aes.org and figure out which convention it was from the presence of the workshop. You can then buy recordings of the workshop from the AES. If I recall correctly, Ron Streicher was the workshop chairman. Incidentally, Locanthi formed an ad hoc committee within the AES to independently evaluate (through listening tests) perceptual codecs. He and others in the Los Angeles audio community were concerned that standards were being set without adequate vetting through critical listening. This was probably prompted in part by the hubris of the developers of MP3 (Karlheinz Brandenburg in particular), who used phrases such as "psychoacoustic redundancy" and "informational irrelevance" in their papers, and who seemed to have a complete disregard for the effect these codecs had on the listening experience.

5 decade audio enthusiast (not verified) -- Sun, 07/26/2009 - 14:41

The introduction of new equipment to an audio system is filled with chemical anticipation flushing through the reward centres of the brain. The product costs the earth, so it must provide miracles on demand. It is how we are brought up from birth: price = value. This, regardless of the fact that most of today's supermarket boxes and "audiophile" quality components are both made and labelled in exactly the same factory in China. I find that different CD issues of the same track sound more different from one another than the latest high-end player does from yesterday's very modest hi-fi dealer's shelf-filler. I know this because I own both.
-
To be honest with yourself is far more difficult than being honest with others. I would not buy a used amplifier from Mr Harley. He is not only deluded but demands others are too. This is worse than pushing religious belief on others. His gullible victims will spend far more than is necessary to enjoy music to the highest possible modern standards. The pointless excess expenditure would be far better spent on the music which fuels the system and lifts the musical soul. Happiness is an audio system you can live with. When you stop listening to your system and start listening to the music, you are finally on your way to a happier and healthier life. The desire to own more music is a sure sign of balance in any audio system and its owner. The pursuit of audio perfection has nothing to do with music and is destined to lead to lifelong, obsessive dissatisfaction and bitter disillusionment. Only those with small egos are driven to such comparisons. It can become a hobby in itself, but it is a negative one unless you make the components yourself. Only then does it become a positive benefit to the enjoyment of life. Creativity is constructive and life-enhancing for both body and mind. Mere expenditure is completely unskilled labour requiring only blind greed in the participant.
 

Mr Plus -- Sun, 07/26/2009 - 20:34

I don't think many audiophile readers of this site will take kindly to being referred to as 'gullible victims'. And many of those who pursue the goal of better music replay using good audio equipment do so from the position of music lover first and foremost. Ultimately, if someone derives pleasure from owning something, and in so doing they are putting some much-needed money back into our economies, what's the problem?

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

5 decade audio enthusiast (not verified) -- Mon, 07/27/2009 - 04:33

Hi Alan
One is a victim if one has been conned by a global industry and a supporting cast of countless audio magazines fed by manufacturers' advertising disguised as backhanders. All make a living from selling products which sound identical. Vinyl sounds completely different to my ears, but I cannot distinguish between my various CD players and DVD players playing the same CD after countless hours of repeating the same track. Yet I can quite easily differentiate between different releases of the same track on CD. So much for identikit digital recordings! There is nothing wrong with owning nice equipment for its decorative qualities. If you think you can tell them apart with your ears then you are mistaken. To gain financially by telling others that you can tell them apart is quite probably fraud. Perhaps you should turn to astrology to help you decide which latest bit of kit sounds best for your glowing reviews? Or stick to speaker and subwoofer reviewing, where differences are obvious and the results much safer from a legal point of view.
Drug abuse also puts lots of money into the economy worldwide and makes people happy for a while. There are obvious parallels with audio here. ;-)

Mr Plus -- Mon, 07/27/2009 - 07:12

"If you think you can tell them apart with your ears then you are mistaken."
 
Once again I disagree strongly.
 
I have worked on magazines in the UK that ran blind, level-matched AB and ABX tests. I have administered these tests, assisted on them and acted as listener. The panel of listeners was not able to determine the look, the price or the brand name of the product. Such tests were run across a genre, not within a price band, so the magazine would check 25 CD players between £200 and £2,000, for example. It often didn't pan out that the most expensive one was the best in the group (sometimes the cheapest was the best-sounding), but differences were clearly identified. Products would be periodically re-submitted within the test (and in subsequent tests) and - although not always identified as the re-submission - the listeners were surprisingly consistent and frequently described the re-submitted product using the same terms they used on the first submission. Results were more consistent when running AB over ABX, and ABX testing was ultimately rejected because it proved almost impossible to get listeners to return, as they felt "like lab rats" at the end of each ABX session.
 
These tests have been rejected by the hard-line objectivists as 'not objective enough', because the tests are not practical to run as double-blind ABX and it is not economically viable to have each product submitted 16 times to eliminate the possibility of the test being a glorified coin-toss. Or maybe they have been rejected because they happen to turn up answers that do not sit comfortably with the hard-line objectivists' position that everything sounds the same.
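
That "glorified coin-toss" point is simple binomial arithmetic, and it may help to see the numbers. Here is a minimal sketch in Python (my own illustration, not anything we ran at the magazine) of why a product needs on the order of 16 presentations before a listener's score can be told apart from guessing:

```python
from math import comb

def p_value(correct, trials):
    """Chance of scoring at least `correct` out of `trials` blind
    presentations by guessing alone (guess probability = 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 presentations, a listener needs 12 or more correct calls before
# guessing stops being a plausible explanation (p < 0.05):
for correct in (10, 11, 12, 13):
    print(correct, round(p_value(correct, 16), 3))
# 10 0.227
# 11 0.105
# 12 0.038
# 13 0.011
```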
 
I don't run these tests for two reasons: they invariably turn in very similar results to sighted tests, and they read like they've been written by committee. Which, in a way, they are. The magazine that used to run such tests now runs them on a much smaller scale (typically six products in a specific price category), as they were met with increasing confusion among the readership.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

5 decade audio enthusiast (not verified) -- Tue, 07/28/2009 - 03:07

 
Alan
The reason participants do not want to return to A/B/X testing is because they are stressed by their inability to identify differences between products. Reading about A/B/X completely undermines the audio glutton's raison d'être. The old saw "a fool and his money are soon parted" was never more true than in audio. The victims of this obsessive-compulsive disorder are drawn back to the dealer like tarts to a high street jeweller's window. It is the perfect con, since the onus lies with the naive buyer to come into the dealer's temple, make the choice of offering to the audio gods and pay heavily for the privilege. They then take the artefact back home and arrange it on their audio altar/rack with all the care, ritual and ceremony of a religious devotee. Usually to be disappointed when the promised miracle doesn't instantly happen. This will usually require the personal attention of the dealer/coach/minister/soothsayer/witch doctor/priest/guru/consultant.
 
It's really all about attention seeking in a world which lacks respect for the obsessive audio junk collector. Spending more on the next model up will always cure the patient's complaint about the first product's inadequacies. The manufacturer's ads/annual new model release/carpeted dealer/glossy mag/forum mafia constantly reinforce the insecurity in the hapless victim. Which always requires another fix. Dissatisfaction is the name of the game. Satisfaction means bankruptcy! Keep them hungry for more! The victims get upset if you question their behaviour. Just like a religious person gets upset if you question their faith. More parallels! Call your audio product "high-end" and you can sell empty cotton reels at £/$1000 a shot to support the £/$K000 snake-oil cables of their choice.
 
A few successful prosecutions would destroy the audio market bubble and probably make a lot of people angry and very unhappy to be reminded of their foolishness. Is it better to point out to the audio faithful that their religion has absolutely no basis in fact? Who knows? With increasing emphasis on digital, and the visual entertainment aspects in particular, the days are already numbered for this particular fad. The visual glamour and excitement of glowing valves and spinning platters is morphing into faceless, black, digital computer boxes with a single blue eye. A/B/X won't kill audio. Audio is attending its own funeral with a black digital spike thrust through its cold heart. Long live the 100" 3D OLED screen and the one-box, anonymous sound system out in the utilities cupboard beside the electricity meter. Just above the washing machine.

Mr Plus -- Tue, 07/28/2009 - 04:31

Given your last post, it's not hard to see why I said that "tests have been rejected by the hard-line objectivists as 'not objective enough'." Thank you for making my point for me.
 
I have run ABX tests and explained why the participants were reluctant to repeat the experience. You now jump to a conclusion that suggests it has something to do with them being "stressed by their inability to identify differences between products", then use that particular straw-man argument to come up with a series of increasingly angst-ridden conclusions.
 
The real reason why participants disliked the ABX experience was down to the nature of the process itself, rather than the conclusions drawn from that process. I don't know if you have ever sat in an ABX test, but it can be a soul-destroying experience for the listener, especially when running repeated tests. Even lab rats get a reward if they get the right answer. Participants in these ABX tests got rewarded with their choice of aspirin, ibuprofen or paracetamol and a free lunch, but that was about it. Unfortunately, I was forced to withdraw from ABX testing by the participants after just two morning sessions of testing (in both cases, they refused to return for the afternoon sessions). Interestingly, when tabulated after the event, the results of those tests suggested they were reliably hearing differences, but the number of ABX tests sat was too low to eliminate randomness.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

5 decade audio enthusiast (not verified) -- Fri, 07/31/2009 - 02:48

Alan
I am not "angst ridden". I simply put all my energy into speaker building and listening to music these days. Apart form the source software the speaker system is the only place in any audio system where real differences actually exist. Your attempts to undermine my own very long experience (and maintain your lucrative employment by false pretences) is duly noted. ;-)
 
I was a confirmed subjectivist and a driven system modifier until my wife offered to make a simple blind A/B swap involving a new piece of audio equipment. I was utterly convinced that the item caused a night-and-day difference to my system's sound quality. I had believed the glowing reviews in various subjectivist magazines of the time. The reviewers' and my own opinions were completely false. In practice the item made zero difference to SQ. This was an eye-opener which turned me into a questioner of all the false promises behind the constant need for reinvestment in shiny new boxes. I still spend hours testing my own ability to confirm differences between new purchases and the redundant item, but would much rather be enjoying music. If I can discern differences between the same track on different CDs (as can my wife, without prompting) then my decades of enjoying audio in the multi-thousand £/$ range have not made me completely deaf. Nor addled my senses.
 
In my own experience CD sound quality is awful but a practical medium for casual listening. Vinyl trashes CD but is a very fragile medium. How's that for blind subjectivism? ;-) In audio there are only opinions. Everybody has them. Every single listener's opinion is equally valid. Very few of them are perfectly formed. As can be judged by attending high-end audio shows in smart hotels. Read the online forums after the event as members bewail the dreadful performance of a top-end system by their favourite manufacturer. Then read other members' posts extolling the near-angelic SQ of exactly the same system. Neither member is likely to have been stressed as occurs in an A/B test or a dealer demo. They enter the room full of warm anticipation. One leaves delighted. The other wanders out completely dejected. Personal sensitivity to room modes or damping? Hardly. The manufacturer is only there to push product. You'd think they'd get it right, wouldn't you? Few such demonstrations attract a static (or ecstatic) audience for the duration of the show, regardless of the expense of the components in the system. Yet one would think that if the punters were at home with such a system they would never go out! These top-end systems are the constantly moving goal posts of the hierarchical audio system for many audio fans. The stairway to heaven, regardless of musical taste. At such shows I have found myself glued to an uncomfortable chair in some lowly system demonstration as other visitors come and go. One glance at the system and they can "tell" they are wasting their time in there. Yet the music is filling the room like a second glass of good wine. Quite unlike the megabuck system next door with its chest-pounding bass and tinkly highs and dustbin-sized amps and skyscraper speakers. All of which leave every listener impressed but completely unmoved.
 
I have one very simple test for judging the quality of any audio component or system. It requires no stressful audition or critical listening. In fact the more relaxed one is the better.
 
Does the system make me want to buy or hear more music?
 
Nothing else matters. Not the price. Nor the label. Nor the appearance. Nor the fancy internal components. Nor the technology. Nor the endless hype. Do I feel I want to beg, steal or borrow more music to play on this system? It really is as simple as it sounds. If, after a purchase, you find your system is gathering dust then the new item was a false move. It has denied you its promised pleasure. If, however, you are staying up late again and the CDs/LPs are scattered in disarray around your limp form as you lie half-slumped in the "hot seat", then something must be right with the world of audio. The music is the thing. Everything else just gets in the way. Including "better sound quality". A red herring if ever there was one. :-)

Mr Plus -- Fri, 07/31/2009 - 11:19

First, define your terms. You mention "a simple blind A/B swap involving a new piece of audio equipment". Was this level-matched (assuming, of course, that the nature of the audio equipment produced a difference that would require level-matching)? How did your wife signal to you that she had moved from product A to product B?
 
The reason I ask these questions is that I have conducted single-blind AB tests (as you describe), as well as single- and double-blind, level-matched AB tests on many occasions, as well as the 'match abandoned' ABX test described earlier. Even the blind, level-matched AB test was considerably more robust and less prone to external bias than a "simple blind A/B swap."
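
For readers unfamiliar with the mechanics: level matching is normally done by playing the same test signal through both devices, measuring each output, and trimming the gain until the difference is a small fraction of a decibel. A minimal sketch in Python of that calculation; the function names and the 0.1dB figure are illustrative assumptions, not our exact procedure:

```python
from math import log10, sqrt

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def level_offset_db(samples_a, samples_b):
    """Level of device B relative to device A, in decibels, measured
    from recordings of each device playing the same test signal."""
    return 20 * log10(rms(samples_b) / rms(samples_a))

# If abs(level_offset_db(a, b)) exceeds a small threshold (say 0.1 dB),
# one device's gain is trimmed before the comparison begins; untrimmed,
# the louder device tends to be heard as the "better" one.
```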
 
And that's why I fundamentally disagree with you. The tests described above repeatedly, reliably and robustly highlighted changes between individual components. Moreover, when test products were later resubmitted, the terms used to describe the product originally were remarkably close to those used when it was submitted a second time. So how come these more demanding tests turned in results that your less reliable test criteria failed to detect? A false negative, perhaps?
 
Also, the statement "Does the system make me want to buy or hear more music?" is a prerequisite of critical listening, whether auditioning loudspeakers or any other aspect of the system. But it is very dependent on the system, not simply the loudspeakers. Here's a perfect case in point: I am covering the Waterfall Victoria EVO speakers for an upcoming issue of HF+. These are good, if slightly mid-forward, loudspeaker towers made of glass. Used with a modest 25W class A integrated amp, the speakers sound great and listening sessions last long into the evening; but used with a set of 200W monoblocs and a two-box preamp costing some six times as much as the integrated amp, the speakers still sound great, yet listening sessions never lasted beyond an hour. Nothing else changed. That can't be right - the amplifiers sound the same, don't they? Swapping the speakers delivered completely the opposite result, but that can't be right either - there's no such thing as system synergy.
 
Such changes are subtle, but commonplace. One of my major criticisms of ABX testing is that it rarely takes the length of the listening session into account. In theory it can, but in reality, listening sessions are spent with a fraction of a track on A, a fraction on B and the same fraction on X. It's what ultimately made me (and many others) abandon the blind AB panel test - they go for the best-sounding product at the expense of the one that sounds most enjoyable over long-term listening.
 

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Terje Skjaerpe (not verified) -- Sat, 08/15/2009 - 16:44

What you are writing is very interesting: "The tests described above repeatedly, reliably and robustly highlighted changes between individual components". I presume that "components" means CD/DVD players, amplifiers and preamplifiers. Assuming that no component was of a very low quality, you seem to be one of the first to observe differences between amplifiers in blinded tests. Since you are confident about it, I also assume that you took the necessary notes of the results, and made statistical tests to ensure significant differences. It would be extremely interesting to see your data and the results of the statistics. I think AB tests are valid as long as they are completely blinded (double-blinded), and as long as A and B are randomly selected. There should also be an "external" observer who could attest to the effectiveness of the blinding and randomization.
Considering what I have written above, you may suspect that I am out to get you. That is not my intention. If you can document your results, they really would be interesting. If, however, you cannot present any documentation - well, then I got you after all.
I would be happy with a website link or a PDF file. If you only have a copy of a representative issue of your magazine, I will be happy to buy one from you.

Mr Plus -- Sun, 08/16/2009 - 14:17

The tests described were a series of reviews run in Hi-Fi Choice magazine in the UK from the late 1980s until the late 1990s. The magazine at that time was owned by Dennis Publishing. The magazine still exists, but is now owned by Future Publishing (a different company), and the new company no longer runs an archive of that magazine's reviews; and - because that magazine was one of the first to insist upon owning all copyright for published material - I could not reprint any of the tests I either ran or participated in, due to that material being copyright-protected. So there are no legal PDFs available of these tests, although there are many bootleg scans reprinted by readers from that time. Ask on some of the UK forums (like Pink Fish or Hi-Fi Wigwam). If you consider this a 'gotcha', then OK, you got me.
 
The protocol was developed by Paul Miller, then a contributor to Hi-Fi Choice and now editor of Hi-Fi News and Record Review in the UK. The tests ran as follows. A regular team of listeners was used, engaging three listeners from a pool of myself, Guy Sergeant (originally of Audio Innovations, then of JPW, now of Pure Sound), Roger Batchelor (Denon), Andy Whittle (then of Rogers, now of Audio Note), Mark Hockey (then Kenwood, now Harman UK) and a few irregulars. Products were inserted into a system comprising a Pink Triangle PT TOO, an SME V arm (I can't remember the cartridge), a TEAC Esoteric transport (I think) into a Deltec DAC, and a Deltec 100S pre/power amplifier system into a pair of Snell E loudspeakers on Stand & Deliver stands (Deltec Black Slink cable was used throughout). The Deltec equipment was chosen because it measured better than any product on the market at the time and the design could effectively be run with cables out of the audio path (in fairness, I have precisely no idea how this worked; you would need to speak to Rob Watts - currently with Chord Electronics - for an explanation). Products submitted for test were 'run in' for several days before being submitted, and were level-matched to within a fraction of a decibel.
 
How the test worked was as follows: the reference system was played. A new product was introduced and compared AB. Notes were taken and recorded. The next product would be introduced and compared to the previous model AB. Periodically, the reference point was re-introduced and products under test were re-submitted to ensure consistency. The products were not selected at random, although they were introduced into the test at random. Product categories were typically broad - CD players between £150 and £3,000, for example. The listeners were aware of the type of product they were listening to, but they were not aware of the product names or prices until the test was concluded. Nor were they aware of the product names of the resubmitted models, or how many resubmissions were introduced, until after the test was over.
 
Each product was initially presented more than once, although this was quickly deemed not cost-effective by the magazine's publishers, and the bulk of the testing was limited to single presentations for most products, with randomly selected repeats of one or possibly two devices under test. An external observer was not used - "it's a magazine, not a science project" was the typical cry from the publishers. The tests were run single-blind and were not repeated often enough for statistical significance, for pretty much the same reasons.
 
Most of these tests were run by Paul Miller, although I did attempt to replicate the protocol (using different equipment - PT Anniversary turntable, SME V arm, Denon DL-304 cartridge, Meridian 508 CD player, John Shearne Phase One amplifiers and ProAc Response One speakers) with mixed success. Rather than a repeated series of AB tests - effectively ABCDEFG... - I chose to run a 'pool table', or 'winner stays on', system, meaning for example AB, AC, CD, CE, CF, FG and so on. I also once instigated a full double-blind ABX test at Paul Miller's, with him at the controls, me giving the orders and GS, MH and RB in the listening seats. As I have stated earlier, this was quickly rejected by these regulars because they felt more divorced than ever from the listening test process. As I have also said, the abandonment of this test meant that any results generated here were not statistically significant; although early indications suggested listeners were hearing differences, I am well aware that with precious few presentations this could be nothing more than a coin toss.
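
To spell the 'pool table' ordering out: the preferred product of each AB pairing stays in the system to meet the next challenger. A minimal sketch in Python; the judge function stands in for the panel's verdict, and the ranking in the example is invented purely so that it reproduces the pairing sequence above:

```python
def winner_stays_on(products, judge):
    """Run a 'pool table' comparison: the preferred product of each AB
    pairing stays on to face the next challenger. `judge(a, b)` returns
    whichever of the pair the panel prefers."""
    champion = products[0]
    pairings = []
    for challenger in products[1:]:
        pairings.append(champion + challenger)
        champion = judge(champion, challenger)
    return champion, pairings

# A hypothetical panel verdict: F is preferred over C, C over A, and A
# over everything else it happens to meet.
rank = {p: i for i, p in enumerate("FCABDEG")}
champion, pairings = winner_stays_on(
    list("ABCDEFG"), judge=lambda a, b: a if rank[a] < rank[b] else b)
print(champion)  # F
print(pairings)  # ['AB', 'AC', 'CD', 'CE', 'CF', 'FG']
```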
 
I am also well aware that the initial protocol would never provide statistically significant data and the test that potentially could provide such data was abandoned long before the test began to deliver enough resits to count. To the best of my knowledge I have never made a claim to suggest they are statistically significant. I suspect that the appearance of such statistically relevant tests would be exceptionally rare in print (even in a magazine that understood the importance of such tests) because the concepts underlying such tests are not commonly understood by the wider readership, reflecting the sorry level of understanding of mathematics and especially statistics in the UK and US publics at large. It was also rather difficult to justify the extra time involved in running a statistically-relevant double-blind ABX test, when the person who ultimately sanctioned the payment for the test asks if you had to retake the test because you forgot to close your eyes.
 
I still maintain that these single-blind AB tests do highlight changes between individual components in a repeatable, reliable and robust manner. The reason why I feel justified in saying this is the repeated products; these were often repeated not just in a single test, but longitudinally - if an amplifier is described as 'lifeless' in one test, 'boring' on the re-submission and 'soulless' when tested again a year or so later, I maintain it's a pretty good bet that this is an intrinsic part of the character of the amplifier. It's the same repeatable, reliable and robust manner used in the development and engineering stages of manufacturing these products. I accept that these tests do not provide the sort of data demanded by those who take a similarly empirical - but hard objectivist - position, but there we must disagree on epistemic grounds.
 
All of which invites the question: why do I no longer support blind AB testing in regular reviewing? I have two interrelated difficulties with the concept. First, it is at one remove from the typical buying process - our tests should be a simulacrum of the experience of a prospective owner of the product, and a blind test (of any description) is not a function of that experience. Secondly, the blind AB process is invalidated by one of the most basic actions of a listener in reality: volume adjustment. If one plays a piece of music, one's first action generally is to adjust the volume control up or down until one reaches a 'sweet spot'. This is a dynamic process: a function of the musical genre, one's own feelings at the time of listening and the overall performance of a system. How one reacts to a system in terms of turning a piece of music up or down is remarkably important in that person's perception of what constitutes good or bad sound. And this basic action is entirely removed from the blind AB test, where the volume of both must be level-matched if they are to be compared. I still maintain the blind AB test has its place (as does double-blind AB and ABX testing), but that place is further up the developmental chain than the reviewer's chair or the audio salesperson's demonstration suite.
 
Incidentally, some blind listening tests are still conducted today at that magazine, but by another reviewer. The number of products submitted for comparison is now very low (typically six products instead of as many as 20 a decade and a half ago) and the methodology is different from the earlier tests (the groups are far narrower in scope, the blind listening panel's results are supplemented by sighted listening notes, and I am unsure whether products are resubmitted as controls). These changes to the test parameters were made in part because the original participants were scattered to the four winds of the UK audio scene, but also as a reflection of reader response to blind tests in general. The relevance of whether the test is performed sighted, blind or double-blind, or whether the test is conducted solus, AB or ABX, was considered less important than the review being written in an entertaining manner.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Terje Skjaerpe (not verified) -- Mon, 08/17/2009 - 16:07

 Thank you for a detailed answer.
You indicate that there is documentation, but that you cannot provide me with any. Well, that may be the case, and therefore I didn’t get you. The really big problem with your tests, as I see it, is expressed by your publisher: "it's a magazine, not a science project". Basically, what science means is “to find out”. Your publisher is really saying “it’s a magazine, we are not here to find out”.
Even if you don’t admit it openly, your attitude is similar. To cite you: 

“I am also well aware that the initial protocol would never provide statistically significant data and the test that potentially could provide such data was abandoned long before the test began to deliver enough resits to count. To the best of my knowledge I have never made a claim to suggest they are statistically significant”.

What you are saying is that tests were underpowered to prove any differences, and that you knew it before you ran the tests.
 
You continue:
“I suspect that the appearance of such statistically relevant tests would be exceptionally rare in print (even in a magazine that understood the importance of such tests) because the concepts underlying such tests are not commonly understood by the wider readership, reflecting the sorry level of understanding of mathematics and especially statistics in the UK and US publics at large”.

This is a very common misunderstanding concerning the use of statistics. Statistical tests are for those who run the tests, not for the public. The tester must use statistics to find out whether there is a difference or not; then he can communicate the results to the public. Without statistics, you are just expressing opinions, not facts. Statistics are a very important step on the way towards “finding out”.
 
Then, a bit surprisingly:
“I still maintain that these single-blind AB tests do highlight changes between individual components in a repeatable, reliable and robust manner. The reason why I feel justified in saying this is the repeated products; these were often repeated not just in a single test, but longitudinally - if an amplifier is described as 'lifeless' in one test, 'boring' on the re-submission and 'soulless' when tested again a year or so later, I maintain it's a pretty good bet that this is an intrinsic part of the character of the amplifier”. 

How on earth do you know? If your amplifiers showed differences on longitudinal tests, why didn’t you run statistics on those longitudinal data? That could easily be done.
 
Then you seem to shoot yourself in the foot:
“I accept that these tests do not provide the sort of data demanded by those who take a similarly empirical - but hard objectivist - position, but there we must disagree on epistemic grounds”.

You characterize your opponents as “hard objectivists”. You probably mean people with a scientific approach, people who use statistics to find out if the differences are real. People who are not satisfied by “I am very convinced that I hear a difference”, or “it is my opinion that there is a difference even if the statistics tell me otherwise”. What other epistemic grounds can there be?
I think your intentions are good, but the way the tests are done leaves far too many questions. What surprises me over and over again is the subjectivists saying: even if it cannot be proved by scientifically valid tests, I hear a difference. Bluntly, that means that anybody can say anything, and nobody can prove them wrong. I am sorry, but despite your efforts, you seem to place yourself in this group.

 

Mr Plus -- Mon, 08/17/2009 - 20:07

Thank you for your comments. I feel I have to comment on several issues, but I'll try to keep these brief as these posts are becoming unwieldy.
Your publisher is really saying “it’s a magazine, we are not here to find out”. First, it is not my current publisher! I also think what you claim as “we are not here to find out” really decomposes to "this is beyond our scope". The distinction is subtle, but is less intentionally malign. Why such a concept should be beyond the scope of a magazine is a more significant question, however.
 
"What you are saying, is that tests were underpowered to prove any differences, and that you knew it before you ran the tests." Not exactly. The protocol I inherited was underpowered from a statistical standing, The test that I attempted to run was not. That it failed to complete was not something I anticipated.
 
"Statistical tests are for he who run the tests, not for the public." Outside of scientific and technical journals, if you pay for something in publishing, it goes in print. Or you don't run it and pay a 'kill fee'.
 
"What other epistemic grounds can there be?" How about classical foundationalism? It's built axiomatically, and those axioms go on to justify non-foundational beliefs from there. It almost perfectly defines the scope of a magazine (not just an audio magazine) or an interest group. It's also pretty good for declaring independence from dictatorial, money-grabbing, porphyrric British Kings - "We hold these truths to be self-evident..."
 
I knew those years studying philosophy would come in handy one day.
 
In all seriousness, I suspect the gulf between science and audio engineering on this matter really is an epistemic one.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Terje Skjaerpe (not verified) -- Sat, 08/22/2009 - 14:40

Both your previous publisher and you are certainly there to find out. Isn't that what you do when you are doing sighted listening tests? Science is about refining tests to prove that differences are caused by the test object. If sighted tests gave the same results as blinded tests, then they also would be scientifically valid. But they never do (except, apparently, in your case, which made me curious). So when science is beyond your scope, then “finding out” is beyond your scope. There is tons of evidence that visual clues add information to the sound (and to other types of sensed information) as it enters the brain. Already René Descartes, a prominent epistemological internalist, wrote that “the only way to find anything that could be described as ‘infallibly true’ would be to pretend that an omnipotent, deceitful being is tampering with one's perception of the universe, and that the logical thing to do is to question anything that involves the senses”.
Since you studied these matters, you know about the “regress problem”. Foundationalism’s answer is that some beliefs that support other beliefs do not themselves require justification by other beliefs. So what is the foundational belief in audio? Please, explain.
Foundationalism is clearly a dead end concerning our discussion. I could also give you examples from medicine where foundational beliefs resulted in a lot of dead patients. Please, bow to the wisdom of your philosophical grandfather, Descartes. There is now a way out of his dilemma: blinded tests and statistical analysis.

 

Mr Plus -- Sat, 08/22/2009 - 18:36

That foundational belief in audio is that "all things can influence sound". That is held a priori. Anything from there - whether what we hear has a correlate with measurement, the robustness of our perception mechanisms, the amount those mechanisms can be biased and the relative merits of particular testing methodologies – is contingent upon that a priori statement. 
 
So, going back to my previous publisher, concepts that are beyond the scope of that original a priori statement would likely be met with "It's a magazine, not a science project". Interestingly, even the single-blind AB tests I described earlier often proved deeply unpopular with enthusiasts because they did not tally with their own findings, and have been somewhat downplayed in recent years. Had my own double-blind ABX test made it off the test-bench, I suspect it would have met with even more animosity, irrespective of the conclusions it came to, because it is at odds with that foundational a priori.
 
Which is why I hold to my initial statement that a range of blind tests are vitally important in audio, but their importance diminishes the closer you get to the end user. Cast in that light, the purpose of a magazine review should be to 'find out' the pre- and post-purchase experience of a prospective buyer of a component and convey that experience in an entertaining and informative manner. 
 
So perhaps now you can see why I think the basis for the dichotomy between us is an epistemic one.
 
And, I'm sorry... but I find bringing 'dead patients' into the argument to be just crass. This is about getting more enjoyment out of your musical experience, not about saving lives. If I'm wrong, and the worst thing that happens is that someone has some fun spending their money for no good reason, well... they still had fun in the process. If I'm right, then they had fun spending their money and got better sound in the process. When last I looked, no-one died of too much preamplification.
 
This really is not the place for a lengthy aside into philosophy (even if I started the aside), but it is worth pointing out that Descartes' Meditations began the rebuilding process from the Cogito by suggesting that the 'I' (in 'I think, therefore I am' - more accurately 'I am a thinking thing') is the possessor of sensory perceptions - even if those perceptions are generated by a deceitful being. Which does tend to place one in the curious position of accepting the validity of perceptions that may or may not be valid. One of the great misreadings of Descartes is that, having reasoned himself into the tearing down of perception until he comes to the Cogito, many skip over the fact that he reasoned himself out of the solipsist trap soon after. Without ever needing to call upon a statistician.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Mr Plus -- Sat, 08/22/2009 - 18:47

 All that being said, perhaps it is time to start investigating the robustness of the foundational belief once again... 

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Cemil Gandur -- Wed, 08/26/2009 - 04:47

Thanks for these posts Alan. I found them highly informative.

Philip Graves (not verified) -- Thu, 09/24/2009 - 11:27

Firstly, thank you for a fascinating article - and one that I may well reference in my forthcoming book.  As a consumer psychologist I would like to offer my perspective on this very interesting debate.
Blind testing is very important because of the human mind's capacity for, for want of a better label, delusion. The human mind is always searching for patterns of cause and effect and will jump to erroneous conclusions with extraordinary (and at times depressing) ease. There are billions of pounds spent on alternative therapies that have no demonstrable efficacy, people continue to believe their set of religious values is right (despite the presence of many others with equal support and the fact that countless others have been abandoned through history), and there is no shortage of people who will cling to superstitions, ignoring bountiful empirical evidence that they haven't had a beneficial impact most of the time. I'd like my drugs blind-tested, please.
But.
The world isn't just medicine.  And even if it is there is an argument to say that if a sugar-coated pill can cause some people to feel better you should shut up and let the placebo effect do its thing.
There are lots of problems with blind tests. Firstly, the context in which they are conducted will influence the response obtained. Secondly, in stripping out variables such as brand and price, people's brains respond differently. Studies using fMRI scans have shown that, when people are tasting something they believe is more expensive (when in fact it isn't), the reward centres of the brain light up more... in other words, people really can experience the same product as being better simply because they believe it to be better.
Thirdly, and perhaps most importantly in this case, we are hugely susceptible to priming. When the expert detected the unwelcome tone, he primed others to go and listen for it, at which point they believed it was there and changed their opinion. Human awareness isn't a fixed entity; once people have been primed to re-evaluate something for a credible reason, they may well entirely reappraise it. This is also an issue of focus; some estimates say the brain is receiving 10 million bits of data per second, the vast majority of which are unconsciously screened out (this is why you can miss a man dressed as a gorilla walking across a group of basketball players). Once your attention is directed away from the ball and towards the gorilla, you can't help but see the gorilla.
Looked at in this light you could imagine a scenario where, in blind tests no one noticed the gorilla walking across the film set, so there's no need to reshoot the scene and the movie can go on general release.  It only needs one person to spot it and the movie becomes a joke.
Blind tests are important in medical trials.  But in commercial matters competition and judgment are what count.  I'm not an expert on digital music, but presumably there's an argument to say that MP3 quality doesn't measure up.  And yet it has won through because, in a trade-off between quality and utility, it offers sufficient of the latter to overcome the limitations of the former.  It is now so popular (and convenient) that it is hard to imagine any less convenient format gaining significant commercial success... but you never know.
So, when it comes to the nuances of human perception, the apparent absolute rigour of a blind test is neither meaningful nor relevant.

Marty B (not verified) -- Thu, 09/24/2009 - 12:35

Well, I'm an ex-recording engineer and an objectivist, so I think your conclusions are mostly incorrect. Furthermore, you have an inherent bias built in, because if you can't tell the difference you don't need audiophile equipment, and if you don't need audiophile-level equipment, there's no AVGuide and the like.
The result of the studies did not say that there was no difference between (for example) Redbook CD and SACD.   What they said is that the people being tested could not perceive a difference.  There's a big difference.   I bought the OPPO player recently, partially so I could listen to the SACDs in my collection.   Do they sound better?  I think so.   Do they sound so much better that you can immediately perceive a giant technology leap?  Definitely not.  
Why would/should someone spend $20,000 on a system in which they cannot perceive differences between that and a $2000 system or a $200 system?  There's only one reason and it's the same reason people buy an esoteric car or a Leica camera:   they enjoy the inherent craftsmanship and the fact that they have something that few other people have.   But in the case of audio, that frequently has nothing to do with how good it sounds.   If I take the guts of a cheap Sony receiver and place it inside a Mark Levinson case and you can't tell the difference in the sound, you cannot tell me it's an invalid test.   
Esoteric audio (and video) is like religion or politics: you never want to let the facts get in the way of your beliefs. Now, even though I claim to be an objectivist, I fall for this also: when I walk into a showroom and I look at LCD TVs, they mostly all look the same to me from the bottom of the line to the top of the line. Yet I can't bring myself to buy the bottom of the line, because I always think there's a chance, given the right program material, that the top of the line is going to look better at some point. But the truth is that in a (less than perfect) showroom situation, if they changed the model numbers around on all the sets, I'd probably buy the wrong one.
Having said that, I do think there are a few areas in which subjectivists might be correct, although I can't prove it:
1.  In most cases, when listening to an audio system, you DO KNOW what you're listening to and it's possible that when you do know what you're listening to, it changes your perception of the sound.   Therefore, if you know you're listening to an expensive system, you actually do physically hear it differently.   It may be that blind testing is like eating a candy with your eyes closed and with your nose blocked.   If you can't see it and can't smell it, you can't reliably tell what the flavor is.    But in real life, you do see it and you do smell it and isn't that the flavor that counts?
2.  Short term listening may be invalid.   I think the deficiencies in most systems are most obvious over long periods of time.  Listening to highly compressed rock music might initially sound great because of its impact, but over time it becomes tedious.   
3.  Naive listeners will always pick bad sounding systems just as naive restaurant customers choose to eat mass-produced fast food over carefully prepared food with fresh ingredients.    (This has nothing to do with not being able to perceive the difference between two systems, but it does have to do with people choosing an inferior system over a quality system when they can perceive a difference.)    
But having said that, I am constantly shocked at how often I walk into an audio showroom expecting to hear great sound and in most cases, I don't, regardless of the price of the system.   Part of this may be a function of my aging ears, but when I was very young, good fidelity would actually make me sweat.   No system has made me sweat in many years.    (Which may be a good thing because if it did, I'd probably have to buy it whether I could afford it or not.)    I was in a showroom recently and they were trying to push this ridiculously expensive iPod dock which supposedly had far superior D/A conversion.   It still sounded like crap to me.  (And that's aside from the issue of why the controller wasn't doing superior D/A conversion.)  
Now back to the objective view:
I frequently transfer vinyl to CD for use in radio. There are many who would contend that vinyl listening is far superior to digital listening. Yet those same people absolutely cannot ever tell the difference between the vinyl playing and the CD-R playing if I play them both in sync and A-B between them. So if you can't tell the difference, how can you make the case that digital recording inherently has a negative impact on the quality of the sound?
So all-in-all, I'm still an objectivist and I guess I always will be.   

Jim35645 (not verified) -- Wed, 10/28/2009 - 13:07

This article is pure bunk. The conclusions are an indictment of the test? Not being able to see the brand ruins the test, but being able to see the brand and price tag doesn't? Give me a break. This sounds like psychics who claim they failed a test because of the bad vibes in the room. The outrageously expensive stuff produced no difference, so OBVIOUSLY the test must be wrong. Yeah, sure thing. Your statements are Exhibit A as to why double-blind testing should be implemented in audiophile magazines; you make the assumption that something with a higher price absolutely must be better. If you are so confident, then submit to a double-blind test. But you won't, because deep down you know that if you can't see the brand and you don't know the price tag, you won't be able to tell which is the $1,500 speaker and which is the $50,000 speaker, and you certainly wouldn't be able to tell if a power cord had been switched. It has nothing to do with a double-blind test ruining the mood, and everyone who didn't just fall off the turnip truck knows it.
 
It is such biased reviewing that enables companies to get away with selling $50,000 speakers, $20,000 speaker cables and $5,000 interconnects and power cords. They know how powerful the placebo effect is, and they know audiophile reviewers will claim a cable that costs $5,000 a meter just has to sound better than one that costs $1,000 a meter, because the belief that it has to sound better because it costs more is so powerful. All the claims about sound-staging, deeper bass and tighter midrange from a cable or AC power cord (the ultimate scam) are absolutely absurd and everyone knows it, which is why you won't submit to a double-blind test. Because the whole high-end facade would come crashing down. People who buy $150,000 speakers would no longer have ridiculously expensive status symbols; they would have overpriced reminders of their breathtaking gullibility and monuments to the power of the placebo effect.
 
I guess when scientific research, which uses double-blind testing as a standard, can't back you up, you have to resort to blasting the test (you know, the one scientists worldwide have been using for centuries), because what else can you do?

Jim35645 (not verified) -- Wed, 10/28/2009 - 13:14

 "I don't think many audiophile readers of this site will take kindly to being refered to as 'gullible victims'. And many of those who pursue the goal of better music replay using good audio equipment do so from the position of music lover first and foremost. Ultimately, if someone derives pleasure from owning something, and in so doing they are putting some much-needed money back into our economies, what's the problem?"
 
 
What's the problem? It is a scam, that is what the problem is. People are perpetrating fraud, and the magazines that review the stuff are abetting it. That is why you refuse to submit to a double-blind test. If you didn't see someone switch power cables, you would not be able to tell the difference, and you know it, which is why the reviewers absolutely refuse to participate in these tests and why they blast a testing method that is the absolute gold standard when conducting various forms of research. Companies that sell these cords and cables are ripping people off, period. The same is true of people selling outrageously priced speaker cable, $150,000 speakers, $17,000 monoblocks, $12,000 turntables, $5,000 cartridges, cable elevators, cable conditioners, etc.
 
People derive pleasure from wearing magnets, but when you tell them it improves their health you are scamming them, just as you are scamming somebody when you praise nonexistent differences in super-expensive cables, power cords and speakers.
 
Of course you can shut everyone up by participating in objective tests, but we all know that will never happen. 
 

Mr Plus -- Wed, 10/28/2009 - 15:29

 Jim,
 
There are reasons that have been touted again and again why double-blind tests are not relevant at the final evaluation part of the audio chain. To wheel out the oft-touted medical analogy, DBTs are absolutely invaluable when it comes to determining the performance of a drug. However, unless you have agreed to engage in a medical trial, a physician is highly unlikely to use a DBT in a differential diagnosis. I maintain DBTs are about as relevant at the reviewer or buyer end of the audio evaluation process, as they are at the physician's end of the medical diagnostic process. This does not mean I 'blast' DBTs (far from it, DBTs are an invaluable part of the development of the best audio products), but I question their use out of context.
 
When I evaluate a product, irrespective of whether it's an entry-level amplifier or a high-end mains conditioner, I spend time with it and determine how it changes my musical tastes and listening sessions over time. I know that if I played a lot of Rammstein in a given time period, the component under test is behaving very differently from one that would make me play nothing but Thelonious Monk. Usually a piece of equipment under test is used for long enough to get past the musical mood swings that might explain why I was in a Monk mood for a few days, and when the product is removed from the system, I monitor my musical tastes for some time afterwards to see if they have reset themselves or if I am really in some kind of Monk funk.
 
The amount of time spent in front of a system can also vary according to the components in that system. I know that if I normally average two hours per night sitting and listening, and a product in that system means that I listen for two and a half hours per night, it's doing something right; but if I spend an hour and a half per night listening, then it's doing something wrong. Once again, this is averaged out over time to determine whether short-term events were skewing the test.
 
I have found these changes in musical taste and time spent listening imposed by product changes to be remarkably consistent and repeatable, even though they are entirely outside the purview of a DBT. Don't take my word for it; run the same tests. Keep a diary of the sort of music you listen to and the amount of time you spend listening to your system for six or eight weeks, then put a few styrofoam cups under your speaker cables. Monitor your musical tastes and time spent listening for a similar period, then remove them and do the same monitoring for a few more days (to make sure you weren't just in a particular mood). At worst, you'll have spent a few weeks looking like an idiot with your speaker cables on cheap plastic cups for no good reason. On the other hand, there's a fighting chance you'll find that your listening sessions altered very slightly over the test period because of something that would never pass muster under a DBT.
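For anyone who wants to try that diary test with a bit more rigor, here is a minimal sketch of the bookkeeping in Python (the minute counts, the window lengths, and the decision rule below are illustrative assumptions of mine, not a validated protocol):

    # Sketch: compare average nightly listening time before and during a tweak.
    # All numbers and the decision rule are illustrative assumptions.
    from statistics import mean, stdev

    baseline   = [120, 135, 110, 125, 140, 115, 130]  # minutes/night, before
    with_tweak = [150, 145, 160, 140, 155, 150, 148]  # minutes/night, during

    def summarize(label, minutes):
        print(f"{label}: mean {mean(minutes):.0f} min, sd {stdev(minutes):.0f} min")

    summarize("baseline  ", baseline)
    summarize("with tweak", with_tweak)

    # Crude rule in the spirit of the diary test: call it a real change only
    # if the shift in the mean exceeds the spread of either sample.
    shift = mean(with_tweak) - mean(baseline)
    noise = max(stdev(baseline), stdev(with_tweak))
    print("worth a second look" if abs(shift) > noise else "probably just mood")

The point is not the code but the discipline: write the numbers down before and after the change, and let the averages, rather than the memory of one good evening, do the talking.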
 
 
 
 
 

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

AJ (not verified) -- Thu, 10/29/2009 - 19:20

Alan, I have to agree with you and Robert. We "know" that hi-fi stereo equipment (especially the shiny, expensive variety) sounds different, so when controlled tests result in no difference, the obvious conclusion is... that the tests are flawed. The controls create an artificial (superficial?) and disorienting experience that suppresses our ability to hear the differences we "know" are there during normal, casual (uncontrolled) sighted, a-priori-knowledge listening!
It's the same with the magnets on my car's fuel line. "Controlled" tests show that they produce no difference in fuel mileage whatsoever, but I "know" they do, because I have experienced it for myself. When I drive normally (like down to my favorite hi-end dealership to hear the latest wires), I don't do so in an artificial "controlled" fashion. The differences/gains in fuel efficiency are abundantly clear and are not suppressed and masked the way they would be if I drove around in a "controlled" way. A clear indication that controlled tests are flawed and fail to reveal the differences that I "know" are there.
I sincerely hope no one is hypocritical and closed-minded enough to dismiss the claim that magnets improve fuel mileage until they have tried and experienced it for themselves!
cheers,
 
AJ

Terje Skjaerpe (not verified) -- Sat, 11/14/2009 - 17:45

Having been away for a while, I have to go back to Alan's answer where he comments on my medical example: “And, I'm sorry... but I find bringing 'dead patients' into the argument to be just crass”.

Well, Alan, this is reality. Take it as a wake-up call. It is nice to sit in front of the fireplace solving problems. When people's lives depend on your conclusions, it is not so nice anymore. As doctors, we made mistakes just because we believed in some basic truth and went on from there. It was so obvious the “basic truth” was true that we didn't care to check it. Randomized, double-blind testing has saved a lot of lives. Open, nonrandomized tests took a lot of lives.

Then we are down to “I maintain DBTs are about as relevant at the reviewer or buyer end of the audio evaluation process, as they are at the physician's end of the medical diagnostic process”. What? WHAT?? It is a very bad example for justifying unblinded tests. Of course we cannot double-blind a patient seeking our advice. However, as doctors we are, or should be, painfully aware that this is the case and that the situation may be heavily biased, and what appears simple or straightforward (because we see it) may be dramatically different. Take patients consulting for chest pain. A very large group have potentially life-threatening ischemic heart disease, which is taken care of in a very streamlined way. However, now and then an extremely dangerous disease, aortic dissection, closely mimicking the symptoms of cardiac pain, may be causing the symptoms. By subjecting this patient to the procedures designed for the cardiac patient, delayed diagnosis and death may ensue. The only way not to miss the diagnosis, and the patient, is to be aware of all the biasing factors pointing to the heart. It has some similarity to the problems we are discussing. Through blind testing you learn what the real differences are, so that when the test situation is sighted you are able to look away from the severely biasing sight of that big, shiny, expensive amplifier.

The subjectivist's views are very similar to what I experienced with a person practicing alternative medicine. He had diagnosed my father with a very rare type of cancer by examining energy waves from a drop of blood. He said my father should be checked by a highly qualified doctor. Then: “If he cannot find the cancer, it is much easier for me to cure your father”. This person's views and the subjectivist's are the same: I can say anything, and you cannot disprove me, because if you don't find anything with your tests, your tests are flawed.

I am sorry. There is no other way of putting it: it is fraud, fraud, fraud.
 

Mr Plus -- Sat, 11/14/2009 - 22:29

We are talking about the enjoyment of music in the home, not the potential preservation of life or quality of life through medical means. As much as it might put a crimp in my ego saying this, lives do not depend on me recommending the right audio product for the task - as I said earlier, no-one died of too much preamplification.
 
The medical analogy does extend, though. Here's what I mean. From my days in audio retail (back in the Jurassic era), I sold a lot of systems comprising a Rega Planar 3 turntable, an Arcam Alpha CD player, Royd Eden loudspeakers and either a Naim Nait 2 or an Onix OA21 integrated amplifier. An equivalent system today would cost about $3,000. The only change in the system would be the amplifier (we had a very selective range of products in the shop); the two amplifiers were priced identically, had near-identical performance (both products were tested in magazines under blind, level-matched test conditions and were impossible to differentiate), and I was not working on commission or on targets set by specific manufacturers. When demonstrating these two - at any arbitrary 'normal' volume level - if the prospective customer asked for the music to be turned up after that first track, they'd buy the Naim; if they asked for it to be turned down, they'd buy the Onix. This happened whatever music they listened to, whether they listened predominantly to LP or CD, and whether they were influenced by me, the prevalent Naim-oriented press at the time, their friends, family or workmates. The only exception was one guy who bought a Naim despite asking for the music to be played quieter, and who returned the Naim soon after to buy the Onix (he thought he should be buying Naim because of all the positive press it received, but couldn't live with it).
 
This is the sort of differential we use to ascribe differences to products. It's also the sort of differential for which there is precisely no DBT methodology. So a potentially definable and repeatable function of the performance of two amplifiers that doesn't come under the DBT yardstick is (therefore?) dismissed as subjective nonsense. Not investigated, not even placed on the 'interesting... but how do we test it?' pile; just dismissed. And anyone who suggests such findings might be pointing at some aspect of audio performance that exists outside the double-blind ABX is called delusional, or a fraud. Usually more than once. 
 
There's every likelihood that what I'm describing above is purely bias, that I anticipated those who wanted the music turned up would like Naim, so I guided them in that direction. But I've also spoken to others (long afterwards) who experienced the same effect. So if half a dozen people selling similar equipment across the country independently found the same trends occurring, how were we creating that bias? If it wasn't down to some aspect of the performance of the two products pushing people in specific directions and we had made no contact at the time, what caused that commonality? Morphic resonance? 
 
At no time have I ever said DBTs are flawed. However, I consider their use inappropriate in some settings. The closer we get to the end user's listening room, the less significant their impact seems to be. I would like to know whether this truly is the case and if so why, but once again it seems easier to dismiss this than investigate it. 
 
I'm sorry to hear your father was taken in by an alternative medicine practitioner. For the record, my late mother was similarly afflicted. 

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

Terje Skjaerpe (not verified) -- Sun, 11/15/2009 - 16:53

 
 
I will try to be clear. I never, ever, tried to suggest that improper testing of audio equipment would take lives (did you really believe I meant that?). My intention with the examples from medicine was to highlight the limitations of improper testing when applied to other aspects of life. To be a bit crass again: you don't get away with it if unblinded testing results in more dead bodies than blinded testing. In audio you get away with it, because being wrong has no consequences except that people may be fooled into buying more expensive stuff than they need.

Your example with the amplifiers is not well chosen as a medical analogy. In a doctor/patient relationship you tell the patient what the best treatment is (based on DBTs if it concerns medication). You don't give him two or more options to choose from based on his preference (if you do, you may have to start counting bodies again).

The “flawed” term was not aimed at you (sorry for not being clear on this), but at Robert Harley, who started this thread (I went back to check that he really used the word).

One last point: I totally disagree with you when you say that blind testing becomes increasingly irrelevant the closer you are to the end user's listening room. It is like saying that blind testing of medication becomes increasingly irrelevant the closer you are to the patient. That is the whole purpose of testing: to be able to advise the customer properly.

Sorry for the odd line breaks; I have no idea why it happens. Now I am going back to my audio equipment to start listening to a fantastic set of 55 records from Deutsche Grammophon, which I downloaded for less than 90 Euros. You should check it out.
 

Mr Plus -- Sun, 11/15/2009 - 18:58

 I think we are at cross-purposes, here. 
 
I know you were not saying improper testing of audio would take lives - although in the past I have encountered a badly designed, untested product that would have killed someone if it had come to market. Part of the reviewer's job at one time was to tell companies not to be companies. Better legislation has taken care of this, fortunately, and electrically unsafe products don't make it past the drawing board (those who design such things tend not to 'get' CAD/CAM).
 
Yes, my analogy breaks down in that the patient does not dictate their treatment. However, the physician and the audio dealer both use the details presented by the patient or client. In the case of the audio dealer, he or she will interview the client to fine-tune the selection process (room size, musical tastes, any previous audio history if relevant, what they've heard before that they particularly like and dislike) and draw up prospective systems for the client to audition, based on that interview session. This selection process is largely informed by a barrage of tests performed before that client walks in the door, and a series of listening tests at the dealers to create systems that work together better than others. 
 
This happens irrespective of price point; a budget NAD/Revel combination will provide more sustained musical enjoyment than a Cambridge Audio/Revel combination that costs more, even though the measured performance of both amps is closely matched. Change the speakers to a pair of B&Ws and the Cambridge Audio will keep the music lover smiling longer. Surely if the amplifiers behave near identically, the difference in performance should begin and end with the loudspeakers? Once again, this could be all anecdote and bias, but once again, those buyers and retailers who supply such equipment independently report these compatibility issues. 
 
The simple reason why I feel DBTs have less relevance the further they get from the lab comes precisely down to the example I cited above. From an audio dealer's perspective, the data acquired 'in the lab' is important, but secondary to the 'street' findings. And to be able to advise the customer properly includes making sure the right components fit both the person and the rest of the system.  
 
And thanks for the DG hint. It's a superb deal. I'll check it out.

Alan Sircom
Editor, Hi-Fi Plus Magazine
London, England
editor [at] hifiplus [dot] com

tractrix (not verified) -- Sun, 11/15/2009 - 22:32

You're not going to tell me all amplifiers ever created sound the same - a 1930s amplifier versus today's? "Unless it's seriously flawed" - please define.
It's well known that one falls in love with the music one is hearing when one's hormones are flying. When a teenager, I had the privilege of recording my classical guitar in a studio with Telefunken-Neumann condenser mikes and an Ampex going at 120 ips. On playback in the same studio, the realness of the sound absolutely shocked me. We've made the process smaller and more convenient, but better?? Not to my teenage ears - or hormones?? I'd love to have that sound. Many other serious listeners agree. I don't see how any new [digital] system would offer better sound. There's a limit to human hearing.
When did the electronics get there?
 

krabapple (not verified) -- Mon, 11/23/2009 - 16:42

First, the test was done in 1991; lossy codecs have come rather a long way since then, thanks in large part to *double blind testing*.

Second, Harley is recounting from memory in 2004 something heard (by him or someone else?) on tape in 1996 (at the 101st Convention in LA, I'm guessing from what he says). Locanthi was dead by then, so there was no Q&A for him. Can we get corroboration of the story, perhaps from the authors of the several Swedish Radio papers (Grewin and Rydén)? Was Locanthi's own claim derived from a DBT? Was it corroborated objectively in any way? Were those who heard it *after* Locanthi's report hearing it in DBTs?

Third, Locanthi was an avowed anti-lossy figure... and a president of the AES in 1986-87. But was he as fundamentally anti-DBT as Harley is? That would be a bit odd. Or was he anti-*poor* DBTs... a significant difference from Harley, who believes DBTs are inherently flawed? If so, then Harley is performing a rhetorical trick to corral Locanthi from a camp they share -- anti-lossy -- into a camp he's not a member of.

Robert Harley -- Mon, 11/23/2009 - 19:13

I'm not suggesting that Bart Locanthi was "anti-DBT," nor am I attempting to perform "a rhetorical trick" to portray Locanthi in an inaccurate light. Rather, I simply reported his recorded statement on his experience listening to the codec Swedish Radio had blessed. In that statement, Locanthi made no judgment regarding the efficacy of DBT, only that he immediately heard an artifact.

krabapple (not verified) -- Mon, 11/23/2009 - 21:24

Since my last post I found your original report of this anecdote. 
http://www.stereophile.com/asweseeit/894awsi/index1.html
From it, I surmise that the event actually took place at the January 1992 workshop that was part of the "91st Convention of the Audio Engineering Society held in New York on October 4–8 1991", not the mid-90s. I would guess the 'As We See It' article itself dates from early-to-mid 1992? The story is much the same, though even then you were reporting from memory and notes rather than a tape transcript.

It is of course possible that by sheer bad luck of choices none of the panel of 60 listeners heard a real artifact in either loudspeaker or headphone listening; listeners in general would not have been as familiar with lossy 'sound' then as now. However, I've read some of the Swedish work now, and scores were averages, not broken down individually. Exactly what training qualified them as 'expert' lossy-artifact listeners is not explained (23 were appointed by Swedish Radio, 24 by groups developing the four codecs tested, and the rest came from the EBU and AES). And of course these were not simple difference tests; they were quality grades. So it is possible someone actually did hear the tones Locanthi heard, but their report got lost in the averaged subject data; or the subject(s) did not perceive the tones as being as grievously artifactual as Locanthi did. In any case, having read these papers, and what they do and do not detail, and given the continuous fruitful blind testing of mp3 codecs in the intervening decades, on all sorts of program material and with thousands more subjects of all stripes, the mystery of the 'obvious' missed 1.5 kHz idle tone (surely it could have been seen/verified in measurements?) only deepens and makes me want even more to hear the story fleshed out. Most of the questions I posed in my last post remain as well.
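The 'lost in the average' point is easy to demonstrate with invented numbers (a minimal sketch; none of these figures come from the Swedish papers). Suppose 59 of 60 listeners grade a passage near 4.8 on a five-point impairment scale, while the one listener who hears the idle tone grades it 1.0:

    # Sketch: one artifact-hearing listener vanishes into a panel average.
    # All numbers are invented for illustration, not taken from the study.
    from statistics import mean

    panel = [4.8] * 59 + [1.0]  # five-point grades; one listener hears the tone

    print(f"panel average: {mean(panel):.2f}")      # ~4.74, still near the top
    print(f"grades below 3.0: {len([g for g in panel if g < 3.0])}")

The panel mean barely moves, so unless individual scores are reported and inspected, the one genuine detection is invisible.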

 

tskjaerpe (not verified) -- Sat, 01/09/2010 - 06:46

There is now further evidence on the importance of dopamine in decision making: http://news.bbc.co.uk/2/hi/health/8357739.stm (see also my previous post on the effect of dopamine and endorphins in sighted tests). Study leader Dr. Tali Sharot said they "had been surprised at the strength of the effect they had seen" and that "Our results indicate that when we consider alternative options when making real-life decisions, dopamine has a role in signalling the expected pleasure from those possible future events. We then use that signal to make our choices."

As I said before, Robert Harley (and others), you must be very naïve to disregard the extremely important biasing effect of seeing the objects you are testing. Sighted tests are basically flawed.

RichardP (not verified) -- Wed, 01/13/2010 - 06:28

This is a very interesting thread, and it's fascinating that it has been running so long. No-one could intelligently argue against the fact that double-blind tests are the ne plus ultra of objective assessment against any given criterion, and audio quality can surely be no exception. The brain runs the most sophisticated processing software in the known universe, and it should be obvious to any fair-minded, unbiased individual that any differential in audio performance between systems found to be imperceptibly different under properly conducted double-blind test conditions can be due only to subjective conditions arising as a result of that software. That really is the end of the story.

Unfortunately, objectivity and logic have amazingly little effect on the vast majority of individuals involved in industries and institutions founded on the peddling of belief, opinion, image, and superstition. Of course, hifi manufacturers' and reviewers' livelihoods, reputations, and advice generally rely on little else, and this, combined with vested interests, will always ensure their continuing hostility to anything that threatens their making a living from it, however adversely it affects their customers. Fortunately for them, most of said customers are sadly more than willing to be misdirected.

By the way, I stopped reading hifi mags after I read a review in which, in all seriousness, the reviewer stated that he had detected a significant sonic difference in his listening tests after covering the telephone receiver in his listening room to prevent “sympathetic vibrations in the receiver’s diaphragm from interfering with the speakers’ pressure waves”. This, not especially unusual, extremism is essentially a very short step away from the wearing of tin foil on the head to prevent alien interference with thought patterns. The fact that this made it through the editorial process finally convinced me it was time to quit hifi, and just start enjoying music.

blainesnell@shaw.ca -- Thu, 01/14/2010 - 14:35

All methods to objectify an intrinsically subjective phenomenon such as listening to and enjoying music are flawed by their own nature, i.e. the division created by mathematical statistical analysis. Trained panels are taught to respond with trained responses. The words used to describe the beauty of a falling leaf in a snowstorm would fill the universe and yet not convey the beauty. Yet it is strange that a short poem of three stanzas can give us a whiff of the cold air.

RichardP (not verified) -- Tue, 01/26/2010 - 15:19

@blainesnell

You seem to be confused. All panelists involved in double-blind tests choose between two alternatives based on subjective audio criteria, i.e. how much they enjoy listening to a piece of music played through equipment A compared with equipment B. DB tests simply ensure that their enjoyment is not biased by anything other than their auditory experience. If they enjoy the music equally or can discern no difference between A and B, then there is no difference, regardless of price, brand, or any other criterion.

If anyone were suggesting that listening tests should be replaced by, say, detailed electronic measurement and testing with no subjective criteria taken into account, then you might have a point, but since they are not, I'm afraid you don't.
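For readers unfamiliar with the mechanics being described, here is a minimal sketch of a blinded preference trial in Python (a toy protocol for illustration, not the procedure of any particular study):

    # Sketch of a blinded A/B preference trial: the program knows which source
    # is which, while the listener only ever sees the neutral labels X and Y.
    # A toy protocol for illustration, not any study's actual procedure.
    import random

    def run_session(n_trials=10):
        tally = {"A": 0, "B": 0}
        for t in range(1, n_trials + 1):
            # Secretly randomize which source hides behind each label.
            assignment = random.choice([{"X": "A", "Y": "B"},
                                        {"X": "B", "Y": "A"}])
            pick = ""
            while pick not in ("X", "Y"):
                pick = input(f"Trial {t}: audition X and Y, then type X or Y: ").strip().upper()
            tally[assignment[pick]] += 1
        return tally

    if __name__ == "__main__":
        print("Preferences by actual source:", run_session())

The listener's reports are collected under neutral labels and unblinded only at the tally; that, and nothing more exotic, is what the 'control' amounts to.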

AJ (not verified) -- Sun, 01/17/2010 - 10:45

Wow, that was deep Blaine. Really deep. Where does it leave us?

Galileo Pardoned (not verified) -- Sun, 02/14/2010 - 21:12

Funny, isn't it, that all defenses against using AB/ABX testing are from the writers/editors/publishers and not the magazine readers?
Why aren't the readers demanding subjective tests?
Funny, too, that optometrists use AB tests every single day so people can get glasses and contacts that work. (And too, somehow, Consumer Reports manages objective, head-to-head tests for myriad products.)
My dad was a publisher. His dad was a publisher. Truly, magazines derive essentially ALL their money from advertising. It's fine to run a booster magazine. But acknowledge it.
And even then, at some level, you must support musical artists, NOT con artists.

Galileo Pardoned (not verified) -- Mon, 02/15/2010 - 16:43

Is it the readers who clamor for subjective tests, or is it the writers, editors, publishers?

Galileo Pardoned (not verified) -- Mon, 02/15/2010 - 16:45

Robert Harley knows best.

lets test it (not verified) -- Sat, 03/20/2010 - 07:11

A lot of Mr Harley's conclusions are fascinating.
Hearing tests/research are dismissible because of psychoacoustics... audiophiles, though, are free from psychoacoustic effects.
We comfortably ignore the fact that audiophiles do not want to live without the effects of psychoacoustics.
Blind tests are totally dismissed because a group of people said a codec was okay and a listener afterwards heard a difference...
What is wrong with the fact that you can measure that there are no humanly audible differences between cables? Not in technical tests and not in (well-performed!!) blind tests. Only at high volumes with demanding speakers do you need to avoid the very thinnest cable, because that can produce some audible differences.
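To put rough numbers on the cable point, here is a back-of-envelope sketch (the gauges, the 3 m run, and the 4-ohm load are assumed values): a speaker cable's main audible contribution is its series resistance, which forms a voltage divider with the loudspeaker load.

    # Sketch: level loss from speaker-cable series resistance (voltage divider).
    # Gauge resistances, run length and the 4-ohm load are assumed values.
    from math import log10

    def loss_db(r_cable_ohm, z_load_ohm):
        """Level change caused by a series cable resistance into a resistive load."""
        return 20 * log10(z_load_ohm / (z_load_ohm + r_cable_ohm))

    length_m = 2 * 3.0  # a 3 m run, counting both conductors
    for name, ohm_per_m in [("very thin 24 AWG", 0.084), ("ordinary 14 AWG", 0.0083)]:
        r = ohm_per_m * length_m
        print(f"{name}: {r:.3f} ohm -> {loss_db(r, 4.0):+.2f} dB into 4 ohms")

Roughly a 1 dB drop for the very thin cable into a low-impedance load, versus about 0.1 dB for ordinary 14 AWG; only the extreme case even approaches audibility, which is the point being made above.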
What is wrong with the fact that a cheap, well-constructed amplifier sounds the same as an expensive one? It's just a matter of a simple current going through wires and components. Radiation has little (non-audible) influence. Simple. The negative influences that do exist are well documented and dealt with by most companies.
Real audio enthusiasts agree that only some parts of your audio system make a difference:
1. The quality of your recording
2. The quality of the loudspeakers
3. The acoustics of the listening room
I don't know why Mr Harley (and a lot of others) are so vehement against researchers. I think the believers in unmeasurable influences that dramatically improve the stereo image of music are simply asking for scientists to investigate, especially when companies make big sums of money on claims of differences between audio parts. The arrogance of some audiophile journalists regrettably also attracts researchers who don't test scientifically, but that does not make well-performed research less valuable.
To the people who claim the tests are being done wrong and who claim they themselves can hear differences between things such as cables: why not be part of a blind test? (And just skip the argument that you need to listen to the cables for a long time to hear differences.)

Joel Wideman (not verified) -- Sat, 05/01/2010 - 19:28

"The answer is that blind listening tests fundamentally distort the listening process"
So you're saying that if you don't know you're what you're listening to, you can't tell the difference? That's exactly WHY we have double-blind tests, Bob.


ev -- Wed, 10/20/2010 - 09:34

Robert,

I was pleased to find this thread with your thoughts on blind AB listening.  I am in the midst of preparing a tutorial presentation on comparative listening for the 129th AES convention in San Francisco and found this thread while researching the subject on the internet.  Blind AB listening will be an important part of the presentation.

I have spent a considerable amount of time trying to develop comparative listening techniques that help reduce the disparity in results between blind AB and non-blind AB listening. It has helped me get a better grasp on where the threshold of human perception is with audio, and to optimize ways to give the best possible chance of really hearing even the smallest of differences. I agree that pure blind AB listening can hinder a listener's ability to identify minute differences between 2 sound sources, and that this accounts for part of the disparity between the 2 listening contexts. The other factor mentioned in some of the responses in this thread may actually be a more significant contributor to the disparity. The idea that having an expectation as to the difference between 2 compared items can dramatically affect how things are perceived has been well studied within the realm of taste. There was some very interesting research done at Stanford University on the influence of non-sensory information on the perception of sensory input. The research shows via fMRI imaging that the brain filters or modulates sensory input to help it better match a particular psychological expectation. This means that we may be experiencing a very real sensory difference, but only because our brain created that difference to make it consistent with our expectation. This was a disconcerting discovery for me, as I thought, "Can I not trust my own ears?" Ultimately, it has helped me develop listening techniques that allow me to better predict when I will be able to continue to hear a difference when transitioning from non-blind listening to blind listening.

    It seems we both find this to be an interesting topic.  If you plan to be at the 129th AES I hope you will find time to check out the tutorial presentation.

Eric Valentine

ilxman99 -- Fri, 10/22/2010 - 12:34

As a longtime TAS reader (and GEC member), I'd like to focus on something slightly off-topic but highly relevant. I believe TAS would do its readership a great service if it re-examined its reviewing methodology.  Specifically, there are two practices that I find most unhelpful in pointing me toward that subset of products that I might usefully target for audition: 1) Having a reviewer "specialize" in products from a manufacturer or a class of products; 2) Auditioning moderately priced products paired only with other similarly priced products so as not to be "unfair" to the product under review. Allow me to briefly illustrate what I mean.
 
I have noticed over the course of years that AHC, PS, and REG tend to be much more circumspect when describing the added benefits of SOTA gear vs. well-engineered, more moderately priced gear. (No, this is not another useless diatribe against high price tags.) I have also noticed over the years that these reviewers have reviewed a healthy mix of components across the price spectrum. Similarly, I have noticed that the estimable JV, whom I respect greatly, has seemingly resolved to surround himself with nothing but SOTA gear; CM seems resigned to the shallow end of the pool. When JV now and again ventures out of that SOTA bubble (Maggie 1.6/7, Odyssey amp, etc.) I do take note. But the problem for the reader is this: it's much harder to place JV's effusive praise of some SOTA gear in proper context re the magnitude of the differences/improvements he describes (inherently subjective, of course). When AHC reviews SOTA gear, he typically describes small differences/improvements over the best-engineered "mid-range" stuff. JV, ensconced within a world of SOTA stuff, tends to use language suggesting big differences/improvements--but since his frame of reference is other SOTA gear, it makes it hard for the reader to gauge just how profound a difference that Soulution amp makes vs. a top-line Odyssey monoblock. My point: I believe it would help readers if the entire staff mixed it up a little more in terms of the range of gear. Alternative: do more "second opinion" pieces between reviewers (e.g., have CM live with the Soulutions, Q5, etc.).
 
It is simply unhelpful when a reviewer dismisses using a product in a reference-class system because it would be "unfair". When I read a review that says a DAC is good "for a desktop system" what am I to conclude? It would be helpful to review every product in the reference system and honestly appraise its strengths and weaknesses, then transfer it to a more realistic context. I believe this is especially true of electronics and wire--most specifically DACs, as it seems performance differences are narrowing.
 
Yes, I'm aware of the absurd logical conclusions one could draw from the above: it wouldn't be especially helpful to read about a $400 receiver connected to MBL speakers, etc. I trust the TAS folks grasp my meaning. Features like TM's provocative piece on the Parasound/Marantz multichannel preamps' stereo performance, JV's on the Maggie 1.6/7, RH's on the Naim 5i, etc. were especially illuminating because they focused on absolute performance in reference systems, with strengths and weaknesses exposed. Equally important here was the fact that reviewers with broad experience of SOTA stuff spent quality time with more attainable pieces in a SOTA system, which provides helpful context. While my finances are such that I could buy SOTA stuff, I don't drop big bucks easily. Nor, here in the real world, can I simply "audition the product at a local dealer"--and I live in one of America's largest cities. I believe the above suggestions would greatly aid us readers in homing in on the finalists to hunt down for audition.

ilxman99 -- Fri, 10/29/2010 - 10:24

One more quick point re blind testing. In the real world, even with some of us audiophiles, aesthetics play a role in our hifi purchase decisions. So while I find the discussion re DB testing of mild theoretical interest, the vast majority of folks spending the money on hifi care very much about aesthetics, something the manufacturers are slowly starting to grasp, thankfully.

twelvebears -- Tue, 11/09/2010 - 13:31

 I do find this whole thread very interesting/amusing, particularly the point about the degree to which audiophiles are influenced by what something looks like and possibly the price of a component.
Of course none of this explains the result of a listening session I had while auditioning a selection of three competing CDPs with a friend some years back. He was a bit of a HiFi fan, but not to the same degree as me, and was quite sceptical about hearing much (if any) difference between the three.
The result of this session was that the two machines with better looks and more badge 'kudos' lost out to a big, square, ugly black Sony unit, so if it wasn't for the sound, I don't know what exactly was influencing my decision. I even went into the session having already taken a bit of a fancy to one of the two units I ultimately rejected (oh so cool looking), but the ear said no even if the eye said yes.
My skeptical friend admitted after the session that he would have bought the 'eye candy', but even he agreed that I'd picked the best-sounding player.

Steven0100 -- Sat, 03/12/2011 - 11:45

 Robert,
I realize that this is an old posting, but I have to agree with you about DBTs. They are fraught with errors. I've suggested other ways to naysayers, but they always conclude that DBTs are perfect. I would like to add one more thing to the mix.
I think you would agree that an equalizer changes the sound, that the change is measurable, and that a listener can hear the differences when settings are changed. My experience is that most people can't tell the difference when slight changes are made unless they are the ones making the adjustment. I have seen this both when I was making the adjustment and when someone else made it: they couldn't tell what changed when I made the adjustment, and I couldn't tell what changed when they made the adjustment. The person making the adjustment could, in all cases, hear the change. Why is that, do you think?

DaveC -- Tue, 03/15/2011 - 10:17

Sonic memory is fragile and evanescent. Psychoacoustic perception is influenced by myriad variables, among them tactile and visual cues. The entire listening experience is subjective, irregularly reproducible, and self-reported.

The double-blind methodology (which is as scientifically sound as humans can devise) is not readily applicable given these variables: in order even to begin to standardize the protocol and give it statistical significance, the power equation will demand huge numbers of participants, further complicated by complex exclusion/inclusion criteria, such as the "golden ear" or the "tin ear." Human complex behavior and human genetic heterogeneity are the enemies of sound scientific method and experimental design. Well-designed studies w/ acceptable controls are often unethical.

Does this mean that non-blind listening tests are scientifically sound? No, since it's a logical fallacy to assume that if double-blind listening tests are typically unsound, the opposite approach must be valid.

The reproduction of music by electronic means is a hobby that should provide the hobbyist w/ pleasure. That may require that the hobbyist focus on those aspects of the activity that provide pleasure while avoiding those that do not. In my opinion, attempting to quantify and statistically reproduce the listening experience does not add to the pleasure of the hobby. No offense intended, but audio equipment reviews are strictly for entertainment purposes only, and not for publication in peer-reviewed journals.
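The 'power equation' point can be made concrete with a minimal sketch (the 70% true-detection rate, 5% significance level, and 80% power target are illustrative assumptions). In a forced-choice ABX run a guesser is right half the time, so the question is how many trials a listener who genuinely hears a small difference needs before the result separates from coin-flipping:

    # Sketch: trials needed for an ABX test to separate a weak-but-real
    # detector from a guesser. The 70% detection rate, 5% significance
    # level and 80% power target are illustrative assumptions.
    from math import comb

    def p_at_least(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def trials_needed(p_true=0.70, alpha=0.05, power=0.80):
        for n in range(5, 500):
            # Smallest score a pure guesser reaches with probability < alpha.
            k = next(k for k in range(n + 1) if p_at_least(k, n, 0.5) < alpha)
            if p_at_least(k, n, p_true) >= power:  # real detector usually passes
                return n, k
        raise ValueError("no n below 500 satisfies the targets")

    n, k = trials_needed()
    print(f"need {n} trials; {k} or more correct rejects guessing")

With these assumed numbers it works out to a few dozen trials per listener; weaker detection rates push the requirement into the hundreds, which is exactly why serious panels and trial counts balloon.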
