Read 2008 - Bad Science Page 7


  There have been over a hundred randomised placebo-controlled trials of homeopathy, and the time has come to stop. Homeopathy pills work no better than placebo pills, we know that much. But there is room for more interesting research.

  People do experience that homeopathy is positive for them, but the action is likely to be in the whole process of going to see a homeopath, of being listened to, having some kind of explanation for your symptoms, and all the other collateral benefits of old-fashioned, paternalistic, reassuring medicine. (Oh, and regression to the mean.)

  So we should measure that; and here is the final superb lesson in evidence-based medicine that homeopathy can teach us: sometimes you need to be imaginative about what kinds of research you do, compromise, and be driven by the questions that need answering, rather than the tools available to you.

  It is very common for researchers to research the things which interest them, in all areas of medicine; but they can be interested in quite different things from patients. One study actually thought to ask people with osteoarthritis of the knee what kind of research they wanted to be carried out, and the responses were fascinating: they wanted rigorous real-world evaluations of the benefits from physiotherapy and surgery, from educational and coping strategy interventions, and other pragmatic things. They didn’t want yet another trial comparing one pill with another, or with placebo.

  In the case of homeopathy, similarly, homeopaths want to believe that the power is in the pill, rather than in the whole process of going to visit a homeopath, having a chat and so on. It is crucially important to their professional identity. But I believe that going to see a homeopath is probably a helpful intervention, in some cases, for some people, even if the pills are just placebos. I think patients would agree, and I think it would be an interesting thing to measure. It would be easy, and you would do something called a pragmatic ‘waiting-list-controlled trial’.

  You take two hundred patients, say, all suitable for homeopathic treatment, currently in a GP clinic, and all willing to be referred on for homeopathy, then you split them randomly into two groups of one hundred. One group gets treated by a homeopath as normal, pills, consultation, smoke and voodoo, on top of whatever other treatment they are having, just like in the real world. The other group just sits on the waiting list. They get treatment as usual, whether that is ‘neglect’, ‘GP treatment’ or whatever, but no homeopathy. Then you measure outcomes, and compare who gets better the most.
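  The randomisation step described above is simple enough to sketch in code. This is a purely illustrative toy (the patient labels, the seed and the fifty-fifty split are my own invention, not a real trial protocol):

```python
import random

def randomise(patients, seed=42):
    """Randomly split a list of patients into two equal arms:
    homeopathy-plus-usual-care vs. waiting list (usual care only)."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    shuffled = patients[:]     # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Two hundred hypothetical patients, as in the example above
patients = [f"patient_{i}" for i in range(200)]
homeopathy_arm, waiting_list_arm = randomise(patients)
print(len(homeopathy_arm), len(waiting_list_arm))  # 100 100
```

  The point of the shuffle is that neither arm is systematically sicker or healthier than the other; after that, you simply follow both groups up and compare outcomes.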

  You could argue that it would be a trivial positive finding, and that it’s obvious the homeopathy group would do better; but it’s the only piece of research really waiting to be done. This is a ‘pragmatic trial’. The groups aren’t blinded, but they couldn’t possibly be in this kind of trial, and sometimes we have to accept compromises in experimental methodology. It would be a legitimate use of public money (or perhaps money from Boiron, the homeopathic pill company valued at $500 million), but there’s nothing to stop homeopaths from just cracking on and doing it for themselves: because despite the homeopaths’ fantasies, born out of a lack of knowledge, that research is difficult, magical and expensive, in fact such a trial would be very cheap to conduct.

  In fact, it’s not really money that’s missing from the alternative therapy research community, especially in Britain: it’s knowledge of evidence-based medicine, and expertise in how to do a trial. Their literature and debates drip with ignorance, and vitriolic anger at anyone who dares to appraise the trials. Their university courses, as far as they ever even dare to admit what they teach on them (it’s all suspiciously hidden away), seem to skirt around such explosive and threatening questions. I’ve suggested in various places, including at academic conferences, that the single thing that would most improve the quality of evidence in CAM would be funding for a simple, evidence-based medicine hotline, which anyone thinking about running a trial in their clinic could phone up and get advice on how to do it properly, to avoid wasting effort on an ‘unfair test’ that will rightly be regarded with contempt by all outsiders.

  In my pipe dream (I’m completely serious, if you’ve got the money) you’d need a handout, maybe a short course that people did to cover the basics, so they weren’t asking stupid questions, and phone support. In the meantime, if you’re a sensible homeopath and you want to do a waiting-list-controlled trial, you could maybe try the badscience website forums, where there are people who might be able to give some pointers (among the childish fighters and trolls…).

  But would the homeopaths buy it? I think it would offend their sense of professionalism. You often see homeopaths trying to nuance their way through this tricky area, and they can’t quite make their minds up. Here, for example, is a Radio 4 interview, archived in full online, where Dr Elizabeth Thompson (consultant homeopathic physician, and honorary senior lecturer at the Department of Palliative Medicine at the University of Bristol) has a go.

  She starts off with some sensible stuff: homeopathy does work, but through non-specific effects, the cultural meaning of the process, the therapeutic relationship, it’s not about the pills, and so on. She practically comes out and says that homeopathy is all about cultural meaning and the placebo effect. ‘People have wanted to say homeopathy is like a pharmaceutical compound,’ she says, ‘and it isn’t, it is a complex intervention.’

  Then the interviewer asks: ‘What would you say to people who go along to their high street pharmacy, where you can buy homeopathic remedies, they have hay fever and they pick out a hay-fever remedy, I mean presumably that’s not the way it works?’ There is a moment of tension. Forgive me, Dr Thompson, but I felt you didn’t want to say that the pills work, as pills, in isolation, when you buy them in a shop: apart from anything else, you’d already said that they don’t.

  But she doesn’t want to break ranks and say the pills don’t work, either. I’m holding my breath. How will she do it? Is there a linguistic structure complex enough, passive enough, to negotiate through this? If there is, Dr Thompson doesn’t find it: ‘They might flick through and they might just be spot-on…[but] you’ve got to be very lucky to walk in and just get the right remedy.’ So the power is, and is not, in the pill: ‘P, and not-P’, as philosophers of logic would say.

  If they can’t finesse it with the ‘power is not in the pill’ paradox, how else do the homeopaths get around all this negative data? Dr Thompson—from what I have seen—is a fairly clear-thinking and civilised homeopath. She is, in many respects, alone. Homeopaths have been careful to keep themselves outside of the civilising environment of the university, where the influence and questioning of colleagues can help to refine ideas, and weed out the bad ones. In their rare forays, they enter them secretively, walling themselves and their ideas off from criticism or review, refusing to share even what is in their exam papers with outsiders.

  It is rare to find a homeopath engaging on the issue of the evidence, but what happens when they do? I can tell you. They get angry, they threaten to sue, they scream and shout at you at meetings, they complain spuriously and with ludicrous misrepresentations—time-consuming to expose, of course, but that’s the point of harassment—to the Press Complaints Commission and your editor, they send hate mail, and accuse you repeatedly of somehow being in the pocket of big pharma (falsely, although you start to wonder why you bother having principles when faced with this kind of behaviour). They bully, they smear, to the absolute top of the profession, and they do anything they can in a desperate bid to shut you up, and avoid having a discussion about the evidence. They have even been known to threaten violence (I won’t go into it here, but I take these issues extremely seriously).

  I’m not saying I don’t enjoy a bit of banter. I’m just pointing out that you don’t get anything quite like this in most other fields, and homeopaths, among all the people in this book, with the exception of the odd nutritionist, seem to me to be a uniquely angry breed. Experiment for yourself by chatting with them about evidence, and let me know what you find.

  By now your head is hurting, because of all those mischievous, confusing homeopaths and their weird, labyrinthine defences: you need a lovely science massage. Why is evidence so complicated? Why do we need all of these clever tricks, these special research paradigms? The answer is simple: the world is much more complicated than simple stories about pills making people get better. We are human, we are irrational, we have foibles, and the power of the mind over the body is greater than anything you have previously imagined.

  5 The Placebo Effect

  For all the dangers of CAM, to me the greatest disappointment is the way it distorts our understanding of our bodies. Just as the Big Bang theory is far more interesting than the creation story in Genesis, so the story that science can tell us about the natural world is far more interesting than any fable about magic pills concocted by an alternative therapist. To redress that balance, I’m offering you a whirlwind tour of one of the most bizarre and enlightening areas of medical research: the relationship between our bodies and our minds, the role of meaning in healing, and in particular the ‘placebo effect’.

  Much like quackery, placebos became unfashionable in medicine once the biomedical model started to produce tangible results. An editorial in 1890 sounded its death knell, describing the case of a doctor who had injected his patient with water instead of morphine: she recovered perfectly well, but then discovered the deception, disputed the bill in court, and won. The editorial was a lament, because doctors have known that reassurance and a good bedside manner can be very effective for as long as medicine has existed. ‘Shall [the placebo] never again have an opportunity of exerting its wonderful psychological effects as faithfully as one of its more toxic congeners?’ asked the Medical Press at the time.

  Luckily, its use survived. Throughout history, the placebo effect has been particularly well documented in the field of pain, and some of the stories are striking. Henry Beecher, an American anaesthetist, wrote about operating on a soldier with horrific injuries in a World War II field hospital, using salt water because the morphine was all gone, and to his astonishment the patient was fine. Peter Parker, an American missionary, described performing surgery without anaesthesia on a Chinese patient in the mid-nineteenth century: after the operation, she ‘jumped upon the floor’, bowed, and walked out of the room as if nothing had happened.

  Theodor Kocher performed 1,600 thyroidectomies without anaesthesia in Berne in the 1890s, and I take my hat off to a man who can do complicated neck operations on conscious patients. Mitchel in the early twentieth century was performing full amputations and mastectomies, entirely without anaesthesia; and surgeons from before the invention of anaesthesia often described how some patients could tolerate knife cutting through muscle, and saw cutting through bone, perfectly awake, and without even clenching their teeth. You might be tougher than you think.

  This is an interesting context in which to remember two televised stunts from 2006. The first was a rather melodramatic operation ‘under hypnosis’ on Channel 4: ‘We just want to start the debate on this important medical issue,’ explained the production company Zigzag, known for making shows like Mile High Club and Streak Party. The operation, a trivial hernia repair, was performed with medical drugs but at a reduced dose, and treated as if it was a medical miracle.

  The second was in Alternative Medicine: The Evidence, a rather gushing show on BBC2 presented by Kathy Sykes (‘Professor of the Public Understanding of Science’). This series was the subject of a successful complaint at the highest level, on account of it misleading the audience. Viewers believed they had seen a patient having chest surgery with only acupuncture as anaesthesia: in fact this was not the case, and once again the patient had received an array of conventional medications to allow the operation to be performed.*

  * The series also featured a brain-imaging experiment on acupuncture, funded by the BBC, and one of the scientists involved came out afterwards to complain not only that the results had been overinterpreted (which you would expect from the media, as we will see), but moreover, that the pressure from the funder—that is to say, the BBC—to produce a positive result was overwhelming. This is a perfect example of the things which you do not do in science, and the fact that it was masterminded by a ‘Professor of the Public Understanding of Science’ goes some way towards explaining why we are in such a dismal position today. The programme was defended by the BBC in a letter with ten academic signatories. Several of these signatories have since said they did not sign the letter. The mind really does boggle.

  When you consider these misleading episodes alongside the reality—that operations have frequently been performed with no anaesthetics, no placebos, no alternative therapists, no hypnotists and no TV producers—these televised episodes suddenly feel rather less dramatic.

  But these are just stories, and the plural of anecdote is not data. Everyone knows about the power of the mind—whether it’s stories of mothers enduring biblical pain to avoid dropping a boiling kettle on their baby, or people lifting cars off their girlfriend like the Incredible Hulk—but devising an experiment that teases the psychological and cultural benefits of a treatment away from the biomedical effects is trickier than you might think. After all, what do you compare a placebo against? Another placebo? Or no treatment at all?

  The placebo on trial

  In most studies we don’t have a ‘no treatment’ group to compare both the placebo and the drug against, and for a very good ethical reason: if your patients are ill, you shouldn’t be leaving them untreated simply because of your own mawkish interest in the placebo effect. In fact, in most cases today it is considered wrong even to use a placebo in a trial: whenever possible you should compare your new treatment against the best pre-existing, current treatment.

  This is not just for ethical reasons (although it is enshrined in the Declaration of Helsinki, the international ethics bible). Placebo-controlled trials are also frowned upon by the evidence-based medicine community, because they know it’s an easy way to cook the books and get easy positive trial data to support your company’s big new investment. In the real world of clinical practice, patients and doctors aren’t so interested in whether a new drug works better than nothing, they’re interested in whether it works better than the best treatment they already have.

  There have been occasions in medical history where researchers were more cavalier. The Tuskegee Syphilis Study, for example, is one of America’s most shaming hours, if it is possible to say such a thing these days: 399 poor, rural African-American men were recruited by the US Public Health Service in 1932 for an observational study to see what happened if syphilis was left, very simply, untreated. Astonishingly, the study ran right through to 1972. In 1949 penicillin was introduced as an effective treatment for syphilis. These men did not receive that drug, nor did they receive Salvarsan, nor indeed did they receive an apology until 1997, from Bill Clinton.

  If we don’t want to do unethical scientific experiments with ‘no treatment’ groups on sick people, how else can we determine the size of the placebo effect on modern illnesses? Firstly, and rather ingeniously, we can compare one placebo with another.

  The first experiment in this field was a meta-analysis by Daniel Moerman, an anthropologist who has specialised in the placebo effect. He took the trial data from placebo-controlled trials of gastric ulcer medication, which was his first cunning move, because gastric ulcers are an excellent thing to study: their presence or absence is determined very objectively, with a gastroscopy camera passed down into the stomach, to avoid any doubt.

  Moerman took only the placebo data from these trials, and then, in his second ingenious move, from all of these studies, of all the different drugs, with their different dosing regimes, he took the ulcer healing rate from the placebo arm of trials where the ‘placebo’ treatment was two sugar pills a day, and compared that with the ulcer healing rate in the placebo arm of trials where the placebo was four sugar pills a day. He found, spectacularly, that four sugar pills are better than two (these findings have also been replicated in a different dataset, for those who are switched on enough to worry about the replicability of important clinical findings).
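  The arithmetic behind a pooled comparison like Moerman’s is straightforward, and a toy calculation makes it concrete. The numbers below are entirely made up, purely to show the shape of the comparison, not Moerman’s actual data:

```python
# Each tuple: (patients healed, total patients) in the placebo arm
# of one hypothetical trial. These figures are invented for illustration.
two_pill_arms = [(30, 80), (25, 70), (40, 100)]
four_pill_arms = [(45, 90), (38, 75), (50, 95)]

def pooled_rate(arms):
    """Pooled healing rate: total healed / total patients across all arms."""
    healed = sum(h for h, _ in arms)
    total = sum(t for _, t in arms)
    return healed / total

print(f"two sugar pills a day:  {pooled_rate(two_pill_arms):.0%}")
print(f"four sugar pills a day: {pooled_rate(four_pill_arms):.0%}")
```

  Pooling the raw counts, rather than averaging the per-trial percentages, stops a tiny trial from counting as much as a large one.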

  What the treatment looks like

  So four pills are better than two: but how can this be? Does a placebo sugar pill simply exert an effect like any other pill? Is there a dose-response curve, as pharmacologists would find for any other drug? The answer is that the placebo effect is about far more than just the pill: it is about the cultural meaning of the treatment. Pills don’t simply manifest themselves in your stomach: they are given in particular ways, they take varying forms, and they are swallowed with expectations, all of which have an impact on a person’s beliefs about their own health, and in turn, on outcome. Homeopathy is perhaps the perfect example of the value in ceremony.

  I understand this might well seem improbable to you, so I’ve corralled some of the best data on the placebo effect into one place, and the challenge is this: see if you can come up with a better explanation for what is, I guarantee, a seriously strange set of experimental results.

  First up, Blackwell [1972] did a set of experiments on fifty-seven college students to determine the effect of colour—as well as the number of tablets—on the effects elicited. The subjects were sitting through a boring hour-long lecture, and were given either one or two pills, which were either pink or blue. They were told that they could expect to receive either a stimulant or a sedative. Since these were psychologists, and this was back when you could do whatever you wanted to your subjects—even lie to them—the treatment that all the students received consisted simply of sugar pills, but of different colours.