
Jump to follow-up

This piece is almost identical with today’s Spectator Health article.


This week there has been enormously wide coverage in the press for one of the worst papers on acupuncture that I’ve come across. As so often, the paper showed the opposite of what its title and press release claimed. For another stunning example of this sleight of hand, try Acupuncturists show that acupuncture doesn’t work, but conclude the opposite: journal fails, published in the British Journal of General Practice.

Presumably the wide coverage was a result of the hyped-up press release issued by the journal, BMJ Acupuncture in Medicine. That is not the British Medical Journal of course, but it is, bafflingly, published by the BMJ Press group, and if you subscribe to press releases from the real BMJ, you also get them from Acupuncture in Medicine. The BMJ group should not be mixing up press releases about real medicine with press releases about quackery. There seems to be something about quackery that’s clickbait for the mainstream media.

As so often, the press release was shockingly misleading. It said:

Acupuncture may alleviate babies’ excessive crying
Needling twice weekly for 2 weeks reduced crying time significantly

This is totally untrue. Here’s why.

Luckily the Science Media Centre was on the case quickly: read their assessment.

The paper made the most elementary of all statistical mistakes. It failed to make allowance for the jelly bean problem.

The paper lists 24 different tests of statistical significance and focusses attention on three that happen to give a P value (just) less than 0.05, and so were declared to be "statistically significant". If you do enough tests, some are bound to come out “statistically significant” by chance. They are false positives, and the conclusions are as meaningless as “green jelly beans cause acne” in the cartoon. This is called P-hacking and it’s a well known cause of problems. It was evidently beyond the wit of the referees to notice this naive mistake. It’s very doubtful whether there is anything happening but random variability.

And that’s before you even get to the problem of the weakness of the evidence provided by P values close to 0.05. There’s at least a 30% chance of such values being false positives, even if it were not for the jelly bean problem, and a lot more than 30% if the hypothesis being tested is implausible. I leave it to the reader to assess the plausibility of the hypothesis that a good way to stop a baby crying is to stick needles into the poor baby.
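
To see how easily multiple testing manufactures "significant" results, here is a minimal simulation. The 24 tests and the 0.05 threshold come from the paper discussed above; the use of two-sample t-tests on pure random numbers, the group size and the number of simulated trials are illustrative assumptions.

```python
# Illustrative simulation: how often do 24 significance tests on pure noise
# yield at least one P < 0.05? The figure of 24 tests comes from the paper
# discussed above; everything else here is an arbitrary choice for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 2_000        # simulated "papers"
n_tests = 24            # significance tests per paper
n_per_group = 50        # observations per group (arbitrary)

papers_with_false_positive = 0
for _ in range(n_trials):
    p_values = []
    for _ in range(n_tests):
        a = rng.normal(size=n_per_group)   # control group: no real effect
        b = rng.normal(size=n_per_group)   # "treated" group: no real effect
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:
        papers_with_false_positive += 1

print(f"Fraction of pure-noise 'papers' with at least one P < 0.05: "
      f"{papers_with_false_positive / n_trials:.2f}")
# For independent tests the expected fraction is 1 - 0.95**24, about 0.71.
```

In other words, with two dozen tests on nothing but noise, a few P values just below 0.05 is exactly what chance alone predicts.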

If you want to know more about P values try Youtube or here, or here.

[The xkcd "jelly bean" cartoon: green jelly beans linked to acne]

One of the people asked for an opinion on the paper was George Lewith, the well-known apologist for all things quackish. He described the work as being a "good sized fastidious well conducted study ….. The outcome is clear". Thus showing an ignorance of statistics that would shame an undergraduate.

On the Today Programme, I was interviewed by the formidable John Humphrys, along with the mandatory member of the flat-earth society whom the BBC seems to feel obliged to invite along for "balance". In this case it was a professional acupuncturist, Mike Cummings, who is an associate editor of the journal in which the paper appeared. Perhaps he’d read the Science Media Centre’s assessment before he came on, because he said, quite rightly, that

"in technical terms the study is negative" "the primary outcome did not turn out to be statistically significant"

to which Humphrys retorted, reasonably enough, “So it doesn’t work”. Cummings’ response to this was a lot of bluster about how unfair it was for NICE to expect a treatment to perform better than placebo. It was fascinating to hear Cummings admit that the press release by his own journal was simply wrong.

Listen to the interview here

Another obvious flaw of the study is the nature of the control group. It is not stated very clearly, but it seems that the baby was left alone with the acupuncturist for 10 minutes. A far better control would have been to have the baby cuddled by its mother, or by a nurse. That’s what was used by Olafsdottir et al (2001) in a study that showed cuddling worked just as well as another form of quackery, chiropractic, to stop babies crying.

Manufactured doubt is a potent weapon of the alternative medicine industry. It’s the same tactic as was used by the tobacco industry. You scrape together a few lousy papers like this one and use them to pretend that there’s a controversy. For years the tobacco industry used this tactic to try to persuade people that cigarettes didn’t give you cancer, and that nicotine wasn’t addictive. The mainstream media obligingly invite the representatives of the industry who convey to the reader/listener that there is a controversy, when there isn’t.

Acupuncture is no longer controversial. It just doesn’t work -see Acupuncture is a theatrical placebo: the end of a myth. Try to imagine a pill that had been subjected to well over 3000 trials without anyone producing convincing evidence for a clinically useful effect. It would have been abandoned years ago. But by manufacturing doubt, the acupuncture industry has managed to keep its product in the news. Every paper on the subject ends with the words "more research is needed". No it isn’t.

Acupuncture is a pre-scientific idea that was moribund everywhere, even in China, until it was revived by Mao Zedong as part of the appalling Great Proletarian Cultural Revolution. Now it is big business in China, and 100 percent of the clinical trials that come from China are positive.

If you believe them, you’ll truly believe anything.

Follow-up

29 January 2017

Soon after the Today programme in which we both appeared, the acupuncturist, Mike Cummings, posted his reaction to the programme. I thought it worth posting the original version in full. Its petulance and abusiveness are quite remarkable.

I thank Cummings for giving publicity to the video of our appearance, and for referring to my Wikipedia page. I leave it to the reader to judge my competence, and his, in the statistics of clinical trials. And it’s odd to be described as a "professional blogger" when the 400+ posts on dcscience.net don’t make a penny -in fact they cost me money. In contrast, he is the salaried medical director of the British Medical Acupuncture Society.

It’s very clear that he has no understanding of the error of the transposed conditional, nor even the multiple comparisons problem (and neither, it seems, does he know the meaning of the word ‘protagonist’).

I ignored his piece, but several friends complained to the BMJ for allowing such abusive material on their blog site. As a result a few changes were made. The “baying mob” is still there, but the Wikipedia link has gone. I thought that readers might be interested to read the original unexpurgated version. It shows, better than I ever could, the weakness of the arguments of the alternative medicine community. To quote Upton Sinclair:

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

It also shows that the BBC still hasn’t learned the lessons in Steve Jones’ excellent “Review of impartiality and accuracy of the BBC’s coverage of science“. Every time I appear in such a programme, they feel obliged to invite a member of the flat earth society to propagate their make-believe.

Acupuncture for infantile colic – misdirection in the media or over-reaction from a sceptic blogger?

26 Jan, 17 | by Dr Mike Cummings

So there has been a big response to this paper press released by BMJ on behalf of the journal Acupuncture in Medicine. The response has been influenced by the usual characters – retired professors who are professional bloggers and vocal critics of anything in the realm of complementary medicine. They thrive on oiling up and flexing their EBM muscles for a baying mob of fellow sceptics (see my ‘stereotypical mental image’ here). Their target in this instant is a relatively small trial on acupuncture for infantile colic.[1] Deserving of being press released by virtue of being the largest to date in the field, but by no means because it gave a definitive answer to the question of the efficacy of acupuncture in the condition. We need to wait for an SR where the data from the 4 trials to date can be combined.
On this occasion I had the pleasure of joining a short segment on the Today programme on BBC Radio 4 led by John Humphreys. My protagonist was the ever-amusing David Colquhoun (DC), who spent his short air-time complaining that the journal was even allowed to be published in the first place. You can learn all about DC care of Wikipedia – he seems to have a surprisingly long write up for someone whose profession career was devoted to single ion channels, perhaps because a significant section of the page is devoted to his activities as a quack-busting blogger. So why would BBC Radio 4 invite a retired basic scientist and professional sceptic blogger to be interviewed alongside one of the journal editors – a clinician with expertise in acupuncture (WMA)? At no point was it made manifest that only one of the two had ever been in a position to try to help parents with a baby that they think cries excessively. Of course there are a lot of potential causes of excessive crying, but I am sure DC would agree that it is unlikely to be attributable to a single ion channel.

So what about the research itself? I have already said that the trial was not definitive, but it was not a bad trial. It suffered from under-recruiting, which meant that it was underpowered in terms of the statistical analysis. But it was prospectively registered, had ethical approval and the protocol was published. Primary and secondary outcomes were clearly defined, and the only change from the published protocol was to combine the two acupuncture groups in an attempt to improve the statistical power because of under recruitment. The fact that this decision was made after the trial had begun means that the results would have to be considered speculative. For this reason the editors of Acupuncture in Medicine insisted on alteration of the language in which the conclusions were framed to reflect this level of uncertainty.

DC has focussed on multiple statistical testing and p values. These are important considerations, and we could have insisted on more clarity in the paper. P values are a guide and the 0.05 level commonly adopted must be interpreted appropriately in the circumstances. In this paper there are no definitive conclusions, so the p values recorded are there to guide future hypothesis generation and trial design. There were over 50 p values reported in this paper, so by chance alone you must expect some to be below 0.05. If one is to claim statistical significance of an outcome at the 0.05 level, ie a 1:20 likelihood of the event happening by chance alone, you can only perform the test once. If you perform the test twice you must reduce the p value to 0.025 if you want to claim statistical significance of one or other of the tests. So now we must come to the predefined outcomes. They were clearly stated, and the results of these are the only ones relevant to the conclusions of the paper. The primary outcome was the relative reduction in total crying time (TC) at 2 weeks. There were two significance tests at this point for relative TC. For a statistically significant result, the p values would need to be less than or equal to 0.025 – neither was this low, hence my comment on the Radio 4 Today programme that this was technically a negative trial (more correctly ‘not a positive trial’ – it failed to disprove the null hypothesis ie that the samples were drawn from the same population and the acupuncture intervention did not change the population treated). Finally to the secondary outcome – this was the number of infants in each group who continued to fulfil the criteria for colic at the end of each intervention week. There were four tests of significance so we need to divide 0.05 by 4 to maintain the 1:20 chance of a random event ie only draw conclusions regarding statistical significance if any of the tests resulted in a p value at or below 0.0125. Two of the 4 tests were below this figure, so we say that the result is unlikely to have been chance alone in this case. With hindsight it might have been good to include this explanation in the paper itself, but as editors we must constantly balance how much we push authors to adjust their papers, and in this case the editor focussed on reducing the conclusions to being speculative rather than definitive. A significant result in a secondary outcome leads to a speculative conclusion that acupuncture ‘may’ be an effective treatment option… but further research will be needed etc…

Now a final word on the 3000 plus acupuncture trials that DC loves to mention. His point is that there is no consistent evidence for acupuncture after over 3000 RCTs, so it clearly doesn’t work. He first quoted this figure in an editorial after discussing the largest, most statistically reliable meta-analysis to date – the Vickers et al IPDM.[2] DC admits that there is a small effect of acupuncture over sham, but follows the standard EBM mantra that it is too small to be clinically meaningful without ever considering the possibility that sham (gentle acupuncture plus context of acupuncture) can have clinically relevant effects when compared with conventional treatments. Perhaps now the best example of this is a network meta-analysis (NMA) using individual patient data (IPD), which clearly demonstrates benefits of sham acupuncture over usual care (a variety of best standard or usual care) in terms of health-related quality of life (HRQoL).[3]

30 January 2017

I got an email from the BMJ asking me to take part in a BMJ Head-to-Head debate about acupuncture. I did one of these before, in 2007, but it generated more heat than light (the only good thing to come out of it was the joke about leprechauns). So here is my polite refusal.

Hello

Thanks for the invitation. Perhaps you should read the piece that I wrote after the Today programme:
https://www.dcscience.net/2017/01/20/if-your-baby-is-crying-what-do-you-do-stick-pins-in-it/#follow

Why don’t you do these Head to Heads about genuine controversies? To do them about homeopathy or acupuncture is to fall for the “manufactured doubt” stratagem that was used so effectively by the tobacco industry to promote smoking. It’s the favourite tool of snake oil salesmen too, and the BMJ should see that and not fall for their tricks.

Such pieces might be good clickbait, but they are bad medicine and bad ethics.

All the best

David

Of all types of alternative medicine, acupuncture is the one that has received the most approval from regular medicine. The benefit of that is that it’s been tested more thoroughly than most others. The result is now clear. It doesn’t work. See the evidence in Acupuncture is a theatrical placebo.

This blog has documented many cases of misreported tests of acupuncture, often from people who have a financial interest in selling it. Perhaps the most egregious spin came from the University of Exeter. It was published in a normal journal, and endorsed by the journal’s editor, despite showing clearly that acupuncture didn’t even have much placebo effect.

Acupuncture got a boost in 2009 from, of all unlikely sources, the National Institute for Health and Care Excellence (NICE). The judgements of NICE on the benefit/cost ratio of treatments are usually very good. But the guidance group that they assembled to judge treatments for low back pain was atypically incompetent when it came to assessment of evidence. They recommended acupuncture as one option. At the time I posted “NICE falls for Bait and Switch by acupuncturists and chiropractors: it has let down the public and itself“. That was soon followed by two more posts:

“NICE fiasco, part 2. Rawlins should withdraw guidance and start again”,

and

“The NICE fiasco, Part 3. Too many vested interests, not enough honesty”.

At the time, NICE was being run by Michael Rawlins, an old friend. No doubt he was unaware of the bad guidance until it was too late and he felt obliged to defend it.

Although the 2009 guidance referred only to low back pain, it gave an opening for acupuncturists to penetrate the NHS. Like all quacks, they are experts at bait and switch. The penetration of quackery was exacerbated by the privatisation of physiotherapy services to organisations like Connect Physical Health, which have little regard for evidence, but a good eye for sales. If you think that’s an exaggeration, read "Connect Physical Health sells quackery to NHS".

When David Haslam took over the reins at NICE, I was optimistic that the question would be revisited (it turned out that he was aware of this blog). I was not disappointed. This time the guidance group had much more critical members.

The new draft guidance on low back pain was released on 24 March 2016. The final guidance will not appear until September 2016, but last time the final version didn’t differ much from the draft.

Despite modern imaging methods, it still isn’t possible to pinpoint the precise cause of low back pain (LBP) so diagnoses are lumped together as non-specific low back pain (NSLBP).

The summary guidance is explicit.

“1.2.8 Do not offer acupuncture for managing non-specific low back pain with or without sciatica.”
 

The evidence is summarised in section 13.6 of the main report (page 493). There is a long list of other proposed treatments that are not recommended.

Because low back pain is so common, and so difficult to treat, many treatments have been proposed. Many of them, including acupuncture, have proved to be clutching at straws. It’s to the great credit of the new guidance group that they have resisted that temptation.

Among the other "do not offer" treatments are

  • imaging (except in specialist setting)
  • belts or corsets
  • foot orthotics
  • acupuncture
  • ultrasound
  • TENS or PENS
  • opioids (for acute or chronic LBP)
  • antidepressants (SSRI and others)
  • anticonvulsants
  • spinal injections
  • spinal fusion for NSLBP (except as part of a randomised controlled trial)
  • disc replacement

At first sight, the new guidance looks like an excellent clear-out of the myths that surround the treatment of low back pain.

The positive recommendations that are made are all for things that have modest effects (at best). For example “Consider a group exercise programme”, and “Consider manipulation, mobilisation”. The use of the word “consider”, rather than “offer”, seems to be NICE-speak – an implicit suggestion that it doesn’t work very well. My only criticism of the report is that it doesn’t say sufficiently bluntly that non-specific low back pain is largely an unsolved problem. Most of what’s seen is probably a result of that most deceptive phenomenon, regression to the mean.

One pain specialist put it to me thus. “Think of the billions spent on back pain research over the years in order to reach the conclusion that nothing much works – shameful really.” Well perhaps not shameful: it isn’t for want of trying. It’s just a very difficult problem. But pretending that there are solutions doesn’t help anyone.

Follow-up

Jump to follow-up

“Statistical regression to the mean predicts that patients selected for abnormalcy will, on the average, tend to improve. We argue that most improvements attributed to the placebo effect are actually instances of statistical regression.”

“Thus, we urge caution in interpreting patient improvements as causal effects of our actions and should avoid the conceit of assuming that our personal presence has strong healing powers.”

McDonald et al., (1983)

In 1955, Henry Beecher published "The Powerful Placebo". I was in my second undergraduate year when it appeared, and for many decades after that I took it literally. He looked at 15 studies and found that, on average, 35% of patients got "satisfactory relief" when given a placebo. This number got embedded in pharmacological folk-lore. He also mentioned that the relief provided by placebo was greatest in patients who were most ill.

Consider the common experiment in which a new treatment is compared with a placebo, in a double-blind randomised controlled trial (RCT). It’s common to call the responses measured in the placebo group the placebo response. But that is very misleading, and here’s why.

The responses seen in the group of patients that are treated with placebo arise from two quite different processes. One is the genuine psychosomatic placebo effect. This effect gives genuine (though small) benefit to the patient. The other contribution comes from the get-better-anyway effect. This is a statistical artefact and it provides no benefit whatsoever to patients. There is now increasing evidence that the latter effect is much bigger than the former.

How can you distinguish between real placebo effects and get-better-anyway effect?

The only way to measure the size of genuine placebo effects is to compare in an RCT the effect of a dummy treatment with the effect of no treatment at all. Most trials don’t have a no-treatment arm, but enough do that estimates can be made. For example, a Cochrane review by Hróbjartsson & Gøtzsche (2010) looked at a wide variety of clinical conditions. Their conclusion was:

“We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting.”

In some cases, the placebo effect is barely there at all. In a non-blind comparison of acupuncture and no acupuncture, the responses were essentially indistinguishable (despite what the authors and the journal said). See "Acupuncturists show that acupuncture doesn’t work, but conclude the opposite"

So the placebo effect, though a real phenomenon, seems to be quite small. In most cases it is so small that it would be barely perceptible to most patients. Most of the reason why so many people think that medicines work when they don’t isn’t a result of the placebo response, but it’s the result of a statistical artefact.

Regression to the mean is a potent source of deception

The get-better-anyway effect has a technical name, regression to the mean. It has been understood since Francis Galton described it in 1886 (see Senn, 2011 for the history). It is a statistical phenomenon, and it can be treated mathematically (see references, below). But when you think about it, it’s simply common sense.

You tend to go for treatment when your condition is bad, and when you are at your worst, then a bit later you’re likely to be better. The great biologist, Peter Medawar, commented thus:

"If a person is (a) poorly, (b) receives treatment intended to make him better, and (c) gets better, then no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health"
(Medawar, P.B. (1969:19). The Art of the Soluble: Creativity and originality in science. Penguin Books: Harmondsworth).

This is illustrated beautifully by measurements made by McGorry et al., (2000). Patients with low back pain recorded their pain (on a 10 point scale) every day for 5 months (they were allowed to take analgesics ad lib).

The results for four patients are shown in their Figure 2. On average they stay fairly constant over five months, but they fluctuate enormously, with different patterns for each patient. Painful episodes that last for 2 to 9 days are interspersed with periods of lower pain or none at all. It is very obvious that if these patients had gone for treatment at the peak of their pain, then a while later they would feel better, even if they were not actually treated. And if they had been treated, the treatment would have been declared a success, despite the fact that the patient derived no benefit whatsoever from it. This entirely artefactual benefit would be the biggest for the patients that fluctuate the most (e.g. those in panels a and d of the Figure).

[Figure 2 from McGorry et al., 2000. Examples of daily pain scores over a 6-month period for four participants. Dashes of different lengths at the top of each panel designate an episode and its duration.]
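
The get-better-anyway effect is easy to reproduce in a simulation. The sketch below does not use the McGorry data: it simply assumes, for illustration, that each patient's pain fluctuates randomly around a stable personal average, that patients present for treatment on a day when the pain is unusually bad, and that the "treatment" does nothing whatsoever.

```python
# Minimal illustration of regression to the mean (all numbers are arbitrary).
# Each patient's daily pain fluctuates around a fixed personal mean. Patients
# "present for treatment" on a day when pain exceeds a threshold, receive a
# completely inert treatment, and are re-scored two weeks later.
import numpy as np

rng = np.random.default_rng(42)
n_patients = 10_000
personal_mean = rng.uniform(2, 6, n_patients)   # long-run pain level (0-10 scale)
day_to_day_sd = 2.0                             # size of daily fluctuations

pain_at_presentation = personal_mean + rng.normal(0, day_to_day_sd, n_patients)
presents = pain_at_presentation > 7             # only the worst days prompt a visit

# Two weeks later: same personal mean, fresh fluctuation, zero treatment effect
pain_later = personal_mean + rng.normal(0, day_to_day_sd, n_patients)

print(f"Mean pain on the day of 'treatment': {pain_at_presentation[presents].mean():.1f}")
print(f"Mean pain two weeks later:           {pain_later[presents].mean():.1f}")
# The apparent improvement is pure regression to the mean: the treatment was
# inert and the patients' long-run pain level is unchanged.
```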

The effect is illustrated well by an analysis of 118 trials of treatments for non-specific low back pain (NSLBP), by Artus et al., (2010). The time course of pain (rated on a 100 point visual analogue pain scale) is shown in their Figure 2. There is a modest improvement in pain over a few weeks, but this happens regardless of what treatment is given, including no treatment whatsoever.

[Figure 2 from Artus et al., 2010. Overall responses (VAS for pain) up to 52-week follow-up in each treatment arm of included trials. Each line represents a response line within each trial arm. Red: index treatment arm; blue: active treatment arm; green: usual care/waiting list/placebo arms. Solid lines: pharmacological treatment; dashed: non-pharmacological treatment; dotted: mixed/other.]

The authors comment

"symptoms seem to improve in a similar pattern in clinical trials following a wide variety of active as well as inactive treatments.", and "The common pattern of responses could, for a large part, be explained by the natural history of NSLBP".

In other words, none of the treatments work.

This paper was brought to my attention through the blog run by the excellent physiotherapist, Neil O’Connell. He comments

"If this finding is supported by future studies it might suggest that we can’t even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the murky spectre of regression to the mean and the beautiful phenomenon of natural recovery."

O’Connell has discussed the matter in a recent paper, O’Connell (2015), from the point of view of manipulative therapies. That’s an area where there has been resistance to doing proper RCTs, with many people saying that it’s better to look at “real world” outcomes. This usually means that you look at how a patient changes after treatment. The hazards of this procedure are obvious from Artus et al., Fig 2, above. It maximises the risk of being deceived by regression to the mean. As O’Connell commented

"Within-patient change in outcome might tell us how much an individual’s condition improved, but it does not tell us how much of this improvement was due to treatment."

In order to eliminate this effect it’s essential to do a proper RCT with control and treatment groups tested in parallel. When that’s done, the control group shows the same regression to the mean as the treatment group, and any additional response in the latter can confidently be attributed to the treatment. Anything short of that is whistling in the wind.

Needless to say, the suboptimal methods are most popular in areas where real effectiveness is small or non-existent. This, sad to say, includes low back pain. It also includes just about every treatment that comes under the heading of alternative medicine. Although these problems have been understood for over a century, it remains true that

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
Upton Sinclair (1935)

Responders and non-responders?

One excuse that’s commonly used when a treatment shows only a small effect in proper RCTs is to assert that the treatment actually has a good effect, but only in a subgroup of patients ("responders") while others don’t respond at all ("non-responders"). For example, this argument is often used in studies of anti-depressants and of manipulative therapies. And it’s universal in alternative medicine.

There’s a striking similarity between the narrative used by homeopaths and those who are struggling to treat depression. The pill may not work for many weeks. If the first sort of pill doesn’t work try another sort. You may get worse before you get better. One is reminded, inexorably, of Voltaire’s aphorism "The art of medicine consists in amusing the patient while nature cures the disease".

There is only a handful of cases in which a clear distinction can be made between responders and non-responders. Most often what’s observed is a smear of different responses to the same treatment -and the greater the variability, the greater is the chance of being deceived by regression to the mean.

For example, Thase et al., (2011) looked at responses to escitalopram, an SSRI antidepressant. They attempted to divide patients into responders and non-responders. An example (Fig 1a in their paper) is shown.

[Figure 1a from Thase et al., 2011.]

The evidence for such a bimodal distribution is certainly very far from obvious. The observations are just smeared out. Nonetheless, the authors conclude

"Our findings indicate that what appears to be a modest effect in the grouped data – on the boundary of clinical significance, as suggested above – is actually a very large effect for a subset of patients who benefited more from escitalopram than from placebo treatment. "

I guess that interpretation could be right, but it seems more likely to be a marketing tool. Before you read the paper, check the authors’ conflicts of interest.

The bottom line is that analyses that divide patients into responders and non-responders are reliable only if that can be done before the trial starts. Retrospective analyses are unreliable and unconvincing.
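
The unreliability of retrospective splits is easy to demonstrate with a minimal simulation (all the numbers below are invented for illustration): give every patient exactly the same true benefit, add ordinary measurement noise, and then divide them at an arbitrary threshold for "response".

```python
# Illustration: apparent "responders" and "non-responders" can emerge from noise
# alone, even when every patient receives exactly the same true benefit.
# All numbers are invented for the purposes of the sketch.
import numpy as np

rng = np.random.default_rng(7)
n = 2_000
true_effect = 3.0        # identical benefit for every patient (rating-scale points)
noise_sd = 6.0           # measurement noise in the observed change

observed_change = true_effect + rng.normal(0, noise_sd, n)
responder = observed_change >= 5.0               # arbitrary retrospective cut-off

print(f"'Responders':     {responder.mean():.0%} of patients, "
      f"mean change {observed_change[responder].mean():.1f}")
print(f"'Non-responders': {(~responder).mean():.0%} of patients, "
      f"mean change {observed_change[~responder].mean():.1f}")
# The two "subgroups" look dramatically different, yet by construction every
# patient benefited by exactly 3 points: the split reflects noise, not biology.
```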

Some more reading

Senn, 2011 provides an excellent introduction (and some interesting history). The subtitle is

"Here Stephen Senn examines one of Galton’s most important statistical legacies – one that is at once so trivial that it is blindingly obvious, and so deep that many scientists spend their whole career being fooled by it."

The examples in this paper are extended in Senn (2009), “Three things that every medical writer should know about statistics”. The three things are regression to the mean, the error of the transposed conditional and individual response.

You can read slightly more technical accounts of regression to the mean in McDonald & Mazzuca (1983) "How much of the placebo effect is statistical regression" (two quotations from this paper opened this post), and in Stephen Senn (2015) "Mastering variation: variance components and personalised medicine". In 1988 Senn published some corrections to the maths in McDonald (1983).

The trials that were used by Hróbjartsson & Gøtzsche (2010) to investigate the comparison between placebo and no treatment were looked at again by Howick et al., (2013), who found that in many of them the difference between treatment and placebo was also small. Most of the treatments did not work very well.

Regression to the mean is not just a medical deceiver: it’s everywhere

Although this post has concentrated on deception in medicine, it’s worth noting that the phenomenon of regression to the mean can cause wrong inferences in almost any area where you look at change from baseline. A classical example concerns the effectiveness of speed cameras. They tend to be installed after a spate of accidents, and if the accident rate is particularly high in one year it is likely to be lower the next year, regardless of whether a camera had been installed or not. To find the true reduction in accidents caused by installation of speed cameras, you would need to choose several similar sites and allocate them at random to have a camera or no camera. As in clinical trials, looking at the change from baseline can be very deceptive.

Statistical postscript

Lastly, remember that if you avoid all of these hazards of interpretation, and your test of significance gives P = 0.047, that does not mean you have discovered something. There is still a risk of at least 30% that your ‘positive’ result is a false positive. This is explained in Colquhoun (2014), "An investigation of the false discovery rate and the misinterpretation of p-values". I’ve suggested that one way to solve this problem is to use different words to describe P values: something like this.

P > 0.05 very weak evidence
P = 0.05 weak evidence: worth another look
P = 0.01 moderate evidence for a real effect
P = 0.001 strong evidence for real effect

But notice that if your hypothesis is implausible, even these criteria are too weak. For example, if the treatment and placebo are identical (as would be the case if the treatment were a homeopathic pill) then it follows that 100% of positive tests are false positives.
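
For readers who want to see where figures like "at least 30%" come from, here is a bare-bones false-positive-risk calculation in the spirit of Colquhoun (2014). The prevalence of real effects and the power of the test are assumptions that have to be supplied; the paper itself uses a more careful treatment, but the logic is the same.

```python
# False positive risk for results declared "significant" at P < alpha.
# The prevalence of real effects and the power are illustrative assumptions.
def false_positive_risk(prevalence, power, alpha=0.05):
    """Fraction of 'significant' results that are false positives."""
    true_positives = prevalence * power
    false_positives = (1 - prevalence) * alpha
    return false_positives / (true_positives + false_positives)

# Plausible hypothesis (real effect half the time), well-powered study:
print(f"{false_positive_risk(prevalence=0.5, power=0.8):.0%}")   # about 6%
# Implausible hypothesis, modest power: closer to everyday reality:
print(f"{false_positive_risk(prevalence=0.1, power=0.5):.0%}")   # about 47%
# Impossible hypothesis (e.g. a homeopathic pill identical to placebo):
print(f"{false_positive_risk(prevalence=0.0, power=0.5):.0%}")   # 100%
```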

Follow-up

12 December 2015

It’s worth mentioning that the question of responders versus non-responders is closely-related to the classical topic of bioassays that use quantal responses. In that field it was assumed that each participant had an individual effective dose (IED). That’s reasonable for the old-fashioned LD50 toxicity test: every animal will die after a sufficiently big dose. It’s less obviously right for ED50 (effective dose in 50% of individuals). The distribution of IEDs is critical, but it has very rarely been determined. The cumulative form of this distribution is what determines the shape of the dose-response curve for fraction of responders as a function of dose. Linearisation of this curve, by means of the probit transformation used to be a staple of biological assay. This topic is discussed in Chapter 10 of Lectures on Biostatistics. And you can read some of the history on my blog about Some pharmacological history: an exam from 1959.
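
For anyone unfamiliar with the quantal-response machinery, here is a minimal sketch. The log-normal distribution of IEDs and all the numerical values are illustrative assumptions, not taken from any real assay: the fraction of individuals responding at a given dose is the cumulative distribution of IEDs, and the probit transformation (the inverse normal CDF) turns that sigmoid curve into a straight line against log dose.

```python
# Sketch of a quantal dose-response curve and its probit linearisation,
# assuming (purely for illustration) log-normally distributed individual
# effective doses (IEDs).
import numpy as np
from scipy.stats import norm

median_ied = 10.0      # dose at which half the individuals respond (ED50)
sigma_log = 0.5        # spread of log10(IED) across individuals

doses = np.logspace(0, 2, 9)                     # 1 to 100 dose units
log_dose = np.log10(doses)

# Fraction responding = cumulative distribution of IEDs evaluated at each dose
fraction_responding = norm.cdf(log_dose, loc=np.log10(median_ied), scale=sigma_log)

# Probit transformation: inverse normal CDF of the response fraction.
# Regressed against log dose, this gives a straight line.
probits = norm.ppf(fraction_responding)

for d, f, p in zip(doses, fraction_responding, probits):
    print(f"dose {d:7.2f}   responding {f:6.1%}   probit {p:+.2f}")
```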

Jump to follow-up

This discussion seemed to be of sufficient general interest that we submitted it as a feature to eLife, because this journal is one of the best steps into the future of scientific publishing. Sadly the features editor thought that "too much of the article is taken up with detailed criticisms of research papers from NEJM and Science that appeared in the altmetrics top 100 for 2013; while many of these criticisms seems valid, the Features section of eLife is not the venue where they should be published". That’s pretty typical of what most journals would say. It is that sort of attitude that stifles criticism, and that is part of the problem. We should be encouraging post-publication peer review, not suppressing it. Luckily, thanks to the web, we are now much less constrained by journal editors than we used to be.

Here it is.

Scientists don’t count: why you should ignore altmetrics and other bibliometric nightmares

David Colquhoun1 and Andrew Plested2

1 University College London, Gower Street, London WC1E 6BT

2 Leibniz-Institut für Molekulare Pharmakologie (FMP) & Cluster of Excellence NeuroCure, Charité Universitätsmedizin,Timoféeff-Ressowsky-Haus, Robert-Rössle-Str. 10, 13125 Berlin Germany.

Jeffrey Beall is a librarian at Auraria Library, University of Colorado Denver.  Although not a scientist himself, he, more than anyone, has done science a great service by listing the predatory journals that have sprung up in the wake of pressure for open access.  In August 2012 he published “Article-Level Metrics: An Ill-Conceived and Meretricious Idea”.  At first reading that criticism seemed a bit strong.  On mature consideration, it understates the potential that bibliometrics, altmetrics especially, have to undermine both science and scientists.

Altmetrics is the latest buzzword in the vocabulary of bibliometricians.  It attempts to measure the “impact” of a piece of research by counting the number of times that it’s mentioned in tweets, Facebook pages, blogs, YouTube and news media.  That sounds childish, and it is. Twitter is an excellent tool for journalism. It’s good for debunking bad science, and for spreading links, but too brief for serious discussions.  It’s rarely useful for real science.

Surveys suggest that the great majority of scientists do not use twitter (only 7 to 13% do).  Scientific works get tweeted about mostly because they have titles that contain buzzwords, not because they represent great science.

What and who is Altmetrics for?

The aims of altmetrics are ambiguous to the point of dishonesty; they depend on whether the salesperson is talking to a scientist or to a potential buyer of their wares.

At a meeting in London, an employee of altmetric.com said “we measure online attention surrounding journal articles”, “we are not measuring quality …”, “this whole altmetrics data service was born as a service for publishers”, and “it doesn’t matter if you got 1000 tweets . . . all you need is one blog post that indicates that someone got some value from that paper”.

These ideas sound fairly harmless, but in stark contrast, Jason Priem (an author of the altmetrics manifesto) said one advantage of altmetrics is that it’s fast “Speed: months or weeks, not years: faster evaluations for tenure/hiring”.  Although conceivably useful for disseminating preliminary results, such speed isn’t important for serious science (the kind that ought to be considered for tenure) which operates on the timescale of years. Priem also says “researchers must ask if altmetrics really reflect impact” .  Even he doesn’t know, yet altmetrics services are being sold to universities, before any evaluation of their usefulness has been done, and universities are buying them.  The idea that altmetrics scores could be used for hiring is nothing short of terrifying. 

The problem with bibliometrics

The mistake made by all bibliometricians is that they fail to consider the content of papers, because they have no desire to understand research. Bibliometrics are for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it, or in the case of software or databases, by using them.   The use of surrogate outcomes in clinical trials is rightly condemned.  Bibliometrics are all about surrogate outcomes.

If instead we consider the work described in particular papers that most people agree to be important (or that everyone agrees to be bad), it’s immediately obvious that no publication metrics can measure quality.  There are some examples in How to get good science (Colquhoun, 2007).  It is shown there that at least one Nobel prize winner failed dismally to fulfil arbitrary bibliometric productivity criteria of the sort imposed in some universities (another example is in Is Queen Mary University of London trying to commit scientific suicide?).

Schekman (2013) has said that science

“is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best.”

Bibliometrics reinforce those inappropriate incentives.  A few examples will show that altmetrics are one of the silliest metrics so far proposed.

The altmetrics top 100 for 2013

The superficiality of altmetrics is demonstrated beautifully by the list of the 100 papers with the highest altmetric scores in 2013.  For a start, 58 of the 100 were behind paywalls, and so unlikely to have been read except (perhaps) by academics.

The second most popular paper (with the enormous altmetric score of 2230) was published in the New England Journal of Medicine.  The title was Primary Prevention of Cardiovascular Disease with a Mediterranean Diet.  It was promoted (inaccurately) by the journal with the following tweet:

[Screenshot of the journal’s tweet]

Many of the 2092 tweets related to this article simply gave the title, but inevitably the theme appealed to diet faddists, with plenty of tweets like the following:

[Screenshots of two such tweets]

The interpretations of the paper promoted by these tweets were mostly desperately inaccurate. Diet studies are anyway notoriously unreliable. As John Ioannidis has said

"Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome."

This sad situation comes about partly because most of the data comes from non-randomised cohort studies that tell you nothing about causality, and also because the effects of diet on health seem to be quite small. 

The study in question was a randomized controlled trial, so it should be free of the problems of cohort studies.  But very few tweeters showed any sign of having read the paper.  When you read it you find that the story isn’t so simple.  Many of the problems are pointed out in the online comments that follow the paper. Post-publication peer review really can work, but you have to read the paper.  The conclusions are pretty conclusively demolished in the comments, such as:

“I’m surrounded by olive groves here in Australia and love the hand-pressed EVOO [extra virgin olive oil], which I can buy at a local produce market BUT this study shows that I won’t live a minute longer, and it won’t prevent a heart attack.”

We found no tweets that mentioned the finding from the paper that the diets had no detectable effect on myocardial infarction, death from cardiovascular causes, or death from any cause.  The only difference was in the number of people who had strokes, and that showed a very unimpressive P = 0.04. 

Neither did we see any tweets that mentioned the truly impressive list of conflicts of interest of the authors, which ran to an astonishing 419 words.

“Dr. Estruch reports serving on the board of and receiving lecture fees from the Research Foundation on Wine and Nutrition (FIVIN); serving on the boards of the Beer and Health Foundation and the European Foundation for Alcohol Research (ERAB); receiving lecture fees from Cerveceros de España and Sanofi-Aventis; and receiving grant support through his institution from Novartis. Dr. Ros reports serving on the board of and receiving travel support, as well as grant support through his institution, from the California Walnut Commission; serving on the board of the Flora Foundation (Unilever). . . “

And so on, for another 328 words. 

The interesting question is how such a paper came to be published in the hugely prestigious New England Journal of Medicine.  That it happened is yet another reason to distrust impact factors.  It seems to be another sign that glamour journals are more concerned with trendiness than quality.

One sign of that is the fact that the journal’s own tweet misrepresented the work. The irresponsible spin in this initial tweet from the journal started the ball rolling, and after this point, the content of the paper itself became irrelevant. The altmetrics score is utterly disconnected from the science reported in the paper: it more closely reflects wishful thinking and confirmation bias.

The fourth paper in the altmetrics top 100 is an equally instructive example.

This work was also published in a glamour journal, Science. The paper claimed that a function of sleep was to “clear metabolic waste from the brain”.  It was initially promoted (inaccurately) on Twitter by the publisher of Science.

After that, the paper was retweeted many times, presumably because everybody sleeps, and perhaps because the title hinted at the trendy, but fraudulent, idea of “detox”.  Many tweets were variants of “The garbage truck that clears metabolic waste from the brain works best when you’re asleep”.

[Screenshot of the tweet from Science]

But this paper was hidden behind Science’s paywall.  It’s bordering on irresponsible for journals to promote on social media papers that can’t be read freely.  It’s unlikely that anyone outside academia had read it, and therefore few of the tweeters had any idea of the actual content, or the way the research was done.  Nevertheless it got “1,479 tweets from 1,355 accounts with an upper bound of 1,110,974 combined followers”.  It had the huge Altmetrics score of 1848, the highest altmetric score in October 2013.

Within a couple of days, the story fell out of the news cycle.  It was not a bad paper, but neither was it a huge breakthrough.  It didn’t show that naturally-produced metabolites were cleared more quickly, just that injected substances were cleared faster when the mice were asleep or anaesthetised.  This finding might or might not have physiological consequences for mice.

Worse, the paper also claimed that “Administration of adrenergic antagonists induced an increase in CSF tracer influx, resulting in rates of CSF tracer influx that were more comparable with influx observed during sleep or anesthesia than in the awake state”.  Simply put, giving awake mice these drugs increased clearance towards the levels seen during sleep.  But nobody seemed to notice the absurd concentrations of antagonists that were used in these experiments: “adrenergic receptor antagonists (prazosin, atipamezole, and propranolol, each 2 mM) were then slowly infused via the cisterna magna cannula for 15 min”.  The binding constant (concentration to occupy half the receptors) for prazosin is less than 1 nM, so infusing 2 mM means working at about a million times greater than the concentration that should be effective. That’s asking for non-specific effects.  Most drugs at this sort of concentration have local anaesthetic effects, so perhaps it isn’t surprising that the effects resembled those of ketamine.
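
A back-of-the-envelope check of that arithmetic, assuming simple one-site (Hill-Langmuir) binding and taking the binding constant as 1 nM (the text above notes that prazosin's is below even that):

```python
# Receptor occupancy predicted by the Hill-Langmuir equation for one-site binding.
# Kd is taken as 1 nM (the text says prazosin's is below this); 2 mM is the
# infused concentration quoted from the paper.
Kd = 1e-9      # binding constant, molar
c = 2e-3       # infused concentration, molar (2 mM)

occupancy = c / (c + Kd)
print(f"Concentration / Kd ratio:     {c / Kd:.0e}")      # prints 2e+06
print(f"Predicted receptor occupancy: {occupancy:.7f}")   # essentially 100%
# At such a vast excess over Kd the specific receptor block is already maximal,
# so any further actions of the drug at 2 mM are likely to be non-specific.
```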

The altmetrics editor hadn’t noticed the problems and none of them featured in the online buzz.  That’s partly because to find it out you had to read the paper (the antagonist concentrations were hidden in the legend of Figure 4), and partly because you needed to know the binding constant for prazosin to see this warning sign.

The lesson, as usual, is that if you want to know about the quality of a paper, you have to read it. Commenting on a paper without knowing anything of its content is liable to make you look like a jackass.

A tale of two papers

Another approach that looks at individual papers is to compare some of one’s own papers.  Sadly, UCL shows altmetric scores on each of your own papers.  Mostly they are question marks, because nothing published before 2011 is scored.  But two recent papers make an interesting contrast.  One is from DC’s side interest in quackery, one was real science.  The former has an altmetric score of 169, the latter has an altmetric score of 2.

The first paper was “Acupuncture is a theatrical placebo”, which was published as an invited editorial in Anesthesia and Analgesia [download pdf].  The paper was scientifically trivial. It took perhaps a week to write. 

Nevertheless, it got promoted on twitter, because anything to do with alternative medicine is interesting to the public.  It got quite a lot of retweets.  And the resulting altmetric score of 169 put it in the top 1% of all articles altmetric have tracked, and the second highest ever for Anesthesia and Analgesia.

As well as the journal’s own website, the article was also posted on the DCScience.net blog (May 30, 2013) where it soon became the most viewed page ever (24,468 views as of 23 November 2013), something that altmetrics does not seem to take into account.

Compare this with the fate of some real, but rather technical, science.

My [DC] best scientific papers are too old (i.e. before 2011) to have an altmetrics score, but my best score for any scientific paper is 2.  This score was for Colquhoun & Lape (2012) “Allosteric coupling in ligand-gated ion channels”.  It was a commentary with some original material.

  The altmetric score was based on two tweets and 15 readers on Mendeley.  The two tweets consisted of one from me (“Real science; The meaning of allosteric conformation changes http://t.co/zZeNtLdU ”).

The only other tweet was an abusive one from a cyberstalker who was upset at having been refused a job years ago. Incredibly, this modest achievement got it rated “Good compared to other articles of the same age (71st percentile)”.

[Altmetric detail for Colquhoun & Lape (2012)]

Conclusions about bibliometrics

Bibliometricians spend much time correlating one surrogate outcome with another, from which they learn little.  What they don’t do is take the time to examine individual papers.  Doing that makes it obvious that most metrics, and especially altmetrics, are indeed an ill-conceived and meretricious idea. Universities should know better than to subscribe to them.

Although altmetrics may be the silliest bibliometric idea yet, much of this criticism applies equally to all such metrics.  Even the most plausible metric, counting citations, is easily shown to be nonsense by simply considering individual papers.  All you have to do is choose some papers that are universally agreed to be good, and some that are bad, and see how metrics fail to distinguish between them.  This is something that bibliometricians fail to do (perhaps because they don’t know enough science to tell which is which).  Some examples are given by Colquhoun (2007) (more complete version at dcscience.net).

Eugene Garfield, who started the metrics mania with the journal impact factor (JIF), was clear that it was not suitable as a measure of the worth of individuals.  He has been ignored and the JIF has come to dominate the lives of researchers, despite decades of evidence of the harm it does (e.g. Seglen (1997) and Colquhoun (2003)).  In the wake of the JIF, young, bright people have been encouraged to develop yet more spurious metrics (of which ‘altmetrics’ is the latest).  It doesn’t matter much whether these metrics are based on nonsense (like counting hashtags) or rely on counting links or comments on a journal website.  They won’t (and can’t) indicate what is important about a piece of research – its quality.

People say – I can’t be a polymath. Well, then don’t try to be. You don’t have to have an opinion on things that you don’t understand. The number of people who really do have to have an overview, of the kind that altmetrics might purport to give, those who have to make funding decisions about work that they are not intimately familiar with, is quite small.  Chances are, you are not one of them. We review plenty of papers and grants.  But it’s not credible to accept assignments outside of your field, and then rely on metrics to assess the quality of the scientific work or the proposal.

It’s perfectly reasonable to give credit for all forms of research outputs, not only papers.   That doesn’t need metrics. It’s nonsense to suggest that altmetrics are needed because research outputs are not already valued in grant and job applications.  If you write a grant for almost any agency, you can put your CV. If you have a non-publication based output, you can always include it. Metrics are not needed. If you write software, get the numbers of downloads. Software normally garners citations anyway if it’s of any use to the greater community.

When AP recently wrote a criticism of Heather Piwowar’s altmetrics note in Nature, one correspondent wrote: "I haven’t read the piece [by HP] but I’m sure you are mischaracterising it". This attitude summarizes the too-long-didn’t-read (TLDR) culture that is increasingly becoming accepted amongst scientists, and which the comparisons above show is a central component of altmetrics.

Altmetrics are numbers generated by people who don’t understand research, for people who don’t understand research. People who read papers and understand research just don’t need them and should shun them.

But all bibliometrics give cause for concern, beyond their lack of utility. They do active harm to science.  They encourage “gaming” (a euphemism for cheating).  They encourage short-term eye-catching research of questionable quality and reproducibility. They encourage guest authorships: that is, they encourage people to claim credit for work which isn’t theirs.  At worst, they encourage fraud. 

No doubt metrics have played some part in the crisis of irreproducibility that has engulfed some fields, particularly experimental psychology, genomics and cancer research.  Underpowered studies with a high false-positive rate may get you promoted, but tend to mislead both other scientists and the public (who in general pay for the work). The waste of public money that must result from following up badly done work that can’t be reproduced, but that was published for the sake of “getting something out”, has not been quantified; it must be counted against bibliometrics, and it sadly outweighs any advantage from rapid dissemination.  Yet universities continue to pay publishers to provide these measures, which do nothing but harm.  And the general public has noticed.

It’s now eight years since the New York Times brought to the attention of the public that some scientists engage in puffery, cheating and even fraud.

Overblown press releases written by journals, with connivance of university PR wonks and with the connivance of the authors, sometimes go viral on social media (and so score well on altmetrics).  Yet another example, from Journal of the American Medical Association involved an overblown press release from the Journal about a trial that allegedly showed a benefit of high doses of Vitamin E for Alzheimer’s disease.

This sort of puffery harms patients and harms science itself.

We can’t go on like this.

What should be done?

Post publication peer review is now happening, in comments on published papers and through sites like PubPeer, where it is already clear that anonymous peer review can work really well. New journals like eLife have open comments after each paper, though authors do not seem to have yet got into the habit of using them constructively. They will.

It’s very obvious that too many papers are being published, and that anything, however bad, can be published in a journal that claims to be peer reviewed.  To a large extent this is just another example of the harm done to science by metrics – the publish or perish culture.

Attempts to regulate science by setting “productivity targets” are doomed to do as much harm to science as they have done in the National Health Service in the UK.    This has been known to economists for a long time, under the name of Goodhart’s law.

Here are some ideas about how we could restore the confidence of both scientists and of the public in the integrity of published work.

  • Nature, Science, and other vanity journals should become news magazines only. Their glamour value distorts science and encourages dishonesty.
  • Print journals are overpriced and outdated. They are no longer needed.  Publishing on the web is cheap, and it allows open access and post-publication peer review.  Every paper should be followed by an open comments section, with anonymity allowed.  The old publishers should go the same way as the handloom weavers. Their time has passed.
  • Web publication allows proper explanation of methods, without the page, word and figure limits that distort papers in vanity journals.  This would also make it very easy to publish negative work, thus reducing publication bias, a major problem (not least for clinical trials)
  • Publish or perish has proved counterproductive. It seems just as likely that better science will result without any performance management at all. All that’s needed is peer review of grant applications.
  • Providing more small grants rather than fewer big ones should help to reduce the pressure to publish which distorts the literature. The ‘celebrity scientist’, running a huge group funded by giant grants, has not worked well. It’s led to poor mentoring, and, at worst, fraud.  Of course huge groups sometimes produce good work, but too often at the price of exploitation of junior scientists.
  • There is a good case for limiting the number of original papers that an individual can publish per year, and/or total funding. Fewer but more complete and considered papers would benefit everyone, and counteract the flood of literature that has led to superficiality.
  • Everyone should read, learn and inwardly digest Peter Lawrence’s The Mismeasurement of Science.

A focus on speed and brevity (cited as major advantages of altmetrics) will help no-one in the end. And a focus on creating and curating new metrics will simply skew science in yet another unsatisfactory way, and rob scientists of the time they need to do their real job: generate new knowledge.

It has been said

“Creation is sloppy; discovery is messy; exploration is dangerous. What’s a manager to do?
The answer in general is to encourage curiosity and accept failure. Lots of failure.”

And, one might add, forget metrics. All of them.

Follow-up

17 Jan 2014

This piece was noticed by the Economist. Their ‘Writing worth reading’ section said

"Why you should ignore altmetrics (David Colquhoun) Altmetrics attempt to rank scientific papers by their popularity on social media. David Colquohoun [sic] argues that they are “for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it.”"

20 January 2014.

Jason Priem, of ImpactStory, has responded to this article on his own blog. In Altmetrics: A Bibliographic Nightmare? he seems to back off a lot from his earlier claim (cited above) that altmetrics are useful for making decisions about hiring or tenure. Our response is on his blog.

23 January 2014

The Scholarly Kitchen blog carried another paean to metrics. A vigorous discussion followed. The general line that I’ve followed in this discussion, and those mentioned below, is that bibliometricians won’t qualify as scientists until they test their methods, i.e. show that they predict something useful. In order to do that, they’ll have to consider individual papers (as we do above). At present, articles by bibliometricians consist largely of hubris, with little emphasis on the potential to cause corruption. They remind me of articles by homeopaths: their aim is to sell a product (sometimes for cash, but mainly to promote the authors’ usefulness).

It’s noticeable that all of the pro-metrics articles cited here have been written by bibliometricians. None have been written by scientists.

28 January 2014.

Dalmeet Singh Chawla, a bibliometrician from Imperial College London, wrote a blog post on the topic. (Imperial, at least in its Medicine department, is notorious for abuse of metrics.)

29 January 2014 Arran Frood wrote a sensible article about the metrics row in Euroscientist.

2 February 2014 Paul Groth (a co-author of the Altmetrics Manifesto) posted more hubristic stuff about altmetrics on Slideshare. A vigorous discussion followed.

5 May 2014. Another vigorous discussion on ImpactStory blog, this time with Stacy Konkiel. She’s another non-scientist trying to tell scientists what to do. The evidence that she produced for the usefulness of altmetrics seemed pathetic to me.

7 May 2014 A much-shortened version of this post appeared in the British Medical Journal (BMJ blogs)

bmj blog

Jump to follow-up

A constant theme of this blog is that the NHS should not pay for useless treatments. By and large, NICE does a good job of preventing that. But NICE has not been allowed by the Department of Health to look at quackery.

I have the impression that privatisation of many NHS services will lead to an increase in the provision of myth-based therapies. That is part of the "bait and switch" tactic that quacks use in the hope of gaining respectability. A prime example is the "College of Medicine", financed by Capita and replete with quacks, as one would expect since it is the reincarnation of the Prince’s Foundation for Integrated Health.

One such treatment is acupuncture. Having very recently reviewed the evidence, we concluded that "Acupuncture is a theatrical placebo: the end of a myth". Any effects it may have are too small to be useful to patients. That’s the background for an interesting case study.

A colleague got a very painful frozen shoulder. His GP referred him to the Camden & Islington NHS Trust physiotherapy service. That service is now provided by a private company, Connect Physical Health.

That proved to be a big mistake. The first two appointments were not too bad, though they resulted in little improvement. But at the third appointment he was offered acupuncture. He hesitated, but agreed, in desperation to try it. It did no good. At the next appointment the condition was worse. After some very painful manipulation, the physiotherapist offered acupuncture again. This time he refused on the grounds that "I hadn’t noticed any effect the first time, because there is no evidence that it works and that I was concerned by her standards of hygiene". The physiotherapist then became "quite rude" and said that she would put down that the patient had refused treatment.

The lack of response was hardly surprising. NHS Evidence says

"There is no clinical evidence to show that other treatments, such as transcutaneous electrical nerve stimulation (TENS), Shiatsu massage or acupuncture are effective in treating frozen shoulder."

In fact it now seems beyond reasonable doubt that acupuncture is no more than a theatrical placebo.

According to Connect’s own web site “Our services are evidence-based”. That is evidently untrue in this case, so I asked them for the evidence that acupuncture was effective.

I’d noticed that in other places, Connect Physical Health also offers the obviously fraudulent craniosacral therapy (for example, here) and discredited chiropractic quackery. So I asked them about the evidence for their effectiveness too.

This is what they said.

Many thanks for your comments via our web site. In response, we thought you might like to access the sources for some of the evidence which underpins our MSK services:

Integrating Evidence-Based Acupuncture into Physiotherapy for the Benefit of the Patient – you can obtain the information you require from www.aacp.org.uk

The General Chiropractic Council www.gcc-uk.org/page.cfm

We have also attached a copy of the NICE Guidelines.

So, no Cochrane reviews, no NHS Evidence. Instead I was referred to the very quack organisations that promote the treatments in question, the Acupuncture Association of Chartered Physiotherapists, and the totally discredited General Chiropractic Council.

The NICE guidelines that they sent were nothing to do with frozen shoulder. They were the guidelines for low back pain which caused such a furore when they were first issued (and which, in any case, don’t recommend chiropractic explicitly).

When I pointed out these deficiencies I got this.

Your email below has been forwarded to me.  I am sorry if you feel that that that information we pointed you towards to enable you to make your own investigations into the evidence base for the services provided by Connect Physical Health and your hospital did not meet with your expectations.

‘ ‘ ‘

Please understand that our NHS services in Camden were commissioned by the Primary Care Trust.  The fully integrated MSK service model included provision for acupuncture and other manual therapy provided by our experienced Chartered Physiotherapists.  If you have any problems with the evidence base for the use of acupuncture or manual therapy within the service, which has been commissioned on behalf of the GPs in Camden Borough, then I would politely recommend that you direct your observations to the clinical commissioning authorities and other professional bodies who do spend time evidencing best practice and representing the academic arguments.  I am sure they will be pleased to pick up discussions with you about the relative merits of the interventions being procured by the NHS.

Yours sincerely,

Mark

Mark Philpott BSc BSc MSc MMACP MCSP
Head of Operations, Community MSK Services
Connect Physical Health
35 Apex Business Village
Cramlington
Northumberland NE23 7BF

So, "don’t blame us, blame the PCT". A second letter asked why they were shirking the little matter of evidence.

In response to your last email, I would like to say that Connect does not wish to be drawn into a debate over two therapeutic options (acupuncture and craniosacral therapy) that are widely practiced [sic] within and outside the NHS by very respectable practitioners.

You will be as aware, as Connect is, that there are lots of treatments that don’t have a huge evidence base that are practiced in mainstream medicine. Connect has seen many carefully selected patients helped by acupuncture and manual therapy (craniosacral therapy / chiropractic) over many years. Lack of evidence doesn’t mean they don’t work, just that benefit is not proven. Furthermore, nowhere on our website do we state that ALL treatments / services / modalities that Connect offer are ‘Evidence Based’. We do however offer many services that are evidence based, where the evidence exists. We aim to offer ‘choice’ to patients, from a range of services that are safe and delivered by suitably trained professionals and practitioners in line with Codes of Practice and Guidelines from the relevant governing bodies.

Connect’s service provision in Camden is meticulously assessed and of a high standard and we are proud of the services provided.

This response is so wrong, on so many levels, that I gave up on Mr Philpott at this point. At least he admitted implicitly that all of their treatments are not evidence-based. In that case their web site needs to change that claim.

If, by "governing bodies" he means jokes like the GCC or the CNHC then I suppose the behaviour of their employees is not surprising. Mr Philpott is evidently not aware that "craniosacral therapy" has been censured by the Advertising Standards Authority. Well he is now, but evidently doesn’t seem to give a damn.

Next I wrote to the PCT and it took several mails to find out who was responsible for the service. Three mails produced no response so I sent a Freedom of Information Act request. In the end I got some answers.

"Connect PHC provide the Community musculoskeletal service for Camden. The specification for the service specifically asks for the provision of evidence based management and treatments see paragraph on Governance page 14 of attached.. Patients are treated with acupuncture as per the NICE Guidelines (May 2009) for  the management of low back pain … . .. Chiropractors are not employed in the service and craniosacral therapy is not provided as part of the service either."

Another letter, pointing out that they were using acupuncture for things other than low back pain, got no more information. They did send a copy of the contract with Connect. It makes no mention whatsoever of alternative treatments. It should have done, so part of the responsibility for the failure must lie with the PCT.

The contract does, however, say (page 18)

The service to be led by a lead clinician/manager who can effectively demonstrate ongoing and evidence-based development of clinical guidelines, policies and protocols for effective working practices within the service

In my opinion, Connect Physical Health are in breach of contract.

Another example of Connect ignoring evidence

The Connect Physical Health web site has an article about osteoarthritis of the knee

Physiotherapy can be extremely beneficial to help to reduce the symptoms of OA. Treatments such as mobilizations, rehab exercises, acupuncture and taping can help to reduce pain, increase range of movement, increase muscle strength and aid return to functional activities and sports.

There is little enough evidence that physiotherapy does any of these things, but at least it is free of mystical mumbo-jumbo. Although at one time the claim for acupuncture was thought to have some truth, the 2010 Cochrane review concludes otherwise

Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding.

This conclusion is much the same as has been reached for acupuncture treatments of almost everything. Two major meta-analyses come to similar conclusions. Madsen, Gøtzsche & Hróbjartsson (2009) and Vickers et al. (2012) both conclude that if there is an effect at all (dubious) then it is too small to be noticeable to the patient. (Be warned that in the case of Vickers et al. you need to read the paper itself because of the spin placed on the results in the abstract.) These papers are discussed in detail in our recent paper.

Why is Connect Physical Health not aware of this?

Their head of operations told me (see above) that

"Connect does not wish to be drawn into a debate [about acupuncture and craniosacral therapy]".

That outlook was confirmed when I left a comment on their osteoarthritis post. This is what it looked like almost a month later.

connect

Guess what? The comment has never appeared.

The attitude of Connect Physical Health to evidence is simply to ignore it if it gets in the way of making money, and to censor any criticism.

What have Camden NHS done about it?

The patient and I both complained to Camden NHS in August 2012. At first, they simply forwarded the complaints to Connect Physical Health, with the unsatisfactory results shown above. It took until May 2013 to get any sort of reasonable response. That seems a very long time. In fact by the time the response arrived the PCT had been renamed a Clinical Commissioning Group (CCG) because of the vast top-down reorganisation imposed by Lansley’s Health and Social Care Act.

On 8 May 2013, this response was sent to the patient. Here is part of it.

I have received your email of complaint from the NHSNCL complaints department regarding your care.
I am sorry to read of your experience as we take the quality of care seriously in NHS Camden CCG.

You raise some very clear concerns and I will attempt to address these in order.

1)      The fact that you felt pressurised into having acupuncture is a concern as everybody should be given a choice. As part of the informed consent relating to acupuncture you should have been told about the treatment, it’s [sic] benefits and risks and then you sign to confirm you are happy to proceed. I understand that this was the case in your situation but I have reinforced that the consent is important and must be adhered to by the provider Connect Physical Health. There are clear standards of clinical practice that all Chartered Physiotherapists must follow which I will discuss further with the Connect Camden team Manager Nick Downing.

I do disagree with you around acupuncture; there is no conclusive  evidence for acupuncture in frozen shoulder but I have referenced a systematic review which concludes the studies were too small to draw any conclusions although shoulder function was significantly  improved at 4 weeks  (Green S et al. Acupuncture for shoulder pain. Cochrane Database Syst Rev 2005; 18: CD005319). There is a growing body of evidence supporting the use of acupuncture and until such time as there is specific evidence against it I don’t think we would be absolutely against the practice of this modality alongside other treatments.

Best wishes

Strategy and Planning Directorate
NHS Camden CCG
75 Hampstead Road
London
NW1

This response raises more questions than it answers.

For example, what is "informed consent" worth if the therapist is him- or herself misinformed about the treatment? It is the eternal dilemma of alternative medicine that it is no use referring to well-trained practitioners, when their training has inculcated myths and untruths.

There is not a "growing body of evidence supporting the use of acupuncture". Precisely the opposite is true.

And the statement that "until such time as there is specific evidence against it I don’t think we would be absolutely against the practice of this modality alongside other treatments" betrays a basic misunderstanding of the scientific process.

So I sent the writer of this letter a reprint of our paper, "Acupuncture is a theatrical placebo: the end of a myth" (the blog version alone has had over 12,000 page views). A few days later we had an amiable lunch together and we had a constructive discussion about the problems of deciding what should be commissioned and what shouldn’t.

It seems to me to be clear that CCGs should take better advice before boasting that they commission evidence-based treatments.

Postscript

Stories like this are worrying to the majority of physiotherapists, who don’t go in for the mystical mumbo-jumbo of acupuncture. One of the best is Neil O’Connell, who blogs at BodyInMind. He tweeted

It isn’t clear how many physiotherapists embrace nonsense, but the Acupuncture Association of Chartered Physiotherapists (AACP) has around 6,000 members, compared with 47,000 chartered physiotherapists, so it’s a smallish minority. The AACP claims that it is “Integrating Evidence-Based Acupuncture into Physiotherapy”. Like politicians, they throw the term “evidence-based” around with gay abandon. Clearly they don’t understand evidence.

Follow-up

12 June 2013

The Advertising Standards Authority has, once again, upheld complaints against the UCLH Trust, for making false claims in its advertising. This time, appropriately, it’s about acupuncture. Just about everything in their advertising leaflets was held to be unjustifiable. They’ve been in trouble before about false claims for homeopathy, hypnosis and craniosacral "therapy".

Of course all of these embarrassments come from one very small corner of the UCLH Trust, the Royal London Hospital for Integrated Medicine (previously known as the Royal London Homeopathic Hospital).

Why is it tolerated in an otherwise excellent NHS Trust? Well, the patron is the Queen herself (not Charles, aka the Quacktitioner Royal). She seems to exert more power behind the scenes than is desirable in a constitutional monarchy.

asa uclh

29 June 2013

I wrote to Dr Gill Gaskin about the latest ASA judgement against RLHIM. She is the person at the UCLH Trust who has responsibility for the quack hospital. She previously refused to do anything about the craniosacral nonsense that is promoted there. This time the ASA seems to have stung them into action at long last. I was told

In response to your question about proposed action:

All written information for patients relating to the services offered by the Royal London Hospital for Integrated Medicine are being withdrawn for review in the light of the ASA’s rulings (and the patient leaflets have already been withdrawn). It will be reviewed and modified where necessary item by item, and only reintroduced after sign-off through the Queen Square divisional clinical governance processes and the Trust’s patient information leaflet team.

With best wishes

Gill Gaskin

Dr Gill Gaskin
Medical Director
Specialist Hospitals Board
UCLH NHS Foundation Trust

It remains to be seen whether the re-written information is accurate or not.

The rules for advertising

The Advertising Standards Authority gives advice for advertisers about what’s permitted and what isn’t.

Acupuncture The CAP advice

Craniosacral therapy The CAP advice

Homeopathy The CAP advice and 2013 update

Chiropractic The CAP advice.

Jump to follow-up

Anesthesia & Analgesia is the official journal of the International Anesthesia Research Society. In 2012 its editor, Steven Shafer, proposed a head-to-head contest between those who believe that acupuncture works and those who don’t. I was asked to write the latter. It has now appeared in June 2013 edition of the journal [download pdf]. The pro-acupuncture article written by Wang, Harris, Lin and Gan appeared in the same issue [download pdf].

Acupuncture is an interesting case, because it seems to have achieved greater credibility than other forms of alternative medicine, despite its basis being just as bizarre as all the others. As a consequence, a lot more research has been done on acupuncture than on any other form of alternative medicine, and some of it has been of quite high quality. The outcome of all this research is that acupuncture has no effects that are big enough to be of noticeable benefit to patients, and it is, in all probability, just a theatrical placebo.

After more than 3000 trials, there is no need for yet more. Acupuncture is dead.

aa1

aa2

Acupuncture is a theatrical placebo

David Colquhoun (UCL) and Steven Novella (Yale)

Anesthesia & Analgesia, June 2013 116:1360-1363.

Pain is a big problem. If you read about pain management centres you might think it had been solved. It hasn’t. And when no effective treatment exists for a medical problem, it leads to a tendency to clutch at straws.  Research has shown that acupuncture is little more than such a straw.

Although it is commonly claimed that acupuncture has been around for thousands of years, it hasn’t always been popular even in China.  For almost 1000 years it was in decline and in 1822 Emperor Dao Guang issued an imperial edict stating that acupuncture and moxibustion should be banned forever from the Imperial Medical Academy.

Acupuncture continued as a minor fringe activity in the 1950s. After the Chinese Civil War, the Chinese Communist Party ridiculed traditional Chinese medicine, including acupuncture, as superstitious. Chairman Mao Zedong later revived traditional Chinese Medicine as part of the Great Proletarian Cultural Revolution of 1966 (Atwood, 2009). The revival was a convenient response to the dearth of medically-trained people in post-war China, and a useful way to increase Chinese nationalism. It is said that Chairman Mao himself preferred Western medicine. His personal physician quotes him as saying “Even though I believe we should promote Chinese medicine, I personally do not believe in it. I don’t take Chinese medicine” (Li, 1996).

The political, or perhaps commercial, bias seems to still exist. It has been reported by Vickers et al. (1998) (authors who are sympathetic to alternative medicine) that

"all trials [of acupuncture] originating in China, Japan, Hong Kong, and Taiwan were positive"(4).

Acupuncture was essentially defunct in the West until President Nixon visited China in 1972. Its revival in the West was largely a result of a single anecdote promulgated by journalist James Reston in the New York Times, after he’d had acupuncture in Beijing for post-operative pain in 1971. Despite his eminence as a political journalist, Reston had no scientific background and evidently didn’t appreciate the post hoc ergo propter hoc fallacy, or the idea of regression to the mean.

After Reston’s article, acupuncture quickly became popular in the West. Stories circulated that patients in China had open heart surgery using only acupuncture (Atwood, 2009). The Medical Research Council (UK) sent a delegation, which included Alan Hodgkin, to China in 1972 to investigate these claims, about which they were skeptical. In 2006 the claims were repeated in a BBC TV programme, but Simon Singh (author of Fermat’s Last Theorem) discovered that the patient had been given a combination of three very powerful sedatives (midazolam, droperidol, fentanyl) and large volumes of local anaesthetic injected into the chest. The acupuncture needles were purely cosmetic.

Curiously, given that its alleged principles are as bizarre as those of any other sort of pre-scientific medicine, acupuncture seemed to gain somewhat more plausibility than other forms of alternative medicine. The good thing about that is that more research has been done on acupuncture than on just about any other fringe practice.

The outcome of this research, we propose, is that the benefits of acupuncture, if any, are too small and too transient to be of any clinical significance.  It seems that acupuncture is little or no more than a theatrical placebo.  The evidence for this conclusion will now be discussed.

Three things that are not relevant to the argument

There is no point in discussing surrogate outcomes such as fMRI studies or endorphin release studies until such time as it has been shown that patients get a useful degree of relief. It is now clear that they don’t.

There is also little point in invoking individual studies.  Inconsistency is a prominent characteristic of acupuncture research: the heterogeneity of results poses a problem for meta-analysis.  Consequently it is very easy to pick trials that show any outcome whatsoever.  Therefore we shall consider only meta-analyses.

The argument that acupuncture is somehow more holistic, or more patient-centred, than medicine seems to us to be a red herring. All good doctors are empathetic and patient-centred. The idea that empathy is restricted to those who practice unscientific medicine is condescending to doctors, and it verges on an admission that empathy is all that alternative treatments have to offer.

There is now unanimity that the benefits, if any, of acupuncture for analgesia, are too small to be helpful to patients.

Large multicenter clinical trials conducted in Germany (Linde et al., 2005; Melchart et al., 2005; Haake et al., 2007; Witt et al., 2005), and in the United States (Cherkin et al., 2009), consistently revealed that verum (or true) acupuncture and sham acupuncture treatments are no different in decreasing pain levels across multiple chronic pain disorders: migraine, tension headache, low back pain, and osteoarthritis of the knee.

If, indeed, sham acupuncture is no different from real acupuncture the apparent improvement that may be seen after acupuncture is merely a placebo effect.  Furthermore it shows meridians don’t exist, so the "theory" memorized by qualified acupuncturists is just myth. All that remains to be discussed is whether or not the placebo effect is big enough to be useful, and whether it is ethical to prescribe placebos.

Some recent meta-analyses have found that there may be a small difference between sham and real acupuncture. Madsen, Gøtzsche & Hróbjartsson (2009) looked at thirteen trials with 3025 patients, in which acupuncture was used to treat a variety of painful conditions. There was a small difference between ‘real’ and sham acupuncture (it didn’t matter which sort of sham was used), and a somewhat bigger difference between the acupuncture group and the no-acupuncture group. The crucial result was that even this bigger difference corresponded to only a 10 point improvement on a 100 point pain scale. A consensus report (Dworkin, 2009) concluded that a change of this sort should be described as a “minimal” change or “little change”. It isn’t big enough for the patient to notice much effect.

The comparison between the acupuncture and no-acupuncture groups was, of course, not blind to the patients, nor to the practitioner giving the treatment. It isn’t possible to say whether the observed difference is a real physiological action or whether it’s a placebo effect of a rather dramatic intervention. Interesting though it would be to know this, it matters not a jot, because the effect just isn’t big enough to produce any tangible benefit.

Publication bias is likely to be an even greater problem for alternative medicine than it is for real medicine, so it is particularly interesting that the result just described has been confirmed by authors who practise, or sympathise with, acupuncture. Vickers et al. (2012) did a meta-analysis of 29 RCTs, with 17,922 patients. The patients were being treated for a variety of chronic pain conditions. The results were very similar to those of Madsen et al. (2009). Real acupuncture was better than sham, but by a tiny amount that lacked any clinical significance. Again there was a somewhat larger difference in the non-blind comparison of acupuncture and no-acupuncture, but again it was so small that patients would barely notice it.
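
It is worth spelling out why “statistically significant” is not the same as “clinically significant”. Here is a minimal sketch in Python, with invented numbers that are not taken from either meta-analysis: with samples as large as those in Vickers et al., even a two-point difference on a 100-point pain scale, far below the roughly ten points needed before the change even counts as “minimal”, produces an impressively small P value.

```python
# Illustrative sketch only: the numbers below are invented, not taken from
# Madsen et al. or Vickers et al. It shows how a clinically trivial difference
# becomes "statistically significant" once the samples are big enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 9000                 # patients per arm, roughly the scale of Vickers et al.
sd = 20.0                # assumed spread of scores on a 0-100 pain scale
true_difference = 2.0    # a 2-point difference that no patient would notice

real = rng.normal(50 - true_difference, sd, n)   # "real" acupuncture arm
sham = rng.normal(50, sd, n)                     # sham arm

t, p = stats.ttest_ind(real, sham)
print(f"observed difference = {sham.mean() - real.mean():.2f} points, P = {p:.2g}")
# P comes out far below 0.05, yet the effect is much too small to matter.
```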

Comparison of these two meta-analyses shows how important it is to read the results, not just the summaries.  Although the outcomes were similar for both, the spin on the results in the abstracts (and consequently the tone of media reports) was very different. 

An even more extreme example of spin occurred in the CACTUS trial of acupuncture for ‘frequent attenders’ with medically unexplained symptoms (Paterson et al., 2011). In this case, the results showed very little difference even between acupuncture and no-acupuncture groups, despite the lack of blinding and lack of proper controls. But by ignoring the problems of multiple comparisons the authors were able to pick out a few results that were statistically significant, though trivial in size. Despite this unusually negative outcome, the result was trumpeted as a success for acupuncture. Not only the authors, but also their university’s PR department and even the journal editor issued highly misleading statements. This gave rise to a flood of letters to the British Journal of General Practice and much criticism on the internet.

From the intellectual point of view it would be interesting to know if the small difference between real and sham acupuncture found in some, but not all, recent studies is a genuine effect of acupuncture or whether it is a result of the fact that the practitioners are never blinded, or of publication bias.  But that knowledge is irrelevant for patients. All that matters for them is whether or not they get a useful degree of relief.

There is now unanimity between acupuncturists and non-acupuncturists that any benefits that may exist are too small to provide any noticeable benefit to patients.  That being the case it’s hard to see why acupuncture is still used.  Certainly such an accumulation of negative results would result in the withdrawal of any conventional treatment.

Specific conditions

Acupuncture should, ideally, be tested separately for effectiveness for each individual condition for which it has been proposed (like so many other forms of alternative medicine, that’s a very large number).  Good quality trials haven’t been done for all of them.  It’s unlikely that acupuncture works for rheumatoid arthritis, stopping smoking, irritable bowel syndrome or for losing weight.  And there is no good reason to think it works for addictions, asthma, chronic pain, depression, insomnia, neck pain, shoulder pain or frozen shoulder, osteoarthritis of the knee, sciatica, stroke or tinnitus and many other conditions (Ernst et al., 2011).

In 2009, the UK’s National Institute for Clinical Excellence (NICE) did recommend both acupuncture and chiropractic for back pain. This exercise in clutching at straws caused something of a furore.  In the light of NICE’s judgement the Oxford Centre for Evidence-Based Medicine updated its analysis of acupuncture for back pain.  Their verdict was

“Clinical bottom line. Acupuncture is no better than a toothpick for treating back pain.”

The paper by Artus et al. (2010) is of particular interest for the problem of back pain. Their Fig 2 shows that there is a modest improvement in pain scores after treatment, but much the same effect, with the same time course, is found regardless of what treatment is given, and even with no treatment at all. They say

“we found evidence that these responses seem to follow a common trend of early rapid improvement in symptoms that slows down and reaches a plateau 6 months after the start of treatment, although the size of response varied widely. We found a similar pattern of improvement in symptoms following any treatment, regardless of whether it was index, active comparator, usual care or placebo treatment”.

It seems that most of what’s being seen is regression to the mean. And that is very likely to be the main reason why acupuncture sometimes appears to work when it doesn’t.

Although the article by Wang et al. (2013) was written to defend the continued use of acupuncture, the only condition for which they claim that there is any reasonably strong evidence is post-operative nausea and vomiting (PONV). It would certainly be odd if a treatment that had been advocated for such a wide variety of conditions turned out to work only for PONV. Nevertheless, let’s look at the evidence.

The main papers that are cited to support the efficacy of acupuncture in alleviation of PONV all have the same first author: Lee & Done (1999), and two Cochrane reviews, Lee & Done (2004), updated in Lee & Fan (2009). We need only deal with the latest updated meta-analysis.

Although the authors conclude “P6 acupoint stimulation prevented PONV”, closer examination shows that this conclusion is very far from certain.  Even taken at face value, a relative risk of 0.7 can’t be described as “prevention”.  The trials that were included were not all tests of acupuncture but included several other more or less bizarre treatments (“acupuncture, electro-acupuncture, transcutaneous nerve stimulation, laser stimulation, capsicum plaster, an acu-stimulation device, and acupressure”).  The number needed to treat varied from a disastrous 34 to a poor 5 for patients with control rates of PONV of 10% and 70% respectively.
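
As a rough check on those figures, here is a minimal sketch using the standard textbook definitions rather than anything taken from the review itself: the absolute risk reduction is the control event rate multiplied by (1 − relative risk), and the number needed to treat is its reciprocal.

```python
# A worked check of the NNT figures quoted above, assuming the standard
# definitions: ARR = control rate x (1 - relative risk), NNT = 1 / ARR.
def nnt(control_rate, relative_risk=0.7):
    absolute_risk_reduction = control_rate * (1 - relative_risk)
    return 1 / absolute_risk_reduction

for control_rate in (0.10, 0.70):
    print(f"control PONV rate {control_rate:.0%}: NNT = {nnt(control_rate):.0f}")

# control PONV rate 10%: NNT = 33   (roughly the "disastrous 34")
# control PONV rate 70%: NNT = 5    (the "poor 5")
```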

The meta-analysis showed, on average, similar effectiveness for acupuncture and anti-emetic drugs. The problem is that the effectiveness of drugs is in doubt because an update to the Cochrane review has been delayed (Carlisle, 2012) by the discovery of major fraud by a Japanese anesthetist, Yoshitaka Fujii (Sumikawa, 2012). It has been suggested that metoclopramide barely works at all (Bandolier, 2012; Henzi, 1999).

Of the 40 trials (4858 participants) that were included, only four reported adequate allocation concealment. Ninety percent of trials were open to bias from this source. Twelve trials did not report all outcomes. The opportunities for bias are obvious. The authors themselves describe all estimates as being of “moderate quality”, which is defined thus: “Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate”. That being the case, perhaps the conclusion should have been “more research needed”. In fact almost all trials of alternative medicines seem to end up with the conclusion that more research is needed.

Conclusions

It is clear from meta-analyses that results of acupuncture trials are variable and inconsistent, even for single conditions.  After thousands of trials of acupuncture, and hundreds of systematic reviews (Ernst et al., 2011), arguments continue unabated.  In 2011, Pain carried an editorial which summed up the present situation well.

“Is there really any need for more studies? Ernst et al. (2011) point out that the positive studies conclude that acupuncture relieves pain in some conditions but not in other very similar conditions. What would you think if a new pain pill was shown to relieve musculoskeletal pain in the arms but not in the legs? The most parsimonious explanation is that the positive studies are false positives. In his seminal article on why most published research findings are false, Ioannidis (2005) points out that when a popular but ineffective treatment is studied, false positive results are common for multiple reasons, including bias and low prior probability.”

Since it has proved impossible to find consistent evidence after more than 3000 trials, it is time to give up.  It seems very unlikely that the money that it would cost to do another 3000 trials would be well-spent. 

A small excess of positive results after thousands of trials is most consistent with an inactive intervention.  The small excess is predicted by poor study design and publication bias. Further, Simmons et al (2011) demonstrated that exploitation of "undisclosed flexibility in data collection and analysis" can produce statistically positive results even from a completely nonexistent effect.  With acupuncture in particular there is documented profound bias among proponents (Vickers et al., 1998).  Existing studies are also contaminated by variables other than acupuncture – such as the frequent inclusion of "electroacupuncture" which is essentially transdermal electrical nerve stimulation masquerading as acupuncture.
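
To see how much mischief “undisclosed flexibility” can do, here is a minimal simulation sketch (invented numbers, not from Simmons et al.): if a trial measures five independent outcomes when the true effect is exactly zero, and reports only whichever outcome gives the smallest P value, far more than 5% of such null trials come out “significant”.

```python
# Minimal simulation of one kind of undisclosed flexibility: test several
# outcomes on data with no effect at all, and report only the "best" one.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, n_outcomes = 1000, 30, 5
false_positives = 0

for _ in range(n_trials):
    best_p = 1.0
    for _ in range(n_outcomes):
        # both arms drawn from the same distribution: the true effect is zero
        treated = rng.normal(0, 1, n_per_arm)
        control = rng.normal(0, 1, n_per_arm)
        best_p = min(best_p, stats.ttest_ind(treated, control).pvalue)
    if best_p < 0.05:          # report only the most "significant" outcome
        false_positives += 1

print(f"null trials declared 'significant': {false_positives / n_trials:.0%}")
# Typically around 20-25%, not the nominal 5%.
```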

The best controlled studies show a clear pattern – with acupuncture the outcome does not depend on needle location or even needle insertion. Since these variables are what define "acupuncture" the only sensible conclusion is that acupuncture does not work. Everything else is the expected noise of clinical trials, and this noise seems particularly high with acupuncture research. The most parsimonious conclusion is that with acupuncture there is no signal, only noise.

The interests of medicine would be best-served if we emulated the Chinese Emperor Dao Guang and issued an edict stating that acupuncture and moxibustion should no longer be used in clinical practice. 

No doubt acupuncture will continue to exist on the High Street, where it can be tolerated as a voluntary self-imposed tax on the gullible (as long as its practitioners don’t make unjustified claims).

REFERENCES

1. Acupuncture Centre. About Acupuncture. Available at: http://www.acupuncturecentre.org/aboutacupuncture.html. Accessed March 30, 2013

2. Atwood K. “Acupuncture Anesthesia”: a Proclamation from Chairman Mao (Part IV). Available at: http://www.sciencebasedmedicine.org/index.php/acupuncture-anesthesia-a-proclamation-from-chairman-mao-part-iv/. Accessed September 2, 2012

3. Li Z. Private Life of Chairman Mao: The Memoirs of Mao’s Personal Physician. 1996 New York: Random House

4. Vickers A, Goyal N, Harland R, Rees R. Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials. 1998;19:159–66. Available at: http://bit.ly/WqVGWN. Accessed September 2, 2012

5. Reston J. Now, About My Operation in Peking; Now, Let Me Tell You About My Appendectomy in Peking … The New York Times. 1971. Available at: http://select.nytimes.com/gst/abstract.html?res=FB0D11FA395C1A7493C4AB178CD85F458785F9. Accessed March 30, 2013

6. Atwood K. “Acupuncture anesthesia”: a proclamation from chairman Mao (part I). Available at: http://www.sciencebasedmedicine.org/index.php/acupuncture-anesthesia-a-proclamation-of-chairman-mao-part-i/. Accessed September 2, 2012

7. Linde K, Streng A, Jürgens S, Hoppe A, Brinkhaus B, Witt C, Wagenpfeil S, Pfaffenrath V, Hammes MG, Weidenhammer W, Willich SN, Melchart D. Acupuncture for patients with migraine: a randomized controlled trial. JAMA. 2005;293:2118–25

8. Melchart D, Streng A, Hoppe A, Brinkhaus B, Witt C, Wagenpfeil S, Pfaffenrath V, Hammes M, Hummelsberger J, Irnich D, Weidenhammer W, Willich SN, Linde K. Acupuncture in patients with tension-type headache: randomised controlled trial. BMJ. 2005;331:376–82

9. Haake M, Müller HH, Schade-Brittinger C, Basler HD, Schäfer H, Maier C, Endres HG, Trampisch HJ, Molsberger A. German Acupuncture Trials (GERAC) for chronic low back pain: randomized, multicenter, blinded, parallel-group trial with 3 groups. Arch Intern Med. 2007;167:1892–8

10. Witt C, Brinkhaus B, Jena S, Linde K, Streng A, Wagenpfeil S, Hummelsberger J, Walther HU, Melchart D, Willich SN. Acupuncture in patients with osteoarthritis of the knee: a randomised trial. Lancet. 2005;366:136–43

11. Cherkin DC, Sherman KJ, Avins AL, Erro JH, Ichikawa L, Barlow WE, Delaney K, Hawkes R, Hamilton L, Pressman A, Khalsa PS, Deyo RA. A randomized trial comparing acupuncture, simulated acupuncture, and usual care for chronic low back pain. Arch Intern Med. 2009;169:858–66

12. Madsen MV, Gøtzsche PC, Hróbjartsson A. Acupuncture treatment for pain: systematic review of randomised clinical trials with acupuncture, placebo acupuncture, and no acupuncture groups. BMJ. 2009;338:a3115

13. Dworkin RH, Turk DC, McDermott MP, Peirce-Sandner S, Burke LB, Cowan P, Farrar JT, Hertz S, Raja SN, Rappaport BA, Rauschkolb C, Sampaio C. Interpreting the clinical importance of group differences in chronic pain clinical trials: IMMPACT recommendations. Pain. 2009;146:238–44

14. Vickers AJ, Cronin AM, Maschino AC, Lewith G, MacPherson H, Foster NE, Sherman KJ, Witt CM, Linde K. Acupuncture for chronic pain: individual patient data meta-analysis. Arch Intern Med. 2012;172:1444–53

15. Paterson C, Taylor RS, Griffiths P, Britten N, Rugg S, Bridges J, McCallum B, Kite G. Acupuncture for ‘frequent attenders’ with medically unexplained symptoms: a randomised controlled trial (CACTUS study). Br J Gen Pract. 2011;61:e295–e305

16. Letters in response to Acupuncture for ‘frequent attenders’ with medically unexplained symptoms. Br J Gen Pract. 2011;61. Available at: http://www.ingentaconnect.com/content/rcgp/bjgp/2011/00000061/00000589. Accessed March 30, 2013

17. Colquhoun D. Acupuncturists show that acupuncture doesn’t work, but conclude the opposite: journal fails. 2011. Available at: https://www.dcscience.net/?p=4439. Accessed September 2, 2012

18. Ernst E, Lee MS, Choi TY. Acupuncture: does it alleviate pain and are there serious risks? A review of reviews. Pain. 2011;152:755–64

19. Colquhoun D. NICE falls for Bait and Switch by acupuncturists and chiropractors: it has let down the public and itself. 2009. Available at: https://www.dcscience.net/?p=1516. Accessed September 2, 2012

20. Colquhoun D. The NICE fiasco, part 3. Too many vested interests, not enough honesty. 2009. Available at: https://www.dcscience.net/?p=1593. Accessed September 2, 2012

21. Bandolier. Acupuncture for back pain—2009 update. Available at: http://www.medicine.ox.ac.uk/bandolier/booth/painpag/Chronrev/Other/acuback.html. Accessed March 30, 2013

22. Artus M, van der Windt DA, Jordan KP, Hay EM. Low back pain symptoms show a similar pattern of improvement following a wide range of primary care treatments: a systematic review of randomized clinical trials. Rheumatology (Oxford). 2010;49:2346–56

23. Wang S-M, Harris RE, Lin Y-C, Gan TJ. Acupuncture in 21st century anesthesia: is there a needle in the haystack? Anesth Analg. 2013;116:1356–9

24. Lee A, Done ML. The use of nonpharmacologic techniques to prevent postoperative nausea and vomiting: a meta-analysis. Anesth Analg. 1999;88:1362–9

25. Lee A, Done ML. Stimulation of the wrist acupuncture point P6 for preventing postoperative nausea and vomiting. Cochrane Database Syst Rev. 2004:CD003281

26. Lee A, Fan LT. Stimulation of the wrist acupuncture point P6 for preventing postoperative nausea and vomiting. Cochrane Database Syst Rev. 2009:CD003281

27. Carlisle JB. A meta-analysis of prevention of postoperative nausea and vomiting: randomised controlled trials by Fujii et al. compared with other authors. Anaesthesia. 2012;67:1076–90

28. Sumikawa K. The results of investigation into Dr. Yoshitaka Fujii’s papers. Report of the Japanese Society of Anesthesiologists Special Investigation Committee. Available at: http://www.anesth.or.jp/english/pdf/news20120629.pdf

29. Bandolier. Metoclopramide is ineffective in preventing postoperative nausea and vomiting. Available at: http://www.medicine.ox.ac.uk/bandolier/band71/b71-8.html. Accessed March 30, 2013

30. Henzi I, Walder B, Tramèr MR. Metoclopramide in the prevention of postoperative nausea and vomiting: a quantitative systematic review of randomized, placebo-controlled studies. Br J Anaesth. 1999;83:761–71

31. Hall H. Acupuncture’s claims punctured: not proven effective for pain, not harmless. Pain. 2011;152:711–2

32. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124

33. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22:1359–66

Follow-up

30 May 2013 Anesthesia & Analgesia has put the whole paper on line. No paywall now!

9 June 2013. Since this page was posted on May 30, it has had over 20,000 page views. Not bad.

26 July 2013. The Observer had a large double-page spread about acupuncture. It was written by David Derbyshire, largely on the basis of this article.

26 December 2013

Over Christmas the flow of stuff that misrepresents the "thousands of years" of Chinese medicine has continued unabated. Of course one expects people who are selling Chinese herbs and acupuncture to lie. All businesses do. One does not expect such misrepresentation from British Columbia, Cardiff University School of Medicine, or from Yale University. I left a comment on the Yale piece. Whether it passes moderation remains to be seen. Just in case, here it is.

One statement is undoubtedly baseless: “If it’s still in use after a thousand years there must be something right.” It’s pretty obvious to the most casual observer that many beliefs that have been around for a thousand years have proved to be utterly wrong.

In any case, it’s simply not true that most “Traditional” Chinese medicine has been around for thousands of years. Acupuncture was actually banned by the Emperor Dao Guang in 1822. The sort of Chinese medicine that is sold (very profitably) to the west was essentially dead in China until it was revived by Mao as part of the great proletarian cultural revolution (largely to stir up Chinese nationalism at that time). Of course he didn’t use it himself.

This history has been documented in detail now, and it surprises me to see it misrepresented, yet again, from a Yale academic.

Of course there might turn out to be therapeutically useful chemicals in Chinese herbs (it has happened with artemisinin). But it is totally irresponsible to pretend that great things are coming in the absence of good RCTs in human patients.

Yale should be ashamed of PR like this. And so should Cardiff University. It not only makes the universities look silly. It corrupts the whole of the rest of these institutions. Who knows how much more of their PR is mere puffery.

18 January 2014. I checked the Yale posting and found that the comment, above, had indeed been deleted. There is little point in having comments if you are going to delete anything that’s mildly critical. It is simply dishonest.

Jump to follow-up

The Scottish Universities Medical Journal asked me to write about the regulation of alternative medicine. It’s an interesting topic and not easy to follow because of the veritable maze of more than twenty overlapping regulators and quangos which fail utterly to protect the public against health fraud. In fact they mostly promote health fraud. The paper is now published, and here is a version with embedded links (and some small updates).

We are witnessing an increasing commercialisation of medicine. It’s really taken off since the passage of the Health and Social Care Bill into law. Not only does that mean having NHS hospitals run by private companies, but it means that “any qualified provider” can bid for just about any service.  The problem lies, of course, in what you consider “qualified” to mean.  Any qualified homeopath or herbalist will, no doubt, be eligible.  University College London Hospital advertised for a spiritual healer. The "person specification" specified a "qualification", but only HR people think that a paper qualification means that spiritual healing is anything but a delusion.

uclh-spirit

The vocabulary of bait and switch

First, a bit of vocabulary.  Alternative medicine is a term that is used for medical treatments that don’t work (or at least haven’t been shown to work).  If they worked, they’d be called “medicine”.  The anti-malarial, artemisinin, came originally from a Chinese herb, but once it had been purified and properly tested, it was no longer alternative.  But the word alternative is not favoured by quacks.  They prefer their nostrums to be described as “complementary”: it sounds more respectable.  So CAM (complementary and alternative medicine) became the politically-correct euphemism.  Now it has gone a stage further, and the euphemism in vogue with quacks at the moment is “integrated” or “integrative” medicine.  That means, very often, integrating things that don’t work with things that do.  But it sounds fashionable.  In reality it is designed to confuse politicians who ask for, say, integrated services for old people.

Put another way, the salespeople of quackery have become rather good at bait and switch. The Wikipedia definition is as good as any.

Bait-and-switch is a form of fraud, most commonly used in retail sales but also applicable to other contexts. First, customers are “baited” by advertising for a product or service at a low price; second, the customers discover that the advertised good is not available and are “switched” to a costlier product.

As applied to the alternative medicine industry, the bait is usually in the form of some nice touchy-feely stuff which barely mentions the mystical nonsense. But when you’ve bought into it you get the whole panoply of nonsense. Steven Novella has written eloquently about the use of bait and switch in the USA to sell chiropractic, acupuncture, homeopathy and herbal medicine: "The bait is that CAM offers legitimate alternatives, the switch is that it primarily promotes treatments that don’t work or are at best untested and highly implausible.".

The "College of Medicine" provides a near-perfect example of bait and switch. It is the direct successor of the Prince of Wales’ Foundation for Integrated Health. The Prince’s Foundation was a consistent purveyor of dangerous medical myths. When it collapsed in 2010 because of a financial scandal, a company was formed called "The College for Integrated Health". A slide show, not meant for public consumption, said "The College represents a new strategy to take forward the vision of HRH Prince Charles". But it seems that too many people have now tumbled to the idea that "integrated", in this context, means barmpottery. Within less than a month, the new institution was renamed "The College of Medicine". That might be a deceptive name, but it’s a much better bait. That’s why I described the College as a fraud and delusion.

Not only did the directors, all of them quacks, devise a respectable sounding name, but they also succeeded in recruiting some respectable-sounding people to act as figureheads for the new organisation. The president of the College is Professor Sir Graeme Catto, emeritus professor of medicine at the University of Aberdeen. Names like his make the bait sound even more plausible. He claims not to believe that homeopathy works, but seems quite happy to have a homeopathic pharmacist, Christine Glover, on the governing council of his college. At least half of the governing Council can safely be classified as quacks.

So the bait is clear. What about the switch? The first thing to notice is that the whole outfit is skewed towards private medicine: see The College of Medicine is in the pocket of Crapita Capita. The founder, and presumably the main provider of funds (they won’t say how much), is the huge outsourcing company, Capita. This is the company known in Private Eye as Crapita. Their inefficiency is legendary. They are the folks who messed up the NHS computer system and the courts computer system. After swallowing large amounts of taxpayers’ money, they failed to deliver anything that worked. Their latest failure is the court translation service. The president (Catto), the vice president (Harry Brunjes) and the CEO (Mark Ratnarajah) are all employees of Capita.

The second thing to notice is that their conferences and courses are a bizarre mixture of real medicine and pure quackery. Their 2012 conference had some very good speakers, but then it had a "herbal workshop" with Simon Mills (see a video) and David Peters (the man who tolerates dowsing as a way to diagnose which herb to give you). The other speaker was Dick Middleton, who represents the huge herbal company, Schwabe (I debated with him on BBC Breakfast). In fact the College’s Faculty of Self-care appears to resemble a marketing device for Schwabe.

Why regulation isn’t working, and can’t work

There are various levels of regulation. The "highest" level is the statutory regulation of osteopathy and chiropractic. The General Chiropractic Council (GCC) has exactly the same legal status as the General Medical Council (GMC). This ludicrous state of affairs arose because nobody in John Major’s government had enough scientific knowledge to realise that chiropractic, and some parts of osteopathy, are pure quackery.

The problem is that organisations like the GCC function more to promote chiropractic than to regulate it. This became very obvious when the British Chiropractic Association (BCA) decided to sue Simon Singh for defamation, after he described some of their treatments as “bogus”, “without a jot of evidence”.

In order to support Singh, several bloggers assessed the "plethora of evidence" which the BCA said could be used to justify their claims. When, 15 months later, the BCA produced its "plethora" it was shown within 24 hours that the evidence was pathetic. The demolition was summarised by lawyer, David Allen Green, in The BCA’s Worst Day.

In the wake of this, over 600 complaints were made to the GCC about unjustified claims made by chiropractors, thanks in large part to heroic work by two people, Simon Perry and Alan Henness. Simon Perry’s Fishbarrel (a browser plugin) allows complaints to be made quickly and easily (try it). The majority of these complaints were rejected by the GCC, apparently on the grounds that chiropractors could not be blamed because the false claims had been endorsed by the GCC itself.

My own complaint was based on phone calls to two chiropractors, in which I was told such nonsense as "colic is down to, er um, faulty movement patterns in the spine". But my complaint never reached the Conduct and Competence committee because a preliminary investigating committee had judged that there was no case to answer. The impression one got from this (very costly) exercise was that the GCC was there to protect chiropractors, not to protect the public.

The outcome was a disaster for chiropractors, who emerged totally discredited. It was also a disaster for the GCC, which was forced to admit that it hadn’t properly advised chiropractors about what they could and couldn’t claim. The recantation culminated in the GCC declaring, in August 2010, that the mythical "subluxation" is a "historical concept" that "is not supported by any clinical research evidence that would allow claims to be made that it is the cause of disease". Subluxation was a product of the fevered imagination of the founder of the chiropractic cult, D.D. Palmer. It referred to an imaginary spinal lesion that he claimed to be the cause of most diseases. Since ‘subluxation’ is the only thing that distinguishes chiropractic from any other sort of manipulation, the admission by the GCC that it does not exist, after a century of pretending that it does, is quite an admission.

The President of the BCA himself admitted in November 2011

“The BCA sued Simon Singh personally for libel. In doing so, the BCA began one of the darkest periods in its history; one that was ultimately to cost it financially,”

As a result of all this, the deficiencies of chiropractic, and the deficiencies of its regulator, were revealed, and advertisements for chiropractic are somewhat less misleading. But this change for the better was brought about entirely by the unpaid efforts of bloggers and a few journalists, and not at all by the official regulator, the GCC, which was part of the problem, not the solution. And it was certainly not helped by the organisation that is meant to regulate the GCC, the Council for Healthcare Regulatory Excellence (CHRE), which did nothing whatsoever to stop the farce.

At the other end of the regulatory spectrum, voluntary self-regulation is an even worse farce than the GCC. The self-regulators all have grand-sounding "Codes of Practice" which, in practice, they ignore totally.

The Society of Homeopaths is just a joke. When homeopaths were caught out recommending sugar pills for prevention of malaria, they did nothing (arguably such homicidal advice deserves a jail sentence).

The Complementary and Natural Healthcare Council (CNHC) is widely known in the blogosphere as Ofquack. I know about them from the inside, having been a member of their Conduct and Competence Committee. It was set up with the help of a £900,000 grant from the Department of Health to the Prince of Wales, to oversee voluntary self-regulation. It fails utterly to do anything useful. The CNHC code of practice, paragraph 15, states

“Any advertising you undertake in relation to your professional activities must be accurate. Advertisements must not be misleading, false, unfair or exaggerated”. 

When Simon Perry made a complaint to the CNHC about claims being made by a CNHC-registered reflexologist, the Investigating Committee upheld all 15 complaints.  But it then went on to say that there was no case to answer because the unjustified claims were what the person had been taught, and were made in good faith. 
This is precisely the ludicrous situation which will occur again and again if reflexologists (and practitioners of many other alternative therapies) are “accredited”. The CNHC said, correctly, that the reflexologist had been taught things that were not true, but then did nothing whatsoever about it apart from toning down the advertisements a bit. They still register reflexologists who make outrageously false claims.

Once again we see that no sensible regulation is possible for subjects that are pure make-believe.

The first two examples deal (or rather, fail to deal) with regulation of outright quackery. But there are dozens of other quangos that sound a lot more respectable.

European Food Safety Authority (EFSA). One of the common scams is to have your favourite quack treatment classified as a food rather than as a medicine. The laws about what you can claim have been a lot laxer for foods. But the EFSA has done a pretty good job in stopping unjustified claims for health benefits from foods. Dozens of claims made by makers of probiotics have been banned. The food industry, needless to say, objects very strongly to being forced to tell the truth. In my view, the EFSA has not gone far enough. They recently issued a directive about claims that could legally be made. Some of these betray the previously high standards of the EFSA. For example you are allowed to say that "Vitamin C contributes to the reduction of tiredness and fatigue" (as long as the product contains above a specified amount of vitamin C). I’m not aware of any trials that show vitamin C has the slightest effect on tiredness or fatigue. Although these laws do not come into effect until December 2012, they have already been invoked by the ASA as a reason not to uphold a complaint about a multivitamin pill which claimed that it “Includes 8 nutrients that can contribute to the reduction in tiredness and fatigue”.

The Advertising Standards Authority (ASA). This is almost the only organisation that has done a good job on false health claims. Their Guidance on Health Therapies & Evidence says

"Whether you use the words ‘treatment’, ‘treat’ or ‘cure’, all are likely to be seen by members of the public as claims to alleviate effectively a condition or symptom. We would advise that they are not used"

"Before and after’ studies with little or no control, studies without human subjects, self-assessment studies and anecdotal evidence are unlikely to be considered acceptable"

"Before and after’ studies with little or no control, studies without human subjects, self-assessment studies and anecdotal evidence are unlikely to be considered acceptable"

They are spot on.

The ASA’s Guidance for Advertisers of Homeopathic Services is wonderful.

"In the simplest terms, you should avoid using efficacy claims, whether implied or direct,"

"To date, the ASA has have not seen persuasive evidence to support claims that homeopathy can treat, cure or relieve specific conditions or symptoms."

That seems to condemn the (mis)labelling allowed by the MHRA as breaking the rules. Sadly, though, the ASA has no powers to enforce its decisions and only too often they are ignored. The Nightingale Collaboration has produced an excellent letter that you can hand to any pharmacist who breaks the rules.

The ASA has also judged against claims made by "Craniosacral therapists" (that’s the lunatic fringe of osteopathy). They will presumably uphold complaints about similar claims made (I’m ashamed to say) by UCLH Hospitals.

The private examination company Edexcel sets exams in antiscientific subjects, so miseducating children. The teaching of quackery to 16 year-olds has been approved by a maze of quangos, none of which will take responsibility, or justify their actions. So far I’ve located no fewer than eight of them: the Office of the Qualifications and Examinations Regulator (OfQual), Edexcel, the Qualifications and Curriculum Authority (QCA), Skills for Health, Skills for Care, National Occupational Standards (NOS), the private exam company VTCT and the schools inspectorate, Ofsted. Asking any of these people why they approve of examinations in imaginary subjects meets with blank incomprehension. They fail totally to protect the public from utter nonsense.

The Department for Education has failed to do anything about the miseducation of children in quackery. In fact it has encouraged it by, for the first time, giving taxpayers’ money to a Steiner (Waldorf) school (at Frome, in Somerset). Steiner schools are run by a secretive and cult-like body of people (read about it). They teach about reincarnation, karma, gnomes, and all manner of nonsense, sometimes with unpleasant racial overtones. The teachers are trained in Steiner’s Anthroposophy, so if your child gets ill at school they’ll probably get homeopathic sugar pills. They might well get measles or mumps too, since Steiner people don’t believe in vaccination.

Incredibly, the University of Aberdeen came perilously close to appointing a chair in anthroposophical medicine. This disaster was aborted by bloggers, and a last-minute intervention from journalists. Neither the university’s regulatory mechanisms, nor any others, seemed to realise that a chair in mystical barmpottery was a bad idea.

Trading Standards offices and the Office of Fair Trading.

It is the statutory duty of Trading Standards to enforce the Consumer Protection Regulations (2008). This European legislation is pretty good: it caused a lawyer to write "Has The UK Quietly Outlawed “Alternative” Medicine?". Unfortunately Trading Standards people have consistently refused to enforce these laws. The whole organisation is a mess. Its local office arrangement fails totally to deal with the age of the internet. The situation is so bad that a group of us decided to put them to the test. The results were published in the Medico-Legal Journal: Rose et al., 2012, "Spurious Claims for Health-care Products: An Experimental Approach to Evaluating Current UK Legislation and its Implementation". They concluded "EU directive 2005/29/EC is largely ineffective in preventing misleading health claims for consumer products in the UK".

Skills for Health is an enormous quango which produces HR-style "competences" for everything under the sun. They are mostly quite useless. But those concerned with alternative medicine are not just useless. They are positively harmful. Totally barmy. There are competences and National Occupational Standards for every lunatic made-up therapy under the sun. When I phoned them to discover who’d written them, I learned that they had been drafted by the Prince of Wales’ Foundation for Magic Medicine. And when I joked by asking if they had a competence for talking to trees, I was told, perfectly seriously, “You’d have to talk to LANTRA, the land-based organisation, for that.”

That was in January 2008. A lot of correspondence with the head of Skills for Health got nowhere at all. She understood nothing and it hasn’t improved a jot.

This organisation costs a lot of taxpayers’ money and it should have been consigned to the "bonfire of the quangos" (but of course there was no such bonfire in reality). It is a disgrace.

The Quality Assurance Agency (QAA) is supposed to ensure the quality of university courses. In fact it endorses courses in nonsense alternative medicine and so does more harm than good. The worst recent failure of the QAA was in the case of the University of Wales: see Scandal of the University of Wales and the Quality Assurance Agency. The university was making money by validating thousands of external degrees in everything from fundamentalist theology to Chinese Medicine. These validations were revealed as utterly incompetent by bloggers, and later by BBC Wales journalist Ciaran Jenkins (now working for Channel 4).

The mainstream media eventually caught up with bloggers. In 2010, BBC1 TV (Wales) produced an excellent TV programme that exposed the enormous degree validation scam run by the University of Wales. The programme can be seen on YouTube (Part 1, and Part 2). The programme also exposed, incidentally, the uselessness of the Quality Assurance Agency (QAA) which did nothing until the scam was exposed by TV and blogs. Eventually the QAA sent nine people to Malaysia to investigate a dodgy college that had been revealed by the BBC. The trip cost £91,000. It could have been done for nothing if anyone at the QAA knew how to use Google.

The outcome was that the University of Wales stopped endorsing external courses, and it was soon shut down altogether (though, bafflingly, its vice-chancellor, Marc Clement, was promoted). The credit for this lies entirely with bloggers and the BBC. The QAA did nothing to help until the very last moment.

Throughout this saga Universities UK (UUK) has maintained its usual total passivity. They have done nothing whatsoever about their members who give BSc degrees in anti-scientific subjects. (UUK used to be known as the Committee of Vice-Chancellors and Principals.)

Council for Healthcare Regulatory Excellence (CHRE), soon to become the PSAHSC.

Back now to the CHRE, the people who failed so signally to sort out the GCC. They are being reorganised. Their consultation document says

"The Health and Social Care Act 20122 confers a new function on the Professional Standards Authority for Health and Social Care (the renamed Council for Healthcare Regulatory Excellence). From November 2012 we will set standards for organisations that hold voluntary registers for people working in health and social care occupations and we will accredit the register if they meet those standards. It will then be known as an ‘Accredited Register’. "

They are trying to decide what the criteria should be for "accreditation" of a regulatory body. The list of those interested has some perfectly respectable organisations, like the British Psychological Society. It also contains a large number of crackpot organisations, like Crystal and Healing International, as well as joke regulators like the CNHC.

They already oversee the Health Professions Council (HPC) which is due to take over Herbal medicine and Traditional Chinese Medicine, with predictably disastrous consequences.

Two of the proposed criteria for "accreditation" appear to be directly contradictory.

Para 2.5 makes the whole accreditation pointless from the point of view of patients

2.5 It will not be an endorsement of the therapeutic validity or effectiveness of any particular discipline or treatment.

Since the only thing that matters to the patient is whether the therapy works (and is safe), accreditation of organisations that ignore this will merely give the appearance of official approval of crystal healing, etc. This appears to contradict directly

A.7 The organisation can demonstrate that there either is a sound knowledge base underpinning the profession or it is developing one and makes that explicit to the public.

A "sound knowledge base", if it is to mean anything useful at all, means knowledge that the treatment is effective. If it doesn’t mean that, what does it mean?

It seems that the official mind has still not grasped the obvious fact that there can be no sensible regulation of subjects that are untrue nonsense. If it is nonsense, the only form of regulation that makes any sense is the law.

Please fill in the consultation. My completed return can be downloaded as an example, if you wish.

Medicines and Healthcare products Regulatory Agency (MHRA) should be a top level defender of truth. Its strapline is

"We enhance and safeguard the health of the public by ensuring that medicines and medical devices work and are acceptably safe."

The MHRA did something (they won’t tell me exactly what) about one of the most cruel scams that I’ve ever encountered, Esperanza Homeopathic Neuropeptide, peddled for multiple sclerosis at an outrageous price (£6,759 for 12 months’ supply). Needless to say there was not a jot of evidence that it worked (and it wasn’t actually homeopathic).

Astoundingly, Trading Standards officers refused to do anything about it.

The MHRA admit (when pushed really hard) that there is precious little evidence that any of the herbs work, and that homeopathy is nothing more than sugar pills. Their answer to that is to forget the bit about "ensuring that medicines … work".

Here’s the MHRA’s Traditional Herbal Registration Certificate for devil’s claw tablets.

vitabiotics

The wording "based on traditional use only" has to be included because of European legislation. Shockingly, the MHRA have allowed them to relegate that to small print, with all the emphasis on the alleged indications. The pro-CAM agency NCCAM rates devil’s claw as "possibly effective" or "insufficient evidence" for all these indications, but that doesn’t matter because the MHRA requires no evidence whatsoever that the tablets do anything. They should, of course, added a statement to this effect to the label. They have failed in their duty to protect and inform the public by allowing this labelling.

But it gets worse. Here is the MHRA’s homeopathic marketing authorisation for the homeopathic medicinal product Arnicare Arnica 30c pillules

It is nothing short of surreal.

hom1
hom2

Since the pills contain nothing at all, they don’t have the slightest effect on sprains, muscular aches or bruising. The wording on the label is exceedingly misleading.

If you "pregnant or breastfeeding" there is no need to waste you doctor’s time before swallowing a few sugar pills.

"Do not take a double dose to make up for a missed one". Since the pills contain nothing, it doesn’t matter a damn.

"If you overdose . . " it won’t have the slightest effect because there is nothing in them

And it gets worse. The MHRA-approved label specifies "ACTIVE INGREDIENT. Each pillule contains 30c Arnica Montana".

No, they contain no arnica whatsoever.

hom3
hom4

It truly boggles the mind that men with dark suits and lots of letters after their names have sat for hours only to produce dishonest and misleading labels like these.

When this mislabelling was first allowed, it was condemned by just about every scientific society, but the MHRA did nothing.

The Nightingale Collaboration.

This is an excellent organisation, set up by two very smart skeptics, Alan Henness and Maria MacLachlan. Visit their site regularly, sign up for their newsletter, and help with their campaigns. Make a difference.

Conclusions

The regulation of alternative medicine in the UK is a farce. It is utterly ineffective in preventing deception of patients.

Such improvements as have occurred have resulted from the activity of bloggers, and sometimes the mainstream media. All the official regulators have, to varying extents, made things worse.

The CHRE proposals promise to make matters still worse by offering "accreditation" to organisations that promote nonsensical quackery. None of the official regulators seems able to grasp the obvious fact that it is impossible to have any sensible regulation of people who promote nonsensical untruths. One gets the impression that politicians are more concerned to protect the homeopathic (etc., etc.) industry than they are to protect patients.

Deception by advocates of alternative medicine harms patients. There are adequate laws that make such deception illegal, but they are not being enforced. The CHRE and its successor should restrict themselves to real medicine. The money that they spend on pseudo-regulation of quacks should be transferred to the MHRA or a reformed Trading Standards organisation so they can afford to investigate and prosecute breaches of the law. That is the only form of regulation that makes sense.

 

Follow-up

The shocking case of the continuing sale of “homeopathic vaccines” for meningitis, rubella, pertussis etc. was highlighted in an excellent TV programme by BBC South West. The failure of the MHRA and the GPC to take any effective action is yet another illustration of the failure of regulators to do their job. I have to agree with Andy Lewis when he concludes

“Children will die. And the fault must lie with Professor Sir Kent Woods, chairman of the regulator.”

Jump to follow-up

Although many university courses in quackery have now closed, two subjects that hang on in a few places are western herbalism, and traditional Chinese medicine (including acupuncture). The University of Westminster still runs Chinese medicine, and Western herbal medicine (with dowsing). So do the University of Middlesex and University of East London.

Since the passing of the Health and Social Care Act, these people have been busy with their customary bait and switch tactics, trying to get taxpayers’ money. It’s worth looking again at the nonsense these people talk.

Take for example, the well known herbalist, Simon Mills. At one time he was associated with the University of Exeter, but no longer. Perhaps his views are too weird even for their Third Gap section (the folks who so misrepresented their results in a trial of acupuncture). Unsurprisingly, he was involved in the late Prince’s Foundation for Magic Medicine, and, unsurprisingly, he is involved with its successor, the "College of Medicine", where he spoke along similar lines. You can get a good idea about his views from the video of a talk that he gave at Schumacher College in 2005. It’s rather long, and exceedingly uncritical, so here’s a shorter version to which some helpful captions have been added.

That talk is weird by any standards. He says, apparently with a straight face, that "all modern medicines are cold in the third degree". And with ginger and cinnamon "You can stop a cold, generally speaking, in its tracks" (at 21′ 30″ in the video). This is simply not true, but he says it despite the fact that the Plant Medicine site, of which he’s a director, gives them low ratings.

Simon Mills is also a director of SustainCare. Their web site says

SustainCare Community Interest Company is a social enterprise set up to return health care to its owners: “learning to look after ourselves and our families in ways that make sense and do not cost the earth“. It is founded on the principle that one’s health is a personal story, and that illness is best managed when we make our health care our own. The enterprise brings clinical expertise, long experience of academia, education and business, and the connections and resources to deliver new approaches.

"As its own social enterprise contribution to this project Sustaincare set up and supported Café Sustain as a demonstration Intelligent Waiting Room at Culm Valley Integrated Centre for Health in Devon". (yes, that’s Michael Dixon, again]

In the talk (see video) Mills appears to want to take medicine back to how it was 1900 years ago, in the time of Galen. The oblique speaking style is fascinating. He never quite admits that he thinks all that nonsense is true, but presumably it is how he treats patients. Yet a person with these bizarre pre-scientific ideas is thought appropriate to advise the MHRA.

It’s characteristic of herbalists that they have a very long list of conditions for which each herb is said to be good. The sort of things said by Mills differ little from the 1900-year-old ideas of Galen, or the 17th-century ideas of Culpepper.

You can see some of the latter in my oldest book, Blagrave’s supplement to Culpepper’s famous herbal, published in 1674.

blagrave-1

See what he has to say about daffodils

daff

It is "under the dominion of Mars, and the roots hereof are hot and dry almost in the third degree".

"The root, boyled in posset drink, and drunk, causeth vomiting, and is used with good successe in the beginning of Agues, especiallyTertians, which frequently rage in the spring-time: a plaister made of the roots with parched Barley meal, and applied to swellings and imposthumes do dissolve them; the juice mingled with hony, frankincense, wine and myrrhe, and dropped into the Eares, is good against the corrupt filth and running matter of the Eares; the roots made hollow and boyled in oyl doth help Kib’d heels [or here]: the juice of the root is good for Morphew, and discolourings of the skin."

It seems that daffodils would do a lot in 1674. Even herbalists don’t seem to use it much now. A recent herbal site describes daffodil as "poisonous".

But the descriptions are very like those used by present day herbalists, as you can hear in Simon Mills’ talk.

Chinese medicine is even less tested than western herbs. Not a single Chinese herb has been shown to be useful for treating anything (though in a very few cases they have been found to contain drugs that are useful when purified, notably the anti-malarial compound, artemisinin). They are often contaminated, and some are dangerously toxic. And they contribute to the extinction of tigers and rhinoceros because of the silly myths that these make useful medicines. The cruelty of bear bile farming is legendary.

In a recent report in China Daily (my emphasis).

In a congratulation letter, Vice-Premier Li Keqiang called for integration of TCM and Western medicine.

TCM, as a time-honored treasure of Chinese civilization, has contributed to the prosperity of China and brought impacts to world civilization, Li said.

He also urged medical workers to combine the merits of TCM with contemporary medicine to better facilitate the ongoing healthcare reform in China.

The trade in Chinese medicines survives for only two reasons. One is that they are a useful tool for promoting Chinese nationalism. The other is that they are big business. Both are evident in the vice-premier’s statement.

I presume that it’s the business bit that led London South Bank University (ranked 114 out of 114) to decorate one of its main lecture theatres with pictures like this: “Mr Li Changchun awarding 2010 Confucius Institute of the year to LSBU Vice Chancellor”. I’ll bet Mr Li Changchun uses real medicine himself, as most Chinese who can afford it do.

SBU

Presumably, what’s taught in their Confucius Institute is the same sort of dangerous make-believe nonsense that’s taught on other such courses.

The "College of Medicine" run a classical bait and switch operation. Their "First Thursday lectures" have several good respectable speakers, but then they have Andrew Flower, He is "a former president of the Register of Chinese Herbal Medicine, a medical herbalist and acupuncturist. He has recently completed a PhD exploring the role of Chinese herbal medicine in the treatment of endometriosis". He’s associated with the Avicenna Centre for Chinese Medicine, and with the University of Southampton’s quack division The only bit of research I could find by Andrew Flower was a Cochrane review, Chinese herbal medicine for endometriosis. The main results tell us

"Two Chinese RCTs involving 158 women were included in this review. Both these trials described adequate methodology. Neither trial compared CHM with placebo treatment."

But the plain language summary says

"This review suggests that Chinese herbal medicine (CHM) may be useful in relieving endometriosis-related pain with fewer side effects than experienced with conventional treatment."

It sounds to me as though people as partisan as the authors of this should not be allowed to write Cochrane reviews.

Flower’s talk is followed by one from the notorious representative of the herbal industry, Michael McIntyre, talking on Herbal medicine: A major resource for the 21st century. That’s likely to be about as objective as if they’d invited a GSK drug rep to talk about SSRIs.

The people at King’s College London Institute of Pharmaceutical Sciences are most certainly not quacks. They have made a database of chemicals found in traditional Chinese medicine. It’s sold by a US company, Chem-TCM, and it’s very expensive (commercial license: $3,740.00; academic/government license: $1,850.00). Not much open access there. It’s a good idea to look at chemicals of plant origin, but only as long as you don’t get sucked into the myths. It’s only too easy to fall for the bait and switch of quacks (like TCM salespeople). The sample page shows good chemical and botanical information, and predicted (not observed) pharmacological activity. More bizarrely, it also shows an analysis of the actions claimed by TCM people.

cme-tcb-4a

chem-tcb-4b

It does seem odd to me to apply sophisticated classification methods to things that are mostly myth.

The multiple uses claimed for Chinese medicines are very like the make-believe claims made for western herbs by Galen, Culpepper and (with much less excuse) by Mills.

They are almost all untrue, but their proponents are good salesmen. Don’t let them get a foot in your door.

Follow-up

10 June 2012. No sooner had this post gone public than I came across what must be one of the worst herbal scams ever: “Arthroplex”.

31 July 2012. Coffee is the subject of another entry in the 1674 edition of Blagrave.

blagrave2

Blagrave evidently had a lower regard for coffee than I have.

“But being pounded and baked, as do it to make the Coffee-liquor with, it then stinks most loathsomly, which is an argument of some Saturnine quality in it.”

“But there is no mention of an medicinal use thereof, by any Author either Antient of Modern”

Blagrave says also

“But this I may truly say of it [coffee]: Quod Anglorum Corpora quae huic liquori, tantopere indulgent, in Barbarorum naturam degenerasse videntur,”

This was translated expertly by Benet Salway, of UCL’s History department

“that the bodies of the English that indulge in this liquor to such an extent seem to degenerate into the nature of barbarians” 

My boss, Lucia Sivilotti, got something very like that herself. Be very impressed.

Salway suggested that clearer Latin would have been “quod corpora Anglorum, qui tantopere indulgent huic liquori, degenerasse in naturam barbarorum videntur”.

I’d have passed that on to Blagrave, but I can’t find his email address.

I much prefer Alfréd Rényi’s aphorism (often misattributed to Paul Erdős)

“A mathematician is a machine for turning coffee into theorems”

Jump to follow-up

Open access is in the news again.

Index on Censorship held a debate on open data on December 6th.

The video of the meeting is now on YouTube. A couple of dramatic moments in the video: at 48 min O’Neill and Monbiot face off about "competent persons" (and at 58 min Walport makes fun of my contention that it’s better to have more small grants rather than a few big ones, on the grounds that it’s impossible to select the stars).

poster

The meeting has been written up on the Bishop Hill Blog, with some very fine cartoon minutes.

Bishop Hill blog (I love the Josh cartoons - pity he seems to be a climate denier, spoken of approvingly by the unspeakable James Delingpole.)

It was gratifying that my remarks seemed to be better received by the scientists in the audience than they were by some other panel members. The Bishop Hill blog comments "As David Colquhoun, the only real scientist there and brilliant throughout, said “Give them everything!” " Here’s a subsection of the brilliant cartoon minutes

notes-dc

The bit about "I just lied -but he kept his job" referred to the notorious case of Richard Eastell and the University of Sheffield.

We all agreed that papers should be open for anyone to read, free. Monbiot and I both thought that raw data should be available on request, though O’Neill and Walport had a few reservations about that.

A great deal of time and money would be saved if data were provided on request. It shouldn’t need a Freedom of Information Act (FOIA) request, and the time and energy spent on refusing FOIA requests is silly. It simply gives the impression that there is something to hide (Climate scientists must be ruthlessly honest about data). The University of Central Lancashire spent £80,000 of taxpayers’ money trying (unsuccessfully) to appeal against the judgment of the Information Commissioner that they must release course material to me. It’s hard to think of a worse way to spend money.

A few days ago, the Department for Business, Innovation and Skills (BIS) published a report which says (para 6.6)

“The Government . . . is committed to ensuring that publicly-funded research should be accessible free of charge.”

That’s good, but how it can be achieved is less obvious. Scientific publishing is, at the moment, an unholy mess. It’s a playground for profiteers. It runs on the unpaid labour of academics, who work to generate large profits for publishers. That’s often been said before, recently by both George Monbiot (Academic publishers make Murdoch look like a socialist) and by me (Publish-or-perish: Peer review and the corruption of science). Here are a few details.

Extortionate cost of publishing

Mark Walport has told me that

The Wellcome Trust is currently spending around £3m pa on OA publishing costs and, looking at the Wellcome papers that find their way to UKPMC, we see that around 50% of this content is routed via the “hybrid option”; 40% via the “pure” OA journals (e.g. PLoS, BMC etc), and the remaining 10% through researchers self-archiving their author manuscripts.  

I’ve found some interesting numbers, with help from librarians, and through access to The Journal Usage Statistics Portal (JUSP).

Elsevier

UCL pays Elsevier the astonishing sum of €1.25 million for access to its journals. And that’s just one university. That price doesn’t include any print editions at all, just web access, and there is no open access. You have to have a UCL password to see the results. Elsevier has, of course, been criticised before, and not just for its prices.

Elsevier publish around 2700 scientific journals. UCL has bought a package of around 2100 journals. There is no possibility to pick the journals that you want. Some of the journals are used heavily ("use" means access of full text on the web). In 2010, the most heavily used journal was The Lancet, followed by four Cell Press journals

elsevier top

But notice the last bin. Most of the journals are hardly used at all. Among all Elsevier journals, 251 were not accessed even once in 2010. Among the 2068 journals bought by UCL, 56 were never accessed in 2010, and the most frequent number of accesses per year is between 1 and 10 (the second bin in the histogram, below). Sixty percent of journals had 300 or fewer usages in 2010; above 300, the histogram tails on up to 51,878 accesses for The Lancet. The remaining 40 percent of journals are represented by the last bin (in red). The distribution is exceedingly skewed: the median is 187 (i.e. half of the journals had fewer than 187 usages in 2010), but the mean, which is misleading for such a skewed distribution, was 662 usages.

histo
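If you want to produce this sort of summary from your own institution’s usage figures, here is a minimal sketch of the calculation in Python. The per-journal counts in it are invented stand-ins drawn from a heavy-tailed distribution, not the real JUSP numbers, so only the method, not the output, should be taken seriously; the point is simply why the median is a far more honest summary than the mean for a distribution as skewed as this.

```python
# Minimal sketch: summary statistics and binning for a skewed usage distribution.
# The per-journal access counts below are invented, NOT the real JUSP data.
import random
import statistics

random.seed(1)
usages = [int(random.paretovariate(1.2)) - 1 for _ in range(2068)]  # heavy-tailed fake counts

print("journals never accessed:", sum(u == 0 for u in usages))
print("median usages:", statistics.median(usages))        # robust to the long tail
print("mean usages:  ", round(statistics.mean(usages)))   # dragged upwards by a few heavy users

# Bins in the style of the histogram: 0, 1-10, 11-20, ..., 291-300, then >300 pooled.
edges = [0, 1] + list(range(11, 302, 10)) + [float("inf")]
for lo, hi in zip(edges, edges[1:]):
    label = ">300" if hi == float("inf") else ("0" if hi == 1 else f"{lo}-{hi - 1}")
    print(f"{label:>8}: {sum(lo <= u < hi for u in usages)}")
```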

Nature Publishing Group

UCL bought 65 journals from NPG in 2010. They get more use than Elsevier, though surprisingly three of them were never accessed in 2010, and 17 had fewer than 1000 accesses in that year. The median usage was 2412, better than most. The leader, needless to say, was Nature itself, with 153,321.

Oxford University Press

The situation is even more extreme for 248 OUP journals, perhaps because many of the journals are arts or law rather than science.

OUP-jisto

The most frequent (modal) usage was zero (54 journals), followed by 1 to 10 accesses (42 journals). Sixty-four percent of journals had fewer than 200 usages, and the 36 percent with over 200 are pooled in the last (red) bin. The histogram extends right up to 16,060 accesses for Brain. The median number of usages in 2010 was 66.

So far I haven’t been able to discover the costs of the contracts with OUP or the Nature Publishing Group. It seems that the university has agreed to confidentiality clauses. That is itself a shocking lack of transparency. If I can find the numbers I shall: watch this space.

Almost all of these journals are not open access. The academics do the experiments, most often paid for by the taxpayer. They write the paper (and now it has to be in a form that is almost ready for publication without further work) and send it to the journal, where it is sent for peer review, which is also unpaid. The journal then sells the product back to the universities at a high price, and the results of the work are hidden from the people who paid for it.

It’s even worse than that, because often the people who did the work and wrote the paper, have to pay "page charges". These vary, but can be quite high. If you send a paper to the Journal of Neuroscience, it will probably cost you about $1000. Other journals, like the excellent Journal of Physiology, don’t charge you to submit a paper (unless you want a colour figure in the print edition, £200), but the paper is hidden from the public for 12 months unless you pay $3000.

The major medical charity, the Wellcome Trust, requires that the work it funds should be available to the public within 6 months of publication. That’s nothing like good enough to allow the public to judge the claims of a paper which hits the newspapers the day that it’s published. Nevertheless it can cost the authors a lot. Elsevier journals charge $3000 except for their most-used journals. The Lancet charges £400 per page and Cell Press journals charge $5000 for this unsatisfactory form of open access.

Open access journals

The outcry about hidden results has resulted in a new generation of truly open access journals that are open to everyone from day one. But if you want to publish in them you have to pay quite a lot.

Furthermore, although all these journals are free to read, most of them do not allow free use of the material they publish. Most are operating under all-rights-reserved copyrights. In 2009, under 10 percent of open access journals had a true Creative Commons licence.

Nature Publishing Group has a true open access journal, Nature Communications, but it costs the author $5000 to publish there. The Public Library of Science journals are truly open access, but the author is charged $2900 for PLoS Medicine, though PLoS One costs the author only $1350.

A 2011 report considered the transition to open access publishing but it doesn’t even consider radical solutions, and makes unreasonably low estimates of the costs of open access publishing.

Scam journals have flourished under the open access flag

Open access publishing has, so far, almost always involved paying a hefty fee. That has brought the rats out of the woodwork and one gets bombarded daily with offers to publish in yet another open access journal. Many of these are simply scams. You pay, we put it on the web and we won’t fuss about quality. Luckily there is now a guide to these crooks: Jeffrey Beall’s List of Predatory, Open-Access Publishers.

One that I hear from regularly is Bentham Open Journals

(a name that is particularly inappropriate for anyone at UCL). Jeffrey Beall comments

"Among the first, large-scale gold OA publishers, Bentham Open continues to expand its fleet of journals, now numbering over 230. Bentham essentially operates as a scholarly vanity press."

They undercut real journals. A research article in The Open Neuroscience Journal will cost you a mere $800. Although these journals claim to be peer-reviewed, their standards are suspect. In 2009, a nonsensical computer-generated spoof paper was accepted by a Bentham journal (for $800).

What can be done about publication, and what can be done about grants?

Both grants and publications are peer-reviewed, but the problems need to be discussed separately.

Peer review of papers by journals

One option is clearly to follow the example of the best open access journals, such as PLoS. The cost of $3000 to $5000 per paper would have to be paid by the research funder, often the taxpayer. It would be money subtracted from the research budget, but it would retain the present peer review system, and it should cost no more if the money saved on extortionate journal subscriptions were transferred to research budgets to pay the bills, though there is little chance of this happening.

The cost of publication would, in any case, be minimised if fewer papers were published, which is highly desirable anyway.

But there are real problems with the present peer review system. It works quite well for journals that are high in the hierarchy. I have few grumbles myself about the quality of reviews, and sometimes I’ve benefitted a lot from good suggestions made by reviewers. But for the user, the process is much less satisfactory, because peer review has next to no effect on what gets published; all it influences is which journal the paper appears in. The only effect of the vast amount of unpaid time and effort put into reviewing is to maintain a hierarchy of journals. It has next to no effect on what appears in PubMed.

For authors, peer review can work quite well, but

from the point of view of the consumer, peer review is useless.

It is a myth that peer review ensures the quality of what appears in the literature.

A more radical approach

I made some more radical suggestions in Publish-or-perish: Peer review and the corruption of science.

It seems to me that there would be many advantages if people simply published their own work on the web, and then opened the comments. For a start, it would cost next to nothing. The huge amount of money that goes to publishers could be put to better uses.

Another advantage would be that negative results could be published. And proper full descriptions of methods could be provided because there would be no restrictions on length.

Under that system, I would certainly send a draft paper to a few people I respected for comments before publishing it. Informal consortia might form for that purpose.

The publication bias that results from non-publication of negative results is a serious problem, mainly, but not exclusively, for clinical trials. It is mandatory to register a clinical trial before it starts, but many of the results never appear (see, for example, Deborah Cohen’s report for Index on Censorship). There is no check on whether or not the results of registered trials are published. A large number of registered trials do not result in any publication, and this publication bias can cost thousands of lives. It is really important to ensure that all results get published.

The ArXiv model

There are many problems that would have to be solved before we could move to self-publication on the web. Some have already been solved by physicists and mathematicians. Their archive, ArXiv.org provides an example of where we should be heading. Papers are published on the web at no cost to either user or reader, and comments can be left. It is an excellent example of post-publication peer review. Flame wars are minimised by requiring users to register, and to show they are bona fide scientists before they can upload papers or comments. You may need endorsement if you haven’t submitted before.

Peer review of grants

The problems for grants are quite different from those for papers. There is no possibility of doing away with peer review for the award of grants, however imperfect the process may be. In fact, candidates for the new Wellcome Trust Investigator Awards were alarmed to find that the short-listing of candidates was done without peer review.

The Wellcome Trust has been enormously important for the support of medical and biological research, and never more than now, when the MRC has become rather chaotic (let’s hope the new CEO can sort it out). There was, therefore, real consternation when Wellcome announced a while ago its intention to stop giving project and programme grants altogether. Instead it would give a few Wellcome Trust Investigator Awards to prominent people. That sounds like the Howard Hughes approach, and runs a big risk of “to them that hath shall be given”.

The awards have just been announced, and there is a good account by Colin Macilwain in Science [pdf]. UCL did reasonably well with four awards, but four is not many for a place the size of UCL. Colin Macilwain hits the nail on the head.

"While this is great news for the 27 new Wellcome Investigators who will share £57 million, hundreds of university-based researchers stand to lose Wellcome funds as the trust phases out some existing programs to pay for the new category of investigators".

There were 750 applications but, on the basis of CV alone, they were pared down to a long-list of 173. The panels then cut this down to a short-list of 55. Up to this point no external referees were used, quite unlike the normal process for award of grants. This seems to me to have been an enormous mistake. No panel, however distinguished, can have the knowledge to distinguish the good from the bad in areas outside its own work. It is only human nature to favour the sort of work you do yourself. The 55 short-listed people were interviewed, but again by a panel with an even narrower range of expertise. Macilwain again:

"Applications for MRC grants have gone up “markedly” since the Wellcome ones closed, he says: “We still see that as unresolved.” Leszek Borysiewicz, vice-chancellor of the University of Cambridge, which won four awards, believes the impact will be positive: “Universities will adapt to this way of funding research."

It certainly isn’t obvious to most people how Cambridge or UCL will "adapt" to funding of only four people.

Cancer Research UK has recently made the same mistake.

One problem is that any scheme of this sort will inevitably favour big groups, most of whom are well-funded already. Since there is some reason to believe that small groups are more productive (see also University Alliance report), it isn’t obvious that this is a good way to go. I was lucky enough to get 45 minutes with the director of the Wellcome Trust, Mark Walport, to put these views. He didn’t agree with all I said, but he did listen.

One of the things that I put to him was a small statistical calculation to illustrate the great danger of a plan that funds very few people. The funding rate was 3.6% of the original applications, and 15.6% of the long-listed applications. Let’s suppose, as a rough approximation, that the 173 long-listed applications were all of roughly equal merit. No doubt that won’t be exactly true, but I suspect it might be more nearly true than the expert panels will admit. A quick calculation in Mathcad gives this, if we assume a 1 in 8 chance of success for each application.

Distribution of the number of successful applications

Suppose $ n $ grant applications are submitted. For example, the same grant submitted $ n $ times to selection boards of equal quality, OR $ n $ different grants of equal merit are submitted to the same board.

Define $ p $ = probability of success at each application

Under these assumptions, it is a simple binomial distribution problem.

According to the binomial distribution, the probability of getting $ r $ successful applications in $ n $ attempts is

\[ P(r)=\frac{n!}{r!\left(n-r\right)! }\; {p}^{r} \left(1-p \right)^{n-r} \]

For a success rate of 1 in 8, $ p = 0.125 $, so if you make $ n = 8 $ applications, the probability that $ r $ of them will succeed is shown in the graph.

Despite equal merit, almost as many people end up with no grant at all as get one grant. And 26% of people will get two or more grants.

mcd1-g

Of course it would take an entire year to write 8 applications. If we take a more realistic case of making four applications we have $ n = 4 $ (and $ p = 0.125 $, as before). In this case the graph comes out as below. You have a nearly 60% chance of getting nothing at all, and only a 1 in 3 chance of getting one grant.

mcd3-g

These results arise regardless of merit, purely as a consequence of random chance. They are disastrous, and especially disastrous for the smaller, better-value groups for which a gap in funding can mean loss of vital expertise. It also has the consequence that scientists have to spend most of their time not doing science, but writing grant applications. The mean number of applications before a success is 8, and a third of people will have to write 9 or more applications before they get funding. This makes very little sense.
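For anyone who wants to check these numbers without Mathcad, here is a minimal sketch of the same calculation in Python. The only assumptions are the ones stated above: applications of equal merit and a success rate of roughly 1 in 8 per application.

```python
# Minimal sketch of the binomial calculation described above (originally done in Mathcad).
from math import comb

def binom_pmf(r, n, p):
    """Probability of exactly r successes in n independent applications."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

p = 1 / 8  # assumed per-application success rate

for n in (8, 4):  # a whole year's worth of applications, or a more realistic four
    dist = [binom_pmf(r, n, p) for r in range(n + 1)]
    print(f"n = {n}: P(no grant) = {dist[0]:.2f}, P(one grant) = {dist[1]:.2f}, "
          f"P(two or more) = {sum(dist[2:]):.2f}")

# Waiting time to the first success (geometric distribution):
print("mean applications per success:", 1 / p)                      # 8
print("P(9 or more applications needed):", round((1 - p) ** 8, 2))  # about a third
```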

Grant-awarding panels are faced with the near-impossible task of ranking many similar grants. The peer review system is breaking down, just as it has already broken down for journal publications.

I think these considerations demolish the argument for funding a small number of ‘stars’. The public might expect that the person making the application would take an active part in the research. Too often, now, they spend most of their time writing grant applications. What we need is more responsive-mode smallish programme grants and a maximum on the size of groups.

Conclusions

We should be thinking about the following changes:

  • Limit the number of papers that an individual can publish. This would increase quality, it would reduce the impossible load on peer reviewers and it would reduce costs.
  • Limit the size of labs so that more small groups are encouraged. This would increase both quality and value for money.
  • More (and so smaller) grants are essential for innovation and productivity.
  • Move towards self-publishing on the web so the cost of publishing becomes very low rather than the present extortionate costs. It would also mean that negative results could be published easily and that methods could be described in proper detail.

The entire debate is now on YouTube.

Follow-up

24 January 2012. The eminent mathematician, Tim Gowers, has a rather hard-hitting blog on open access and scientific publishing, Elsevier – my part in its downfall. I’m right with him. Although his post lacks the detailed numbers of mine, it shows that mathematicians have exactly the same problems as the rest of us.

11 April 2012. Thanks to Twitter, I came across a remarkably prescient article, in the Guardian, in 2001.
Science world in revolt at power of the journal owners, by James Meek. Elsevier have been getting away with murder for quite a while.

19 April 2012.

I got invited to give an after-dinner talk on open access at Cumberland Lodge. It was for the retreat of our GEE Department (that is the catchy brand name we’ve had since 2007: I’m in the equally memorable NPP). I think it stands for Genetics, Evolution and Environment. The talk seemed to stir up a lot of interest: the discussions ran on into the next day.

cumberland

It was clear that younger people are still as infatuated with Nature and Science as ever. And that, of course is the fault of their elders.

The only way that I can see is to abandon the impact factor as a way of judging people. It should have gone years ago, and good people have never used it. They read the papers. Access to research will never be free until we think of a way to break the hegemony of Nature, Science and a handful of others. Stephen Curry has made some suggestions.

Probably it will take action from above. The Wellcome Trust has made a good start. And so has Harvard. We should follow their lead (see also, Stephen Curry’s take on Harvard)

And don’t forget to sign up for the Elsevier boycott. Over 10,000 academics have already signed. Tim Gowers’ initiative took off remarkably.

24 July 2012. I’m reminded by Nature writer, Richard van Noorden (@Richvn) that Nature itself has written at least twice about the iniquity of judging people by impact factors. In 2005 Not-so-deep impact said

"Only 50 out of the roughly 1,800 citable items published in those two years received more than 100 citations in 2004. The great majority of our papers received fewer than 20 citations."

"None of this would really matter very much, were it not for the unhealthy reliance on impact factors by administrators and researchers’ employers worldwide to assess the scientific quality of nations and institutions, and often even to judge individuals."

And, more recently, in “Assessing assessment” (2010).

27 April 2014

The brilliant mathematician, Tim Gowers, started a real revolt against old-fashioned publishers who are desperately trying to maintain extortionate profits in a world that has changed entirely. In his 2012 post, Elsevier: my part in its downfall, he declared that he would no longer publish in, or act as referee for, any journal published by Elsevier. Please follow his lead and sign an undertaking to that effect: 14,614 people have already signed.

Gowers has now gone further. He’s made substantial progress in penetrating the wall of secrecy with which predatory publishers (of which Elsevier is not the only example) seek to prevent anyone knowing about the profitable racket they are operating. Even the confidentiality agreements, which they force universities to sign, are themselves confidential.

In a new post, Tim Gowers has provided more shocking facts about the prices paid by universities. Please look at Elsevier journals — some facts. The jaw-dropping 2011 sum of €1.25 million paid by UCL alone, is now already well out-of-date. It’s now £1,381,380. He gives figures for many other Russell Group universities too. He also publishes some of the obstructive letters that he got in the process of trying to get hold of the numbers. It’s a wonderful aspect of the web that it’s easy to shame those who deserve to be shamed.

I very much hope the matter is taken to the Information Commissioner, and that a precedent is set that it’s totally unacceptable to keep secret what a university pays for services.

Jump to follow-up

I have in the past, taken an occasional interest in the philosophy of science. But in a lifetime doing science, I have hardly ever heard a scientist mention the subject. It is, on the whole, a subject that is of interest only to philosophers.

It’s true that some philosophers have had interesting things to say about the nature of inductive inference, but during the 20th century the real advances in that area came from statisticians, not from philosophers. So I long since decided that it would be more profitable to spend my time trying to understand R.A. Fisher, rather than read even Karl Popper. It is harder work to do that, but it seemed the way to go.
RA Fisher

This post is based on the last part of a chapter titled “In Praise of Randomisation. The importance of causality in medicine and its subversion by philosophers of science”. The talk was given at a meeting at the British Academy in December 2007, and the book will be launched on November 28th 2011 (good job it wasn’t essential for my CV, with delays like that). The book is published by OUP for the British Academy, under the title Evidence, Inference and Enquiry (edited by Philip Dawid, William Twining, and Mimi Vasilaki, 504 pages, £85.00). The bulk of my contribution has already appeared here, in May 2009, under the heading Diet and health. What can you believe: or does bacon kill you?. It is one of the posts that has given me the most satisfaction, if only because Ben Goldacre seemed to like it, and he has done more than anyone to explain the critical importance of randomisation for assessing treatments and for assessing social interventions.

Having long since decided that it was Fisher, rather than philosophers, who had the answers to my questions, why bother to write about philosophers at all? It was precipitated by joining the London Evidence Group. Through that group I became aware that there is a group of philosophers of science who could, if anyone took any notice of them, do real harm to research. It seems surprising that the value of randomisation should still be disputed at this stage, and of course it is not disputed by anybody in the business. It was thoroughly established after the start of small-sample statistics at the beginning of the 20th century. Fisher’s work on randomisation and the likelihood principle put inference on a firm footing by the mid-1930s. His popular book, The Design of Experiments, made the importance of randomisation clear to a wide audience, partly via his famous example of the lady tasting tea. The development of randomisation tests made it transparently clear (perhaps I should do a blog post on their beauty). By the 1950s, the message got through to medicine, in large part through Austin Bradford Hill.
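Since randomisation tests get only a passing mention here, this is a minimal sketch of one, on entirely made-up numbers: shuffle the treatment labels many times and ask how often chance alone produces a difference between group means as big as the one actually observed.

```python
# Minimal sketch of a randomisation (permutation) test on made-up data.
import random

random.seed(0)
treated = [5.2, 6.1, 5.9, 6.4, 5.7, 6.8]   # hypothetical responses, treated group
control = [4.9, 5.3, 5.1, 5.6, 4.8, 5.4]   # hypothetical responses, control group

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

pooled = treated + control
n_treated = len(treated)
n_perm = 20_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)                  # re-randomise the group labels
    diff = mean(pooled[:n_treated]) - mean(pooled[n_treated:])
    if diff >= observed:                    # one-sided: as large as, or larger than, observed
        extreme += 1

print(f"observed difference in means: {observed:.2f}")
print(f"one-sided P value (approx.):  {extreme / n_perm:.4f}")
```

With only six observations per group, all 924 possible reallocations could be enumerated exactly; repeated random shuffling is just a convenient approximation, and for real data one would of course use an established statistics package rather than this toy.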

Despite this, there is a body of philosophers who dispute it.  And of course it is disputed by almost all practitioners of alternative medicine (because their treatments usually fail the tests). Here are some examples.

“Why there’s no cause to randomise” is the rather surprising title of a report by Worrall (2004; see also Worrall, 2010), from the London School of Economics. The conclusion of this paper is

“don’t believe the bad press that ‘observational studies’ or ‘historically controlled trials’ get – so long as they are properly done (that is, serious thought has gone in to the possibility of alternative explanations of the outcome), then there is no reason to think of them as any less compelling than an RCT.”

In my view this conclusion is seriously, and dangerously, wrong: it ignores the enormous difficulty of getting evidence for causality in real life, and it ignores the fact that historically controlled trials have very often given misleading results in the past, as illustrated by the diet problem. Worrall’s fellow philosopher, Nancy Cartwright (Are RCTs the Gold Standard?, 2007), has made arguments that in some ways resemble those of Worrall.

Many words are spent on defining causality but, at least in the clinical setting the meaning is perfectly simple.  If the association between eating bacon and colorectal cancer is causal then if you stop eating bacon you’ll reduce the risk of cancer.  If the relationship is not causal then if you stop eating bacon it won’t help at all.  No amount of Worrall’s “serious thought” will substitute for the real evidence for causality that can come only from an RCT: Worrall seems to claim that sufficient brain power can fill in missing bits of information.  It can’t.  I’m reminded inexorably of the definition of “Clinical experience. Making the same mistakes with increasing confidence over an impressive number of years.” In Michael O’Donnell’s A Sceptic’s Medical Dictionary.

At the other philosophical extreme, there are still a few remnants of post-modernist rhetoric to be found in obscure corners of the literature. Two extreme examples are the papers by Holmes et al. and by Christine Barry. Apart from the fact that they weren’t spoofs, both of these papers bear a close resemblance to Alan Sokal’s famous spoof paper, Transgressing the boundaries: towards a transformative hermeneutics of quantum gravity (Sokal, 1996). The acceptance of this spoof by a journal, Social Text, and the subsequent book, Intellectual Impostures, by Sokal & Bricmont (Sokal & Bricmont, 1998), exposed the astonishing intellectual fraud of postmodernism (for those for whom it was not already obvious). A couple of quotations will serve to give a taste of the amazing material that can appear in peer-reviewed journals. Barry (2006) wrote

 “I wish to problematise the call from within biomedicine for more evidence of alternative medicine’s effectiveness via the medium of the randomised clinical trial (RCT).”

“Ethnographic research in alternative medicine is coming to be used politically as a challenge to the hegemony of a scientific biomedical construction of evidence.”

“The science of biomedicine was perceived as old fashioned and rejected in favour of the quantum and chaos theories of modern physics.”
“In this paper, I have deconstructed the powerful notion of evidence within biomedicine, . . .”

The aim of this paper, in my view, is not to obtain some subtle insight into the process of inference but to try to give some credibility to snake-oil salesmen who peddle quack cures. The latter at least make their unjustified claims in plain English.

The similar paper by Holmes, Murray, Perron & Rail (Holmes et al., 2006) is even more bizarre.

“Objective The philosophical work of Deleuze and Guattari proves to be useful in showing how health sciences are colonised (territorialised) by an all-encompassing scientific research paradigm - that of post-positivism - but also and foremost in showing the process by which a dominant ideology comes to exclude alternative forms of knowledge, therefore acting as a fascist structure.”

It uses the word fascism, or some derivative thereof, 26 times. And Holmes, Perron & Rail (Murray et al., 2007) end a similar tirade with

“We shall continue to transgress the diktats of State Science.”

It may be asked why it is even worth spending time on these remnants of the utterly discredited postmodernist movement.  One reason is that rather less extreme examples of similar thinking still exist in some philosophical circles.

Take, for example, the views expressed in papers such as Miles, Polychronis and Grey (2006), Miles & Loughlin (2006), Miles, Loughlin & Polychronis (Miles et al., 2007) and Loughlin (2007). These papers form part of the authors’ campaign against evidence-based medicine, which they seem to regard as some sort of ideological crusade, or government conspiracy. Bizarrely, they seem to think that evidence-based medicine has something in common with the managerial culture that has been the bane of not only medicine but of almost every occupation (and which is noted particularly for its disregard for evidence). Although couched in the sort of pretentious language favoured by postmodernists, in fact it ends up defending the most simple-minded forms of quackery. Unlike Barry (2006), they don’t mention alternative medicine explicitly, but the agenda is clear from their attacks on Ben Goldacre. For example, Miles, Loughlin & Polychronis (Miles et al., 2007) say this.

“Loughlin identifies Goldacre [2006] as a particularly luminous example of a commentator who is able not only to combine audacity with outrage, but who in a very real way succeeds in manufacturing a sense of having been personally offended by the article in question. Such moralistic posturing acts as a defence mechanism to protect cherished assumptions from rational scrutiny and indeed to enable adherents to appropriate the ‘moral high ground’, as well as the language of ‘reason’ and ‘science’ as the exclusive property of their own favoured approaches. Loughlin brings out the Orwellian nature of this manoeuvre and identifies a significant implication.”

“If Goldacre and others really are engaged in posturing then their primary offence, at least according to the Sartrean perspective adopted by Murray et al. is not primarily intellectual, but rather it is moral. Far from there being a moral requirement to ‘bend a knee’ at the EBM altar, to do so is to violate one’s primary duty as an autonomous being.”

This ferocious attack seems to have been triggered because Goldacre has explained in simple words what constitutes evidence and what doesn’t.  He has explained in a simple way how to do a proper randomised controlled trial of homeopathy.  And he dismantled a fraudulent Qlink pendant, which purported to shield you from electromagnetic radiation but turned out to have no functional components (Goldacre, 2007).  This is described as being “Orwellian”, a description that seems to me to be downright bizarre.

In fact, when faced with real-life examples of what happens when you ignore evidence, those who write theoretical papers that are critical about evidence-based medicine may behave perfectly sensibly.  Andrew Miles edits a journal (the Journal of Evaluation in Clinical Practice) that has been critical of EBM for years. Yet when faced with a course in alternative medicine run by people who can only be described as quacks, he rapidly shut down the course (a full account has appeared on this blog).

It is hard to decide whether the language used in these papers is Marxist or neoconservative libertarian.  Whatever it is, it clearly isn’t science.  It may seem odd that postmodernists (who believe nothing) end up as allies of quacks (who’ll believe anything).  The relationship has been explained with customary clarity by Alan Sokal, in his essay Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers? (Sokal, 2006).

Conclusions

Of course RCTs are not the only way to get knowledge. Often they have not been done, and sometimes it is hard to imagine how they could be done (though not nearly as often as some people would like to say).

It is true that RCTs tell you only about an average effect in a large population. But the same is true of observational epidemiology.  That limitation is nothing to do with randomisation; it is a result of the crude and inadequate way in which diseases are classified (as discussed above).  It is also true that randomisation doesn’t guarantee lack of bias in an individual case, but only in the long run.  But it is the best that can be done.  The fact remains that randomisation is the only way to be sure of causality, and making mistakes about causality can harm patients, as it did in the case of HRT.
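The point can be made concrete with a toy simulation. The sketch below (plain Python with NumPy; the patients, the confounder and every number in it are invented purely for illustration, not taken from any real trial) gives a treatment that does nothing at all. When sicker patients are more likely to be treated, as happens in observational data, the naive comparison of treated with untreated patients is badly biased; random allocation, by contrast, gives an answer close to the true value of zero.

```python
# Illustrative sketch only: why randomisation protects against confounding
# while an uncontrolled observational comparison need not.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

severity = rng.normal(0.0, 1.0, n)        # unmeasured confounder: how ill each patient is
true_effect = 0.0                         # the treatment does nothing at all
symptom_score = 2.0 * severity + rng.normal(0.0, 1.0, n)   # higher = worse

# 'Observational' allocation: the sickest patients are most likely to get the treatment
p_treated = 1.0 / (1.0 + np.exp(-severity))
treated_obs = rng.random(n) < p_treated
outcome_obs = symptom_score + true_effect * treated_obs
naive_difference = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

# Randomised allocation: a coin toss, unrelated to severity
treated_rct = rng.random(n) < 0.5
outcome_rct = symptom_score + true_effect * treated_rct
rct_difference = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

print(f"observational (confounded) estimate: {naive_difference:+.2f}")  # well away from zero
print(f"randomised estimate:                 {rct_difference:+.2f}")    # close to zero, the truth
```

The confounded comparison reports an apparent effect that is entirely spurious, whereas the randomised comparison recovers the truth: no effect at all.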

Raymond Tallis (1999), in his review of Sokal & Bricmont, summed it up nicely

“Academics intending to continue as postmodern theorists in the interdisciplinary humanities after S & B should first read Intellectual Impostures and ask themselves whether adding to the quantity of confusion and untruth in the world is a good use of the gift of life or an ethical way to earn a living. After S & B, they may feel less comfortable with the glamorous life that can be forged in the wake of the founding charlatans of postmodern Theory.  Alternatively, they might follow my friend Roger into estate agency — though they should check out in advance that they are up to the moral rigours of such a profession.”

The conclusions that I have drawn were obvious to people in the business half a century ago. Doll & Peto (1980) said

“If we are to recognize those important yet moderate real advances in therapy which can save thousands of lives, then we need more large randomised trials than at present, not fewer. Until we have them treatment of future patients will continue to be determined by unreliable evidence.”

The towering figures are R.A. Fisher, and his followers, who developed the ideas of randomisation and maximum likelihood estimation. In the medical area, Bradford Hill, Archie Cochrane and Iain Chalmers had the important ideas worked out a long time ago.

In contrast, philosophers like Worrall, Cartwright, Holmes, Barry, Loughlin and Polychronis seem to me to make no contribution to the accumulation of useful knowledge, and in some cases to hinder it.  It’s true that the harm they do is limited, but that is because they talk largely to each other.  Very few working scientists are even aware of their existence.  Perhaps that is just as well.

References

Cartwright N (2007). Are RCTs the Gold Standard? Biosocieties 2, 11-20.

Colquhoun D (2010). University of Buckingham does the right thing. The Faculty of Integrated Medicine has been fired. https://www.dcscience.net/?p=2881

Miles A & Loughlin M (2006). Continuing the evidence-based health care debate in 2006. The progress and price of EBM. J Eval Clin Pract 12, 385-398.

Miles A, Loughlin M, & Polychronis A (2007). Medicine and evidence: knowledge and action in clinical practice. J Eval Clin Pract 13, 481-503.

Miles A, Polychronis A, & Grey JE (2006). The evidence-based health care debate – 2006. Where are we now? J Eval Clin Pract 12, 239-247.

Murray SJ, Holmes D, Perron A, & Rail G (2007). Deconstructing the evidence-based discourse in health sciences: truth, power and fascism. Int J Evid Based Healthc 4, 180-186.

Sokal AD (1996). Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity. Social Text 46/47, Science Wars, 217-252.

Sokal AD (2006). Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers? In Archaeological Fantasies, ed. Fagan GG. Routledge, an imprint of Taylor & Francis Books Ltd.

Sokal AD & Bricmont J (1998). Intellectual Impostures. New edition, 2003, Economist Books/Profile Books.

Tallis R. (1999) Sokal and Bricmont: Is this the beginning of the end of the dark ages in the humanities?

Worrall J (2004). Why There’s No Cause to Randomize. Causality: Metaphysics and Methods, Technical Report 24/04.

Worrall J (2010). Evidence: philosophy of science meets medicine. J Eval Clin Pract 16, 356-362.

Follow-up

Iain Chalmers has drawn my attention to some really interesting papers in the James Lind Library

An account of early trials is given by Chalmers I, Dukan E, Podolsky S, Davey Smith G (2011). The adoption of unbiased treatment allocation schedules in clinical trials during the 19th and early 20th centuries.  Fisher was not the first person to propose randomised trials, but he is the person who put them on a sound mathematical basis.

Another fascinating paper is Chalmers I (2010). Why the 1948 MRC trial of streptomycin used treatment allocation based on random numbers.

The distinguished statistician, David Cox contributed, Cox DR (2009). Randomization for concealment.

Incidentally, if anyone still thinks there are ethical objections to random allocation, they should read the account of retrolental fibroplasia outbreak in the 1950s, Silverman WA (2003). Personal reflections on lessons learned from randomized trials involving newborn infants, 1951 to 1967.

Chalmers also pointed out that Antony Eagle of Exeter College Oxford had written about Goldacre’s epistemology. He describes himself as a “formal epistemologist”. His criticisms seem to me to be carping and trivial. Once again, a philosopher has failed to make a contribution to the progress of knowledge.

There’s been no official announcement, but four more of Westminster’s courses in junk medicine have quietly closed.

For entry in 2011 they offer

University of Westminster (W50) qualification
Chinese Medicine: Acupuncture (B343) 3FT Hon BSc
Chinese Medicine: Acupuncture with Foundation (B341) 4FT/5FT Hon BSc/MSci
Complementary Medicine (B255) 3FT Hon BSc
Complementary Medicine (B301) 4FT Hon MHSci
Complementary Medicine: Naturopathy (B391) 3FT Hon BSc
Herbal Medicine (B342) 3FT Hon BSc
Herbal Medicine with Foundation Year (B340) 4FT/5FT Hon BSc/MSci
Nutritional Therapy (B400) 3FT Hon BSc
 

But for entry in 2012

University of Westminster (W50) qualification
Chinese Medicine: Acupuncture (B343) 3FT Hon BSc
Chinese Medicine: Acupuncture with Foundation (B341) 4FT/5FT Hon BSc/MSci
Herbal Medicine (B342) 3FT Hon BSc
Herbal Medicine with Foundation Year (B340) 4FT/5FT Hon BSc/MSc

 

At the end of 2006, Westminster was offering 14 different BSc degrees in seven flavours of junk medicine. In October 2008, it was eleven. This year it’s eight, and next year only four degrees in two subjects. Since "Integrated Health" was ‘merged’ with Biological Sciences in May 2010, two of the original courses have been dropped each year. This September there will be a final intake for Nutritional Therapy and Naturopathy. That leaves only two: Chinese Medicine (acupuncture) and (Western) Herbal Medicine.

The official reason given for the closures is always that the number of applications has fallen. I’m told that the number of applications has halved over the last five or six years. If that’s right, it counts as a big success for the attempts of skeptics to show the public the nonsense that’s taught on these degrees. Perhaps it is a sign that we are emerging from the endarkenment.

Rumour has it that the remaining degrees will eventually close too. Let’s hope so. Meanwhile, here is another helping hand.

There is already quite a bit here about the dangers of Chinese medicine, e.g. here and, especially, here. A submission to the Department of Health gives more detail. There has been a lot on acupuncture here too. There is now little doubt that it’s no more than a theatrical, and not very effective, placebo. So this time I’ll concentrate on Western herbal medicine.

Western Herbal Medicine

Herbal medicine is just a branch of pharmacology and it could be taught as such. But it isn’t. It comes overlaid with much superstitious nonsense. Some of it can be seen in slides from Edinburgh Napier University (the difference being that Napier closed that course, and Westminster hasn’t).

Even if it were taught properly, it wouldn’t be appropriate for a BSc for several reasons.

First, there isn’t a single herbal that has full marketing authorisation from the MHRA. In other words, there isn’t a single herb for which there is good evidence that it works to a useful extent.

Second, the fact that the active principles in plants are virtually always given in an unknown dose makes them potentially dangerous. This isn’t 1950s pharmacology. It’s 1920s pharmacology, dating from a time before methods were worked out for standardising the potency of natural products (see Plants as Medicines).

Third, if you are going to treat illness with chemicals, why restrict yourself to chemicals that occur in plants?

It was the herbal medicine course that gave rise to the most virulent internal complaints at the University of Westminster. These complaints revealed the use of pendulum dowsing by some teachers on the course and the near-illegal, and certainly dangerous, teaching about herbs in cancer.

Here are a few slides from Principles of Herbal Medicine (3CT0 502). The vocabulary seems to be stuck in a time warp. When I first started in the late 1950s, words like tonic, carminative, demulcent and expectorant were common. Over the last 40 years all these words have died out in pharmacology, for the simple reason that it became apparent that there were no such actions. But these imaginary categories are still alive and well in the herbal world.

There was a lecture on categories of drugs so old-fashioned that I’ve never even heard the words: "nervines" and "adaptogens".

[Slides from the lecture on "nervines"]

The "tonics" listed here seem quite bizarre. In the 1950s, “tonics” containing nux vomica (a small dose of strychnine) and gentian (tastes nasty) were common, but they vanished years ago, because they don’t work. None of those named here even get a mention in NCCAM’s Herbs-at-a-glance. Oats? Come on!

The only ‘relaxant’ here for which there is the slightest evidence is Valerian. I recall tincture of Valerian in a late 1950s pharmacy. It smells terrible.

According to NCCAM

  • Research suggests that valerian may be helpful for insomnia, but there is not enough evidence from well-designed studies to confirm this.
  • There is not enough scientific evidence to determine whether valerian works for other conditions, such as anxiety or depression.

Not much, for something that’s been around for centuries.

And for chamomile

  • Chamomile has not been well studied in people so there is little evidence to support its use for any condition.

None of this near-total lack of evidence is mentioned on the slides.

[Lecture slide]

What about the ‘stimulants’? Rosemary? No evidence at all. Tea and coffee aren’t medicine (and not very good stimulants for me either).

Ginseng, on the other hand, is big business. That doesn’t mean it works, of course. NCCAM says of Asian ginseng (Panax ginseng):

  • Some studies have shown that Asian ginseng may lower blood glucose. Other studies indicate possible beneficial effects on immune function.
  • Although Asian ginseng has been widely studied for a variety of uses, research results to date do not conclusively support health claims associated with the herb. Only a few large, high-quality clinical trials have been conducted. Most evidence is preliminary—i.e., based on laboratory research or small clinical trials.

 

[Lecture slide]

Thymoleptics (antidepressants) are defined as "herbs that engender a feeling of wellbeing. They uplift the spirit, improve the mood and counteract depression".

Oats, Lemon balm, Damiana, Vervain, Lavender and Rosemary are just old bits of folklore.

NCCAM says

Some “sleep formula” products combine valerian with other herbs such as hops, lavender, lemon balm, and skullcap. Although many of these other herbs have sedative properties, there is no reliable evidence that they improve insomnia.

 

[Lecture slide]

The only serious contender here is St John’s Wort. At one time this was the prize exhibit for herbalists. It has been shown to be as good as the conventional SSRIs for treatment of mild to moderate depression. Sadly it has turned out that the SSRIs are themselves barely better than placebos. NCCAM says

  • There is scientific evidence that St. John’s wort may be useful for short-term treatment of mild to moderate depression. Although some studies have reported benefits for more severe depression, others have not; for example, a large study sponsored by NCCAM found that the herb was no more effective than placebo in treating major depression of moderate severity.

"Adaptogens" are another figment of the herbalists’ imaginations. They are defined in the lecture thus.

  • Herbs that have a normalising or balancing effect.
  • Mind and body are restored to optimum normal peak,
  • Increase threshold to physical and mental trauma and damage
  • Mental and physical activity and performance improved.
[Lecture slide]

Well, it would be quite nice if such drugs existed. Sadly they don’t.

NCCAM says

  • The evidence for using astragalus for any health condition is limited. High-quality clinical trials (studies in people) are generally lacking.

Another lecture dealt with "stimulating herbs". No shortage of them, it seems.

[Lecture slide]

Well at least one of these has quite well-understood effects in pharmacology, ephedrine, a sympathomimetic amine. It isn’t used much because it can be quite dangerous, even with the controlled dose that’s used in real medicine. In the uncontrolled dose in herbal medicines it is downright dangerous.

[Lecture slides]

 

This is what NCCAM says about Ephedra

  • An NCCAM-funded study that analyzed phone calls to poison control centers found a higher rate of side effects from ephedra, compared with other herbal products.
  • Other studies and systematic reviews have found an increased risk of heart, psychiatric, and gastrointestinal problems, as well as high blood pressure and stroke, with ephedra use.
  • According to the U.S. Food and Drug Administration (FDA), there is little evidence of ephedra’s effectiveness, except for short-term weight loss. However, the increased risk of heart problems and stroke outweighs any benefits.

It seems that what is taught in the BSc Herbal Medicine degree consists largely of folk-lore and old wives’ tales. Some of it could be quite dangerous for patients.

A problem for pharmacognosists

While talking about herbal medicine, it’s appropriate to mention a related problem, though it has nothing to do with the University of Westminster.

My guess is that not many people have even heard of pharmacognosy. If it were not for my humble origins as an apprentice pharmacist in Grange Road, Birkenhead (you can’t get much more humble than that) I might not know either.

Pharmacognosy is a branch of botany, the study of plant drugs. I recall inspecting powdered digitalis leaves under a microscope. In Edinburgh, in the time of the great pharmacologist John Henry Gaddum, medical students might be presented in the oral exam with a jar of calabar beans and required to talk about the anticholinesterase effects of the physostigmine that they contain.

The need for pharmacognosy has now all but vanished, but it hangs on in the curriculum for pharmacy students. This has engendered a certain unease about the role of pharmacognosists. They often try to justify their existence by rebranding themselves as "phytotherapists". There are even journals of phytotherapy. It sounds a lot more respectable than herbalism. At its best, it is more respectable, but the fact remains that there are no herbs whatsoever that have well-documented medical uses.

The London School of Pharmacy is a case in point. Simon Gibbons (Professor of Phytochemistry, Department of Pharmaceutical and Biological Chemistry, The School of Pharmacy) has chosen, for reasons that baffle me, to throw in his lot with the reincarnated Prince of Wales Foundation known as the “College of Medicine”. That organisation exists largely (not entirely) to promote various forms of quackery under the euphemism “integrated medicine”. On their web site he says "Western science is now recognising the extremely high value of herbal medicinal products . . .", despite the fact that there isn’t a single herbal preparation with efficacy sufficient for it to get marketing authorisation in the UK. This is grasping at straws, not science.

The true nature of the "College of Medicine" is illustrated, yet again, by their "innovations network". Their idea of "innovation" includes the Bristol Homeopathic Hospital and the Royal London Hospital for Integrated medicine, both devoted to promoting the utterly discredited late-18th century practice of giving people pills that contain no medicine. Some "innovation".

It baffles me that Simon Gibbons is willing to appear on the same programme as Simon Mills, David Peters and George Lewith. Mills’ ideas can be judged by watching a video of a talk he gave in which he ‘explains’ “hot and cold herbs”. It strikes me as pure gobbledygook. Make up your own mind. He too has rebranded himself as a "phytotherapist", though in fact he’s an old-fashioned herbalist with no concern for good evidence. David Peters is the chap who, as Clinical Director of the University of Westminster’s ever-shrinking School of Quackery, tolerates dowsing as a way to select ‘remedies’.

The present chair of Pharmacognosy at the School of Pharmacy is Michael Heinrich. He, with Simon Gibbons, has written a book Fundamentals of pharmacognosy and phytotherapy. As well as much good chemistry, it contains this extraordinary statement

“TCM [traditional Chinese medicine] still contains very many remedies which were selected by their symbolic significance rather than their proven effects; however this does not mean that they are all ‘quack’remedies! There may even be some value in medicines such as tiger bone, bear gall, turtle shell, dried centipedes, bat dung and so on. The herbs, however, are well researched and are becoming increasingly popular as people become disillusioned with Western Medicine.”

It is irresponsible to give any solace at all to the wicked industries that kill tigers and torture bears to extract their bile. And it is simply untrue that “herbs are well-researched”. Try the test.

A simple test for herbalists. Next time you encounter a herbalist, ask them to name the herb for which there is the best evidence of benefit when given for any condition. Mostly they refuse to answer, as was the case with Michael McIntyre (but he is really an industry spokesman with few scientific pretensions). I asked Michael Heinrich, Professor of Pharmacognosy at the School of Pharmacy. Again I couldn’t get a straight answer. Usually, when pressed, the two things that come up are St John’s Wort and Echinacea. Let’s see what The National Center for Complementary and Alternative Medicine (NCCAM) has to say about them. NCCAM is the branch of the US National Institutes of Health which has spent around a billion dollars of US taxpayers’ money on research into alternative medicine. For all that effort they have failed to come up with a single useful treatment. Clearly they should be shut down. Nevertheless, as an organisation that is enthusiastic about alternative medicine, their view can only be overoptimistic.

For St John’s Wort, NCCAM says

  • There is scientific evidence that St. John’s wort may be useful for short-term treatment of mild to moderate depression. Although some studies have reported benefits for more severe depression, others have not; for example, a large study sponsored by NCCAM found that the herb was no more effective than placebo in treating major depression of moderate severity.

For Echinacea NCCAM says

  • Study results are mixed on whether echinacea can prevent or effectively treat upper respiratory tract infections such as the common cold. For example, two NCCAM-funded studies did not find a benefit from echinacea, either as Echinacea purpurea fresh-pressed juice for treating colds in children, or as an unrefined mixture of Echinacea angustifolia root and Echinacea purpurea root and herb in adults. However, other studies have shown that echinacea may be beneficial in treating upper respiratory infections.

If these are the best ones, heaven help the rest.

Follow-up

Jump to follow-up

Almost all the revelations about what’s taught on university courses in alternative medicine have come from post-1992 universities. (For readers not in the UK, post-1992 universities are the many new universities created in 1992, from former polytechnics etc, and Russell group universities are the "top 20" research-intensive universities)

It is true that all the undergraduate courses are in post-1992 universities, but the advance of quackademia is by no means limited to them. The teaching at St Bartholomew’s Hospital Medical School, one of the oldest, was pretty disgraceful for example, though after protests from their own students, and from me, it is now better, I believe.


Quackery creeps into all universities to varying extents. The good ones (like Southampton) don’t run "BSc" degrees, but it still infiltrates through two main sources.

The first is via their HR departments, which are run by people who tend to be (I quote) "credulous and moronic" when it comes to science.

The other main source is in teaching to medical students. The General Medical Council says that medical students must know something about alternative medicine, and that’s quite right: a lot of their patients will use it. The problem is that the guidance is shockingly vague.

“They must be aware that many patients are interested in and choose to use a range of alternative and complementary therapies. Graduates must be aware of the existence and range of such therapies, why some patients use them, and how these might affect other types of treatment that patients are receiving.” (from Tomorrow’s Doctors, GMC)

In many medical schools, the information that medical students get is quite accurate. At UCL and at King’s (London) I have done some of the familiarisation myself. In other, otherwise good, medical schools the students get some shocking stuff. St Bartholomew’s Hospital Medical School was one example. Edinburgh University was another.
But there is one Russell group university where alternative myths are propagated more than any other that I know about. That is the University of Southampton.

In general, Southampton is a good place, I worked there for three years myself (1972 – 1975). The very first noise spectra I measured were calculated on a PDP computer in their excellent Institute of Sound and Vibration Research, before I wrote my own programs to do it.

But Southampton also has a Complementary and Integrated Medicine Research Unit. Oddly the unit’s web site, http://www.cam-research-group.co.uk, is not a university address, and a search of the university’s web site for “Complementary and Integrated Medicine Research Unit” produces no result. Nevertheless the unit is “within the School of Medicine at the University of Southampton”.

Notice the usual euphemisms ‘complementary’ and ‘integrated’ in the title: the word ‘alternative’ is never used. This sort of word play is part of the bait and switch approach of alternative medicine.

The unit is quite big: ten research staff, four PhD students and two support staff. It is headed by George Lewith.

Teaching about alternative medicine to Southampton medical students.

The whole medical class seems to get quite a lot compared with other places I know about. That’s 250 students (210 on the 5-year course plus another 40 from the 4-year graduate-entry route).

Year 1:  Lecture by David Owen on ‘holism’ within the Foundation Course given to all 210 medical students doing the standard (5-year) course.

 

Year 2: Lecture by Lewith (on complementary medicine, focusing on acupuncture for pain) given within the nervous systems course to the whole medical student year-group (210 students).

 

Year 3 SBOM (scientific basis of medicine) symposium: The 3-hour session (“Complementary or Alternative Medicine: You Decide”). I’m told that attendance at this symposium is often pretty low, but many do turn up and all of them are officially ‘expected’ to attend.

 

There is also an optional CAM special study module chosen by 20 students in year 3. In addition, a small number of medical students (perhaps 2-3 each year?) choose to do a BMedSci research project supervised by the CAM research group, involving 16-18 weeks of study from October to May in Year 4. The CAM research group also supervises postgraduate students doing PhD research.

As always, a list of lectures doesn’t tell you much. What we need to know is what’s taught to the students and something about the people who teach it. The other interesting question is how it comes about that alternative medicine has been allowed to become so prominent in a Russell group university. It must have support from on high. In this case it isn’t hard to find out where it comes from. Here are some details.

Year 1 Dr David Owen

David Owen is not part of Lewith’s group, but a member of the Division of Medical Education headed by Dr Faith Hill (of whom, more below). He’s one of the many part-time academics in this area, being also a founder of The Natural Practice.

Owen is an advocate of homeopathy (a past president of the Faculty of Homeopathy). Homeopathy is, of course, the most barmy and discredited of all the popular sorts of alternative medicine. Among those who have discredited it is the head of the alt med unit, George Lewith himself (though oddly he still prescribes it).

And he’s also a member of the British Society of Environmental Medicine (BSEM). That sounds like a very respectable title, but don’t be deceived. It is an organisation that promotes all sorts of seriously fringe ideas. All you have to do is notice that the star speaker at their 2011 conference was none other than used-to-be a doctor, Andrew Wakefield, a man who has been responsible for the death of children from measles by causing an unfounded scare about vaccination on the basis of data that turned out to have been falsified. There is still a letter of support for Wakefield on the BSEM web site.

The BSEM specialises in exaggerated claims about ‘environmental toxins’ and uses phony allergy tests like kinesiology and the Vega test that misdiagnose allergies, but provide an excuse to prescribe expensive but unproven nutritional supplements, or expensive psychobabble like "neuro-linguistic programming".

Other registered "ecological physicians" include the infamous Dr Sarah Myhill, who, in 2010, was the subject of a damning verdict by the GMC, and Southampton’s George Lewith.

If it is wrong to expose medical students to someone who believes that dose-response curves have a negative slope (the smaller the dose the bigger the effect -I know, it’s crazy), then it is downright wicked to expose students to a supporter of Andrew Wakefield.

David Owen’s appearance on Radio Oxford, with the indomitable Andy Lewis, is described on Lewis’s Quackometer blog.

Year 2 Dr George Lewith

Lewith is a mystery wrapped in an enigma. He’s participated in some research that is quite good by the (generally pathetic) standards of the world of alternative medicine.

In 2001 he showed that the Vega test did not work as a method of allergy diagnosis. "Conclusion Electrodermal testing cannot be used to diagnose environmental allergies", published in the BMJ [download reprint].

In 2003 he published "A randomized, double-blind, placebo-controlled proving trial of Belladonna 30C” [download reprint], which showed that homeopathic pills with no active ingredients had no effects. The conclusion was "Ultramolecular homeopathy has no observable clinical effects" (the word ultramolecular, in this context, means that the belladonna pills contained no belladonna).

In 2010 he again concluded that homeopathic pills were no more than placebos, as described in Despite the spin, Lewith’s paper surely signals the end of homeopathy (again). [download reprint]

What I cannot understand is that, despite his own findings, his private practice continues to use the Vega machine and to prescribe homeopathic pills. And he continues to preach this subject to unfortunate medical students.

Lewith is also one of the practitioners recommended by BSEM. He’s a director of the "College of Medicine". And he’s also an advisor to a charity called Yes To Life. (see A thoroughly dangerous charity: YesToLife promotes nonsense cancer treatments).

3rd year Student Selected Unit

The teaching team includes:

  • David Owen – Principal Clinical Teaching Fellow SoM, Holistic Physician
  • George Lewith – Professor of Health Research and Consultant Physician
  • Caroline Eyles – Homeopathic Physician
  • Susan Woodhead – Acupuncturist
  • Elaine Cooke – Chiropractic Practitioner
  • Phine Dahle – Psychotherapist
  • Keith Carr – Reiki Master
  • Christine Rose – Homeopath and GP
  • David Nicolson – Nutritionalist
  • Shelley Baker – Aromatherapist
  • Cheryl Dunford – Hypnotherapist
  • Dedj Leibbrandt – Herbalist

More details of the teaching team here. There is not a single sceptic among them, so the students don’t get a debate, just propaganda.

In this case, there’s no need for the Freedom of Information Act. The handouts and the powerpoints are on their web site. They seem to be proud of them.

Let’s look at some examples

Chiropractic makes an interesting case, because, in the wake of the Singh-BCA libel case, the claims of chiropractors have been scrutinised as never before and most of their claims have turned out to be bogus. There is a close relationship between Lewith’s unit and the Anglo-European Chiropractic College (the 3rd year module includes a visit there). In fact the handout provided for students, Evidence for Chiropractic Care, was written by the College. It’s interesting because it provides no real evidence whatsoever for the effectiveness of chiropractic care. It’s fairly honest in stating that the view at present is that, for low back pain, it isn’t possible to detect any difference between the usefulness of manipulation by a physiotherapist, by an osteopath or by a chiropractor. Of course it does not draw the obvious conclusion that this makes chiropractic and osteopathy entirely redundant: you can get the same result without all the absurd mumbo jumbo that chiropractors and osteopaths love, or their high-pressure salesmanship and superfluous X-rays. Neither does it mention the sad, but entirely possible, outcome that none of the manipulations are effective for low back pain. There is, for example, no mention of the fascinating paper by Artus et al [download reprint]. This paper concludes

"symptoms seem to improve in a similar pattern in clinical trials following a wide
variety of active as well as inactive treatments."

This paper was brought to my attention through the blog run by the excellent physiotherapist, Neil O’Connell. He comments

“If this finding is supported by future studies it might suggest that we can’t even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the murky spectre of regression to the mean and the beautiful phenomenon of natural recovery.”

This sort of critical thinking is conspicuously absent from this (and all the other) Southampton handouts. The handout is a superb example of bait and switch: No nonsense about infant colic, innate energy or imaginary subluxations appears in it.

Acupuncture is another interesting case because there is quite a lot of research evidence, in stark contrast to the rest of traditional Chinese medicine, for which there is very little research.

There is a powerpoint show by Susan Woodhead (though it is labelled British Acupuncture Council).

The message is simple and totally uncritical. It works.

[Slide from the acupuncture presentation]

In fact there is now a broad consensus about acupuncture.

(1) Real acupuncture and sham acupuncture have been found to be indistinguishable in many trials. This is the case regardless of whether the sham is a retractable needle (or even a toothpick) in the "right" places, or whether it is real needles inserted in the "wrong" places. The latter finding shows clearly that all that stuff about meridians and flow of Qi is sheer hocus pocus. It dates from a pre-scientific age and it was wrong.

(2) A non-blind comparison of acupuncture versus no acupuncture shows an advantage for acupuncture. But the advantage is usually too small to be of any clinical significance. In all probability it is a placebo effect - it’s hard to imagine a more theatrical event than having someone in a white coat stick long needles into you, like a voodoo doll. Sadly, the placebo effect isn’t big enough to be of much use.

Needless to say, none of this is conveyed to the medical students of Southampton. Instead they are shown crude ancient ideas that date from long before anything was known about physiology as though they were actually true. These folks truly live in some alternative universe. Here are some samples from the acupuncture powerpoint show by Susan Woodhead.

[Slides from the acupuncture presentation]

Well this is certainly a "different diagnostic language", but no attempt is made to say which one is right. In the mind of the acupuncturist it seems both are true. It is a characteristic of alternative medicine advocates that they have no difficulty in believing simultaneously several mutually contradictory propositions.

As a final example of barminess, just look at the acupuncture points (allegedly) on the ear. The fact that this is favoured by some people in the Pentagon as "battlefield acupuncture" is more reminiscent of the mad general, Jack D. Ripper, in Dr Strangelove than it is of science.

[Slide showing the alleged acupuncture points on the ear]

There is an equally uncritical handout on acupuncture by Val Hopwood. It’s dated March 2003, a time before some of the most valuable experiments were done.

The handout says "sham acupuncture is generally less effective than true acupuncture", precisely the opposite of what’s known now. And there are some bits that give you a good laugh, always helpful in teaching. I like

“There is little doubt that an intact functioning nervous system is required for acupuncture to produce analgesia or, for that matter, any physiological changes” 

and

Modern techniques: These include hybrid techniques such as electro-acupuncture . . . and Ryadoraku [sic] therapy and Vega testing. 

Vega testing!! That’s been disproved dozens of times (not least by George Lewith). And actually the other made-up nonsense is spelled Ryodoraku.

It’s true that there is a short paragraph at the end of the handout headed "Scientific evaluation of acupuncture" but it doesn’t cite a single reference and reads more like excuses for why acupuncture so often fails when it’s tested properly.

Homeopathy. Finally a bit about that most boring of topics, the laughable medicine that contains no medicine, homeopathy. Caroline Eyles is a member of the Society of Homeopaths, the organisation that did nothing when its members were caught out in the murderous practice of recommending homeopathy for prevention of malaria. The Society of Homeopaths also endorses Jeremy Sherr, a man so crazy that he believes he can cure AIDS and malaria with sugar pills.

The homeopathy handout given to the students has 367 references, but somehow manages to omit the references to their own boss’s work showing that the pills are placebos. The handout has all the sciencey-sounding words, abused by people who don’t understand them.

"The remedy will be particularly effective if matched to the specific/particular characteristics of the individual (the ‘totality’ of the patient) on all levels, including the emotional and mental levels, as well as just the physical symptoms. ‘Resonance’ with the remedy’s curative power will then be at it’s [sic] best." 

The handout is totally misleading about the current state of research. It says

"increasing clinical research confirms it’s [sic] clinical effectiveness in treating patients, including babies and animals (where a placebo effect would be hard to justify)."

The powerpoint show by Caroline Eyles shows all the insight of a mediaeval vitalist.

[Slides from the homeopathy presentation]

Anyone who has to rely on the utterly discredited Jacques Benveniste as evidence is clearly clutching at straws. What’s more interesting about this slide is the admission that "reproducibility is a problem - oops, an issue" and that RCTs (done largely by homeopaths of course) have "various methodological flaws and poor external validity". You’d think that if that was the best that could be produced after 200 years, they’d shut up shop and get another job. But, like aging vicars who long since stopped believing in god, but are damned if they’ll give up the nice country rectory, they struggle on, sounding increasingly desperate.

How have topics like this become so embedded in a medical course at a Russell group university?

The details above are a bit tedious and repetitive. It’s already established that hardly any alternative medicine works. Don’t take my word for it. Check the web site of the US National Center for Complementary and Alternative Medicine (NCCAM) who, at a cost of over $2 billion have produced nothing useful.

A rather more interesting question is how a good university like Southampton comes to be exposing its medical students to teaching like this. There must be some powerful allies higher up in the university. In this case it’s pretty obvious who they are.

Professor Stephen Holgate MD DSc CSc FRCP FRCPath FIBiol FBMS FMed Sci CBE has to be the primary suspect. He’s listed as one of Southampton’s Outstanding Academics. His work is nothing to do with alternative medicine but he’s been a long-term supporter of the late unlamented Prince of Wales’ Foundation, and he’s now on the advisory board of its successor, the so-called "College of Medicine" (for more information about that place see the new “College of Medicine” arising from the ashes of the Prince’s Foundation for Integrated Health, and also Don’t be deceived. The new “College of Medicine” is a fraud and delusion). His description on that site reads thus.

"Stephen Holgate is MRC Clinical Professor of Immunopharmacology at the University of Southampton School of Medicine and Honorary Consultant Physician at Southampton University Hospital Trust. He is also chair of the MRC’s Populations and Systems Medicine Board. Specialising in respiratory medicine, he is the author of over 800 peer-reviewed papers and contributions to scientific journals and editor of major textbooks on asthma and rhinitis. He is Co-Editor of Clinical and Experimental Allergy, Associate Editor of Clinical Science and on the editorial board of 25 other scientific journals."

Clearly a busy man. Personally I’m deeply suspicious of anyone who claims to be the author of over 800 papers. He graduated in medicine in 1971, so that is an average of over 20 papers a year since then, one every two or three weeks. I’d have trouble reading that many, never mind writing them.

Holgate’s long-standing interest in alternative medicine is baffling. He’s published on the topic with George Lewith, who, incidentally, is one of the directors of the "College of Medicine".

It may be unkind to mention that, for many years now, I’ve been hearing rumours that Holgate is suffering from an unusually bad case of Knight starvation.

The Division of Medical Education appears to be the other big source of support for anti-scientific medicine. That is very odd, I know, but it was also the medical education people who were responsible for mis-educating medical students at St Bartholomew’s and at Edinburgh University. Southampton’s Division of Medical Education has a mind-boggling 60 academic and support staff. Two of them are of particular interest here.

Faith Hill is director of the division. Her profile doesn’t say anything about alternative medicine, but her interest is clear from a 2003 paper, Complementary and alternative medicine: the next generation of health promotion?. The research consisted of reporting anecdotes from interviews of 52 unnamed people (this sort of thing seems to pass for research in the social sciences). It starts badly by misrepresenting the conclusions of the House of Lords report (2000) on CAM. Although it comes to no useful conclusions, it certainly shows a high tolerance of nonsensical treatments.

Chris Stephens is Associate Dean of Medical Education & Student Experience. His sympathy is shown by a paper he wrote in 2001, with David Owen (the homeopath, above) and George Lewith: Can doctors respond to patients’ increasing interest in complementary and alternative medicine? Two of the conclusions of this paper were as follows.

"Doctors are training in complementary and alternative medicine and report benefits both for their patients and themselves"

Well, no actually. It wasn’t true then, and it’s probably even less true now. There’s now a lot more evidence and most of it shows alternative medicine doesn’t work.

"Doctors need to address training in and practice of complementary and alternative medicine within their own organisations"

Yes they certainly need to do that.

And the first thing that Drs Hill and Stephens should do is look a bit more closely at what’s taught in their own university. I hope that this post helps them.

Follow-up

4 July 2011. A correspondent has just pointed out that Chris Stephens is a member of the General Chiropractic Council. The GCC is a truly pathetic pseudo-regulator. In the wake of the Simon Singh affair it has been kept busy fending off well-justified complaints against untrue claims made by chiropractors. The GCC is a sad joke, but it’s even sadder to see a Dean of Medical Education at the University of Southampton being involved with an organisation that has treated little matters of truth with such disdain.

A rather unkind tweet from (ex)-chiropractor @RichardLanigan.

“Chris is just another light weight academic who likes being on committees. Regulatory bodies are full of them”

Jump to follow-up

Reply to David Katz.

The Atlantic is an American magazine founded (as The Atlantic Monthly) in Boston, Massachusetts, in 1857. It is a literary and cultural magazine with a very distinguished history. Its contributors include Mark Twain and Martin Luther King. So it was pretty exciting to be asked to write something for it, even with a 12 hour deadline.


Sadly though, in recent years, the coverage of science in The Atlantic has been less than good. The inimitable David Gorski has explained the problem in Blatant pro-alternative medicine propaganda in The Atlantic. The immediate cause of the kerfuffle was the publication of an article, The Triumph of New-Age Medicine. It was written by a journalist, David Freedman. It is very long and really not very good. It has been deconstructed also by Steven Novella.

Freedman’s article is very long, but it boils down to saying “I know it doesn’t work, but isn’t it nice”. The article was followed up with Fix or Fraud: a ‘debate’. The debate is rather disappointing. It suffers from the problem, not unknown at the BBC, of thinking that ‘balance’ means giving equal time to people who think the earth is flat as to people who think it is an oblate spheroid. The debate consists of 800-word contributions from seven people, six of whom are flat earthers, and one of whom is very good. Try Steven Salzberg, A ‘triumph’ of hype over reality, for some real sense. One of the flat earthers is director of a National Institutes of Health institute, NCCAM.

And this is a magazine that published not only Mark Twain, but Abraham Flexner, the man who, in 1910, put US medical education on a firm scientific footing. You can read Flexner in their archive. Mark Twain said

[A reply to letters recommending remedies]:

Dear Sir (or Madam):–I try every remedy sent to me. I am now on No. 67. Yours is 2,653. I am looking forward to its beneficial results. – quoted in My Father Mark Twain, by Clara Clemens

and

"allopathy is good for the sane and homeopathy for the insane"

So here is the piece, produced rather rapidly, for the debate. This is the original unedited version, slightly longer than appears in The Atlantic.

The title for The Atlantic piece, America, Land of the Health Hucksters, was theirs, not mine. There is no shortage of health hucksters in the UK, but at least they mostly haven’t become as embedded within universities and hospitals as they are in the USA.

David Freedman’s article, “The Triumph of New Age Medicine”, starts by admitting that most alternative treatments don’t work, and ends by recommending them.  He takes a lot more words to say it, but that seems a fair synopsis.  It is the sort of thing you might expect in a cheap supermarket magazine, not in The Atlantic.

The article is a prime example of rather effective sales technique, much beloved of used car salesmen and health hucksters. It’s called bait and switch.

It’s true that medicine can’t cure everything. That’s hardly surprising given that serious research has been going on for barely 100 years, and it turns out that humans are quite complicated.  But the answer to the limitations of medicine is not to invent fairy stories, which is what the alternative medicine industry does.  There is no sensible option but to keep the research going and to test its results honestly.  It’s sad but true that Big Pharma has at times corrupted medicine, by concealing negative results.  But that corruption has been revealed by real scientists, not by health hucksters.  In the end, science is self-correcting and the truth emerges.  Health hucksters, on the other hand, seem incapable of giving up their beliefs whatever the evidence says.

The idea of patient-centered care is fashionable and care is great, if you can’t cure.  But there’s a whole spectrum in the wellbeing industry, from serious attempts to make people happier, to the downright nuts.  The problem is that caring for patients makes very good bait, and the switch to woo tends to follow not far behind.

I write from the perspective of someone who lives in a country that achieves health care for all its citizens at half the cost of the US system, and gets better outcomes in life expectancy and infant mortality. The view from outside is that US medicine rather resembles US religion. It has been taken over by fundamentalists who become very rich by persuading a gullible public to believe things that aren’t true.

One of Freedman’s problems is, I think, that he vastly overestimates the power of the placebo effect.  It exists, for sure, but in most cases, it seems to be small, erratic and transient.  Acupuncture is a good example.  It’s quite clear now that real acupuncture and sham acupuncture are indistinguishable, so it’s also quite clear that the ‘principles’ on which it’s based are simply hokum.  If you do a non-blind comparison of acupuncture with no acupuncture, there is in some trials (not all) a small advantage for the acupuncture group.  But it is too small to be of much benefit to the patient.

By far the more important reason why ineffective voodoo like acupuncture appears to work is the “get better anyway” effect (known technically as regression to the mean).  You take the needles or pills when you are at your worst; the next day you feel better.  It’s natural to attribute the fact that you feel better to the needles or pills, when all you are seeing is natural fluctuation in the condition.  It’s like saying Echinacea will cure your cold in only seven days when otherwise it would have taken a week.
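Regression to the mean is easy to demonstrate. The following sketch (plain Python with NumPy; the patients, scores and all the numbers are invented purely for illustration, not data from any real condition or trial) gives every simulated patient a fluctuating symptom score and no treatment whatever. If they are "enrolled" on the day they feel worst, the next measurement is, on average, a good deal better, though nothing at all was done.

```python
# Toy illustration of regression to the mean: patients reach for the pills or
# needles when their fluctuating condition is at its worst, and on average
# they improve afterwards with no treatment whatsoever.
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_days = 10_000, 30

# Each patient has a stable underlying severity plus day-to-day fluctuation
baseline = rng.normal(50, 5, size=(n_patients, 1))          # long-run average score (higher = worse)
daily = baseline + rng.normal(0, 10, size=(n_patients, n_days))

worst_day = daily[:, :-1].argmax(axis=1)                     # the day each patient felt worst
rows = np.arange(n_patients)
score_at_worst = daily[rows, worst_day]
score_next_day = daily[rows, worst_day + 1]

print(f"mean score on the worst day : {score_at_worst.mean():.1f}")
print(f"mean score on the next day  : {score_next_day.mean():.1f}")
# The next-day score falls back towards the long-run average (about 50),
# even though nothing was done: the 'get better anyway' effect.
```

That spontaneous improvement is exactly what gets credited to the pills or the needles in an uncontrolled comparison.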

If the article itself was naïve and uncritical, the follow up was worse. It is rather surprising to me that a magazine like Atlantic should think it worth printing an advertorial for Andrew Weil’s business.

Surely, though, Josephine Briggs, as director of an NIH institute, is more serious?  Sadly, no.  Her piece is a masterpiece of clutching at straws. The fact is that her institute has spent over $2 billion of US taxpayers’ money and, for all that money, it has produced not a single useful treatment.  All that NCCAM has done is to show that several things do not work, something we pretty much knew already. If I were a US taxpayer, I’d be somewhat displeased by that. It should be shut down now.

At first sight Dean Ornish sounds more respectable.  He bases his arguments on diet and life style changes, which aren’t alternative at all.  He’s done some research too.  The problem is that it’s mostly preliminary and inconclusive research, on the basis of which he vastly exaggerates the strength of the evidence for what can be achieved by diet alone. It’s classical bait and switch again.  The respectable, if ill-founded, arguments get you the foot in the door, and the woo follows later.

This is all very sad for a country that realized quite early that the interests of patients were best served by using treatments that had been shown to work. The Flexner report of 1910 led the world in the rational education of physicians. But now even places like Yale and Harvard peddle snake oil to their students through their "integrative medicine" departments. It’s hard to see why the USA is in the vanguard of substituting wishful thinking for common sense and reason.

The main reason, I’d guess, is money. Through NCCAM and the Bravewell Collaborative, large amounts of money have been thrown to the winds and businesses like Yale and Harvard have been quick to abandon their principles and grab the money. Another reason for the popularity of alternative medicine in parts of academia is that it’s a great deal easier to do ‘science’ when you are allowed to make up the answers. The "integrative medicine" symposium held at Yale in 2008 boggled the mind. Dr David Katz listed a lot of things he’d tried and which failed to work. His conclusion was not that they should be abandoned, but that we needed a "more fluid concept of evidence".  You can see it on YouTube.

Senator Tom Harkin’s promotion of NCCAM has done for the U.S. reputation in medicine what Dick Cheney did for the U.S. reputation in torture. It is hard to look at the USA from outside without thinking of the decline and fall of the Roman Empire.

One had hoped that era was over with the election of Obama, but the hucksters won’t give up without a fight. They are making too much money to do that.

Follow-up

The comments that appeared in The Atlantic on this piece were mostly less than enlightening - not quite what one expected of an intellectual magazine. Nevertheless I tried to answer all but the plainly abusive comments.

More interesting, though, was an editorial by Jennie Rothenberg Gritz, the Atlantic Senior Editor who asked me to contribute: The Man Who Invented Medical School. It picked up on my mention of Abraham Flexner, and his famous 1910 report [download from Carnegie Foundation] which first put US medical education on a firm rational footing, based on science. Now, 100 years later, that’s being unpicked both in the USA and here. Ms Gritz seemed to think that Flexner would have approved of Dean Ornish. In a response I begged to differ. I’m pretty sure that Flexner would have been furious if he could have seen the recent march of quackademia, particularly, but not exclusively, in the USA. It is exactly the sort of thing his report set out, successfully, to abolish. He wrote, for example,

“the practitioner is subjected, year in, year out, to the steady bombardment of the unscrupulous manufacturer, persuasive to the uncritical, on the principle that “what I tell you three times is true.” Against bad example and persistent asseveration, only precise scientific concepts and a critical appreciation of the nature and limits of actual demonstration can protect the young physician.” (Flexner report, 1910, pp 64-65)

It is this very “appreciation of the nature and limits of actual demonstration” that is now being abandoned by the alternative medicine industry. Despite the fact that real medicine was in its infancy at the time he was writing, he was very perceptive about the problems. Perhaps Freedman should read the report.

David Katz seems to have spotted my piece in The Atlantic, and has responded at great length in the Huffington Post (quite appropriate, given the consistent support of HuffPo for nonsense medicine). HuffPo allows only short comments with no links so I’ll reply to him here.

I fear that Dr Katz doth protest a great deal too much. He seems to object to a comment that I made about him in The Atlantic.

“… [He] listed a lot of things he’d tried and which failed to work. His conclusion was not that they should be abandoned, but that we needed a ‘a more fluid concept of evidence.'”

You don’t have to take my word for it. You can take it from the words of Dr Katz.

"What do we do when the evidence we have learned, or if we care to be more provocative, with which we have been indoctrinated, does now fully meet the needs of our patients"

It seems odd to me to regard teaching about how you distinguish what’s true and what isn’t as "indoctrination", though I can understand that knowledge of that subject could well diminish the income of alternative practitioners. You went on to say

"Some years ago the CDC funded us with a million dollars to do what they referred to initially as a systematic review of the evidence base for complementary and alternative medicine,  Anybody who’s ever been involved in systematic reviews knows that’s a very silly thing. . . . Well we knew it was silly but a million dollars sounded real [mumbled] took the money and then we figurered we’d figure out what to do with it [smiles broadly]. That’s what we did ". . .

I do hope you told the CDC that you did not spend the million dollars for the sensible purpose for which it was awarded.

"This infusion of calcium, magnesium and D vitamins and vitamin C ameliorates the symptoms of fibromyalgia.  . . .  We did typical placebo controlled randomized double-blind trial for several months . . . we saw an improvement in both our treatment and placebo groups . . ."

You then describe how you tested yoga for asthma and homeopathy for attention deficit hyperactivity disorder. Neither of them worked either. Your reaction to this string of failures was just to say “we need a more fluid concept of evidence”.

After telling an anecdote about one patient who got better after taking homeopathic treatment you said “I don’t care to get into a discussion of how, or even whether, homeopathy works”.  Why not?  It seems it doesn’t matter much to you whether the things you sell to patients work or not.

You then went on to describe, quite accurately, that anti-oxidants don’t work and neither do multivitamin supplements for prevention of cardiovascular problems. And once again you fail to accept the evidence, even evidence you have found yourself. Your response was

“So here too is an invitation to think more fluidly about of evidence. Absence of evidence is not evidence of absence.” 

That last statement is the eternal cry of every quack.  It’s true, of course, but that does not mean that absence of evidence gives you a licence to invent the answer.  But inventing the answer is what you do, time after time. You seem quite incapable of saying the most important thing that anyone in your position should: “I don’t know the answer”.

Jump to follow-up

One wonders about the standards of peer review at the British Journal of General Practice. The June issue has a paper, "Acupuncture for ‘frequent attenders’ with medically unexplained symptoms: a randomised controlled trial (CACTUS study)". It has lots of numbers, but the result is very easy to see. Just look at their Figure.

Paterson BJGP

There is no need to wade through all the statistics; it’s perfectly obvious at a glance that acupuncture has at best a tiny and erratic effect on any of the outcomes that were measured.

But this is not what the paper said. On the contrary, the conclusions of the paper said

Conclusion

The addition of 12 sessions of five-element acupuncture to usual care resulted in improved health status and wellbeing that was sustained for 12 months.

How on earth did the authors manage to reach a conclusion like that?

The first thing to note is that many of the authors are people who make their living largely from sticking needles in people, or from advocating alternative medicine. The authors are Charlotte Paterson, Rod S Taylor, Peter Griffiths, Nicky Britten, Sue Rugg, Jackie Bridges, Bruce McCallum and Gerad Kite, on behalf of the CACTUS study team. The senior author, Gerad Kite MAc, is principal of the London Institute of Five-Element Acupuncture. The first author, Charlotte Paterson, is a well known advocate of acupuncture, as is Nicky Britten.

The conflicts of interest are obvious, but nonetheless one should welcome a “randomised controlled trial” done by advocates of alternative medicine. In fact the results shown in the Figure are both interesting and useful. They show that acupuncture does not even produce any substantial placebo effect. It’s the authors’ conclusions that are bizarre and partisan. Peer review is indeed a broken process.

That’s really all that needs to be said, but for nerds, here are some more details.

How was the trial done?

The description "randomised" is fair enough, but there were no proper controls and the trial was not blinded. It was what has come to be called a "pragmatic" trial, which means a trial done without proper controls. They are, of course, much loved by alternative therapists because their therapies usually fail in proper trials. It’s much easier to get an (apparently) positive result if you omit the controls. But the fascinating thing about this study is that, despite the deficiencies in design, the result is essentially negative.

The authors themselves spell out the problems.

“Group allocation was known by trial researchers, practitioners, and patients”

So everybody (apart from the statistician) knew what treatment a patient was getting. This is an arrangement that is guaranteed to maximise bias and placebo effects.

"Patients were randomised on a 1:1 basis to receive 12 sessions of acupuncture starting immediately (acupuncture group) or starting in 6 months’ time (control group), with both groups continuing to receive usual care."

So it is impossible to compare acupuncture and control groups at 12 months, contrary to what’s stated in Conclusions.

"Twelve sessions, on average 60 minutes in length, were provided over a 6-month period at approximately weekly, then fortnightly and monthly intervals"

That sounds like a pretty expensive way of getting next to no effect.

"All aspects of treatment, including discussion and advice, were individualised as per normal five-element acupuncture practice. In this approach, the acupuncturist takes an in-depth account of the patient’s current symptoms and medical history, as well as general health and lifestyle issues. The patient’s condition is explained in terms of an imbalance in one of the five elements, which then causes an imbalance in the whole person. Based on this elemental diagnosis, appropriate points are used to rebalance this element and address not only the presenting conditions, but the person as a whole".

Does this mean that the patients were told a lot of mumbo jumbo about “five elements” (fire, earth, metal, water, wood)? If so, anyone with any sense would probably have run a mile from the trial.

"Hypotheses directed at the effect of the needling component of acupuncture consultations require sham-acupuncture controls which while appropriate for formulaic needling for single well-defined conditions, have been shown to be problematic when dealing with multiple or complex conditions, because they interfere with the participative patient–therapist interaction on which the individualised treatment plan is developed. 37–39 Pragmatic trials, on the other hand, are appropriate for testing hypotheses that are directed at the effect of the complex intervention as a whole, while providing no information about the relative effect of different components."

Put simply that means: we don’t use sham acupuncture controls so we can’t distinguish an effect of the needles from placebo effects, or get-better-anyway effects.

"Strengths and limitations: The ‘black box’ study design precludes assigning the benefits of this complex intervention to any one component of the acupuncture consultations, such as the needling or the amount of time spent with a healthcare professional."

"This design was chosen because, without a promise of accessing the acupuncture treatment, major practical and ethical problems with recruitment and retention of participants were anticipated. This is because these patients have very poor self-reported health (Table 3), have not been helped by conventional treatment, and are particularly desperate for alternative treatment options.". 

It’s interesting that the patients were “desperate for alternative treatment”. Again it seems that every opportunity has been given to maximise non-specific placebo, and get-well-anyway effects.

There is a lot of statistical analysis and, unsurprisingly, many of the differences don’t reach statistical significance. Some do (just) but that is really quite irrelevant. Even if some of the differences are real (not a result of random variability), a glance at the figures shows that their size is trivial.
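To see why a handful of (just) “significant” P values mean so little when many outcomes are tested, here is a back-of-the-envelope calculation. It is a minimal sketch in Python, purely for illustration: it assumes the tests are independent, which they will not be exactly, but the point survives.

    # Chance of at least one spuriously "significant" result when each of
    # n_tests comparisons is judged at the conventional P < 0.05 level.
    # The numbers of tests are illustrative, not taken from the paper.
    alpha = 0.05
    for n_tests in (1, 5, 15, 24):
        p_any_false_positive = 1 - (1 - alpha) ** n_tests
        print(f"{n_tests:2d} tests: P(at least one false positive) = "
              f"{p_any_false_positive:.2f}")

With 15 comparisons the chance of at least one false positive is already above 50 percent, even if the treatment does nothing whatever.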

My conclusions

(1) This paper, though designed to be susceptible to almost every form of bias, shows staggeringly small effects. It is the best evidence I’ve ever seen that not only are needles ineffective, but that placebo effects, if they are there at all, are trivial in size and of no useful benefit to the patient in this case.

(2) The fact that this paper was published with conclusions that appear to contradict directly what the data show is as good an illustration as any I’ve seen that peer review is utterly ineffective as a method of guaranteeing quality. Of course the editor should have spotted this. It appears that quality control failed on all fronts.

Follow-up

In the first four days of this post, it got over 10,000 hits (almost 6,000 unique visitors).

Margaret McCartney has written about this too, in The British Journal of General Practice does acupuncture badly.

The Daily Mail exceeds itself in an article by Jenny Hope which says “Millions of patients with ‘unexplained symptoms’ could benefit from acupuncture on the NHS, it is claimed”. I presume she didn’t read the paper.

The Daily Telegraph scarcely did better in Acupuncture has significant impact on mystery illnesses. The author of this, very sensibly, remains anonymous.

Many “medical information” sites churn out the press release without engaging the brain, but most of the other newspapers appear, very sensibly, to have ignored the hyped-up press release. Among the worst was Pulse, an online magazine for GPs. At least they’ve published the comments that show their report was nonsense.

The Daily Mash has given this paper a well-deserved spoofing in Made-up medicine works on made-up illnesses.

“Professor Henry Brubaker, of the Institute for Studies, said: “To truly assess the efficacy of acupuncture a widespread double-blind test needs to be conducted over a series of years but to be honest it’s the equivalent of mapping the DNA of pixies or conducting a geological study of Narnia.” ”

There is no truth whatsoever in the rumour being spread on Twitter that I’m Professor Brubaker.

Euan Lawson, also known as Northern Doctor, has done another excellent job on the Paterson paper: BJGP and acupuncture – tabloid medical journalism. Most tellingly, he reproduces the press release from the editor of the BJGP, Professor Roger Jones DM, FRCP, FRCGP, FMedSci.

"Although there are countless reports of the benefits of acupuncture for a range of medical problems, there have been very few well-conducted, randomised controlled trials. Charlotte Paterson’s work considerably strengthens the evidence base for using acupuncture to help patients who are troubled by symptoms that we find difficult both to diagnose and to treat."

Oooh dear. The journal may have a new look, but it would be better if the editor read the papers before writing press releases. Tabloid journalism seems an appropriate description.

Andy Lewis at Quackometer, has written about this paper too, and put it into historical context. In Of the Imagination, as a Cause and as a Cure of Disorders of the Body. “In 1800, John Haygarth warned doctors how we may succumb to belief in illusory cures. Some modern doctors have still not learnt that lesson”. It’s sad that, in 2011, a medical journal should fall into a trap that was pointed out so clearly in 1800. He also points out the disgracefully inaccurate Press release issued by the Peninsula medical school.

Some tweets

Twitter info: 426 clicks on http://bit.ly/mgIQ6e alone at 15.30 on 1 June (and that’s only the hits via Twitter). By July 8th this had risen to 1,655 hits via Twitter, from 62 different countries.

@followthelemur Selina
MASSIVE peer review fail by the British Journal of General Practice http://bit.ly/mgIQ6e (via @david_colquhoun)

@david_colquhoun David Colquhoun
Appalling paper in Brit J Gen Practice: Acupuncturists show that acupuncture doesn’t work, but conclude the opposite http://bit.ly/mgIQ6e
Retweeted by gentley1300 and 36 others

@david_colquhoun David Colquhoun.
I deny the Twitter rumour that I’m Professor Henry Brubaker as in Daily Mash http://bit.ly/mt1xhX (just because of http://bit.ly/mgIQ6e )

@brunopichler Bruno Pichler
http://tinyurl.com/3hmvan4
Made-up medicine works on made-up illnesses (me thinks Henry Brubaker is actually @david_colquhoun)

@david_colquhoun David Colquhoun,
HEHE RT @brunopichler: http://tinyurl.com/3hmvan4 Made-up medicine works on made-up illnesses

@psweetman Pauline Sweetman
Read @david_colquhoun’s take on the recent ‘acupuncture effective for unexplained symptoms’ nonsense: bit.ly/mgIQ6e

@bodyinmind Body In Mind
RT @david_colquhoun: ‘Margaret McCartney (GP) also blogged acupuncture nonsense http://bit.ly/j6yP4j My take http://bit.ly/mgIQ6e’

@abritosa ABS
Br J Gen Practice mete a pata na poça: RT @david_colquhoun […] appalling acupuncture nonsense http://bit.ly/j6yP4j http://bit.ly/mgIQ6e

@jodiemadden Jodie Madden
amusing!RT @david_colquhoun: paper in Brit J Gen Practice shows that acupuncture doesn’t work,but conclude the opposite http://bit.ly/mgIQ6e

@kashfarooq Kash Farooq
Unbelievable: acupuncturists show that acupuncture doesn’t work, but conclude the opposite. http://j.mp/ilUALC by @david_colquhoun

@NeilOConnell Neil O’Connell
Gobsmacking spin RT @david_colquhoun: Acupuncturists show that acupuncture doesn’t work, but conclude the opposite http://bit.ly/mgIQ6e

@euan_lawson Euan Lawson (aka Northern Doctor)
Aye too right RT @david_colquhoun @iansample @BenGoldacre Guardian should cover dreadful acupuncture paper http://bit.ly/mgIQ6e

@noahWG Noah Gray
Acupuncturists show that acupuncture doesn’t work, but conclude the opposite, from @david_colquhoun: http://bit.ly/l9KHLv

 

8 June 2011 I drew the attention of the editor of BJGP to the many comments that have been made on this paper. He assured me that the matter would be discussed at a meeting of the editorial board of the journal. Tonight he sent me the result of this meeting.

Subject: BJGP
From: “Roger Jones” <rjones@rcgp.org.uk>
To: <d.colquhoun@ucl.ac.uk>

Dear Prof Colquhoun

We discussed your emails at yesterday’s meeting of the BJGP Editorial Board, attended by 12 Board members and the Deputy Editor

The Board was unanimous in its support for the integrity of the Journal’s peer review process for the Paterson et al paper – which was accepted after revisions were made in response to two separate rounds of comments from two reviewers and myself – and could find no reason either to retract the paper or to release the reviewers’ comments

Some Board members thought that the results were presented in an overly positive way; because the study raises questions about research methodology and the interpretation of data in pragmatic trials attempting to measure the effects of complex interventions, we will be commissioning a Debate and Analysis article on the topic.

In the meantime we would encourage you to contribute to this debate throught the usual Journal channels

Roger Jones

Professor Roger Jones MA DM FRCP FRCGP FMedSci FHEA FRSA
Editor, British Journal of General Practice

Royal College of General Practitioners
One Bow Churchyard
London EC4M 9DQ
Tel +44 203 188 7400

It is one thing to make a mistake. It is quite another thing to refuse to admit it. This reply seems to me to be quite disgraceful.

20 July 2011. The proper version of the story got wider publicity when Margaret McCartney wrote about it in the BMJ. The first rapid response to this article was a lengthy denial by the authors of the obvious conclusion to be drawn from the paper. They merely dig themselves deeper into a hole. The second response was much shorter (and more accurate).

Thank you Dr McCartney

Richard Watson, General Practitioner
Glasgow

The fact that none of the authors of the paper or the editor of BJGP have bothered to try and defend themselves speaks volumes.

Like many people I glanced at the report before throwing it away with an incredulous guffaw. You bothered to look into it and refute it – in a real journal. That last comment shows part of the problem with them publishing, and promoting, such drivel. It makes you wonder whether anything they publish is any good, and that should be a worry for all GPs.

 

30 July 2011. The British Journal of General Practice has published nine letters that object to this study. Some of them concentrate on problems with the methods; others point out what I believe to be the main point: there is essentially no effect there to be explained. In the public interest, I am posting the responses here [download pdf file]

There is also a response from the editor and from the authors. Both are unapologetic. It seems that the editor sees nothing wrong with the peer review process.

I don’t recall ever having come across such incompetence in a journal’s editorial process.

Here’s all he has to say.

The BJGP Editorial Board considered this correspondence recently. The Board endorsed the Journal’s peer review process and did not consider that there was a case for retraction of the paper or for releasing the peer reviews. The Board did, however, think that the results of the study were highlighted by the Journal in an overly-positive manner. However, many of the criticisms published above are addressed by the authors themselves in the full paper.

 

If you subscribe to the views of Paterson et al, you may want to buy a T-shirt that has a revised version of the periodic table.

t shirt

 

5 August 2011. A meeting with the editor of BJGP

Yesterday I met a member of the editorial board of BJGP. We agreed that the data are fine and should not be retracted. It’s the conclusions that should be retracted. I was also told that the referees’ reports were "bland". In the circumstances that merely confirmed my feeling that the referees failed to do a good job.

Today I met the editor, Roger Jones, himself. He was clearly upset by my comment and I have now changed it to refer to the whole editorial process rather than to him personally. I was told, much to my surprise, that the referees were not acupuncturists but “statisticians”. That I find baffling. It soon became clear that my differences with Professor Jones turned on interpretations of statistics.

It’s true that there were a few comparisons that got below P = 0.05, but the smallest was P = 0.02. The warning signs are there in the Methods section: "all statistical tests were …. deemed to be statistically significant if P < 0.05". This is simply silly: perhaps they should have read Lectures on Biostatistics. Or, for a more recent exposition, the XKCD cartoon in which it’s proved that green jelly beans are linked to acne (P = 0.05). They make lots of comparisons but make no allowance for this in the statistics. Figure 2 alone contains 15 different comparisons: it’s not surprising that a few come out "significant", even if you don’t take into account the likelihood of systematic (non-random) errors when comparing final values with baseline values.
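As a rough check (a sketch only, not a re-analysis of the paper’s data): if the 15 comparisons in Figure 2 are treated as one family of tests, even the crudest allowance for multiplicity, a Bonferroni correction, lowers the per-test threshold to 0.05/15 ≈ 0.0033, so the smallest reported value, P = 0.02, would no longer count as “significant”.

    # Sketch: does the smallest reported P value survive a simple
    # Bonferroni correction for the 15 comparisons in Figure 2?
    alpha = 0.05
    n_comparisons = 15          # comparisons in Figure 2
    smallest_reported_p = 0.02  # smallest P value reported in the paper

    bonferroni_threshold = alpha / n_comparisons
    print(f"Per-test threshold after correction: {bonferroni_threshold:.4f}")
    print("P = 0.02 still significant?",
          smallest_reported_p < bonferroni_threshold)   # prints False

Bonferroni is deliberately conservative, but the gap here is so large that the exact choice of correction hardly matters.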

Keen though I am on statistics, this is a case where I prefer the eyeball test. It’s so obvious from the Figure that there’s nothing worth talking about happening, it’s a waste of time and money to torture the numbers to get "significant" differences. You have to be a slavish believer in P values to treat a result like that as anything but mildly suggestive. A glance at the Figure shows the effects, if there are any at all, are trivial.

I still maintain that the results don’t come within a million miles of justifying the authors’ stated conclusion “The addition of 12 sessions of five-element acupuncture to usual care resulted in improved health status and wellbeing that was sustained for 12 months.” Therefore I still believe that a proper course would have been to issue a new and more accurate press release. A brief admission that the interpretation was “overly-positive”, in a journal that the public can’t see, simply isn’t enough.

I can’t understand, either, why the editorial board did not insist on this being done. If they had done so, it would have been temporarily embarrassing, certainly, but people make mistakes, and it would have blown over. By not making a proper correction to the public, the episode has become a cause célèbre and the reputation of the journal will suffer permanent harm. This paper is going to be cited for a long time, and not for the reasons the journal would wish.

Misinformation, like that sent to the press, has serious real-life consequences. You can be sure that the paper as it still stands, will be cited by every acupuncturist who’s trying to persuade the Department of Health that he’s a "qualified provider".

There was not much unanimity in the discussion up to this point. Things got better when we talked about what a GP should do when there are no effective options. Roger Jones seemed to think it was acceptable to refer such patients to an alternative practitioner if the patient wanted it. I maintained that it’s unethical to explain to a patient how medicine works in terms of pre-scientific myths.

I’d have loved to have heard the "informed consent" during which "The patient’s condition is explained in terms of imbalance in the five elements which then causes an imbalance in the whole person". If anyone had tried to explain my conditions in terms of an imbalance in my Wood, Water, Fire, Earth and Metal, I’d think they were nuts. The last author, Gerad Kite, runs a private clinic that sells acupuncture for all manner of conditions. You can find his view of science on his web site. It’s condescending and insulting to talk to patients in these terms. It’s the ultimate sort of paternalism. And paternalism is something that’s supposed to be vanishing in medicine. I maintained that this was ethically unacceptable, and that led to a more amicable discussion about the possibility of more honest placebos.

It was good of the editor to meet me in the circumstances. I don’t cast doubt on the honesty of his opinions. I simply disagree with them, both at the statistical level and the ethical level.

 

30 March 2014

I only just noticed that one of the authors of the paper, Bruce McCallum (who worked as an acupuncturist at Kite’s clinic), appeared in a 2007 Channel 4 News piece. It was a report on the pressure to save money by stopping NHS funding for “unproven and disproved treatments”. McCallum said that scientific evidence was needed to show that acupuncture really worked. Clearly he failed, but to admit that would have affected his income.

Watch the video (McCallum appears near the end).

Jump to follow-up

Last year the Royal London Homeopathic Hospital was rebranded as the Royal London Hospital for Integrated Medicine (RLHIM). The exercise seems to have been entirely cosmetic. Sadly, they still practise the same nonsense, as described in Royal London Homeopathic Hospital rebranded. But how different will things be at the Royal London Hospital for Integrated Medicine?.

Recently I came across a totally disgraceful pamphlet issued by the RLHIM [download pamphlet].

If you haven’t come across craniosacral therapy (and who could blame you, a new form of nonsense is invented daily), try these sources.

uclh-cranio-vs

In short, it is yet another weird invention of an eccentric American osteopath, dating from the 1930s. Like Osteopathy and Chiropractic, there is no ancient wisdom involved, just an individual with an eye for what makes money.

What the UCLH pamphlet claims

UCLH-cs2

The claims made in this pamphlet are utterly baseless. In fact there isn’t the slightest evidence that craniosacral therapy is good for anything. And its ‘principles’ are pure nonsense.

No doubt that is why the Advertising Standards Authority has already delivered a damning indictment of rather similar claims made in a leaflet issued by the Craniosacral Therapy Association (CSTA).

The Advertising Standards judgement concluded

" . . the ad breached CAP Code clauses 3.1 (Substantiation), 7.1 (Truthfulness) and 50.1 (Health and beauty products and therapies)."

"We noted that the CSTA believed that the leaflet was merely inviting readers to try CST to see if it could alleviate some of their symptoms and did not discourage them from seeing a doctor. However, we considered that the list of serious medical conditions in the ad, and the references to the benefit and help provided by CST, could encourage readers to use CST to relieve their symptoms rather than seek advice from a medical professional. We therefore concluded that the ad could discourage readers from seeking essential treatment for serious medical conditions from a qualified medical practitioner.

Complaint through the official channels. It took 3 months to extract “No comment” from Dr Gill Gaskin

Given that I have every reason to be grateful to UCL Hospitals for superb care, I was hesitant to leap into print to condemn the irresponsible pamphlet issued by one of their hospitals. It seemed better to go through the proper channels and make a complaint in private to the UCL Hospitals Trust.

On 21st December 2010 I wrote to the directors of UCLH Trust

I have just come across the attached pamphlet.

“Craniosacral” therapy is a preposterous made-up invention.

More to the point, there is no worthwhile evidence for the claims made in the pamphlet.

The leaflet is, I contend, illegal under the Consumer protection regulations 2008. It is also deeply embarrassing that UCLH should be lending its name to this sort of thing.

If you can think of any reason why I should not refer the pamphlet to the Advertising Standards Association, and to the office of Trading Standards, please let me know quickly.

Best regards

David Colquhoun

On 7th January 2011 I got an acknowledgment, which told me that my letter had been forwarded to the Medical Director for Specialist Hospitals for a response.

The Specialist Hospitals of the Trust include the Eastman Dental Hospital, The Heart Hospital, The National Hospital for Neurology & Neurosurgery (the famous Queen’s Square hospital) and, yes, The Royal London Hospital for Integrated Medicine. I’ve been a patient at three of them and have nothing but praise: Queen’s Square and the UCLH baby unit saved the life of my wife and my son in 1984 (see Why I love the National Health Service).

The Medical Director for Specialist Hospitals is Dr Gill Gaskin, and it is to her that my letter was forwarded. Of course it is not her fault that the Royal London Homeopathic Hospital (as it then was) was acquired by the UCLH Trust in 2002. The excuse given at the time was that the space was needed and the nonsense espoused by the RLHH would be squeezed out. That hasn’t yet happened.

After that nothing happened so I wrote directly to Dr Gaskin on 14th February 2011

Dear Dr Gaskin
The letter below was sent to the Trust on 20 December last year. I am told it was forwarded to you. I’m disappointed that I have still had no reply, after almost two months.  It was a serious enquiry and it has legal implications for the Trust. Would it help to talk about it in person?
David Colquhoun

I got a quick reply, but sadly, as so often, the complaint had simply been forwarded to the object of the complaint. This sort of buck-passing is standard procedure for heading off complaints in any big organisation, in my experience.

From: <Gill.Gaskin@uclh.nhs.uk>

To: <d.colquhoun@ucl.ac.uk>

Cc: <jocelyn.laws@uclh.nhs.uk>, <Rachel.Maybank@uclh.nhs.uk>

Dear Professor Colquhoun
 
I received your email in January.
I have now received the response from the Associate Clinical Director of the Royal London Hospital for Integrated Medicine, which is as set out below.
 

The brochure makes no claims of efficacy for Craniosacral Therapy (CST).  In terms of safety, only two randomised trials have reported adverse effects, neither found an excess of adverse effects of CST over control interventions (disconnected magnetotherapy equipment and static magnets respectively):
 
(Castro-Sanchez A et al.  A randomized controlled trial investigating the effects of craniosacral therapy on pain and heart rate variability in fibromyalgia patients. Clin Rehabil 2011 25: 25–35.  published online 11 August 2010 DOI: 10.1177/0269215510375909
Mann JD et al. Craniosacral therapy for migraine: Protocol development for an exploratory controlled clinical trial.   BMC Complementary and Alternative Medicine 2008, 8:28 published 9 June 2008 doi:10.1186/1472-6882-8-28)

The only reports of adverse effects of CST relate to its use in traumatic brain injury.  (Greenman PE, McPartland JM. Cranial findings and iatrogenesis from craniosacral manipulation in patients with traumatic brain syndrome. J Am Osteopath Assoc 1995;95:182-88).

The RLHIM does not treat this condition and it is not mentioned in the brochure. 

The Craniosacral Therapy Association is planning a safety audit, to be launched later this year.  The RLHIM intends to participate in this.

With best wishes
 
Gill Gaskin



Dr Gill Gaskin

Medical Director

Specialist Hospitals Board

UCLH NHS Foundation Trust



I don’t know who wrote this self-serving nonsense because there is no sign on the web of a job called "Associate Clinical Director of the Royal London Hospital for Integrated Medicine".

It is absurd to say that the leaflet “makes no claims of efficacy”. It says "Craniosacral therapy can be offered to children and adults for a variety of conditions:" and then goes on to list a whole lot of conditions, some of which are potentially serious, like "Recurrent ear infections and sinus infections, glue ear" and "Asthma". Surely anyone would suppose that if a UCLH hospital were offering a treatment for conditions like these, there would be at least some evidence that it worked. And there is no such evidence. This reply seemed to me to verge on the dishonest.

Remember too that this response was written on 16th February 2011, long after the Advertising Standards Authority had said, on 8th September 2010, that there is no worthwhile evidence for claims of this sort.

I replied at once

Thanks for the reply, but I thought that this was your responsibility. Naturally the RLHIM will stick up for itself, so asking them gets us nowhere at all.  The buck stops with the Trust (in particular with you, I understand) and it is for you to judge whether pamphlets such as that I sent bring the Trust into disrepute

. . ..

I’d be very pleased to hear your reaction (rather than that of the RLHIM) to these comments.  It seems a reasonable thing to ask for, since responsibility for the RLHIM rests with you

David Colquhoun

On the 13th March, after a couple of reminders, Dr Gaskin said "I will respond to you tomorrow or Tuesday". No such luck, though. On 25th March, more than three months after I first wrote, I eventually got a reply (my emphasis).

I do not wish to comment further on the matter of the leaflets as a complaint to the advertising standards authority would be dealt with formally.

I am aware of your views on complementary medicine, and of course am entirely open to you pointing out areas where you believe there is misleading information, and I ask colleagues to review such areas when highlighted.

I would make several additional comments:

– patients are referred into NHS services by their GPs (or occasionally by consultants in other services) and cannot self-refer

– patients attending the Royal London Hospital for Integrated Medicine report positively on NHS Choices

– GPs continue to make referrals to the Royal London Hospital for Integrated Medicine and many request that patients stay under follow-up, when UCLH seeks to reconfirm this

– UCLH is engaging with North Central London NHS commissioners on work on their priorities, and that includes work on complementary medicine (and combinations of conventional and complementary approaches)

I think you will understand that I will not wish to engage in lengthy correspondence, and have many other competing priorities at present.

With best wishes

Gill Gaskin

So, after three months’ effort, all I could get was ‘no comment’, plus some anecdotes about satisfied customers: the stock in trade of all quacks.

I guess it is well known that complaints against any NHS organisation normally meet with a stonewall. That happens with any big organisation (universities too). Nevertheless it strikes me as dereliction of duty to respond so slowly, and in the end to say nothing anyway.

The Advertising Standards Authority have already given their judgement, and it appears to be based on sounder medicine than Dr Gaskin’s ‘reply’.

There are plans afoot to refer the UCLH pamphlet to the Office of Trading Standards. It is for them to decide whether to prosecute the UCLH Trust for making false health claims. It is sad to have to say that they deserve to be prosecuted.

Follow-up

28 March 2011. Two days after this post went up, a Google search for “Dr Gill Gaskin” brought up this post as #5 on the first page. Amazing.

On 25 May, the same search alluded to this post in positions 2, 3, 4 and 5 on the first page of Google.

29 June 2013

Despite several judgements by the Advertising Standards Authority (ASA) against claims made for craniosacral therapy, nothing was done.
But after UCLH Trust was comprehensively condemned by the ASA for the claims made for acupuncture by the RLHIM, at last we got action. All patient pamphlets have been withdrawn, and patient information is being revised.

It isn’t obvious why this has taken more than two years (and one can only hope that the revised information will be more accurate).