That’s all sadly true. Physiotherapists, and the better osteopaths, are distinguished from outright quacks (like chiropractors) by their use of the language of physiology rather than mysticism. That doesn’t mean that their treatments work any better, but it’s a good start.

This is true, but acupuncture for low back pain is a widespread practice in non-privatised physiotherapy departments too, and has been for a long time.

Acupuncture is the new “short-wave diathermy”, as one colleague put it to me (‘useless’ SWD having been the most common physiotherapy treatment in years gone by).

Physiotherapists have a long history of relying on passive treatments that have little evidence to support their use.

While Dr Waney Squier may not have been well served by the FTP process, she did let slip something that suggested deep bias:

“I’ve done my best to give an opinion based on my experience, based on the best evidence I can find to support my view.”

No: the “view” should come after the evidence, not before. And experience is not necessarily evidence.

No! Wait! They’ve started a petition instead: Keep Acupuncture in the NICE Guidelines for Low Back Pain and Sciatica

The Royal Statistical Society has published a 122-page document, “Communicating and Interpreting Statistical Evidence in the Administration of Criminal Justice”. It is a primer on the nature of statistics, from a legal point of view.

There are relevant shorter posts on the website of Norman Fenton.

The authors of the study by Colloca and Benedetti (2005) [download pdf] were aware of regression to the mean (see Box 1), and they refer to an earlier version (Hróbjartsson & Gøtzsche, 2005) of the study to which I refer, namely Hróbjartsson & Gøtzsche (2010). They also acknowledge that the only way to distinguish between a genuine placebo effect and regression to the mean is to compare placebo with no treatment. When this is done, it’s found that there is often (though not always) a difference, but that this difference (the genuine placebo effect) is too small to be useful to patients.

I don’t deny for a moment that real placebo effects exist, but it seems that, in real clinical settings, they usually aren’t big enough to be useful, and that the greatest part of the difference between drug and placebo is accounted for by regression to the mean.

Colloca and Benedetti suggest that clinical trials should routinely include a comparison between open and covert administration of the drug. That would be interesting to placebo researchers, but it seems to me that it would be better to include a no-treatment arm more often. That would answer the important question in an ethically acceptable way.

Dr. G. Otte


Well, RCTs are normally run with test and control groups in parallel, so in principle both groups should experience the same regression to the mean. The problem arises largely in trials that look at difference from baseline. But of course it is a weakness of meta-analysis that you have to choose which trials to exclude.
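The change-from-baseline problem is easy to demonstrate with a quick simulation (a hypothetical sketch of my own, not taken from any of the trials discussed): patients enrolled because their measured pain score is high at baseline will, on average, score lower at follow-up even when nothing whatever is done to them.

```python
import random

random.seed(1)

def pain_scores():
    # Each patient has a stable "true" pain level, but every
    # measurement of it carries independent day-to-day noise.
    true = random.gauss(50, 10)
    baseline = true + random.gauss(0, 10)
    followup = true + random.gauss(0, 10)
    return baseline, followup

# Enrol only patients whose baseline measurement exceeds 60,
# as a trial with an entry criterion would. No treatment is given.
baselines, followups = [], []
for _ in range(100_000):
    b, f = pain_scores()
    if b > 60:
        baselines.append(b)
        followups.append(f)

mean_b = sum(baselines) / len(baselines)
mean_f = sum(followups) / len(followups)
print(f"mean baseline:  {mean_b:.1f}")
print(f"mean follow-up: {mean_f:.1f}  (lower, despite no treatment)")
```

The apparent "improvement" is pure regression to the mean: the selected patients were, on average, caught on a bad day at baseline. A parallel untreated control group would show exactly the same drop, which is why the comparison between arms is trustworthy while change-from-baseline is not.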

Thanks for those excellent quotations.

Of course you are quite right about the zero effect. I’ve fixed that now. Thank you.

As I said, regression to the mean has been known for so long that it’s amazing that one still needs to write about it. The problem, I suppose, is that it is not in the interests of quacks to think about it. It’s also not in the interests of anyone who is being pressured to produce positive results.

As you point out, the hazards of *post hoc* subset analysis have also been known for a long time. The only safe rule is not to believe it until you can show that you have a method that will pick out responders *before* you give the treatment. That’s the aim of personalized medicine, but progress in that field has been rather slow, so far, as Tim Caulfield has recently pointed out in the BMJ (“Genetics and personalized medicine—where’s the revolution?”).

Regarding:

“One excuse that’s commonly used when a treatment shows no effect, or a small effect, in proper RCTs is to assert that the treatment actually has a good effect, but only in a subgroup of patients (“responders”) while others don’t respond at all (“non-responders”).”

If the average treatment effect is zero then claiming that there is a subset for whom the treatment effect is positive also amounts to claiming that there is another subset for whom the treatment effect is negative (for whom the treatment is harmful). Without the latter the average effect could not be zero. The U.S. mathematical psychologist Robyn Dawes called not seeing this “the subset fallacy” in his book Everyday Irrationality (2001, ch. 6).
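The arithmetic behind that point can be made explicit with made-up numbers (purely illustrative, not from any real trial): the overall effect is the weighted average of the subset effects, so a positive effect in one subset forces a negative one in the remainder.

```python
# Claimed subset of "responders" (hypothetical figures)
frac_responders = 0.4      # 40% of patients
effect_responders = 5.0    # claimed benefit in that subset

# If the overall average effect is zero, then
#   frac * effect_responders + (1 - frac) * effect_rest = 0,
# so the effect in the remaining patients must be:
effect_rest = -frac_responders * effect_responders / (1 - frac_responders)

print(f"implied effect in the other {1 - frac_responders:.0%}: {effect_rest:.2f}")
# Negative: the treatment would have to *harm* the "non-responders".
```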

On findings of subgroup differences in clinical trials, the U.S. biostatistician Curtis Meinert wrote:

“most subgroup differences identified by subgroup analyses have a nasty habit of not reproducing in subsequent trials” (*An Insider’s Guide to Clinical Trials*, Oxford, 2011, p. 156)

In the same vein, Richard Peto has said that “you should always do subgroup analysis and never believe the results”.

In other words, any finding from subgroup analysis needs to be confirmed in a separate medical trial. Otherwise, one is committing what is also known as the Texas sharpshooter fallacy: https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
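Peto’s maxim and the sharpshooter fallacy both come down to multiplicity: test enough subgroups of a treatment that has no real effect and roughly 5% of them will come out “significant” at *P* < 0.05 by chance alone. A small simulation (my own illustrative sketch, using a normal-approximation test rather than a proper *t* test):

```python
import math
import random

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value from a normal-approximation two-sample test
    (adequate for 50 patients per arm; a t test would be more exact)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# 1000 subgroup comparisons in which the treatment truly does nothing:
# both arms are drawn from the same distribution.
n_subgroups = 1000
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(n_subgroups)
)
print(f"{false_positives} of {n_subgroups} null subgroups reach P < 0.05")
```

About one comparison in twenty is “significant” despite there being nothing to find, which is exactly why an unconfirmed subgroup finding should be treated as a hypothesis, not a result.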

Thanks for your comment. It’s true that analysis of covariance (ANCOVA) is another way to make some allowance for regression to the mean. But it makes assumptions about a linear relationship between response and time which may well not be justified. It’s better, in general, to have a parallel control group, which should show the same regression to the mean as the test group. The paper that you mention (Vickers & Altman, 2001) inadvertently provides a rather good example. It looks at acupuncture, which was fashionable in 2001, but which is now well established to be no more than a myth. Yet the ANCOVA analysis decreases the *P* value, thus giving the impression that it works for shoulder pain. A classic false positive.