There is a widespread belief that science is going through a crisis of reproducibility.  A meeting was held to discuss the problem.  It was organised by the Academy of Medical Sciences, the Wellcome Trust, MRC and BBSRC, and it was chaired by Dorothy Bishop (of whose blog I’m a huge fan).  It’s good to see that the scientific establishment is beginning to take notice.  Up to now it’s been bloggers who’ve been making the running.  I hadn’t intended to write a whole post about it, but some sufficiently interesting points arose that I’ll have a go.

The first point to make is that, as far as I know, the “crisis” is limited to, or at least concentrated in, quite restricted areas of science.  In particular, it doesn’t apply to the harder end of the sciences. Nobody in physics, maths or chemistry talks about a crisis of reproducibility.  I’ve heard very little about irreproducibility in electrophysiology (unless you include EEG work).  I’ve spent most of my life working on single-molecule biophysics and I’ve never encountered serious problems with irreproducibility.  It’s a small and specialist field, so I think I would have noticed if it were there.  I’ve always posted our analysis programs on the web, and anyone who wants to spend a year re-analysing our data is very welcome to do so (though I have been asked only once).

The areas that seem to have suffered most from irreproducibility are experimental psychology, some areas of cell biology, imaging studies (fMRI) and genome studies.  Clinical medicine and epidemiology have suffered too.  Imaging and genome studies are in a slightly different category from the others: their problems are largely statistical, arising from the huge number of comparisons that have to be made.  Epidemiology’s problems stem largely from a casual approach to causality. The rest have no such excuses.
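To see why the number of comparisons matters, here is a toy simulation (my own illustration, not anything presented at the meeting): run 10,000 two-sample t-tests on data in which no real effects exist at all, and at P < 0.05 you still get roughly 500 “significant” results by chance alone. The group size and the number of tests are arbitrary choices.

    import numpy as np
    from scipy import stats

    # Toy simulation: 10,000 two-sample t-tests in which the null
    # hypothesis is true every time (both groups come from the same
    # distribution). At alpha = 0.05 we expect ~500 false positives.
    rng = np.random.default_rng(0)
    n_tests, n_per_group = 10_000, 20

    group_a = rng.normal(size=(n_tests, n_per_group))
    group_b = rng.normal(size=(n_tests, n_per_group))  # same distribution

    _, p_values = stats.ttest_ind(group_a, group_b, axis=1)
    print((p_values < 0.05).sum(), "false positives out of", n_tests)

This is why genome and imaging studies must correct for multiple comparisons before declaring a discovery.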

The meeting was biased towards psychology, perhaps because that’s an area that has had many problems.  The solutions that were suggested were also biased towards that area.  It’s hard to see how some of them could be applied to electrophysiology, for example.

There were, it has to be said, many more good intentions than hard suggestions.  Pre-registration of experiments might help a bit in a few areas.  I’m all for open access and open data, but I doubt they will solve the problem either, though I hope they’ll become the norm (they always have been for me).

All the tweets from the meeting have been collected as a Storify. The most retweeted comment was from Liz Wager:

@SideviewLiz: Researchers are incentivised to publish, get grants, get promoted but NOT incentivised to be right! #reprosymp

This, I think, cuts to the heart of the problem.  Perverse incentives, if sufficiently harsh, will inevitably lead to bad behaviour.  Occasionally it will lead to fraud. It’s even led to (at least) two suicides.  If you threaten people in their forties and fifties with being fired, and losing their house, because they don’t meet some silly metric, then of course people will cut corners.  Curing that is very much more important than pre-registration, data-sharing and concordats, though the latter occupied far more of the time at the meeting.

The primary source of the problem is that there is not enough money for the number of people who want to do research (a matter that was barely mentioned).  That leads to the unpalatable conclusion that the only way to cure the problem is to have fewer people competing for the money.  That’s part of the reason why I recently suggested a two-stage university system.  That’s unlikely to happen soon. So what else can be done in the meantime?

The responsibility for perverse incentives has to rest squarely on the shoulders of the senior academics and administrators who impose them.  It is at this level that the solutions must be found.  That was said, but not firmly enough. The problems are mostly created by the older generation.  It’s our fault.

Incidentally, I was not impressed by the fact that the Academy of Medical Sciences listed attendees with initials after people’s names. There were eight FRSs, but I find it a bit embarrassing to be identified as one, as though it made any difference to the value of what I said.

It was suggested that courses in research ethics for young scientists would help.  I disagree.  In my experience, young scientists are honest and idealistic. The problems arise when their idealism is shattered by the bad example set by their elders.  I’ve had a stream of young people in my office who want advice and support because they feel they are being pressured by their elders into behaviour that worries them. More than one of them has burst into tears because they feel they have been bullied by their PIs.

One talk that I found impressive was by Ottoline Leyser, who chaired the recent report on The Culture of Scientific Research in the UK from the Nuffield Council on Bioethics.  But I found that report to be bland, and its recommendations, though well-meaning, unlikely to result in much change.  The report was based on a relatively small, self-selected sample of 970 responses to a web survey, and on 15 discussion events.  Relatively few people seem to have spent time filling in the text boxes. For example:

“Of the survey respondents who provided a negative comment on the effects of competition in science, 24 out of 179 respondents (13 per cent) believe that high levels of competition between individuals discourage research collaboration and the sharing of data and methodologies.”

Such numbers are too small to reach many conclusions, especially since the respondents were self-selected rather than selected at random (poor experimental design!).  Nevertheless, the main concerns were all voiced.  I was struck by

“Almost twice as many female survey respondents as male respondents raise issues related to career progression and the short term culture within UK research when asked which features of the research environment are having the most negative effect on scientists”

But no remedies were proposed for this problem.  It was all put rather better, and much more frankly, some time ago by Peter Lawrence.  I do have the impression that bloggers (including Dorothy Bishop) get to the heart of the problems much more directly than any official report.

The Nuffield report seemed to me to put excessive trust in paper exercises, such as the “Concordat to Support the Career Development of Researchers”.  The word “bullying” does not occur anywhere in the Nuffield document, despite the fact that it’s a problem that’s been very widely discussed and one that’s central to the problem of reproducibility. The Concordat (unlike the Nuffield report) does mention bullying.

"All managers of research should ensure that measures exist at every institution through which discrimination, bullying or harassment can be reported and addressed without adversely affecting the careers of innocent parties. "

That sounds good, but it’s very obvious that many places simply ignore it. All universities subscribe to the Concordat, but in too many places signing is as far as it goes.   It was signed by Imperial College London, the institution with perhaps the worst record for pressurising its employees, but official reports would not dream of naming names or looking at publicly available documentation concerning bullying tactics. For that, you need bloggers.

On the first day, the (soon-to-depart) Dean of Medicine at Imperial, Dermot Kelleher, was there. He seemed a genial man, but he would say nothing about the death of Stefan Grimm. I find that attitude incomprehensible. He didn’t reappear on the second day of the meeting.

The San Francisco Declaration on Research Assessment (DORA) is a stronger statement than the Concordat, but its aims are more limited.  DORA states that the impact factor is not to be used as a substitute “measure of the quality of individual research articles, or in hiring, promotion, or funding decisions”. That’s something that I wrote about in 2003, in Nature. In 2007 it was still rampant, including at Imperial College. It still is in many places.  The Nuffield Council report says that DORA has been signed by “over 12,000 individuals and 500 organisations”, but fails to mention that only three UK universities have signed up to DORA (one of them, I’m happy to say, is UCL).  That’s a pretty miserable record. And, of course, it remains to be seen whether the signatories really abide by the agreement.  Most such worthy agreements are ignored on the shop floor.

The recommendations of the Nuffield Council report are all worthy, but they are bland and we’ll be lucky if they have much effect. For example

“Ensure that the track record of researchers is assessed broadly, without undue reliance on journal impact factors”

What on earth is “undue reliance”?  That’s a far weaker statement than DORA. Why?

And

“Ensure researchers, particularly early career researchers, have a thorough grounding in research ethics”

In my opinion, what we should say to early-career researchers is “avoid the bad example that’s set by your elders (but not always betters)”. It’s the older generation which has produced the problems, and it’s unbecoming to put the blame on the young.  It’s late-career researchers who are far more in need of a thorough grounding in research ethics than early-career researchers.

Although every talk was more or less interesting, the one I enjoyed most was the first, by Marcus Munafo.  He assessed the scale of the problem (though with a strong emphasis on psychology, plus some genetics and epidemiology), and he had good data on under-powered studies.  He also made fleeting mention of the problem of the false discovery rate.  Since the meeting was essentially about the publication of results that aren’t true, I would have expected the statistical problem of the false discovery rate to have been given much more prominence than it was. Although Ioannidis’ now-famous paper “Why Most Published Research Findings Are False” got the occasional mention, very little attention (apart from Munafo and Button) was given to the problems which he pointed out.

I’ve recently convinced myself that, if you declare that you’ve made a discovery when you observe P = 0.047 (as is almost universal in the biomedical literature), you’ll be wrong 30–70% of the time (see the full paper, “An investigation of the false discovery rate and the misinterpretation of p-values”, and simplified versions on YouTube and on this blog).  If that’s right, then surely an important way to reduce the publication of false results is for journal editors to give better advice about statistics.  This is a topic that was almost absent from the meeting.  It’s also absent from the Nuffield Council report (the word “statistics” does not occur anywhere).
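The arithmetic behind that claim is easy to sketch. The numbers below are illustrative assumptions of my own (10% of tested hypotheses are real effects, tests have 80% power), not figures from the meeting, but they show how a P < 0.05 “discovery” can easily be wrong about a third of the time.

    # Sketch of the false-discovery-rate arithmetic. The prevalence and
    # power figures are illustrative assumptions, not measured values.
    prevalence = 0.10  # fraction of tested hypotheses that are real effects
    power = 0.80       # probability that a real effect gives P < 0.05
    alpha = 0.05       # significance threshold

    true_positives = prevalence * power         # real effects detected
    false_positives = (1 - prevalence) * alpha  # true nulls wrongly "discovered"

    fdr = false_positives / (true_positives + false_positives)
    print(f"false discovery rate = {fdr:.0%}")  # 36% of "discoveries" are false

Lower the prevalence of real effects, or the power of the tests, and the false discovery rate climbs quickly towards the top of the 30–70% range.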

In summary, the meeting was very timely, and it was fun.  But I ended up thinking there was a bit too much preaching of good intentions to the converted. It failed to grasp some of the nettles firmly enough. There was no mention of what’s happening at Imperial, or Warwick, or Queen Mary, or at King’s College London. Let’s hope that when it’s written up, the conclusions will be a bit less bland than those of most official reports.

It’s high time we set our house in order, because the public has noticed what’s going on. The New York Times was scathing in 2006. This week’s Economist said

"Modern scientists are doing too much trusting and not enough verifying -to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis"

"Careerism also encourages exaggeration and the cherry­picking of results."

This is what the public think of us. It’s time that vice-chancellors did something about it, rather than willy-waving about rankings.

Conclusions

After criticism of the conclusions of official reports, I guess that I have to make an attempt at recommendations myself.  Here’s a first attempt.

  1. The heart of the problem is money. Since the total amount of money is not likely to increase in the short term, the only solution is to decrease the number of applicants.  This is a real political hot potato, but unless it’s tackled the problem will persist.  The gentlest way that I can think of doing this is to restrict research to a subset of universities. My proposal for a two-stage university system might go some way to achieving this.  It would result in better postgraduate education, and it would be more egalitarian for students. But of course universities that became “teaching only” would see it (wrongly) as a demotion, and it seems that UUK is unlikely to support any change to the status quo (except, of course, for increasing fees).
  2. Smaller grants, smaller groups and fewer papers would benefit science.
  3. Ban completely the use of impact factors and discourage use of all metrics. None has been shown to measure future quality.  All increase the temptation to “game the system” (that’s the usual academic euphemism for what’s called cheating if an undergraduate does it).
  4. “Performance management” is the method of choice for bullying academics.  Don’t allow people to be fired because they don’t achieve arbitrary targets for publications or grant income. The criteria used at Queen Mary London, Imperial, Warwick and King’s are public knowledge.  They are a recipe for employing spivs and firing Nobel Prize winners: the 1991 Nobel Laureate in Physiology or Medicine would have failed Imperial’s criteria in 6 of the 10 years when he was doing the work that led to the prize.
  5. Universities must learn that if you want innovation and creativity you have also to tolerate a lot of failure.
  6. The ranking of universities by ranking businesses, or by the REF, encourages bad behaviour: it drives vice-chancellors to improve their ranking by whatever means they can. This is one reason for bullying behaviour.  The rankings are totally arbitrary and a huge waste of money.  I’m not saying that universities should be unaccountable to taxpayers. But a simple list of publications is enough to show that very few academics are not trying. It’s absurd to try to summarise a whole university in a single number. It’s simply statistical illiteracy.
  7. Don’t waste money on training courses in research ethics. Everyone already knows what’s honest and what’s dodgy (though a bit more statistics training might help with that).  Most people want to do the honest thing, but few have the nerve to stick to their principles if the alternative is to lose their job and their home.  Senior university people must stop behaving in that way.
  8. University procedures for protecting the young are totally inadequate. A young student who reports bad behaviour by his seniors is still more likely to end up being fired than congratulated (see, for example, a particularly bad case at the University of Sheffield).  All big organisations close ranks to defend themselves when criticised.  Even in extreme cases, as when an employee commits suicide after being bullied, universities issue internal reports that blame nobody.
  9. Universities must stop papering over the cracks when misbehaviour is discovered. It seems to be beyond the wit of PR people to realise that it’s often best (and always cheapest) to put your hands up and say “sorry, we got that wrong”.
  10. There is an urgent need to get rid of the sort of statistical illiteracy that allows P = 0.06 to be treated as failure and P = 0.04 as success. This dichotomy is almost universal in biomedical papers and, given the hazards posed by the false discovery rate, could well be a major contributor to false claims. Journal editors need to offer much better statistical advice than they do at the moment.

Follow-up


Synexus is "The world’s largest multi-national company entirely focused on the recruitment and running of clinical trials company that runs clinical trials and screening programmes".


I should say at the outset that I’m deeply impressed by our local GP practice. I can’t imagine a better GP than mine; he has the ideal mix of knowledge and empathy. I do, however, worry about the fragmentation of the NHS and its creeping privatisation.

I came across Synexus because my wife had a letter (on our GP practice letterhead) inviting her to go for osteoporosis screening, and possibly to "take part in a study". Download the letter.

Notice that the form gives no idea of what the "study" might be. Notice also, more seriously, the small print on the second page of the form. Here it is in normal size print.

"If you contact Synexus and/or return the attached tear-off slip Synexus may, with your consent, use the data you provide for the purposes of informing you of the study, of medical products and processes that might be of interest to you. Your information will be held by, and access to it limited to, Synexus Ltd and/or companies within the Synexus group of companies and/or third parties acting on their behalf"

You are invited, in near-illegible small print, to allow all your medical data to be handed over to Synexus [see comment, below], and to an unspecified number of other companies and third parties. It also gives the company permission to "use the data you provide for the purposes of informing you. . . of medical products and processes that might be of interest to you". This appears to mean that in the future you’ll be pestered with mailings that bypass your GP and advertise (private?) screening etc. For the purposes of screening there should be no need to hand over any data whatsoever (and the practice manager assures me that they don’t).

My wife asked my advice about whether she should sign up for "the study" if invited to do so, so I asked the GP practice what the trial was about. Rather to my surprise, they didn’t know. Neither did Hertfordshire NHS. So I asked the National Osteoporosis Society, and they didn’t know either. After several emails and a phone call, I eventually got the details from Synexus.

I have two concerns about this. One is the argument that’s been raging about the value of indiscriminate screening. The case against it has been put perfectly in Margaret McCartney’s recent book, The Patient Paradox. There’s a good case that too much money is spent on people who are well, and not enough on those who are ill. Of course prevention is better than cure. The problem is that in many cases the screening tests aren’t accurate enough, so many people get diagnosed and treated when they are not actually ill.
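The arithmetic behind that problem is the same base-rate trap as the false discovery rate. Here is a sketch with invented numbers (not the characteristics of any real osteoporosis test): even a test with 80% sensitivity and 90% specificity, used to screen a population in which only 1% actually have the disease, produces far more false alarms than true diagnoses.

    # Base-rate sketch of indiscriminate screening. All three numbers
    # are invented for illustration, not real test characteristics.
    prevalence = 0.01   # fraction of the screened population with disease
    sensitivity = 0.80  # P(positive test | disease)
    specificity = 0.90  # P(negative test | no disease)

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)

    ppv = true_pos / (true_pos + false_pos)
    print(f"P(disease | positive test) = {ppv:.0%}")  # only ~7%

In other words, in this example more than 9 out of every 10 positive results would be people who are not actually ill.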

On top of that, there is now a serious worry about screening tests promoted by private companies, for profit. Lifeline has been criticised, for good reasons. The men’s health charity, Movember, promotes PSA screening for prostate cancer, one of the most unreliable tests in existence. There is now a web site that collates evidence about private health screening. Many of the tests are available on the NHS, and the NHS advice about them is being re-written so that it gives information about risks as well as benefits.

The NHS advice on screening for osteoporosis is still ambiguous. The evidence for benefit of screening at age 60 is not clear.

The main question, though, is this. If my wife were offered an opportunity to "take part in a study", should she say yes, or no? My first inclination was to say yes. Clinical trials are the only way to find out whether treatments work or not. If people don’t volunteer for trials, we’ll never know. But before saying yes, one would want to know that the trial was organised properly, so that it could answer a relevant question. That’s why I was surprised when I found it so hard to discover the details. Nobody seemed to know even where the trial was registered. It’s no use searching trial registers for "Synexus": you need to know who is paying for it.

Eventually Dr John Robinson of Synexus turned out to be very helpful. The protocol number is 20070337, with EudraCT number 2011-001456-11. The trial is registered at ClinicalTrials.gov and it has ethical approval. It’s a trial of a new osteoporosis treatment made by Amgen, AMG 785. It’s a monoclonal antibody against sclerostin, a protein that inhibits bone formation. It sounds like a good idea, but we won’t know how well it works until it’s been tested. The allocation of patients to AMG 785 or placebo is randomised and double blind. The patient information sheet for participants looks pretty good to me.

Nevertheless, I have some reservations about the trial. First, its organisation is odd. “After taking AMG 785 or placebo for one year, all study participants will be taking denosumab for the following year”. Denosumab is another product of the same company, Amgen. It has already been approved by NICE.  When I asked Dr Robinson why this arrangement had been chosen, this is what he said.

"Previous studies have shown that the maximal benefit on bone density is seen after 12 months and that treatment after this period shows a lower increase, it is for this reason treatment with AMG 785 is for 12 months in this study.

Other studies have also shown benefit in further improving and maintaining the increase in bone density and reducing fracture risk by subsequently treating patients with Alendronate after 12 months of AMG 785. This study is investigating whether similar or better findings occur with denosumab."

This does not make any sense to me. If the object is to compare AMG 785 with denosumab, they should be compared side by side, not sequentially. That brings us straight to the main problem with the trial design. It asks the wrong question. What the doctor needs to know is whether AMG 785 is more effective than existing treatments, not whether it is better than placebo. When I asked Dr Robinson about this, he said

"To quote from the protocol: A placebo-controlled study was chosen because it permits a minimally confounded demonstration of efficacy and safety of AMG 785 in the treatment of PMO. Using an active control such as a bisphosphonate means that more patients have to be enrolled to show benefit from AMG 785. The study already plans to enrol 6000 women. Increasing this number would add to the time required to complete the study. In addition the use of a placebo control is also within regulatory guidelines. "

What this means, in plain English, is that they are expecting a rather small difference between AMG 785 and existing treatments. It would take a very large number of patients to show that difference. If the difference is indeed small, it would be hard to justify the (doubtless eye-watering) cost of AMG 785 (denosumab costs £185.00 per dose). Testing a new drug against placebo, or against a low dose of something not very effective, is one of the stratagems listed in Chapter 4, Bad trials, of Ben Goldacre’s Bad Pharma. It makes the new drug look good, but it asks the wrong question.
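The sample-size arithmetic behind that answer is easy to sketch. With the usual normal approximation, the number of patients per group needed for 80% power at P < 0.05 grows as roughly 16/d², where d is the standardised difference you hope to detect. The effect sizes below are invented purely for illustration: a drug that beats placebo by d = 0.5 needs about 63 patients per group, but demonstrating a d = 0.1 advantage over an active comparator needs about 1,600.

    # Two-sample sample-size sketch: normal approximation, two-sided
    # test, alpha = 0.05, power = 0.80. Effect sizes d are invented.
    def n_per_group(d, z_alpha=1.96, z_beta=0.84):
        """Patients per group needed to detect standardised difference d."""
        return 2 * ((z_alpha + z_beta) / d) ** 2

    print(round(n_per_group(0.5)))  # vs placebo: d = 0.5 needs ~63 per group
    print(round(n_per_group(0.1)))  # vs active drug: d = 0.1 needs ~1568 per group

That is why a placebo-controlled design is so much cheaper for the sponsor, and also why it answers the less useful question.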

The National Osteoporosis Society should be an organisation to which patients could turn for advice in cases like this. In this case they were not helpful. They didn’t know much about the trial. I hope that this is not related to the fact that they get a lot of funding from Synexus. I noticed too that one of their advisors is the infamous Professor Richard Eastell, who admitted in print to lying in a paper about a drug for osteoporosis made by Procter & Gamble. It’s getting quite hard to find a medical charity that isn’t in the pocket of Big Pharma, or quacks (or even, occasionally, both).

Conclusion. The trial asks the wrong question. On those grounds alone, I think that my advice would be not to volunteer for the trial.

Follow-up

I should have mentioned an interesting and relevant Cochrane review, New treatments compared to established treatments in randomized trials (2012). The authors’ conclusions are as follows.

“Society can expect that slightly more than half of new experimental treatments will prove to be better than established treatments when tested in RCTs, but few will be substantially better. This is an important finding for patients (as they contemplate participation in RCTs), researchers (as they plan design of the new trials), and funders (as they assess the ‘return on investment’).”

15 May 2013. As noted in the comments, Synexus has been censured by the Advertising Standards Authority, because the ASA judged that they did not give sufficient prominence to the fact that their advertising of free screening was actually a way to recruit people into clinical trials.