Truth, falsehood and evidence: investigations of dubious and dishonest science


 Today, 25 September, is the first anniversary of the needless death of Stefan Grimm. This post is intended as a memorial. He should be remembered, in the hope that some good can come from his death.

On 1 December 2014, I published the last email from Stefan Grimm, under the title “Publish and perish at Imperial College London: the death of Stefan Grimm“. Since then it’s been viewed 196,000 times. The day after it was posted, the server failed under the load.

Since then, I have posted two follow-up pieces. On December 23, 2014, “Some experiences of life at Imperial College London. An external inquiry is needed after the death of Stefan Grimm“. Of course there was no external inquiry.

And on April 9, 2015, after the coroner’s report, and after Imperial’s internal inquiry, "The death of Stefan Grimm was “needless”. And Imperial has done nothing to prevent it happening again".

The tragedy featured in the introduction of the HEFCE report on the use of metrics.

 “The tragic case of Stefan Grimm, whose suicide in September 2014 led Imperial College to launch a review of its use of performance metrics, is a jolting reminder that what’s at stake in these debates is more than just the design of effective management systems.” “Metrics hold real power: they are constitutive of values, identities and livelihoods ”

I had made no attempt to contact Grimm’s family, because I had no wish to intrude on their grief. But in July 2015, I received, out of the blue, a hand-written letter from Stefan Grimm’s mother. She is now 80 and living in Munich. I was told that his father, Dieter Grimm, had died of cancer when he was only 59. I also learned that Stefan Grimm was distantly related to Wilhelm Grimm, one of the Gebrüder Grimm.

The letter was very moving indeed. It said "Most of the infos about what happened in London, we got from you, what you wrote in the internet".

I responded as sympathetically as I could, and got a reply which included several of Stefan’s drawings, and then more from his sister. The drawings were done while he was young. They show amazing talent, but by the age of 25 he was too busy with science to exploit his artistic talents.

With his mother’s permission, I reproduce ten of his drawings here, as a memorial to a man whose needless death was attributable to the very worst of the UK university system. He was killed by mindless and cruel "performance management", imposed by Imperial College London. The initial reaction of Imperial gave little hint of an improvement. I hope that their review of the metrics used to assess people will be a bit more sensible.

His real memorial lies in his published work, which continues to be cited regularly after his death.

His drawings are a reminder that there is more to human beings than getting grants. And that there is more to human beings than science.

Click the picture for an album of ten of his drawings. In the album there are also pictures of two books that were written for children by Stefan’s father, Dieter Grimm.

Dated Christmas Eve, 1979 (age 16)

### Follow-up

Well well. It seems that Imperial are having an "HR Showcase: Supporting our people" on 15 October. And the introduction is being given by none other than Professor Martin Wilkins, the very person whose letter to Grimm must bear some responsibility for his death. I’ll be interested to hear whether he shows any contrition. I doubt whether any employees will dare to ask pointed questions at this meeting, but let’s hope they do.

This is a very quick synopsis of the 500 pages of a report on the use of metrics in the assessment of research. It’s by far the most thorough bit of work I’ve seen on the topic. It was written by a group, chaired by James Wilsdon, to investigate the possible role of metrics in the assessment of research.

The report starts with a bang. The foreword says

 "Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers.”1 At their worst, metrics can contribute to what Rowan Williams, the former Archbishop of Canterbury, calls a “new barbarity” in our universities." "The tragic case of Stefan Grimm, whose suicide in September 2014 led Imperial College to launch a review of its use of performance metrics, is a jolting reminder that what’s at stake in these debates is more than just the design of effective management systems."  "Metrics hold real power: they are constitutive of values, identities and livelihoods "

And the conclusions (page 12 and Chapter 9.5) are clear that metrics alone can measure neither the quality of research, nor its impact.

"no set of numbers,however broad, is likely to be able to capture the multifaceted and nuanced judgements on the quality of research outputs that the REF process currently provides"

"Similarly, for the impact component of the REF, it is not currently feasible to use quantitative indicators in place of narrative impact case studies, or the impact template"

These conclusions are justified in great detail in 179 pages of the main report, 200 pages of the literature review, and 87 pages of Correlation analysis of REF2014 scores and metrics.

The correlation analysis shows clearly that, contrary to some earlier reports, all of the many metrics that are considered predict the outcome of the 2014 REF far too poorly to be used as a substitute for reading the papers.
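To see why a modest correlation is not good enough, here is a small illustrative simulation. The correlation coefficient (0.6) and sample size are invented for illustration, not taken from the report; the point is general: even a metric that correlates fairly well with true quality leaves most of the variation unexplained, and misses a large share of the genuinely best work.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
r = 0.6  # illustrative correlation between metric and "true" quality

# Standardised "true" quality scores, and a metric correlated with them at r
quality = rng.standard_normal(n)
metric = r * quality + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Best linear prediction of quality from the metric alone
predicted = r * metric

# Residual spread: how wrong the metric-based prediction is, on average.
# Theoretically sqrt(1 - r^2) = 0.8, i.e. 80% of the original spread remains.
residual_sd = np.std(quality - predicted)
print(f"residual SD = {residual_sd:.2f} (quality itself has SD 1.0)")

# What fraction of genuinely top-quartile outputs would the metric's own
# top quartile actually capture?
top_q = quality >= np.quantile(quality, 0.75)
top_m = metric >= np.quantile(metric, 0.75)
hit_rate = np.mean(top_m[top_q])
print(f"top-quartile hit rate = {hit_rate:.2f}")
```

With these invented numbers, roughly half of the genuinely top-quartile outputs fall outside the metric’s top quartile, which is why the report concludes that metrics cannot substitute for reading the papers.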

There is the inevitable bit of talk about the "judicious" use of metrics to support peer review (with no guidance about what judicious use means in real life) but this doesn’t detract much from an excellent and thorough job.

Needless to say, I like these conclusions since they are quite similar to those recommended in my submission to the report committee, over a year ago.

Of course peer review is itself fallible. Every year about 8 million researchers publish 2.5 million articles in 28,000 peer-reviewed English language journals (STM report 2015 and graphic, here). It’s pretty obvious that there are not nearly enough people to review carefully such vast outputs. That’s why I’ve said that any paper, however bad, can now be printed in a journal that claims to be peer-reviewed. Nonetheless, nobody has come up with a better system, so we are stuck with it.

It’s certainly possible to judge that some papers are bad. It’s possible, if you have enough expertise, to guess whether or not the conclusions are justified. But no method exists that can judge what the importance of a paper will be in 10 or 20 years’ time. I’d like to have seen a frank admission of that.

If the purpose of research assessment is to single out papers that will be considered important in the future, that job is essentially impossible. From that point of view, the cost of research assessment could be reduced to zero by trusting departments to appoint the best people they can find, and just giving the same amount of money to each of them. I’m willing to bet that the outcome would be little different. Departments have every incentive to pick good people, and scientists’ vanity is quite sufficient motive for them to do their best.

Such a radical proposal wasn’t even considered in the report, which is a pity. Perhaps they were just being realistic about what’s possible in the present climate of managerialism.

Other recommendations include

"HEIs should consider signing up to the San Francisco Declaration on Research Assessment (DORA)"

4. "Journal-level metrics, such as the Journal Impact Factor (JIF), should not be used."

It’s astonishing that it should be still necessary to deplore the JIF almost 20 years after it was totally discredited. Yet it still mesmerizes many scientists. I guess that shows just how stupid scientists can be outside their own specialist fields.

DORA has over 570 organisational and 12,300 individual signatories, but only three universities in the UK have signed (Sussex, UCL and Manchester). That’s a shocking indictment of the way (all the other) universities are run.

One of the signatories of DORA is the Royal Society.

"The RS makes limited use of research metrics in its work. In its publishing activities, ever since it signed DORA, the RS has removed the JIF from its journal home pages and marketing materials, and no longer uses them as part of its publishing strategy. As authors still frequently ask about JIFs, however, the RS does provide them, but only as one of a number of metrics".

That’s a start. I’ve advocated making it a condition to get any grant or fellowship, that the university should have signed up to DORA and Athena Swan (with checks to make sure they are actually obeyed).

And that leads on naturally to one of the most novel and appealing recommendations in the report.

 "A blog will be set up at http://www.ResponsibleMetrics.org The site will celebrate responsible practices, but also name and shame bad practices when they occur" "every year we will award a “Bad Metric” prize to the most egregious example of an inappropriate use of quantitative indicators in research management."

This should be really interesting. Perhaps I should open a book on which university will be the first to win the "Bad Metric" prize.

The report covers just about every aspect of research assessment: perverse incentives, whether to include author self-citations, normalisation of citation impact indicators across fields and what to do about the order of authors on multi-author papers.

It’s concluded that there are no satisfactory ways of doing any of these things. Those conclusions are sometimes couched in diplomatic language which may, uh, reduce their impact, but they are clear enough.

The perverse incentives that are imposed by university rankings are considered too. They are commercial products and if universities simply ignored them, they’d vanish. One important problem with rankings is that they never come with any assessment of their errors. It’s been known how to do this at least since Goldstein & Spiegelhalter (1996, League Tables and Their Limitations: Statistical Issues in Comparisons of Institutional Performance). Commercial producers of rankings don’t do it, because to do so would reduce the totally spurious impression of precision in the numbers they sell. Vice-chancellors might bully staff less if they knew that the changes in rank they chase are mere random error.
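The Goldstein & Spiegelhalter point can be illustrated with a tiny bootstrap sketch. The five institutions, their scores and sample sizes below are entirely invented: resample the underlying scores and the league-table positions of closely matched institutions jump around, so a one- or two-place "rise" in a ranking is indistinguishable from noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-paper scores for five hypothetical institutions.
# The underlying means differ by only 0.1, well within sampling error.
scores = {name: rng.normal(loc=mu, scale=1.0, size=80)
          for name, mu in zip("ABCDE", [2.9, 3.0, 3.0, 3.1, 3.2])}
names = list(scores)

def rank_once(samples):
    """Rank institutions (1 = best) by mean score of one bootstrap resample."""
    means = {n: np.mean(rng.choice(s, size=len(s), replace=True))
             for n, s in samples.items()}
    order = sorted(means, key=means.get, reverse=True)
    return {n: order.index(n) + 1 for n in names}

# Bootstrap: how stable is each institution's league-table position?
ranks = {n: [] for n in names}
for _ in range(1000):
    for n, pos in rank_once(scores).items():
        ranks[n].append(pos)

for n in names:
    lo, hi = np.percentile(ranks[n], [2.5, 97.5])
    print(f"{n}: median rank {int(np.median(ranks[n]))}, "
          f"95% interval {int(lo)}-{int(hi)}")
```

With numbers like these, the rank intervals for the middle institutions span several places: exactly the uncertainty that commercial rankings decline to report.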

Metrics, and still more altmetrics, are far too crude to measure the quality of science. To hope to do that without reading the paper is pie in the sky (even reading it, it’s often impossible to tell).

The only bit of the report that I’m not entirely happy about is the recommendation to spend more money investigating the metrics that the report has just debunked. It seems to me that there will never be a way of measuring the quality of work without reading it. To spend money on a futile search for new metrics would take money away from science itself. I’m not convinced that it would be money well-spent.

### Follow-up

The last email of Stefan Grimm, and its follow-up post, has been read over 195,000 times now.

After Grimm’s death, Imperial announced that it would investigate itself. The report is now available.

 Performance Management: Review of policies, procedures and support available to staff Following the tragic death of a member of the College’s staff community, Professor Stefan Grimm, the Provost invited the Senior Consul, Professor Richard Thompson, and the Director of Human Resources, Mrs Louise Lindsay, to consider the relevant College policies, procedures and the support available to all staff during performance review.

The report is even worse than I expected. It can be paraphrased as saying ‘our bullying was not done sufficiently formally; we need more forms and box-ticking’.

At the heart of the problem is Imperial’s Personal Review and Development Plan (PRDP). Here is an extract.

"Professor Grimm had been under review in the informal process for nearly two years. His line manager was using this period to help Professor Grimm obtain funding or alternative work (the review panel saw evidence of the efforts made in this regard). The subsequent formal process would have involved a minimum of two formal meetings with time to improve in-between formal meetings before consideration would have been given to the termination of Professor Grimm’s employment. Understandably there is a reluctance to move into formal hearings, particularly when the member of staff is hard working and diligent, but the formal stages would have provided more clarity to Professor Grimm on process and support through the written documentation, representation at meetings and HR involvement."

"It is recommended that the new capability procedure and ordinance include greater clarity on timescales for informal action and how this might operate in different roles."

It seems absurd to describe Wilkins’ letter as an attempt to "help" Professor Grimm. It was a direct threat to the livelihood of a competent 51-year-old full professor. Having flow charts for the bullying would not have helped. Neither would the provision by HR of "resilience" courses (what I’ve seen of such classes makes me feel suicidal at the thought of how far universities have sunk into pseudo-scientific HR babble).

I’ll skip straight to the conclusions, with my comments on them in italic.

1. Expand the Harassment Support Contact Programme to train volunteers, academic staff, who can be matched with individuals going through informal processes.

Looks like a charade to me. If they want to fire people without enough grants, they’ll do it.

2. Refresh and re-launch information on the employee assistance services: widespread distribution and regular update of promotional material.

Ditto

3. Ensure regular training is given to new and experienced managers in core HR procedures.

Train senior people to bully properly.

4. Create a separate guidance and support document for staff to supplement the document. The document to include a clear and concise summary of the informal and formal process, a flowchart, the support available to staff and frequently asked questions.

Pretend that staff are being helped by threatening to fire them.

5. Direct managers to inform HR before commencing the informal stage of performance management. All managers to have a briefing from their local HR representative on the instigation of performance management.

Make sure you’ve filled in the forms and ticked the boxes before you start bullying. HR don’t understand performance and should have no role in the process.

6. Create a separate policy for performance management in the form of a procedure, which includes clear definitions for informal and formal performance management and further guidance on the timescales and correspondence at each stage. Provide clarity on the role of the PRDP appraisal in performance management.

The role of the PRDP is to increase the status of Imperial College, while pretending it’s to benefit its victims.

7. Create template documentation for performance management correspondence and formal stages of the process. Direct managers to ensure all correspondence is reviewed by an HR representative before it is sent to a member of staff.

Bullying is OK if you’ve filled in enough forms.

In summary, these proposals merely add more bureaucracy. They won’t change anything. As one might have suspected, they are merely a smokescreen for carrying on as at present.

There is only one glimmer of hope in the whole report.

 Additional recommendation: Although this was not within the remit of the current review, a number of concerns were raised with the reviewers about the application and consistency of approach in the use of performance metrics in academia and in the College. The reviewers recommend that the College undertake a wider consultation and review of the application of performance metrics within Imperial College with recommendations to be considered by the Provost’s Board in the summer term.

I’ve been telling them since 2007 that the metrics they use to judge people are plain silly [download the paper]. So have many other people. Could the message have sunk in at last? We’ll see.

What should be done about performance?

I’ve been very critical of the metrics that are used by Imperial (and some other places) to harass even quite senior people. So, it might well be asked how I think that standards should be maintained. If people are paid by the taxpayers, it isn’t unreasonable to expect them to work to the best of their abilities. The following observations come to mind.

• Take a lesson from Bell Labs in its heyday (before performance managers got power). "First, management had to be technically competent; at Bell Labs, all managers were former researchers. Second, no researchers should have to raise funds. They should be free of that pressure. Third, research should and would be supported for years – if you want your company to last, take the long view. And finally, a project could be terminated without damning the researcher. There should be no fear of failure."
• Take a lesson from the great Max Perutz about how to run a successful lab."Max had the knack of picking extraordinary talent. But he also had the vision of creating a working environment where talented people were left alone to pursue their ideas. This philosophy lives on in the LMB and has been adopted by other research institutes as well. Max insisted that young scientists should be given full responsibility and credit for their work. There was to be no hierarchy, and everybody from the kitchen ladies to the director were on first-name terms. The groups were and still are small, and senior scientists work at the bench."
• Read Gus John "The results of the Guardian higher education network’s survey on bullying in higher education should give the entire sector cause to worry about the competence and style of leaders and managers in the sector"
• The vast majority of scientists whom I know work absurdly long hours. They are doing their best without any harassment from "performance managers". Some are more successful, and/or lucky, than others. That’s how it is. Get used to it.
• Rankings of universities are arbitrary and silly, but worse, they provide an incentive to vice-chancellors to justify their vast salaries by pushing their institution up the rankings by fair means or foul. It’s no exaggeration to suspect that things like the Times Higher Education rankings and the REF contributed to the death of Stefan Grimm.
• Realise that HR know nothing about science: their "performance management" kills original science, and it leads to corruption. It must bear some of the blame for the crisis in the reproducibility of published work.
• If you want innovation, you have to tolerate lots and lots of failure

### Follow-up

Stop press On April 7th, the coroner said that Grimm had asphyxiated himself on 25 September 2014. He described the death as "needless". And Imperial’s HR director, Louise Lindsay, when asked if the new procedures would have saved his life, said it was "not clear it would have resulted in a different outcome". So we have it from the horse’s mouth. Imperial has done nothing to prevent more tragedies happening.

10 April 2015

King’s College London has just issued a draft for its "performance management" system. You can read all about it here.

"Performance management is a direct incentive to do shoddy short-cut science."

17 April 2015

Alice Gast declines to apologise

At 06.22 on Radio 4’s Today Programme, Tanya Beckett interviewed Alice Gast, President of Imperial College London. After a 4-minute commercial for Imperial, Gast is asked about the death of Stefan Grimm. Her reply doesn’t even mention Grimm: “professors are under a lot of pressure . . .”. Not a word of apology or explanation is offered. I find it hard to comprehend such a heartless approach to her employees.

Listen to the interview

1 May 2015

The Imperial students’ newspaper, Felix Online, carried a description of the internal report and the inquest: Review in response to Grimm’s death completed. Results criticised by external academics: “Imperial doesn’t get it.” It’s pretty good.

I wonder what undergraduates feel about being taught by people who write letters like the one Martin Wilkins wrote?

The University of Warwick seems determined to wrest the title of worst employer from Imperial College London and Queen Mary College London. In little over a year, Warwick has had four lots of disastrous publicity, all self-inflicted.

First came the affair of Thomas Docherty.

Thomas Docherty

Professor of English and Comparative Literature, Thomas Docherty was suspended in January 2014 by Warwick because of "inappropriate sighing", "making ironic comments" and "projecting negative body language". Not only was Docherty punished, but also his students.

"As well as being banned from campus, from the library, and from email contact with his colleagues, Docherty was prohibited from supervising his graduate students and from writing references. Indiscriminate, disproportionate, and unjust measures against the professor were also deeply unfair to his students."

Ludicrously, rather than brushing the matter aside, senior management at Warwick hired corporate lawyers to argue that his behaviour was grounds for dismissal.

The story appeared in every UK newspaper and rapidly spread abroad. It must have been the most ham-fisted bit of PR ever. But rather than firing the HR department, the University of Warwick let the matter fester for a full nine months before reinstating Docherty in September 2014.

The university managed to get the worst possible outcome. The suspension provoked world-wide derision and in the end they admitted they’d been wrong.  Jeremy Treglown, a professor emeritus of Warwick (and former editor of The Times Literary Supplement) described the episode as being like “something out of Kafka”.

And guess what, nobody was blamed and nobody resigned.

Firing people for doing cheap research

Warwick has followed the bad example set by Queen Mary College London, Kings College London and Imperial College London. If you don’t average an external grant income of at least £75,000 a year over the past four years, your job is at risk. Apart from its cruelty, the taxpayer is likely to take a dim view of academics being compelled to make research as expensive as possible. Some people need no more than a paper and pencil to do brilliant work. If you are one of them, don’t go to any of these universities.

It’s simply bad management. They shouldn’t have taken on so many people if they can’t pay the bills. Many universities took on extra staff in order to cheat on the REF. Now they have to cast some aside like worn-out old boots.

The tone of voice

Warwick University has very recently issued a document "Warwick tone of voice: Full guidelines. March 2015". It’s a sign of their ham-fisted management style that it wasn’t even hidden behind a password. They seem to be proud of it. Of course it provoked a storm of hilarity on social media. Documents like that are designed to instruct people not to give truthful opinions but to act as advertising agents for their university. The actual effect is, of course, exactly the opposite. They reduce the respect for the institution that issues such documents.

Here are some quotations (try not to laugh; you might get fired).

"What is tone of voice and why do we need a ‘Warwick’ tone of voice?
The tone of our language defines the way people respond to us. By writing in a tone that’s true to our brand, we can express what it is that makes University of Warwick unique."

"Our brand: defined by possibility

What is it that makes us unique? We’re a university with modern values and a formidable record of academic and commercial achievement — but not the only one. So what sets us apart?

The difference lies in our approach to everything we do. Warwick is a place that fundamentally rejects the notion of obstacles — a place where the starting point is always ‘anything is possible’. "

Then comes the common thread. It’s all to do with rankings.

“What if we raised our research profile to even higher levels of international excellence? Then we could be ranked as one of the world’s top fifty universities."

The people who sell university rankings (and the REF) have much to answer for.

There’s a good post about this fiasco, from people whose job is branding. "How not to write guidelines".

Outsourcing teaching

As if all this were not enough, on April 5th 2015, we heard that "Warwick Uni to outsource hourly paid academics to subsidiary". Universities already rely totally on people on short-term contracts. Most research is done by PhD students and post-doctoral researchers on three (or sometimes five) year contracts. They are supervised (not always very well) by people who spend most of their time writing grant applications. Science must be one of the most insecure jobs going.

Increasingly we are seeing casualisation of academics. A three year contract looks like luxury compared with being hired by the hour. It’s rapidly approaching zero-hours contracts for PhDs. In fact it’s reported that people hired by TeachHigher won’t even have a contract: "staff hired under TeachHigher will be working explicitly not on a contract, but rather, an ‘agreement’ ".

The organisation behind this is called TeachHigher. And guess who owns it? The University of Warwick. It is a subsidiary of the Warwick Employment Group which already runs several other employment agencies, including Unitemps which deals with cleaners, security and catering staff.

The university claims that it isn’t "outsourcing" because TeachHigher is part of the university. For now, anyway. It’s reported that "The university plans to turn the project into a commercial franchise, similar to another subsidiary used to pay cleaners and catering staff, it can sell to other institutions."

The Warwick students’ newspaper "spoke to a PhD student who was fired last year from a teaching job with Unitemps after participating in strike action, who felt one of the aims of creating TeachHigher may be “to prevent collective action from taking place.”"

Bringing the university into disrepute is something for which you can be fired. The vice-chancellor, Nigel Thrift, has allowed Warwick to become a laughing stock four times in a single year. Perhaps it is time that the chair of Council, George Cox, did something about it?

Universities don’t have to be run like that. UCL isn’t, for one.

### Follow-up

9 April 2015 It seems that TeachHigher was proposing to pay a lecturer £5 per hour. This may not be accurate but it’s certainly caused a stir.

Laurie Taylor, ever-topical, was on the Docherty case in Times Higher Education.

 Riga, Riga, roses “I’ve nothing against Latvia per se, but I can’t in all honesty see any real parallels between a university in such a faraway and somewhat desolate place as Riga and our own delightful campus.” That was how Jamie Targett, our Director of Corporate Affairs, responded to the news that the European Court of Human Rights had found that a professor at Riga Stradiņš University had been unfairly sacked for criticising senior management. University staff, the court ruled, must be free to criticise management without fear of dismissal or disciplinary action. Targett “thoroughly rejected” the suggestion from our reporter Keith Ponting (30) that there might be “a parallel” between what happened at Riga and our own university’s decision to ban Professor Busby of our English Department from campus for nine months for a disciplinary offence. This, insisted Targett, was a “wholly inappropriate parallel”. For whereas the Latvian professor had been disciplined for speaking out against “alleged nepotism, plagiarism, corruption and mismanagement” in his department, Professor Busby had been banned from campus and from contact with students and colleagues for nine months for the “far more heinous offence” of “sighing” during an appointments interview. Targett said he “trusted that any fair-minded person, whether from Latvia or indeed the Outer Caucasus, would be able to see the essential difference in the scale of offence”.

10 April 2015

The London Review of Books has a rather similar piece, Mind Your Tone, by Glen Newey.

"It’s tough to pick winners amid the textureless blather that has lately seeped from campus PR outfits".

"In a keen field, though, it’s Warwick’s drill-sheet that takes the jammie dodger".

17 April 2015

Anyone would have thought that Laurie Taylor had read this post. His inimitable Poppletonian column this week was entirely devoted to Warwick.

It makes a nice change to be able to compliment an official government report.

 Ever since the House of Lords report in 2000, the government has been vacillating about what should be done about herbalists. At the moment both western herbalists and traditional Chinese medicine (TCM) are essentially unregulated. Many (but not all) herbalists have been pushing for statutory regulation, which they see as government endorsement. It would give them a status like the General Medical Council.

A new report has ruled out this possibility, for very good reasons [download local copy].

### Back story (abridged!)

My involvement began with the publication in 2008 of a report on the Regulation of Practitioners of Acupuncture, Herbal Medicine, Traditional Chinese Medicine. That led to my post, A very bad report: gamma minus for the vice-chancellor. The report was chaired by the late Professor Michael Pittilo BSc PhD CBiol FIBiol FIBMS FRSH FLS FRSA, Principal and Vice-Chancellor of The Robert Gordon University, Aberdeen. The membership of the group consisted entirely of quacks, and the vice-chancellor’s university ran a course in homeopathy (now closed).

The Pittilo report recommended statutory regulation and "The threshold entry route to the register will normally be through a Bachelor degree with Honours". It ignored entirely the little problem that you can’t run a BSc degree in a subject that’s almost entirely devoid of evidence. It said, for example, that acupuncturists must understand "yin/yang, 5 elements/phases, eight principles, cyclical rhythms, qi, blood and body fluids". But of course there is nothing to "understand"! They are all pre-scientific myths. This “training dilemma” was pointed out in one of my earliest posts, You’d think it was obvious, but nonetheless the then Labour government seemed to take this absurd report seriously.

In 2009 a consultation was held on the Pittilo report. I and many of my friends spent a lot of time pointing out the obvious. Eventually the problem was again kicked into the long grass.

The THR scheme

Meanwhile European regulations caused the creation of the Traditional Herbal Registration (THR) scheme. It’s run by the Medicines and Healthcare products Regulatory Agency (MHRA). This makes it legal to put totally misleading claims on labels of herbal concoctions, as long as they are registered under the THR scheme. They also get an impressive-looking certification mark. All that’s needed to get THR registration is that the ‘medicines’ are not obviously toxic and that they have been in use for 30 years. There is no need to supply any information whatsoever about whether they work or not. This appears to contradict directly the MHRA’s brief:

"”We enhance and safeguard the health of the public by ensuring that medicines and medical devices work and are acceptably safe."

After much effort, I elicited an admission from the MHRA that there was no reason to think that any herbal concoctions were effective, and that there was nothing to prevent them from adding a statement to say so on the label. They just chose not to do so. That’s totally irresponsible in my opinion. See Why does the MHRA refuse to label herbal products honestly? Kent Woods and Richard Woodfield tell me. Over 300 herbal products have been registered under the THR scheme (a small percentage of the number of products being used). So far only one product of Tibetan medicine and one traditional Chinese medicine have been registered under THR. These are the only ones that can be sold legally now, because no herbs whatsoever have achieved full marketing authorisation; that requires good evidence of efficacy, and that doesn’t exist for any herb.

### The current report

Eventually, in early 2014, the Tory-led government set up yet another body, the "Herbal Medicines and Practitioners Working Group" (HMPWG). My heart sank when I saw its membership (Annex A.2). The vice-chair was none other than the notorious David Tredinnick MP (Con, Bosworth). It was stuffed with people who had vested interests. I wrote to the chair and to the few members with scientific credentials to put my views to them.

But my fears were unfounded, because the report of the HMPWG was written not by the group, but by its chair alone. David Walker, the deputy chief medical officer, had clearly listened. Here are some quotations.

The good thing about the European laws is that

"This legislation effectively banned the importation and sale of large-scale manufactured herbal medicine products. This step severely limited the scope of some herbal practitioners to continue practising, particularly those from the Traditional Chinese Medicine (TCM) and Ayurvedic traditions."

The biggest loophole is that

"At present under UK law it is permitted for a herbal practitioner to see individual patients, offer diagnoses and prepare herbal treatments on their own premises, as long as these preparations do not contain banned or restricted substances. This is unchanged by the Traditional Herbal Medicinal Products Directive. "

Walker recognised frankly that there is essentially no good evidence that any herb, western or Chinese, works well enough to make an acceptable treatment. And importantly he, unlike Pittilo, realised that this precludes statutory regulation.

"There are a small number of studies indicating benefit from herbal medicine in a limited range of conditions but the majority of herbal medicine practice is not supported by good quality evidence. A great deal of international, primary research is of poor quality. "

"Herbal medicine practice is therefore currently based upon traditional practice rather than science. It is difficult to differentiate good practice from poor practice on the basis of this evidence in a way that could establish standards for statutory regulation."

The second problem is the harm done by herbs. Herbalists, western and Chinese, have no satisfactory way of reporting side effects:

". . . there is very limited understanding of the risks to patient safety from herbal medicines and herbal practice. A review of safety data was commissioned from HMAC as part of this review. This review identified many anecdotal reports and case studies but little systematically collected data. Most herbal medicine products have not been through the rigorous licensing process that is required of conventional pharmaceutical products to establish their safety and efficacy. Indeed, only a small proportion have even been subject to the less rigorous Traditional Herbal Registration (THR) process."

"The anecdotal evidence of risk to patients from herbal products in the safety review highlighted the prominence of manufactured herbal medicines in the high profile serious incidents which have been reported in recent years. Many of these reports relate to harm thought to be caused by industrially manufactured herbal products which contained either dangerous herbs, the wrong constituents, toxic contaminants or adulterants. All such industrially manufactured products are now only available under European regulations if their safety is assured through MHRA licensing or THR accreditation; and specific dangerous herbs have been banned under UK law. This has weakened the case for introduction of statutory regulation as a further safety measure."

Then Walker identified correctly the training dilemma. Although it seems obvious, this is a big advance for a government document. Degrees that teach nonsense are not good training: they are miseducation.

"The third issue is the identification of educational standards for training practitioners and the benchmarking of standards for accrediting practitioners. With no good data on efficacy or safety, it is difficult for practitioners and patients to understand or quantify the potential benefits and risks of a proposed therapeutic intervention. Training programmes could accredit knowledge and skills in some areas including pharmacology and physiology, professional ethics and infection control but without a credible evidence base relating to the safety and effectiveness of herbal medicine it is hard to see how they could form the basis of accreditation in this field of practice.

There are a number of educational university programmes offering courses in herbal medicine although the number has declined in recent years. Some of these courses are accredited by practitioner organisations which is a potential governance risk as the accreditation may be based on benchmarks established by tradition and custom rather than science.
"

"The herbal medicine sector is in a dilemma" is Walker’s conclusion.

"Some practitioners would like to continue to practise as
they do now, with no further regulation, and accept that their practice is based on tradition and personal experience rather than empirical science. The logical consequence of adopting this form of practice is that we should take a precautionary approach in order to ensure public safety. The public should be protected through consumer legislation to prevent false claims, restricting the use of herbal products which are known to be hazardous to health"

The problem with this, of course, is that although there is plenty of law, it’s rarely enforced: see Most alternative medicine is illegal. Trading Standards very rarely enforce the Consumer Protection Regulations (2008), but Walker is too diplomatic to mention that fact.

"The herbals sector must recognise that its overall approach (including the rationale for use of products and methods of treatment, education and training, and interaction with the NHS) needs to be more science and evidence based in order to be established as a profession on the same basis as other groups that are statutorily regulated."

### So what happens next?

In the short term nothing will happen.

The main mistake has been avoided: there will be no statutory regulation.

The other options are (a) do nothing, or (b) go for accreditation of a voluntary register (AR) by the Professional Standards Authority for Health and Social Care (PSA). Walker ends up recommending the latter, but only after a lot more work (see pages 28-29 of report). Of particular interest is recommendation 5.

"As a first step it would be helpful for the sector organisations to develop an umbrella voluntary register that could support the development of standards and begin to collaborate on the collection of safety data and the establishment of an academic infrastructure to develop training and research. This voluntary register could in due course seek accreditation from the Professional Standards Authority for Health and Social Care (PSA)."

So it looks as though nothing will happen for a long time, and herbalists and TCM may end up with the utterly ineffectual PSA. After all, the PSA have accredited voluntary registers of homeopaths, so clearly nothing is too delusional for them. It’s very obvious that, unlike Walker, the PSA are quite happy to ignore the training dilemma.

### Omissions from the report

Good though this report is, by Department of Health standards, it omits some important points.

Endangered species and animal cruelty aren’t mentioned in the report. Traditional Chinese medicine, and its variants, are responsible for the near-extinction of the rhinoceros, the tiger and other species, because of the superstitious belief that their parts have medicinal value. It’s not uncommon to find animal parts in Chinese medicines sold in the UK, despite it being illegal.

And the unspeakably cruel practice of farming bears to collect bile is a direct consequence of TCM.

A bile bear in a “crush cage” on Huizhou Farm, China (Wikipedia)

### Statutory regulation of chiropractors

The same arguments used in Walker’s report to deny statutory regulation of herbalism would undoubtedly lead to denial of statutory regulation of chiropractors. The General Chiropractic Council was established in 1994, with a status that’s the same as that of the General Medical Council. That was a bad mistake. The GCC has not protected the public; in fact it has acted as an advertising agency for chiropractic quackery.

Perhaps Prof. Walker should be asked to review the matter.

### Follow-up

You can also read minutes of the HMPWG meetings (and here). But, as usual, all the interesting controversies have been sanitised.

Edzard Ernst has also commented on this topic: Once again: the regulation of nonsense will generate nonsense – the case of UK herbalists.

DOI: 10.15200/winn.142809.94999

The Research Excellence Framework (REF) is the latest in a series of 6-yearly attempts to assess the quality of research in UK universities. It’s used to decide how to allocate about £1.6 billion per year of taxpayers’ money, the so-called "quality-related" (QR) allocation.

It could have been done a lot worse. One of the best ideas was that only four papers could be submitted, whatever the size of a research group. After much argument, the judgement panels were told not to use journal impact factors as a proxy for quality (or lack of quality), though it’s clear that many people did not believe that this instruction would be obeyed. But the exercise cost at least £60 million. At UCL alone, it took 50–75 person-years of work, and the papers that were submitted were assessed by people who often had no deep knowledge of the field. It was a shocking waste of time and money, and its judgements in the end were much the same as last time.

### Did the REF benefit science?

It’s frequently said that the REF improved the UK’s science output. The people who claim this need a course in the critical assessment of evidence. Firstly, there is no reason to think that science has improved in quality in the last 6 years, and secondly any changes that might have occurred are hopelessly confounded with the passage of time, the richest source of false correlations.

I’d argue that the REF has harmed science by encouraging the perverse incentives that have done so much to corrupt academia. The REF, and all the other university rankings produced by journalists, are taken far too seriously by vice-chancellors, and that does active harm. As one academic put it:

"This isn’t about science – it’s about bragging rights, or institutional willy-waving. "

There are now serious worries about lack of reproducibility of published work, waste of money spent on unreliable studies, publication of too many small under-powered studies, bad statistical practice (like ignoring the false discovery rate), and about exaggerated claims by journals, university PR people and authors themselves. These result in no small part from the culture of metrics and the mismeasurement of science. The REF has added to the pressures.

The present system is highly unsatisfactory, so the only real question is: what should be done instead?

### What’s to be done?

Transferring all the QR money to the Research Councils won’t work. It would merely encourage the grossly bad behaviour that we’ve seen at Imperial College London, Warwick University, King’s College London and Queen Mary University of London, all of which have fired successful senior staff simply because their grant income wasn’t deemed big enough. (This is odd, because the same managers whine continually that they make a loss on research grants, but that’s another question.) It’s been suggested that this could be avoided by reducing considerably the overheads that come with grants, but this would leave a shortfall that, without QR, would be impossible to make up.

At present a HEFCE working group is considering the possibility that metrics might be used in the next REF. It’s a sensible group of people, and they are well aware of the corrupting influence of metrics, and the lack of evidence that they measure the quality of research. So if reading papers takes too much time and money, and metrics are likely to lead to widespread "gaming" (a euphemism for cheating), what should be done?

I made a suggestion in 2010, but it seems to have been totally ignored, despite appearing in the Times (in their premier Thunderer opinion column). So I’ll try to make the case again, in the context of the REF.

A complete re-thinking of tertiary education is needed.

### Proposal for a two stage higher education system

It seems to be a good thing that such a large proportion of the population now get higher education.  But the university system has failed to change to cope with the huge increase in the number of students.

The system of highly specialist honours degrees might have been adequate when 5% of the population did degrees, but that system seems quite inappropriate when 50% are doing them.

There are barely enough university teachers who are qualified to teach specialist third-year or postgraduate courses. And many teachers must have suffered from (in my field) trying to teach the subtleties of the exponential probability density function to a huge third-year class, most of whom have already decided that they want to be bankers or estate agents.

These considerations have driven me to conclude, somewhat reluctantly, that the whole system needs to be altered.

Honours degrees were intended as a prelude to research, and 50% of the population are not going to do research (fortunately for the economy). Vice-chancellors have insisted on imposing highly specialist degrees on large numbers of undergraduates, degrees which are not what they want or need.

I believe that all first degrees should be ordinary degrees, and these should be less specialist than now. Some institutions would specialise in teaching such degrees; others would become predominantly postgraduate institutions, which would have the time, money and expertise to do proper advanced teaching, rather than the advanced Powerpoint courses that dominate what passes for Graduate Schools in the UK.

There would, of course, be almighty rows about which universities would be re-allocated to teach ordinary degrees. That’s not a reason to educate students in 2015 using a pre-war system.

### The two-stage system would be more egalitarian than the present one

I anticipate that some people might think that this system is a reversion to the pre-1992 divide between polytechnics and universities. It isn’t. The pre-1992 system labelled you as either polytechnic or university: it was a two-tier system. I’m proposing a two-stage system. The two sorts of institution would work in series, not in parallel.

Such a system would be more egalitarian than now, not less.

Everyone would start out with the same broad undergraduate education, and the decision about whether to specialise, and the area in which to specialise, would not have to be made before leaving (high) school, as now, but would be postponed until two or three years later. That’s a lot better, especially for people from poorer backgrounds.

If this were done, most research would be done in the postgraduate institutions.  Of course there are some good researchers in institutions that would become essentially teaching-only, so there would have to be chances for such people to move to postgraduate universities, and for some people to move in the other direction.

This procedure would, no doubt, result in a reduction in the huge number of papers that are published (but read by nobody).  That is another advantage of my proposal.   It’s commonly believed that there is a large amount of research that is either trivial or wrong.  In biomedical research, it’s been estimated that 85% of resources are wasted (Macleod et al., 2014).

It’s well-known that any paper, however bad, can be published in a peer-reviewed journal. Pubmed, amazingly, indexes something like 30 journals devoted to quack medicine, in which papers by quacks are peer-reviewed by other quacks, and which are then solemnly counted by bean-counters as though they were real research. The pressure to publish when you have nothing to say is one of the perverse incentives of the metrics culture.

It seems likely that standards of research in second-stage universities would be at least as high as at present. If that’s the case, then QR could simply be allocated on the basis of the number of people in a department. Dorothy Bishop has shown that, even under the present system, the amount of QR money received is strongly correlated with the size of the department (correlation coefficient = 0.995 for psychology/neuroscience).

From Dorothy Bishop’s blog (r = 0.995).

Using metrics produces only a tiny increase in the correlation coefficient for RAE data: it could hardly be any higher than 0.995.

In other words, after all the huge amount of time, effort and money that’s been put into assessment of research, every submitted researcher ends up getting much the same amount of money.
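The point can be made concrete with a small sketch. The numbers below are invented for illustration (they are not Dorothy Bishop’s actual data): when the correlation between submitted staff numbers and QR funding is this close to 1, simply dividing the same total pot by headcount reproduces the observed allocations almost exactly.

```python
# Illustrative sketch only: hypothetical department sizes and QR-style
# allocations (invented figures, NOT the real REF data), showing that a
# Pearson correlation near 1 means a headcount-only allocation would give
# each department almost exactly what it gets now.
import math

staff = [10, 18, 25, 40, 55, 80]            # submitted staff per department (invented)
funding = [0.9, 1.7, 2.4, 3.9, 5.6, 7.9]    # QR funding, in £ million (invented)

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(staff, funding)
print(f"r = {r:.3f}")

# A headcount-only allocation: share the same total pot equally per person.
pot = sum(funding)
per_head = pot / sum(staff)
predicted = [per_head * s for s in staff]
for s, actual, pred in zip(staff, funding, predicted):
    print(f"{s:3d} staff: actual £{actual:.2f}M, headcount-based £{pred:.2f}M")
```

For these invented figures r comes out above 0.99, and the headcount-based column tracks the "actual" one closely, which is the whole argument: the expensive assessment machinery ends up approximating a division sum.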

That system wouldn’t work at the moment, because, sadly, universities would, no doubt, submit the departmental cat for a share of the cash.  But it could work under a system such as I’ve described.  The allocation of QR would take microseconds and cost nothing.

### How much would the two-stage system cost?

To have any hope of being accepted by politicians, the two-stage system would probably have to cost no more than the existing system. As far as I know, nobody seems to have made any serious attempt to work out the costs. Perhaps they should. It won’t be easy, because an important element of the two-stage system is to improve postgraduate education, and postgraduate education was forgotten in the government’s "reforms".

Much would depend on whether the first-stage ordinary degrees could be taught in two years. In an institution that does little research, there would be no justification for the long summer vacation. Something comparable with (high) school holidays would be more appropriate, and if a decent job could be done in two years, that could save enough money to pay for the rest. It would also minimise the debt that hangs round the necks of graduates.

The cost of running the second stage would depend on how many students opted (and qualified) to carry on to do an honours degree, and on how many of those wanted to go on to graduate school and higher degrees. The number of people who went on to specialist honours degrees would inevitably be smaller than now, so their education would be cheaper. But, crucially, they could be educated better. And because of the specialist researchers in a postgraduate institution, it would be possible to have real postgraduate education in advanced research methods.

At present, Graduate Schools in the UK (unlike those in the USA) rarely teach topics beyond advanced Powerpoint, and that’s a recipe for later mediocrity.

In order to estimate the actual cost, we’d need to know how many people wanted to go beyond the first degree (and qualified to do so). If this number were not too large, the proposed system could well be cheaper than the present one, as well as being more egalitarian, and providing better postgraduate education. The Treasury should like that.

### The California System

It will not have escaped readers’ attention that the two-stage system proposed here has much in common with higher education in the USA. In particular, it resembles the University of California system, which was started in 1960 and became a model for the rest of the world.

Meanwhile, the UK persists with a pre-war system of specialist honours degrees that is essentially unchanged since only a handful of people went to universities.

It’s time for the UK to have a serious debate about whether we need to change.

### Follow-up

I just noticed this, from the inimitable Laurie Taylor. It is dated 4 July 2013. Who says the REF does not encourage cheating?

### Appointments

Are you a distinguished academic researcher looking to supplement your income? Then look no further. Poppleton is offering 24 extraordinarily well-paid and extraordinarily part-time posts to leading scholars in almost any discipline who will help to raise its profile in the research excellence framework.

These posts will follow what is known as the Cardiff-Swansea paradigm in that successful candidates need not have conducted any of their distinguished research at Poppleton, have no need to ever visit the actual campus, and can be assured that their part-time contracts will expire immediately after the date of the REF census.

3 February 2015

The day after this post appeared, the Guardian published a version of it which discussed only the two-stage degree proposals but omitted the bit about the Research Excellence Framework (REF 2014). The title was "Honours degrees aren’t for all – some unis should only teach two-year courses". There are a lot more comments there than here. I assume that the headline was written by one of those pesky subeditors who failed to understand what’s important (the two-year degrees were just a suggestion, nothing to do with the main proposals).

3 April 2015

As an experiment, this blog has been re-posted on the Winnower. The advantage of this is that it now has a digital object identifier, DOI: 10.15200/winn.142809.94999

The tragedy of the apparent suicide of Stefan Grimm is now known worldwide. His last email has been read by more than 160,000 people from over 200 countries. This post gathers together some of the reactions to his death. It’s a Christmas card for the people who are responsible.

Alice Gast (president), James Stirling (provost), Dermot Kelleher (VP Health)

“This isn’t about science – it’s about bragging rights, or institutional willy-waving.” from Grimm’s Tale

### The back story

On Monday 1st December I published Stefan Grimm’s last email. It has been read by more than 160,000 people from over 200 different countries.

On Tuesday 2nd December, Stefan Grimm’s immediate boss, Martin Wilkins, wrote to me. He claimed: “We met from time to time to discuss science and general matters. These meetings were always cordial.”

On Wednesday 3rd December, the Dean of Medicine, Dermot Kelleher, mailed all Faculty of Medicine staff (not the rest of the College). Read the letter. It said very little. But it did include the words

“I regret I did not know Stefan personally, and I looked to colleagues to describe to me his life and the impact of his work at Imperial.”

It seems a bit odd that the Dean of Medicine did not know a senior professor, but that seems to be life at Imperial.

On Thursday 4th December, Times Higher Education printed the same last email, and also the text of a threatening letter sent to Grimm in March by his boss, Martin Wilkins. The letter was very far from cordial, contrary to what Wilkins claimed. It included these words.

“I am of the opinion that you are struggling to fulfil the metrics of a Professorial post at Imperial College which include maintaining established funding in a programme of research with an attributable share of research spend of £200k p.a and must now start to give serious consideration as to whether you are performing at the expected level of a Professor at Imperial College.”

For a successful 51-year-old with a good publication record to get a letter like that must have been devastating.

On Friday 5th December, Imperial made its first public announcement of his death, more than three months after it happened. By this time a damning account of his death had appeared even in the Daily Mail. The announcement read as though the world was unaware of his last words. It was a PR disaster: weasel words and crocodile tears. It made Imperial College appear to be totally heartless. The official announcement was accompanied by the phone numbers of the Samaritans, the chaplaincy and mental health first-aiders. Giving a person a phone number to call when you’ve destroyed their life is not an adequate substitute for treating staff properly.

Imperial are still trying to pretend that Grimm’s death is nothing to do with them, despite the fact that the whole world now knows quite enough of the facts to see otherwise.

### The Coroner’s Inquest

The inquest into Grimm’s death was adjourned on October 8th, pending investigations into its cause. If you know anything relevant you should email the Coroner’s officer who is responsible for the investigation. That’s Molly Stewart (Molly.Stewart@lbhf.gov.uk). It is rather important that all the information doesn’t come from the College authorities, which cannot be relied on to tell the truth.

### Some reports about the regime at Imperial College

Since my post went up on December 1st, I’ve had a stream of emails which testify to the reign of terror operated by the senior management at Imperial. The problem is by no means restricted to the Faculty of Medicine, though the problems seem to be worst there.

Many of these correspondents don’t want to speak in public. That’s certainly true of people who still work at Imperial, who have been warned to deflect all enquiries to HR. Here are some of the stories that I can reveal.

The Research Excellence Framework (REF) results were announced on 18th December. All university PR people hunted through the results, and all found something to boast tediously about. The letter from Imperial’s provost, James Stirling (read it), is pretty standard stuff, as is the letter from the Dean of Medicine, Dermot Kelleher (read it). Needless to say, neither letter mentioned the price in human misery, and even death, that Imperial had paid for its high ranking. I felt compelled to tweet

Kelleher promoted. Astonishingly, the very next day, the Dean of Medicine, on whose watch Grimm died, was promoted. You can read the letter from Imperial’s president, Alice Gast, in which this is announced. He is to be Vice President (Health), as a reward, no doubt, for the cruel regime he ran as Dean. The letter has all the usual vacuous managerial buzzwords, e.g. “to support and grow the multidisciplinary paradigm in health”. Remember DC’s rule number one: never trust anyone who uses the word ‘paradigm’. Needless to say, still no mention of treating staff better.

Dr William J Astle.

Dr Astle is one of many people who wrote to me about his experiences at Imperial College. Although he still appears on Imperial’s web site, he now works as a statistician in a bioinformatics team at the University of Cambridge (see their web site).

He wrote again on 23 October 2014, to pass on an email (read the mail) that was sent to Department staff after Grimm’s last email had been circulated (on 21 October). It is from a Faculty Operating Officer, and it ends with a warning to refer media enquiries to a PR person (the Press and Internal Communications Manager, John-Paul Jones).

When he saw the internal email from Provost James Stirling, with the usual self-congratulatory stuff about the REF, Astle wrote again to Stirling. His letter ends thus.

“Putting university staff in fear of losing their jobs leads to an atmosphere of obsequiousness and obedience to authority that prevents academics from fulfilling their institutional role. In a free society it is essential academics have the autonomy to determine their line of work, to question institutional and state authority and to do risky research. Once again I emphasise – in my experience the atmosphere in the faculty of medicine at IC is not conducive to this.”

Stirling did not reply to this letter. Neither Gast nor Stirling has replied to mine either. Discourtesy seems to be part of the job description of senior managers.

Christine Yates

Christine Yates says

“I was employed at Imperial College London from August 2002 to October 2012. For these 10 years I was the College’s Equality and Diversity Consultant in the Human Resources Department, reporting to the HR deputy director, Kim Everitt. In turn, Kim Everitt reports to the HR director, Louise Lindsay. Throughout this time I was the College’s sole equalities consultant, and over time built up the Equalities Unit and managed a team of five.”

“I was dismissed on 8th October 2012 following a Disciplinary Hearing in response to an allegation of gross misconduct ‘for continued wilful refusal to follow your Head of Department’s (HOD) instructions not to be involved in individual cases’.”

As part of her job, she was responsible for establishing and maintaining the Harassment Support Contact Scheme, which was designed to help staff who felt they were being harassed, bullied or victimised. She was also responsible for the College’s first successful Athena SWAN (Scientific Women’s Academic Network) application, along with the establishment of disabilities, race equalities, and sexual orientation networks, all of which attained quality professional kite marks over time. The Athena SWAN award is particularly ironic, given that Imperial’s present brutal assessment system must be even more unfair to women than it is to men. In 2003 (when Richard Sykes was still in charge), a third of female employees at Imperial reported that they were bullied. The improvement since then seems to have been small.

One of many cases she dealt with involved the harassment and bullying of a senior female academic by her male boss. Yates maintains, with good evidence, that complaints about this behaviour were never investigated properly by HR. This displeased HR. Incidents like this undoubtedly contributed to her dismissal.

“In Dr ***’s [female] case, it is clear to me that no independent investigations have been held and that College procedures are being flouted or manipulated, with the alleged harasser (Professor **** [male]) being protected and permitted to continue his misconduct.”

“In my position as the College’s Equalities Consultant, I was aware of many cases and outcomes. Dr ***’s is one of the most distressing and badly handled cases I was witness to, and the manner in which HR protect senior academics who have gravely offended, and who under any reasonable circumstances would be found to be guilty of gross misconduct, is a sad indictment of Imperial College.”

You can read the statement that Christine Yates has already sent to the Coroner’s officer. Unfortunately the attachments have had to be removed here because they deal with specific cases.

“The Coroner’s Office needs to be aware of the pattern of behaviour that ensues whenever bad practice is brought to the College’s attention. In response to whistle blows and other complaints the College tries to discredit the complainant. When this fails they will invariably state that they will hold a ‘review’ usually undertaken by those responsible for the bad behaviour and thus with a vested interest in covering up any misconduct and impropriety. It is noted this pattern remains unchanged.”

A problem with a paper

An anonymous correspondent has sent me a lot of emails that concern a paper that was in revision at the time of Grimm’s death. The title of the paper is “Role of non-coding RNAs in apoptosis revealed in a functional genome-wide screen”.

On October 6th, one author wrote to his co-authors: “I worked closely with Stefan on the screen data this year. We re-interpreted the mathematical analysis performed in the original manuscript, providing a more rigorous statistical foundation of the gene rankings. As a result, the gene list Stefan and I have generated is now different.”

Clearly Grimm was aware of the need for revision before he died. Given that everyone was under such intense pressure to publish, it’s likely that the prospect of a prolonged delay in publication might well have contributed to his depression and his death.

The author who wrote on October 6th outlined some options. One was to leave the paper as it was, but to include all the raw data and submit it to a journal such as PLOS One or the preprint server bioRxiv. This option “requires minimal work, and would result in no change in the author list. However we would aim for a lower-impact journal”. His preferred option, though, was to rewrite the paper altogether (and for himself to become first co-author), “as it is in all our shared interest to get the work published in as good a journal as possible”.

Two days later, on October 8th, the same author thanked his co-authors for their responses. As a result of those responses, he asked for his name to be removed from the paper, because he did not agree with what was contained in the manuscript. “However, given that I believe the gene list is wrong, I request my name to be removed from the author list. If any other authors do not wish for the raw data to be disclosed then I hope you think it’s reasonable for me to close off my involvement with the paper.”

The paper has 11 authors, including Stefan Grimm. I have written to all but one of the authors to try to ascertain the facts. Of the four co-authors who have replied, all but one said that they hadn’t seen the final paper. One said that they were unaware that they were on the author list, and said they probably shouldn’t be.

I have tried to protect the authors (some of whom are still at Imperial) by not mentioning their names. But one co-author is sufficiently senior to be mentioned by name. Alan Boobis answered my mail cordially enough when I first wrote to him, but declined to give much useful information, apart from confirming that Grimm was the senior author on the paper. On October 9th he wrote to all co-authors, thus.

 From: Boobis, Alan R [a.boobis@imperial.ac.uk]
 Sent: 09 October 2014 18:15
 To: xxxxxx [co-authors]
 Subject: Re: News About Stefan & Screen Paper

 Dear all

 The situation regarding this manuscript needs to be dealt with rationally. There is a real danger that the reputations of individuals and of the College will be harmed. I suggest that we all need to agree the most appropriate way forward. I am out of the country this week but will have my secretary liaise with you next week to arrange a suitable time (face to face or by phone) to discuss this.

 Best wishes,
 Alan

I have no idea what the outcome of this meeting was. Personally, I always worry a bit when people want meetings “face to face or by phone”. Written records are much more informative.

I should like to make it clear that I’m not suggesting any misconduct whatsoever. The author who wished to withdraw acted with principle and courage, and mistakes happen. They are perhaps especially likely in multi-author papers where some authors don’t understand the input from others. But it is sad to see the emphasis on the long-discredited journal impact factor that was forced on them by Imperial’s policies. And it’s sad to see that several co-authors had not actually seen the final paper. This smacks of “citation-mongering”, yet another bad effect of the metrics culture that has pervaded all of academia, and which is enforced in an especially simple-minded way at Imperial.

This sad episode is yet another illustration of the way that Imperial’s policies are damaging people, and, in the end, damaging science.

### Some discussions of the Imperial problem

Since Grimm’s last email was revealed, it’s been discussed in many blogs and articles. Here are a few of them.

Grimm’s tale (2 December). This perceptive blog reproduces part of the nasty threatening letter sent by Martin Wilkins to Grimm.

“Your current level of funding does not constitute the appropriate level for a professor at Imperial College. Unless you submit and are awarded a Platform grant as PI in the next 12 months we will seek to initiate disciplinary action against you.”

“This isn’t about science – it’s about bragging rights, or institutional willy-waving. Grimm was informed – in public – that he was to be fired, and left waiting for the axe to fall while the axe-wielder marauded around the campus boasting about it like an even more pathetic Alan Sugar.”

That sums it up for me. It’s very sad.

“Martin Wilkins to Professor Stefan Grimm, a few months before the latter committed suicide. Imperial College had been pressuring Grimm to get 200,000 pounds in grants in order for him to remain employed. They threatened to sack him as he only had 135,000 pounds.

Sounds a lot like loan sharks.”

Clearly universities like Imperial are no longer places for scholarship. They are more like anxiety machines.

The Nuffield Council on Bioethics produced an important report in the midst of the scandal about Grimm: The culture of scientific research (2014). Paragraph 1.7 contains a chilling statistic.

1.7 Compromising on research integrity and standards

• Almost six in ten (58%) respondents are aware of scientists feeling under pressure to compromise on research integrity and standards, with poor methodology and data fraud frequently mentioned in the free text responses.
• Just over a quarter (26%) of those taking part in the survey have felt tempted to compromise on research integrity.

Stefan Grimm and the British University system. This blog, written by Federico Calboli, a geneticist based in Helsinki, gives an indication of the harm that Imperial is inflicting not only on itself, but on the whole of UK academia, and hence on the UK economy.

“As always in the real world the best laid plans often conflict with how the world actually works, and this conflict gives rise to a number of unintended consequences. The first unintended consequence is that the pursuit of what managements defines as ‘novel’ and ‘glamorous’ will diminish the intellectual value of British academia as a whole.”

“Unfortunately, since academia, funding bodies and the editorial boards of papers have been taken over by top down management culture, solid rigorous science is penalised in favour of anything that can be branded as ’novel’, ‘cutting edge’, ‘state of the art’ and similar platitudes.”

“This policy will leave British academia directionless and intellectually empty, and will transform any research in technology and data driven drivel that can at most pick up low hanging fruits and will deliver less and less as time goes on.”

Still more shaming, Calboli continues thus.

“The second problem with how British academia is managed is the culture of intellectual dishonesty that is forced upon people. People are not allowed to just express their goals in simple honest terms. They are required to spin and embellish everything in order to have half a chance of getting some funding or publishing in a high impact journal – both crucial to contribute to the ‘excellence metric’.”

“Only the shameless cynics thrive in such environment”.

The blog finishes with a rallying cry.

“the email that Prof Grimm sent in October did not magically make its way to the press by itself. While many people are feeling disenchanted with academia and leave, more and more insiders are taking a combative stance against the mindless hogwash that threatens the foundations of British academia and the people that push it. We should all stand up and be counted, or we will not be able to complain in the future. It would be great if management could live up to its role and abandon the idea that scientific research is simple, predictable and quickly profitable, and actually help build the future of British academia.”

All this reflects similar sentiments to those that I expressed in 2007 [the RAE was the predecessor of the REF]

“The policies described here will result in a generation of ‘spiv’ scientists, churning out 20 or even more papers a year, with very little originality. They will also, inevitably, lead to an increase in the sort of scientific malpractice that was recently pilloried viciously, but accurately, in the New York Times, and a further fall in the public’s trust in science. That trust is already disastrously low, and one reason for that is, I suggest, pressures like those described here which lead scientists to publish when they have nothing to say.”

““All of us who do research (rather than talk about it) know the disastrous effects that the Research Assessment Exercise has had on research in the United Kingdom: short-termism, intellectual shallowness, guest authorships and even dishonesty”. Now we can add to that list bullying, harassment and an incompetent box-ticking style of assessment that tends to be loved by HR departments.

This process might indeed increase your RAE score in the short term (though there is no evidence that it does even that). But, over a couple of decades, it will rid universities of potential Nobel prize winners.”

### Conclusions

The policies adopted by Imperial College have harmed Imperial’s reputation throughout the world. Worse still, they have tainted the reputation of all UK universities. They have contributed to the corruption of science, and they have, in all probability, killed a successful man.

I hope that Alice Gast (president), James Stirling (provost), Dermot Kelleher (Dean, now vice president), and Martin Wilkins (who was left to wield the knife) have a good Christmas. If I were in their shoes, I’d feel so guilty that I wouldn’t be able to sleep at night.

Their proposal that HR policies should be investigated by, inter alia, the head of HR has provoked worldwide derision.

Their refusal to set up an independent external inquiry is reprehensible.

Not for the first time, a fine institution is being brought into disrepute by its leadership. Council please note.

 Alice Gast. James Stirling. Dermot Kelleher

Perhaps the best description of what’s going on is from Grimm’s Tale: “This isn’t about science – it’s about bragging rights, or institutional willy-waving.” Gast, Stirling and Kelleher should stop the willy-waving. They should either set about rectifying the damage they’ve done, or they should resign. Now.

The chair of the universities’ HR association, Kim Frost, said

“Bullying is a very emotive term, and what one person experiences as bullying will often be simple performance management from their manager’s point of view.”

That’s scary because it shows that she hasn’t the slightest idea about “performance management”. I have news for HR people. They are called experiments because we don’t know whether they will work. If they don’t work that’s not a reason to fire anyone. No manager can make an experiment come out as they wish. The fact of the matter is that it’s impossible to manage research. If you want real innovation you have to tolerate lots and lots of failure. “Performance management” is an oxymoron. Get used to it.

This sorry episode has far more general lessons for the way the REF is conducted and for the metrics sales industry. Both share some of the guilt.

That will have to wait for another post.

### Follow-up

25 December 2014. Universities "eliminate tenure because Starbucks does not have tenure"

I was struck by this excerpt from a Christmas newsletter from a colleague. Buried among the family news was this lament. He’s writing about Rush University, Chicago, but much the same could be said about many universities, not only in the USA.

 Rush Medical Center built an $800 million hospital building that is clinically state-of-the-art and architecturally unique. Now it is poised to become a world class center of basic and clinical research. Sadly, rather than listen to researchers who have devoted their careers to Rush, senior administration hears advice from fly-by-night financial consultants who apply the same “Business Model” to medical care, education, and research as to a shoe factory. Perhaps because fiscal consulting requires little skill or training*, they do not distinguish between a researcher and a Starbucks employee [literally true!]. They eliminate tenure because Starbucks does not have tenure. {To be fair, they have only eliminated “tenure of salary” – one may continue working with a title, but without pay!} They cannot imagine that world-class research is an art that requires years of training, cultivating an international network of colleagues, and most importantly, continuity of funding. Because their work is so trivial, they cannot fathom that researchers could be utterly unique and irreplaceable. And they do not care – they will destroy research at Rush, collect their multi-million dollar fee, and move on to the next shoe factory.

 *Lesson 1. Fire people who do real work, cut wages, steal from pension funds, eliminate unions and job security. Congratulations, you are now a qualified fiscal consultant!

26 December 2014. Grimm is not the only one. In the same month, September 2014, Tony Veitch was found dead. He was a senior scientist in the lab at Kew Botanical Gardens. He was 49, much the same age as Stefan Grimm. It’s presumed that he committed suicide after being told to reapply for his own job.

17 January 2015. I hear that Imperial College’s UCU passed this motion.

 Motion 3: Branch condemns bullying and harassment of staff at Imperial

 This branch strongly condemns the bullying and harassment of staff at Imperial, particularly by some managers.
 We call upon the senior management of the College to ensure that all managers are properly trained to deal with staff in a fair and considerate manner and on how to refrain from bullying and harassment. In light of a recent tragic case at Imperial, the College management must ensure that they fulfil their duty of care to all staff at all times.

Of course every employer claims that they do this. I wonder how the officials can mouth these platitudes when the facts, now well known, show them to be untrue.

The first post and this one have been viewed over 173,000 times, from at least 170 countries (UK, USA, and then almost 10,000 views from China). I realise that this must have harmed Imperial, but they have brought it on themselves. Neither the president nor the rector has had the courtesy to answer perfectly polite letters. I wrote also on 29 December to the chair of Council, Eliza Manningham-Buller. She has still not acknowledged receipt, never mind replied. I am amazed by the discourtesy of people who regard themselves as too important to reply to letters.

 To chair of Council, Imperial College London
 29 December 2014

 Dear Lady Manningham-Buller

 A problem with management at Imperial

 It cannot have escaped your notice that a senior member of Imperial’s staff was found dead, after being told that he’d lose his job if he didn’t raise £200,000 in grants within a year. When I posted Stefan Grimm’s last email on my blog on December 1st it went viral (Publish and perish at Imperial College London). It has been read by over 160,000 people from over 200 countries. That being the case, Imperial’s first official mention of the matter on December 4th looked pretty silly. It was written as though his email was not already common knowledge – totally hamfisted public relations.

 After posting Grimm’s last mail, I was deluged with mails about people who had been badly treated at Imperial. I posted a few of them on December 23rd (Some experiences of life at Imperial College London.
 An external inquiry is needed after the death of Stefan Grimm).

 The policy of telling staff that their research must be expensive is not likely to be appreciated by the taxpayer. Neither will it improve the quality of science. On the contrary, the actions of the College are very likely to deter good scientists from working there (I have already heard of two examples of people who turned down jobs at Imperial). I think it is now clear that the senior management team is pursuing policies that are damaging the reputation of Imperial. I hope that Council will take appropriate action.

 Best regards
 David Colquhoun
 D. Colquhoun FRS
 Professor of Pharmacology, NPP, University College London

20 January 2015. Today I got a reply to the letter (above) that I sent to Eliza Manningham-Buller on 29 December. You can download it. I guess it’s not surprising that the reply says nothing helpful. It endorses the idea that HR should investigate their own practices, an idea that the outside world greets with ridicule. It reprimands me for making "unprofessional" comments about individuals. That’s what happens when people behave badly. It would be unprofessional to fail to point out what’s going on. It’s the job of journalists to name people. All else is PR. It suggests that I may not have followed the Samaritans’ guidelines for reporting of suicide. I’ve read their document and I don’t believe that either I, or Times Higher Education, have breached the guidelines. The letter says, essentially, please shut up, you are embarrassing Imperial.

It’s fascinating to see the rich and powerful close ranks when criticised. But it is very disappointing. It seems to me to be very much in the public interest to have published the last email of Stefan Grimm. But I guess the last person you’d expect to champion transparency is an ex-head of MI5.

Felix, Imperial’s student newspaper, carried an interesting article Death of Professor Grimm: the world reacts.
The events at Imperial have been noted all over the world (at least 170 countries according to my own Google analytics) but the response has been especially big in China. Alienating a country like China seems to me to rank as bringing the College into disrepute.

9 February 2015. Death in Academia and the mis-measurement of science. Good article in Euroscientist by Arren Frood.

25 February 2015. I see that Dermot Kelleher is leaving Imperial for the University of British Columbia. Perhaps he hopes that he’ll be able to escape his share of the blame for the death of Stefan Grimm? Let’s hope, for the sake of UBC, that he’s learned a lesson from the episode.

10 March 2015. The Vancouver Sun has been asking questions. An article by Pamela Fayerman includes the following.

 "Recently, Imperial College was engulfed in a controversy involving a tragedy. . . . a medical school professor, Stefan Grimm, took his own life last fall. He left an email that accused unnamed superiors of bullying through demands that he garner more research grants. The “publish or peril” adage that scientists so often cite seems like it may apply in this case. The college said it would set up an internal inquiry into the circumstances around the toxicology professor’s death, but the results have not been released. UBC provost Dave Farrar said in an interview that the death of the professor at Imperial College was never even discussed during the recruitment process. Kelleher said in a long distance phone interview that the tragedy had nothing to do with his reasons for leaving Imperial. And he can’t speak about the case since it is currently under review by a coroner."

Well, I guess he would say that, wouldn’t he? Kelleher has been at Imperial for less than three years, and the generous interpretation of his departure is that he didn’t like the bullying regime. It had been going on long before Kelleher arrived, as documented on this blog in 2007.
It’s interesting to speculate about why he wasn’t asked about Grimm’s death (if that’s true). Did the University of British Columbia think it was irrelevant? Or did they want him to establish a similar regime of “performance management” at UBC? Or were the senior people at UBC not even aware of the incident? Perhaps the third option is the most likely: it’s only too characteristic of senior managers to be unaware of what’s happening on the shop floor. Just as in banks.

11 March 2015. It’s beginning to look like an exodus. The chair of Imperial’s council, Eliza Manningham-Buller, is also leaving. Despite her condescending response to my inquiries, perhaps she too is scared of what will be revealed about bullying. I just hope that she doesn’t bring Imperial’s ideas about "performance management" to the Wellcome Trust.

This week’s Times Higher Education carried a report of the death, at the age of 51, of Professor Stefan Grimm: Imperial College London to ‘review procedures’ after death of academic. He was professor of toxicology in the Faculty of Medicine at Imperial. Now Stefan Grimm is dead. Despite having a good publication record, he failed to do sufficiently expensive research, so he was fired (or at least threatened with being fired).

 “Speaking to Times Higher Education on condition of anonymity, two academics who knew Professor Grimm, who was 51, said that he had complained of being placed under undue pressure by the university in the months leading up to his death, and that he had been placed on performance review.”

Having had cause to report before on bullying at Imperial’s Department of Medicine, I was curious to know more. Martin Wilkins wrote to Grimm on 10 March 2014. The full text is on THE.
 "I am of the opinion that you are struggling to fulfil the metrics of a Professorial post at Imperial College which include maintaining established funding in a programme of research with an attributable share of research spend of £200k p.a and must now start to give serious consideration as to whether you are performing at the expected level of a Professor at Imperial College."

 "Please be aware that this constitutes the start of informal action in relation to your performance, however should you fail to meet the objective outlined, I will need to consider your performance in accordance with the formal College procedure for managing issues of poor performance (Ordinance D8) which can be found at the following link. http://www3.imperial.ac.uk/secretariat/collegegovernance/provisions/ordinances/d8"

[The link to ordinances in this letter doesn’t work now. But you can still read them here (click on the + sign).]

It didn’t take long to get hold of an email from Grimm that has been widely circulated within Imperial. The mail is dated a month after his death. It isn’t known whether it was pre-set by Grimm himself or whether it was sent by someone else. It’s even possible that it wasn’t written by Grimm himself, though if it is an accurate description of what happened, that’s not crucial. No doubt any Imperial staff member would be in great danger if they were to publish the mail. So, as a public service, I shall do so. The email from Stefan Grimm, below, was prefaced by an explanation written by the person who forwarded it (I don’t know who that was).

 Dear Colleagues,

 You may have already heard about the tragic death of Professor Stefan Grimm a former member of the Faculty of Medicine at Imperial College. He died suddenly and unexpectedly in early October. As yet there is no report about the cause of his death. Some two weeks later a delayed email from him was received by many of the senior staff of the medical school, and other researchers worldwide.
 It has been forwarded to me by one of my research collaborators. From my reading of it I believe that Stefan wanted it circulated as widely as possible and for that reason I am sending it to you. It is appended below. This email represents just one side of an acrimonious dispute, but it may be indicative of more deep seated problems.

 best wishes

 Begin forwarded message:

 From: Stefan Grimm
 Date: 21 October 2014 23:41:03 BST
 To:
 Subject: How Professors are treated at Imperial College

 Dear all,

 If anyone is interested how Professors are treated at Imperial College: Here is my story.

 On May 30th ’13 my boss, Prof Martin Wilkins, came into my office together with his PA and ask me what grants I had. After I enumerated them I was told that this was not enough and that I had to leave the College within one year – “max” as he said. He made it clear that he was acting on behalf of Prof Gavin Screaton, the then head of the Department of Medicine, and told me that I would have a meeting with him soon to be sacked. Without any further comment he left my office. It was only then that I realized that he did not even have the courtesy to close the door of my office when he delivered this message. When I turned around the corner I saw a student who seems to have overheard the conversation looking at me in utter horror. Prof Wilkins had nothing better to do than immediately inform my colleagues in the Section that he had just sacked me.

 Why does a Professor have to be treated like that?

 All my grant writing stopped afterwards, as I was waiting for the meeting to get sacked by Prof Screaton. This meeting, however, never took place. In March ’14 I then received the ultimatum email below. 200,000 pounds research income every year is required. Very interesting. I was never informed about this before and cannot remember that this is part of my contract with the College.
 Especially interesting is the fact that the required 200,000.- pounds could potentially also be covered by smaller grants but in my case a programme grant was expected. Our 135,000.- pounds from the University of Dammam? Doesn’t count. I have to say that it was a lovely situation to submit grant applications for your own survival with such a deadline. We all know what a lottery grant applications are.

 There was talk that the Department had accepted to be in dept for some time and would compensate this through more teaching. So I thought that I would survive. But the email below indicates otherwise. I got this after the student for whom I “have plans” received the official admission to the College as a PhD student. He waited so long to work in our group and I will never be able to tell him that this should now not happen. What these guys don’t know is that they destroy lives. Well, they certainly destroyed mine.

 The reality is that these career scientists up in the hierarchy of this organization only look at figures to judge their colleagues, be it impact factors or grant income. After all, how can you convince your Department head that you are working on something exciting if he not even attends the regular Departmental seminars? The aim is only to keep up the finances of their Departments for their own career advancement.

 These formidable leaders are playing an interesting game: They hire scientists from other countries to submit the work that they did abroad under completely different conditions for the Research Assessment that is supposed to gauge the performance of British universities. Afterwards they leave them alone to either perform with grants or being kicked out. Even if your work is submitted to this Research Assessment and brings in money for the university, you are targeted if your grant income is deemed insufficient. Those submitted to the research assessment hence support those colleagues who are unproductive but have grants.
 Grant income is all that counts here, not scientific output. We had four papers with original data this year so far, in Cell Death and Differentiation, Oncogene, Journal of Cell Science and, as I informed Prof Wilkins this week, one accepted with the EMBO Journal. I was also the editor of a book and wrote two reviews. Doesn’t count. This leads to a interesting spin to the old saying “publish or perish”. Here it is “publish and perish”.

 Did I regret coming to this place? I enormously enjoyed interacting with my science colleagues here, but like many of them, I fell into the trap of confusing the reputation of science here with the present reality. This is not a university anymore but a business with very few up in the hierarchy, like our formidable duo, profiteering and the rest of us are milked for money, be it professors for their grant income or students who pay 100.- pounds just to extend their write-up status.

 If anyone believes that I feel what my excellent coworkers and I have accomplished here over the years is inferior to other work, is wrong. With our apoptosis genes and the concept of Anticancer Genes we have developed something that is probably much more exciting than most other projects, including those that are heavily supported by grants.

 Was I perhaps too lazy? My boss smugly told me that I was actually the one professor on the whole campus who had submitted the highest number of grant applications. Well, they were probably simply not good enough.

 I am by far not the only one who is targeted by those formidable guys. These colleagues only keep quiet out of shame about their situation. Which is wrong. As we all know hitting the sweet spot in bioscience is simply a matter of luck, both for grant applications and publications.

 Why does a Professor have to be treated like that? One of my colleagues here at the College whom I told my story looked at me, there was a silence, and then said: “Yes, they treat us like sh*t”.
 Best regards,
 Stefan Grimm

There is now a way for staff to register their opinions of their employers. The entries for Imperial College on Glassdoor.com suggest that bullying there is widespread (in contrast, the grumbles about UCL are mostly about lack of space). Googling ‘imperial college employment tribunal’ shows a history of bullying that is not publicised. In fact victims are often forced to sign gagging clauses. In fairness, AcademicFOI.com shows that the problems are not unique to Imperial. Over 3 years (it isn’t clear which years), 810 university staff went to employment tribunals. And 5528 staff were gagged. Not a proud record.

Imperial’s Department of Medicine web site says that one of its aims is to “build a strong and supportive academic community”. Imperial’s spokesman said “Stefan Grimm was a valued member of the Faculty of Medicine”. The ability of large organisations to tell barefaced lies never ceases to amaze me.

I asked Martin Wilkins to comment on the email from Grimm. His response is the standard stuff that HR issues on such occasions. Not a word of apology, no admission of fault. It says “Imperial College London seeks to give every member of its community the opportunity to excel and to create a supportive environment in which their careers may flourish.” Unless, that is, your research is insufficiently expensive, in which case we’ll throw you out on the street at 51. For completeness, you can download Wilkins’ mail.

After reading this post, Martin Wilkins wrote again to me (12.21 on 2nd December). He said “You will appreciate that I am unable to engage in any further discussion – not because of any institutional policy but because there is an ongoing inquest into the circumstances of his death. What I can say is that there was no ongoing correspondence. We met from time to time to discuss science and general matters. These meetings were always cordial.
My last meeting with him was to congratulate him on his recent paper, accepted by EMBOL"

The emails now revealed show that the relationship could hardly have been less “cordial”. Martin Wilkins appears to be less than frank about what happened. If anyone has more correspondence which ought to be known, please send it to me. I don’t reveal sources (if you prefer, use my non-College email david.colquhoun72 (at) gmail.com).

The problem is by no means limited to Imperial. Neither is it universal at Imperial: some departments are quite happy about how they are run. Kings College London, Warwick University and Queen Mary College London have been just as brutal as Imperial. But in these places nobody has died. Not yet.

### Follow-up

Here are a few of the tweets that appeared soon after this post appeared.

3 December 2014. The day after this post went public, I wrote to the vice-chancellor of Imperial College, thus.

 To: alice.gast@imperial.ac.uk
 cc: w.j.stirling@imperial.ac.uk, s.johal@imperial.ac.uk, d.humphris@imperial.ac.uk

 Dear Professor Gast

 You may be aware that last night, at 18.30, I published Stefan Grimm’s last email, see http://www.dcscience.net/?p=6834

 In the 12 hours that it’s been public it’s had at least 10,000 views. At the moment, 230 people, from all round the world, are reading it. It seems to be going viral.

 I appreciate that you are new to the job of rector, so you may not realise that this sort of behaviour has been going on for years at Imperial (especially in Medicine). I last wrote about the dimwitted methods being used to assess people in Medicine in 2007 – see http://www.dcscience.net/?p=182

 Now it seems likely that the policy has actually killed someone (it was quite predictable that this would happen, sooner or later). I hope that your humanity will ensure a change of policy in your approach to “performance management”. Failing that, the bad publicity that you’re getting may be enough to persuade you to do so.
Best regards

David Colquhoun
__________________________________
D. Colquhoun FRS
Professor of Pharmacology
NPP, University College London
Gower Street

Today I updated the numbers: 44,000 hits after 36 hours. I tried to put it politely, but I have not yet had a reply.

4 December 2014

More than one source at Imperial has sent me a copy of an email sent to staff by the dean of the Faculty of Medicine. It’s dated 03 December 2014 16:44. It was sent almost 24 hours after my post. It is, I suppose, just possible that Kelleher was unaware of my post. But he must surely have seen the internally-circulated version of Grimm’s letter. It isn’t mentioned: that makes the weasel words and crocodile tears in the email even more revolting than they otherwise would be. Both his account and Wilkins’ account directly contradict the account in Grimm’s mail. Somebody is not telling the truth.

This post has broken all records (for this blog). It has been viewed over 50,000 times in 48 hours. It is still getting 35-40 visitors per minute, as it has for the last 2 days. How much longer will managers at Imperial be able to pretend that the cat hasn’t escaped from the bag?

5 December 2014

Late last night, Imperial made, at last, a public comment on the death of Stefan Grimm: Statement on Professor Stefan Grimm by Caroline Davis (Communications and Public Affairs). This bit of shameless public relations appears under a tasteful picture of lilies. It says “Members of Imperial’s community may be aware of media reports of the tragic loss of Stefan Grimm, professor of toxicology in the Faculty of Medicine”. They could hardly have missed the reports. As of 07.25 this morning, this post alone has been viewed 97,626 times, from all over the world.

The statement is a masterpiece of weasel words, crocodile tears and straw man arguments. “Contrary to claims appearing on the internet, Professor Grimm’s work was not under formal review nor had he been given any notice of dismissal”.
I saw no allegations that he had actually been fired. He was undoubtedly threatened with being fired. That’s entirely obvious from the email sent by Martin Wilkins to Stefan Grimm on 10 March. The full text of that mail was published yesterday in Times Higher Education, and it’s worth reproducing here. To write like that to a successful professor, aged 51, is simply cruel. It is obviously incompatible with the PR guff that was issued yesterday. It seems to me to be very silly of Imperial College to try to deny the obvious. I don’t know how people like Martin Wilkins and Caroline Davis manage to sleep at night.

Date: 10 March 2014

Dear Stefan

I am writing following our recent meetings in which we discussed your current grant support and the prospects for the immediate future. The last was our discussion around your PRDP, which I have attached.

As we discussed, any significant external funding you had has now ended. I know that you have been seeking further funding support with Charities such as CRUK and the EU commission but my concern is that despite submitting many grants, you have been unsuccessful in persuading peer-review panels that you have a competitive application. Your dedication to seek funding is not in doubt but as time goes by, this can risk becoming a difficult situation from which to extricate oneself. In other words, grant committees can become fatigued from seeing a series of unsuccessful applications from the same applicant.

I am of the opinion that you are struggling to fulfil the metrics of a Professorial post at Imperial College which include maintaining established funding in a programme of research with an attributable share of research spend of £200k p.a and must now start to give serious consideration as to whether you are performing at the expected level of a Professor at Imperial College. Over the course of the next 12 months I expect you to apply and be awarded a programme grant as lead PI.
This is the objective that you will need to achieve in order for your performance to be considered at an acceptable standard. I am committed to doing what I can to help you succeed and will meet with you monthly to discuss your progression and success in achieving the objective outlined.

You have previously initiated discussions in our meetings regarding opportunities outside of Imperial College and I know you have been exploring opportunities elsewhere. Should this be the direction you wish to pursue, then I will do what I can to help you succeed.

Please be aware that this constitutes the start of informal action in relation to your performance, however should you fail to meet the objective outlined, I will need to consider your performance in accordance with the formal College procedure for managing issues of poor performance (Ordinance D8) which can be found at the following link. http://www3.imperial.ac.uk/secretariat/collegegovernance/provisions/ordinances/d8

Should you have any questions on the above, please do get in touch.

Best wishes

Martin

These fixed performance targets are simply absurd. It’s called "research" because you don’t know how it will come out.

I’m told that if you apply for an Academic Clinical Fellowship at Imperial you are told “Objectives and targets: The goal would be to impart sufficient training in the chosen subspecialty, as to enable the candidate to enter a MD/PhD programme at the end of the fellowship. During the entire academic training programme, the candidate is expected to publish at least five research articles in peer-reviewed journals of impact factor greater than 4.”

That’s a recipe for short term, unoriginal research. It’s an incentive to cut corners. Knowing that a paper has been written under that sort of pressure makes me less inclined to believe that the work has been done thoroughly. It is a prostitution of science.

Later on 5 December. This post has now had 100,000 views in a bit less than four days.
At 13.30, I was at Kings College London, to talk to medical students about quackery etc. They were a smart lot, but all the questions were about Stefan Grimm.

The national press have begun to notice the tragedy. The Daily Mail, of all "newspapers", has a fair account of the death. It quotes Professor James Stirling, Provost of Imperial College London, as intoning the standard mantra: “Imperial seeks to give every member of its community the opportunity to excel and to create a supportive environment in which their careers may flourish. Where we become aware that the College is falling short of this standard of support to its members, we will act”. In my opinion the email above shows this is simply untrue. This sort of absurd and counterproductive pressure has been the rule in the Department of Medicine for years. I can’t believe that James Stirling didn’t know about it. If he did know, he should be fired for not anticipating the inevitable tragic consequences of his policies. If he didn’t know what was going on, he should be fired for not knowing.

It is simply absurd for Imperial to allow (In)human resources to investigate itself. Nobody will believe the result. An independent external inquiry is needed. Soon.

Stefan Grimm’s death is, ultimately, the fault of the use of silly metrics to mismeasure people. If there were no impact factors, no REF, no absurd university rankings, and no ill-educated senior academics and HR people who take them seriously, he’d probably still be alive.

8 December 2014

After one week, I wrote again to the senior management at Imperial (despite the fact that my earlier letters had been ignored). This time I had one simple suggestion. If Imperial genuinely want to set things right they should get an independent external inquiry. Their present proposal that the people who let things go so far should investigate themselves has been greeted with the scepticism that it so richly deserves.
I still live in hope that someone will be sufficiently courteous to answer this time.

To: alice.gast@imperial.ac.uk
cc: w.j.stirling@imperial.ac.uk, s.johal@imperial.ac.uk, d.humphris@imperial.ac.uk, d.kelleher@imperial.ac.uk

Dear Professor Gast

My post of Stefan Grimm’s email last Monday evening has been viewed 130,000 times from at least 175 different countries. Your failure to respond to my letters is public knowledge. When you finally posted a statement about Grimm on Thursday it so obviously contradicted the emails which I, and Times Higher Education, had already published that it must have done your reputation more harm than good.

May I suggest that the best chance to salvage your reputation would be to arrange an independent external inquiry into the policies that contributed to Grimm’s death. You must surely realise that your announcement that HR will investigate its own policies has been greeted with universal scepticism. Rightly or wrongly, its conclusions will simply not be believed. I believe that an external inquiry would show that Imperial is genuine in wishing to find out how to improve the way it treats the academics who are responsible for its reputation.

Best regards

David Colquhoun
__________________________________
D. Colquhoun FRS
Professor of Pharmacology
NPP, University College London
Gower Street

Here is a map of the location of 200 hits on 4 December (one of 20 such maps in a 4 hour period).

10 December 2014

Eventually I got a reply, of sorts, from Dermot Kelleher. It’s in the style of the true apparatchik: "shut up and go away".

Dear Dr Colquhoun

Many thanks for your enquiry. Can I just say that College will liaise with the Coroner as required on this issue. In light of this, I do not believe that further correspondence will be helpful at present.

Best wishes

Dermot

Jump to follow-up

The Higher Education Funding Council England (HEFCE) gives money to universities.
The allocation that a university gets depends strongly on the periodical assessments of the quality of their research. Enormous amounts of time, energy and money go into preparing submissions for these assessments, and the assessment procedure distorts the behaviour of universities in ways that are undesirable. In the last assessment, four papers were submitted by each principal investigator, and the papers were read.

In an effort to reduce the cost of the operation, HEFCE has been asked to reconsider the use of metrics to measure the performance of academics. The committee that is doing this job has asked for submissions from any interested person, by June 20th. This post is a draft for my submission. I’m publishing it here for comments before producing a final version for submission.

### Draft submission to HEFCE concerning the use of metrics

I’ll consider a number of different metrics that have been proposed for the assessment of the quality of an academic’s work.

Impact factors

The first thing to note is that HEFCE is one of the original signatories of DORA (http://am.ascb.org/dora/ ). The first recommendation of that document is: "Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions". Impact factors have been found, time after time, to be utterly inadequate as a way of assessing individuals, e.g. [1], [2]. Even their inventor, Eugene Garfield, says that. There should be no need to rehearse the details yet again. If HEFCE were to allow their use, they would have to withdraw from the DORA agreement, and I presume they would not wish to do that.

Article citations

Citation counting has several problems. Most of them apply equally to the H-index.

1. Citations may be high because a paper is good and useful. They may equally be high because the paper is bad.
No commercial supplier makes any distinction between these possibilities. It would not be in their commercial interests to spend time on that, but it’s critical for the person who is being judged. For example, Andrew Wakefield’s notorious 1998 paper, which gave a huge boost to the anti-vaccine movement, had had 758 citations by 2012 (it was subsequently shown to be fraudulent).

2. Citations take far too long to appear to be a useful way to judge recent work, as is needed for judging grant applications or promotions. This is especially damaging to young researchers, and to people (particularly women) who have taken a career break. The counts also don’t take into account citation half-life. A paper that’s still being cited 20 years after it was written clearly had influence, but that takes 20 years to discover.

3. The citation rate is very field-dependent. Very mathematical papers are much less likely to be cited, especially by biologists, than more qualitative papers. For example, the solution of the missed event problem in single ion channel analysis [3,4] was the sine qua non for all our subsequent experimental work, but the two papers have only about a tenth of the number of citations of the subsequent work that depended on them.

4. Most suppliers of citation statistics don’t count citations of books or book chapters. This is bad for me because my only work with over 1000 citations is my 105 page chapter on methods for the analysis of single ion channels [5], which contained quite a lot of original work. It has had 1273 citations according to Google Scholar but doesn’t appear at all in Scopus or Web of Science. Neither do the 954 citations of my statistics text book [6].

5. There are often big differences between the numbers of citations reported by different commercial suppliers. Even for papers (as opposed to book articles) there can be a two-fold difference between the numbers of citations reported by Scopus, Web of Science and Google Scholar.
The raw data are unreliable, and commercial suppliers of metrics are apparently not willing to put in the work to ensure that their products are consistent or complete.

6. Citation counts can be (and already are being) manipulated. The easiest way to get a large number of citations is to do no original research at all, but to write reviews in popular areas. Another good way to have ‘impact’ is to write indecisive papers about nutritional epidemiology. That is not behaviour that should command respect.

7. Some branches of science are already facing something of a crisis in reproducibility [7]. One reason for this is the perverse incentives which are imposed on scientists. These perverse incentives include the assessment of their work by crude numerical indices.

8. “Gaming” of citations is easy. (If students do it, it’s called cheating: if academics do it, it’s called gaming.) If HEFCE makes money dependent on citations, then this sort of cheating is likely to take place on an industrial scale. Of course that should not happen, but it would (disguised, no doubt, by some ingenious bureaucratic euphemisms).

9. For example, Scigen is a program that generates spoof papers in computer science by stringing together plausible phrases. Over 100 such papers have been accepted for publication. By submitting many such papers, the authors managed to fool Google Scholar into awarding a fictitious author an H-index greater than that of Albert Einstein http://en.wikipedia.org/wiki/SCIgen

10. The use of citation counts has already encouraged guest authorships and suchlike marginally honest behaviour. There is no way to tell whether an author on a paper has actually made any substantial contribution to the work, despite the fact that some journals ask for a statement about contributions.

11. It has been known for 17 years that citation counts for individual papers are not detectably correlated with the impact factor of the journal in which the paper appears [1].
That doesn’t seem to have deterred metrics enthusiasts from using both. It should have done.

Given all these problems, it’s hard to see how citation counts could be useful to the REF, except perhaps in really extreme cases such as papers that get next to no citations over 5 or 10 years.

The H-index

This has all the disadvantages of citation counting, but in addition it is strongly biased against young scientists and against women. This makes it not worth consideration by HEFCE.

Altmetrics

Given the role given to “impact” in the REF, the fact that altmetrics claim to measure impact might make them seem worthy of consideration at first sight. One problem is that the REF failed to make a clear distinction between impact on other scientists in the field and impact on the public. Altmetrics measure an undefined mixture of both sorts of impact, with totally arbitrary weighting for tweets, Facebook mentions and so on. But the score seems to be related primarily to the trendiness of the title of the paper. Any paper about diet and health, however poor, is guaranteed to feature well on Twitter, as will any paper that has ‘penis’ in the title. It’s very clear from the examples that I’ve looked at that few people who tweet about a paper have read more than the title. See Why you should ignore altmetrics and other bibliometric nightmares [8].

In most cases, papers were promoted by retweeting the press release or tweet from the journal itself. Only too often the press release is hyped up. Metrics corrupt not only the behaviour of academics, but also the behaviour of journals. In the cases I’ve examined, reading the papers revealed that they were particularly poor (despite being in glamour journals): they just had trendy titles [8]. There could even be a negative correlation between the number of tweets and the quality of the work. Those who sell altmetrics have never examined this critical question because they ignore the contents of the papers.
It would not be in their commercial interests to test their claims if the result was to show a negative correlation. Perhaps the reason why they have never tested their claims is the fear that to do so would reduce their income.

Furthermore, you can buy 1000 retweets for $8.00 http://followers-and-likes.com/twitter/buy-twitter-retweets/ That’s outright cheating, of course, and not many people would go that far. But authors, and journals, can do a lot of self-promotion on Twitter that is totally unrelated to the quality of the work.

It’s worth noting that much good engagement with the public now appears on blogs that are written by scientists themselves, but the 3.6 million views of my blog do not feature in altmetrics scores, never mind Scopus or Web of Science.  Altmetrics don’t even measure public engagement very well, never mind academic merit.

Evidence that metrics measure quality

Any metric would be acceptable only if it measured the quality of a person’s work.  How could that proposition be tested?  In order to judge this, one would have to take a random sample of papers, and look at their metrics 10 or 20 years after publication. The scores would have to be compared with the consensus view of experts in the field.  Even then one would have to be careful about the choice of experts (in fields like alternative medicine for example, it would be important to exclude people whose living depended on believing in it).  I don’t believe that proper tests have ever been done (and it isn’t in the interests of those who sell metrics to do it).
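To make the proposed test concrete, it amounts to computing a rank correlation between the metric scores at (or soon after) publication and the experts' retrospective quality ratings. The sketch below uses invented numbers purely to illustrate the calculation; the data, and the names `citations_at_5yr` and `expert_score_20yr`, are hypothetical, not real measurements.

```python
def rank(xs):
    """Ranks starting at 1, with average ranks assigned to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Invented illustration: five papers, metric score early on vs expert
# judgement two decades later.
citations_at_5yr = [120, 15, 300, 8, 45]
expert_score_20yr = [3, 9, 2, 8, 5]      # 10 = best, per the hypothetical panel
print(spearman(citations_at_5yr, expert_score_20yr))   # -0.9 for these numbers
```

A strongly negative value, as in this made-up example, would mean the metric was actively misleading; the point is that nobody selling metrics appears to have published such a test with real data.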

The great mistake made by almost all bibliometricians is that they ignore what matters most, the contents of papers.  They try to make inferences from correlations of metric scores with other, equally dubious, measures of merit.  They can’t afford the time to do the right experiment if only because it would harm their own “productivity”.

The evidence that metrics do what’s claimed for them is almost non-existent.  For example, in six of the ten years leading up to the 1991 Nobel prize, Bert Sakmann failed to meet the metrics-based publication target set by Imperial College London, and these failures included the years in which the original single channel paper was published [9]  and also the year, 1985, when he published a paper [10] that was subsequently named as a classic in the field [11].  In two of these ten years he had no publications whatsoever. See also [12].

Application of metrics in the way it has been done at Imperial, and also at Queen Mary College London, would result in the firing of the most original minds.

Gaming and the public perception of science

Every form of metric alters behaviour, in such a way that it becomes useless for its stated purpose.  This is already well-known in economics, where it’s known as Goodhart’s law http://en.wikipedia.org/wiki/Goodhart’s_law: “When a measure becomes a target, it ceases to be a good measure”.  That alone is a sufficient reason not to extend metrics to science.  Metrics have already become one of several perverse incentives that control scientists’ behaviour. They have encouraged gaming, hype, guest authorships and, increasingly, outright fraud [13].

The general public has become aware of this behaviour and it is starting to do serious harm to perceptions of all science.  As long ago as 1999, Haerlin & Parr [14] wrote in Nature, under the title How to restore Public Trust in Science,

“Scientists are no longer perceived exclusively as guardians of objective truth, but also as smart promoters of their own interests in a media-driven marketplace.”

And on January 17, 2006, a vicious spoof on a Science paper appeared, not in a scientific journal, but in the New York Times.  See http://www.dcscience.net/?p=156

The use of metrics would provide a direct incentive to this sort of behaviour.  It would be a tragedy not only for people who are misjudged by crude numerical indices, but also a tragedy for the reputation of science as a whole.

Conclusion

There is no good evidence that any metric measures quality, at least over the short time span that’s needed for them to be useful for giving grants or deciding on promotions.  On the other hand, there is good evidence that the use of metrics provides a strong incentive to bad behaviour, both by scientists and by journals. They have already started to damage the public perception of the honesty of science.

The conclusion is obvious. Metrics should not be used to judge academic performance.

What should be done?

If metrics aren’t used, how should assessment be done? Roderick Floud was president of Universities UK from 2001 to 2003. He is nothing if not an establishment person. He said recently:

“Each assessment costs somewhere between £20 million and £100 million, yet 75 per cent of the funding goes every time to the top 25 universities. Moreover, the share that each receives has hardly changed during the past 20 years.
It is an expensive charade. Far better to distribute all of the money through the research councils in a properly competitive system.”

The obvious danger of giving all the money to the Research Councils is that people might be fired solely because they didn’t have big enough grants. That’s serious – it’s already happened at Kings College London, Queen Mary London and at Imperial College. This problem might be ameliorated if there were a maximum on the size of grants and/or on the number of papers a person could publish, as I suggested at the open data debate. And it would help if universities appointed vice-chancellors with a better long-term view than most seem to have at the moment.

Aggregate metrics? It’s been suggested that the problems are smaller if one looks at aggregated metrics for a whole department, rather than the metrics for individual people. Clearly looking at departments would average out anomalies. The snag is that it wouldn’t circumvent Goodhart’s law. If the money depended on the aggregate score, it would still put great pressure on universities to recruit people with high citations, regardless of the quality of their work, just as it would if individuals were being assessed. That would weigh against thoughtful people (and not least women).

The best solution would be to abolish the REF and give the money to research councils, with precautions to prevent people being fired because their research wasn’t expensive enough. If politicians insist that the "expensive charade" is to be repeated, then I see no option but to continue with a system that’s similar to the present one: that would waste money and distract us from our job.

1.   Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. British Medical Journal 314: 498-502. [Download pdf]

2.   Colquhoun D (2003) Challenging the tyranny of impact factors. Nature 423: 479. [Download pdf]

3.   Hawkes AG, Jalali A, Colquhoun D (1990) The distributions of the apparent open times and shut times in a single channel record when brief events can not be detected. Philosophical Transactions of the Royal Society London A 332: 511-538. [Get pdf]

4.   Hawkes AG, Jalali A, Colquhoun D (1992) Asymptotic distributions of apparent open times and shut times in a single channel record allowing for the omission of brief events. Philosophical Transactions of the Royal Society London B 337: 383-404. [Get pdf]

5.   Colquhoun D, Sigworth FJ (1995) Fitting and statistical analysis of single-channel records. In: Sakmann B, Neher E, editors. Single Channel Recording. New York: Plenum Press. pp. 483-587.

7.   Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124.[full text]

8.   Colquhoun D, Plested AJ. Why you should ignore altmetrics and other bibliometric nightmares.  Available: http://www.dcscience.net/?p=6369

9.   Neher E, Sakmann B (1976) Single channel currents recorded from membrane of denervated frog muscle fibres. Nature 260: 799-802.

10.   Colquhoun D, Sakmann B (1985) Fast events in single-channel currents activated by acetylcholine and its analogues at the frog muscle end-plate. J Physiol (Lond) 369: 501-557. [Download pdf]

11.   Colquhoun D (2007) What have we learned from single ion channels? J Physiol 581: 425-427.[Download pdf]

13.   Oransky, I. Retraction Watch.  Available: http://retractionwatch.com/18-6-2014

14.   Haerlin B, Parr D (1999) How to restore public trust in science. Nature 400: 499. doi:10.1038/22867 [Get pdf]

### Follow-up

Some other posts on this topic

Why Metrics Cannot Measure Research Quality: A Response to the HEFCE Consultation

Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting

Driving Altmetrics Performance Through Marketing

Death by Metrics (October 30, 2013)

Not everything that counts can be counted

Using metrics to assess research quality, by David Spiegelhalter: “I am strongly against the suggestion that peer review can in any way be replaced by bibliometrics”

1 July 2014

My brilliant statistical colleague, Alan Hawkes, did more than lay the foundations for single molecule analysis (and make a career for me). Before he got into that, he wrote a paper, Spectra of some self-exciting and mutually exciting point processes (Biometrika, 1971). In that paper he described a sort of stochastic process now known as a Hawkes process. In the simplest sort of stochastic process, the Poisson process, events are independent of each other. In a Hawkes process, the occurrence of an event affects the probability of another event occurring, so, for example, events may occur in clusters. Such processes were used for many years to describe the occurrence of earthquakes. More recently, it’s been noticed that such models are useful in finance, marketing, terrorism, burglary, social media, DNA analysis, and to describe invasive banana trees. The 1971 paper languished in relative obscurity for 30 years. Now its citation rate has shot through the roof.
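For the curious, the simplest Hawkes process has conditional intensity λ(t) = μ + Σ α·exp(−β(t − tᵢ)), summed over past events tᵢ, so each event transiently raises the rate of further events. The sketch below (my illustration, not taken from the 1971 paper; the parameter values are invented) simulates one by Ogata's thinning algorithm. Setting α = 0 recovers an ordinary Poisson process; α > 0 produces the clustering described above.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=1):
    """Simulate event times on [0, t_max] for a Hawkes process with
    intensity lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i)),
    using Ogata's thinning (rejection) algorithm."""
    random.seed(seed)
    events = []
    t = 0.0
    while t < t_max:
        # The intensity only decays between events, so its value just
        # after time t is an upper bound until the next accepted event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)        # candidate event time
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

# alpha = 0 gives a plain Poisson process; alpha > 0 gives self-excitation.
poisson = simulate_hawkes(mu=1.0, alpha=0.0, beta=1.0, t_max=200.0)
clustered = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, t_max=200.0)
print(len(poisson), len(clustered))
```

With μ = 1 and α/β = 0.5 the long-run event rate is μ/(1 − α/β) = 2 per unit time, double the Poisson rate, and the extra events arrive in bursts, which is what makes these models apt for earthquakes, finance and the rest.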

The papers about Hawkes processes are mostly highly mathematical. They are not the sort of thing that features on Twitter. They are serious science, not just another ghastly epidemiological survey of diet and health. Anybody who cites papers of this sort is likely to be a real scientist. The surge in citations suggests to me that the 1971 paper was indeed an important bit of work (because the citations will be made by serious people). How does this affect my views about the use of citations? It shows that even highly mathematical work can achieve respectable citation rates, but it may take a long time before its importance is realised. If Hawkes had been judged by citation counting while he was applying for jobs and promotions, he’d probably have been fired. If his department had been judged by citations of this paper, it would not have scored well. It takes a long time to judge the importance of a paper, and that makes citation counting almost useless for decisions about funding and promotion.

Stop press. Financial report casts doubt on Trainor’s claims

Science has a big problem. Most jobs are desperately insecure. It’s hard to do long term thorough work when you don’t know whether you’ll be able to pay your mortgage in a year’s time. The appalling career structure for young scientists has been the subject of much writing by the young (e.g. Jenny Rohn) and the old, e.g. Bruce Alberts and Peter Lawrence (see also Real Lives and White Lies in the Funding of Scientific Research), and by me.

Until recently, this problem was largely restricted to postdoctoral fellows (postdocs). They already have PhDs and they are the people who do most of the experiments. Often large numbers of them work for a single principal investigator (PI). The PI spends most of his or her time writing grant applications and travelling the world to hawk the wares of the lab. They also (to variable extents) teach students and deal with endless hassle from HR.

The salaries of most postdocs are paid from grants that last for three or sometimes five years. If that grant doesn’t get renewed, they are on the streets.

Universities have come to exploit their employees almost as badly as Amazon does.

The periodical research assessments not only waste large amounts of time and money, but have also distorted behaviour. In the hope of scoring highly, universities recruit a lot of people before the submission, but as soon as that’s done with, they find that they can’t afford all of them, so some get cast aside like worn-out old boots. Universities have allowed themselves to become dependent on "soft money" from grant-giving bodies. That strikes me as bad management.

The situation is even worse in the USA where most teaching staff rely on research grants to pay their salaries.

I have written three times about the insane methods that are being used to fire staff at Queen Mary College London (QMUL).
Is Queen Mary University of London trying to commit scientific suicide? (June 2012)
Queen Mary, University of London in The Times. Does Simon Gaskell care? (July 2012), a version of which appeared in The Times (Thunderer column)
In which Simon Gaskell, of Queen Mary, University of London, makes a cock-up (August 2012)

The ostensible reason given there was to boost its ratings in university rankings. Their vice-chancellor, Simon Gaskell, seems to think that by firing people he can produce a university that’s full of Nobel prize-winners. The effect, of course, is just the opposite. Treating people like pawns in a game makes the good people leave and only those who can’t get a job with a better employer remain. That’s what I call bad management.

At QMUL people were chosen to be fired on the basis of a plain silly measure of their publication record, and by their grant income. That was combined with terrorisation of any staff who spoke out about the process (more on that coming soon).

Kings College London is now doing the same sort of thing. They have announced that they’ll fire 120 of the 777 staff in the schools of medicine and biomedical sciences, and the Institute of Psychiatry. These are humans, with children and mortgages to pay. One might ask why they were taken on in the first place, if the university can’t afford them. That’s simply bad financial planning (or was it done in order to boost their Research Excellence submission?).

Surely it’s been obvious, at least since 2007, that hard financial times were coming, but that didn’t dent the hubris of the people who took on so many staff. HEFCE has failed to find a sensible way to fund universities. The attempt to separate the funding of teaching and research has just led to corruption.

The way in which people are to be chosen for the firing squad at King’s is crude in the extreme. If you are a professor at the Institute of Psychiatry then, unless you do a lot of teaching, you must have a grant income of at least £200,000 per year. You can read all the details in the King’s “Consultation document” that was sent to all employees. It’s headed "CONFIDENTIAL – Not for further circulation". Vice-chancellors still don’t seem to have realised that it’s no longer possible to keep things like this secret. In releasing it, I take my cue from George Orwell.

"Journalism is printing what someone else does not want printed: everything else is public relations.”

There is no mention of the quality of your research, just income. Since in most sorts of research the major cost is salaries, this rewards people who take on too many employees. Only too frequently, large groups are the ones in which students and research staff get the least supervision, and in which bangs per buck are lowest. The university should be rewarding people who are deeply involved in research themselves: those with small groups. Instead, they are doing exactly the opposite.

Women are, I’d guess, less susceptible to the grandiosity of the enormous research group, so no doubt they will suffer disproportionately. PhD students will also suffer if their supervisor is fired while they are halfway through their projects.

An article in Times Higher Education pointed out

"According to the Royal Society’s 2010 report The Scientific Century: Securing our Future Prosperity, in the UK, 30 per cent of science PhD graduates go on to postdoctoral positions, but only around 4 per cent find permanent academic research posts. Less than half of 1 per cent of those with science doctorates end up as professors."

The panel that decides whether you’ll be fired consists of Professor Sir Robert Lechler, Professor Anne Greenough, Professor Simon Howell, Professor Shitij Kapur, Professor Karen O’Brien, Chris Mottershead, Rachel Parr & Carol Ford. If they had the slightest integrity, they’d refuse to implement such obviously silly criteria.

Universities in general, not only King’s and QMUL, have become over-reliant on research funders to enhance their own reputations. PhD students and research staff are employed for the benefit of the university (and of the principal investigator), not for the benefit of the students or research staff, who are treated as expendable cost units rather than as humans.

One thing that we expect of vice-chancellors is sensible financial planning. That seems to have failed at King’s. One would also hope that they would understand how to get good science. My only previous encounter with King’s vice-chancellor, Rick Trainor, suggests that this is not where his talents lie. While he was president of Universities UK (UUK), I suggested to him that degrees in homeopathy were not a good idea. His response was that of the true apparatchik.

“. . . degree courses change over time, are independently assessed for academic rigour and quality and provide a wider education than the simple description of the course might suggest”

That is hardly a response that suggests high academic integrity.

The students’ petition is on Change.org.

### Follow-up

The problems that are faced in the UK are very similar to those in the USA. They have been described with superb clarity in “Rescuing US biomedical research from its systemic flaws”. This article, by Bruce Alberts, Marc W. Kirschner, Shirley Tilghman, and Harold Varmus, should be read by everyone. They observe that “. . . little has been done to reform the system, primarily because it continues to benefit more established and hence more influential scientists”. I’d be more impressed by the senior people at King’s if they spent time trying to improve the system rather than firing people because their research is not sufficiently expensive.

10 June 2014

Progress on the cull, according to an anonymous correspondent

“The omnishambles that is KCL management

1) We were told we would receive our orange (at risk) or green letters (not at risk, this time) on Thursday PM 5th June as HR said that it’s not good to get bad news on a Friday!

2) We all got a letter on Friday that we would not be receiving our letters until Monday, so we all had a tense weekend

3) I finally got my letter on Monday, in my case it was “green” however a number of staff who work very hard at KCL doing teaching and research are “orange”, un bloody believable

As you can imagine the moral at King’s has dropped through the floor”

18 June 2014

Dorothy Bishop has written about the Trainor problem. Her post ends “One feels that if KCL were falling behind in a boat race, they’d respond by throwing out some of the rowers”.

The students’ petition can be found on the #KCLHealthSOS site. There is a reply to the petition from Professor Sir Robert Lechler, and a rather better written response to it from students. Lechler’s response merely repeats the weasel words, and it attacks a few straw men without providing the slightest justification for the criteria that are being used to fire people. One can’t help noticing how often knighthoods go to the best apparatchiks rather than the best scientists.

14 July 2014

A 2013 report on Kings from Standard & Poor’s casts doubt on Trainor’s claims

A few things stand out.

• KCL is in a strong financial position, with lower debt than other similar universities and cash reserves of £194 million.
• The report says that KCL does carry some risk into the future, especially that related to its large capital expansion program.
• The report specifically warns KCL over the consequences of any staff cuts. Particularly relevant are the following quotations.
• Page 3: “Further staff-cost curtailment will be quite difficult … pressure to maintain its academic and non-academic service standards will weigh on its ability to cut costs further.”
• Page 4: The report goes on to say (see the section headed Outlook, especially the final paragraph) that any decrease in KCL’s academic reputation (e.g. consequent on staff cuts) would be likely to impair its ability to attract overseas students and therefore adversely affect its financial position.
• Page 10 makes clear that KCL managers are privately aiming at a 10% surplus, above the 6% operating surplus they talk about with us. However, S&P considers that ‘ambitious’. In other words, KCL are shooting for double what a credit rating agency considers realistic.

One can infer from this that

1. what staff have been told about the cuts being an immediate necessity is absolute nonsense
2. KCL was warned against staff cuts by a credit agency
3. the main problem KCL has is its overambitious building policy
4. KCL is implementing a policy (staff cuts) which S & P warned against as they predict it may result in diminishing income.

What on earth is going on?

16 July 2014

I’ve been sent yet another damning document. The BMA’s response to King’s contains some numbers that seem to have escaped the attention of managers at King’s.

10 April 2015

King’s draft performance management plan for 2015

This document has just come to light (the highlighting is mine).

It’s labelled as "released for internal consultation". It seems that managers are slow to realise that it’s futile to try to keep secrets.

The document applies only to the Institute of Psychiatry, Psychology and Neuroscience at King’s College London: "one of the global leaders in the fields", the usual tedious blah that prefaces every document from every university.

It’s fascinating to me that the most cruel treatment of staff so often seems to arise in medical-related areas. I thought psychiatrists, of all people, were meant to understand people, not to kill them.

This document is not quite as crude as Imperial’s assessment, but it’s quite bad enough. Like other such documents, it pretends that it’s for the benefit of its victims. In fact it’s for the benefit of willy-waving managers who are obsessed by silly rankings.

Here are some of the sillier bits.

"The Head of Department is also responsible for ensuring that aspects of reward/recognition and additional support that are identified are appropriately followed through"

And, presumably, for firing people, but let’s not mention that.

"Academics are expected to produce original scientific publications of the highest quality that will significantly advance their field."

That’s what everyone has always tried to do. It can’t be compelled by performance managers. A large element of success is pure luck. That’s why they’re called experiments.

" However, it may take publications 12-18 months to reach a stable trajectory of citations, therefore, the quality of a journal (impact factor) and the judgment of knowledgeable peers can be alternative indicators of excellence."

It can also take 40 years for work to be cited. And there is little reason to believe that citations, especially those within 12-18 months, measure quality. And it is known for sure that "the quality of a journal (impact factor)" does not correlate with quality (or indeed with citations).

"H Index and Citation Impact: These are good objective measures of the scientific impact of publications"

NO, they are simply not a measure of quality (though this time they say “impact” rather than “excellence”).

The people who wrote that seem to be unaware of the most basic facts about science.
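For what it’s worth, the h-index that the document praises is a purely mechanical count: the largest number h such that h of your papers have each been cited at least h times. A minimal sketch in Python shows just how little is involved:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations.

    A purely mechanical count: it says nothing about why the
    citations were made, or whether the work is any good.
    """
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers with these citation counts give h = 4
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

A number this easy to compute says nothing about whether the citations reflect quality rather than fashion, self-citation or sheer group size.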

Then

"Carrying out high quality scientific work requires research teams"

Sometimes it does, sometimes it doesn’t. In the past the best work has been done by one or two people. In my field, think of Hodgkin & Huxley, Katz & Miledi or Neher & Sakmann. All got Nobel prizes. All did the work themselves. Performance managers might well have fired them before they got started.

By specifying minimum acceptable group sizes, King’s are really specifying minimum acceptable grant income, just like Imperial and Warwick. Nobody will be taken in by the thin attempt to disguise it.

The specification that a professor should have "Primary supervision of three or more PhD students, with additional secondary supervision." is particularly iniquitous. Everyone knows that far too many PhDs are being produced for the number of jobs that are available. This stipulation is not for the benefit of the young. It’s to ensure a supply of cheap labour to churn out more papers and help to lift the university’s ranking.

The document is not signed, but the document properties name its author. But she’s not a scientist and is presumably acting under orders, so please don’t blame her for this dire document. Blame the vice-chancellor.

Performance management is a direct incentive to do shoddy short-cut science.

No wonder that The Economist says "scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity".

Feel ashamed.

This is a web version of a review of Peter Gotzsche’s book. It appeared in the April 2014 Healthwatch Newsletter. Read the whole newsletter. It has lots of good stuff. Their newsletters are here. Healthwatch has been exposing quackery since 1989. Their very first newsletter is still relevant.

Most new drugs and vaccines are developed by the pharmaceutical industry. The industry has produced huge benefits for mankind. But since the Thatcherite era it has come to be dominated by marketing people who appear to lack any conscience. That’s what gave rise to the AllTrials movement. It was founded in January 2013 with the aim of ensuring that all past and present clinical trials are registered and that their results are published. The industry has been dragged, kicking and screaming, towards a new era of transparency, with two of the worst offenders, GSK and Roche, now promising to release all data. Let’s hope this is the beginning of real open science.

This version is not quite identical with the published version in which several changes were enforced by Healthwatch’s legal adviser. They weren’t very big changes, but here is the original.

### Deadly Medicines and Organised Crime

By Peter Gøtzsche, reviewed by David Colquhoun
ISBN-10: 1846198844 ISBN-13: 978-1846198847

For someone who has spent a lifetime teaching pharmacology, this book is a bitter pill to swallow. It makes Goldacre’s Bad Pharma seem quite mild.

In fairness, the bits of pharmacology that I’ve taught concern mostly drugs that do work quite well. Things like neuromuscular blocking agents, local anaesthetics, general anaesthetics, anticoagulants, cardiac glycosides and thyroid drugs all do pretty much what it says on the label.

Peter Gøtzsche is nothing if not an evidence man. He directs the Nordic Cochrane group, and he talks straight. His book is about drugs that don’t work as advertised. There is no doubt whatsoever that the pharmaceutical industry has behaved very badly indeed in the last couple of decades. You don’t have to take my word for it, nor Peter Gøtzsche’s, nor Ben Goldacre’s. The companies have told us about it themselves. Not voluntarily, of course, but in internal emails that have been revealed during court proceedings, and from whistleblowers.

Peter Rost was vice-president of marketing for the huge pharmaceutical company Pfizer, until he was fired after the company failed to listen to his complaints about the illegal marketing of human growth hormone as an anti-ageing drug. After this he said:

“It is scary how many similarities there are between this industry and the mob. The mob makes obscene amounts of money, as does this industry. The side effects of organized crime are killings and deaths, and the side effects are the same in this industry. The mob bribes politicians and others, and so does the drug industry … “

The pharmaceutical industry is the biggest defrauder of the US federal government under the False Claims Act. Roche led a cartel that, according to the US Justice Department’s antitrust division, was the most pervasive and harmful criminal antitrust conspiracy ever uncovered. Multibillion-dollar fines have been levied on all of the big companies (almost all in the USA; other countries have been supine), though the companies’ profits are so huge that the fines are regarded as mere marketing expenses.

It’s estimated that adverse effects of drugs kill more people than anything but cancer and heart disease, roughly half as many as cigarettes.  This horrifying statistic is announced at the beginning of the book, though you have to wait until Chapter 21 to find the data.  I’d have liked to see a more critical discussion of the problems of causality in deciding why someone died, which are just as big as those in deciding why somebody recovered.  Nevertheless, nobody seems to deny that the numbers who are killed by their treatments are alarmingly high.

Gøtzsche’s book deals with a wide range of drugs that don’t do what it says on the label, but which have made fortunes because of corruption of the scientific process. These include non-steroidal anti-inflammatory drugs (NSAIDs), an area described as “a horror story filled with extravagant claims, bending of the rules, regulatory inaction, . . .”. Other areas where there has been major misbehaviour include diabetes (Avandia), and the great Tamiflu scandal. It took five years of pressure before Roche released the hidden data about the Tamiflu trials. It barely works. Goldacre commented that the “government’s Tamiflu stockpile wouldn’t have done us much good in the event of a flu epidemic”.

But the worst single area is psychiatry.

Two of the chapters in the book deal with psychiatry.  Nobody has the slightest idea how the brain works (don’t believe the neuroscience hype) or what causes depression or psychosis.  Treatments are no more than guesses and none of them seems to work very well.

The problems with the SSRI antidepressant, paroxetine (Seroxat in UK, Paxil in USA) were brought to public attention, not by a regulator, but by a BBC Panorama television programme.  The programme revealed that a PR company, which worked for GSK, had written

"Originally we had planned to do extensive media relations surrounding this study until we actually viewed the results.  Essentially the study did not really show it was effective in treating adolescent depression, which is not something we want to publicise."

This referred to the now-notorious study 329. It was intended to show that paroxetine should be recommended for adolescent depression.  The paper that eventually appeared in 2001 grossly misrepresented the results.  The conclusions stated “Paroxetine is generally well tolerated and effective for major depression in adolescents”, despite the fact that GSK already knew this wasn’t true. The first author of this paper was Martin Keller, chair of psychiatry at Brown University, RI, with 21 others.

But the paper wasn’t written by them: it was written by ghost authors working for GSK. Keller admitted that he hadn’t checked the results properly.

That’s not all. Gøtzsche comments thus.

“Keller is some character. He double- billed his travel expenses, which were reimbursed both by his university and the drug sponsor. Further, the Massachusetts Department of Mental Health had paid Brown’s psychiatry department, which Keller chaired, hundreds of thousands of dollars to fund research that wasn’t being conducted. Keller himself received hundreds of thousands of dollars from drug companies every year that he didn’t disclose.”

His department received $50 million in research funding. Brown University has never admitted that there was a problem. It still boasts about this infamous paper. The extent of corruption at Brown University rivals the mob. The infamous case of Richard Eastell at Sheffield University is no better. He admitted in print to lying about who’d seen the data. The university did nothing but fire the whistleblower.

Another trial, study 377, also showed that paroxetine didn’t work. GSK suppressed it. “There are no plans to publish data from Study 377” (Seroxat/Paxil Adolescent Depression. Position piece on the phase III clinical studies. GlaxoSmithKline document. 1998 Oct.)

Where were the regulatory agencies during all this? The MHRA did ban the use of paroxetine in adolescents in 2003, but their full investigation didn’t report until 2008. It came to much the same conclusions about the deceit as the TV programme had six years earlier. But despite that, no prosecution was brought. GSK got away with a deferential rap on the knuckles.

Fiona Godlee (editor of the BMJ, which had turned down the paper) commented “We shouldn’t have to rely on investigative journalists to ask the difficult questions”. Now we can add bloggers to that list of people who ask difficult questions. The scam operated by the University of Wales, in ‘validating’ external degrees, was revealed by my blog and by BBC TV Wales. The Quality Assurance Agency came in only at the last moment. Regulators regularly fail to regulate.

Despite all this, the current MHRA learning module on SSRIs contains little hint that SSRIs simply don’t work for mild or moderate depression. Neither does the current NICE guidance.

Some psychiatrists still think they do work, despite there being so many negative trials. The psychiatrists’ narrative goes like this. You don’t expect to see improvements for many weeks (despite the fact that serotonin uptake is stopped immediately). You may get worse before you get better. And if the first sort of pill doesn’t work, try another one. That’s pretty much identical with what a homeopath will tell you. The odds are that its meaning is: wait a while and you’ll get better eventually, regardless of treatment. It’s common to be told that the drugs must work because when you stop taking them, you get worse. But, perhaps more likely, when you stop taking them you get withdrawal symptoms, because the treatment itself caused a chemical imbalance.

Gøtzsche makes a strong case that most psychiatric drugs do more harm than good, if taken for any length of time. Marcia Angell makes a similar case in The Illusions of Psychiatry. Gøtzsche will inevitably be accused of exaggerating. Chapter 14 ends thus.

“Merck stated only 6 months before it withdrew Vioxx that ‘MSD is fully committed to the highest standards of scientific integrity, ethics, and protection of patient’s wellbeing in our research. We have a tradition of partnership with leaders in the academic research community.’ Great. Let’s have some more of such ethical partnerships. They often kill our patients while everyone else prospers. Perhaps Hells Angels should consider something similar in their PR: We are fully committed to the highest standards of integrity, ethics and protection of citizens’ well-being when we push narcotic drugs. We have a tradition of partnership with leaders in the police force.”

But the evidence is there. The book has over 900 references. Much of the wrongdoing has been laid bare by legal actions. I grieve for the state of my subject. The wrongdoing by pharma is a disgrace. The corruption of universities and academics is even worse, because they are meant to be our defence against commercial corruption. All one can do is to take consolation from the fact that academics, like Gøtzsche and Goldacre, and a host of bloggers, are the people who are revealing what’s wrong.
As a writer for the business magazine Fortune said, “For better or worse, the drug industry is going to have to get used to Dr. Peter Rost – and others like him.”

At a recent meeting I said that it was tragic that medicine, the caring profession, was also the most corrupt (though I’m happy to admit that other jobs might be as bad if offered as much money).

At present there is little transparency. There is no way that I can tell whether my doctor is taking money from pharma, and data are still hidden from public scrutiny by regulatory agencies (which are stuffed with people who take pharma money) as well as by companies. Governments regard business as more important than patients. In the UK, the Government continued promotion of the fake bomb detector for many years after they’d been told it was fake. Their attitude to fake medicines is not much different. Business is business, right?

One side effect of the horrific corruption is that it’s used as a stick by the alternative medicine industry. That’s silly of them, because their business is more or less 100% mendacious marketing of ineffective treatments. At least half of pharma products really do work.

Fines are useless. Nothing will change until a few CEOs, a few professors and a few vice-chancellors spend time in jail for corruption.

Read this book. Get angry. Do something.

### Follow-up

This post is the original version of a post by Michael Vagg. It was posted at The Conversation but taken down within hours, on legal advice. Sadly, The Conversation has a track record of pusillanimous behaviour of this sort. It took minutes before the cached version reappeared on freezepage.com. I’m reposting it from there in the interests of free speech. La Trobe "university" should be ashamed that it’s prostituted itself for the sake of $15 m.

La Trobe’s deputy vice-chancellor, Keith Nugent, gives a make-believe response to the resignation of Ken Harvey in a video. It is, in my opinion, truly pathetic.

Update, The next day, the article was reposted at the Conversation. The changes they’d made can be seen in a compare document. The biggest change was removal of "has just decided to join the ranks of the spivs and hucksters of the vitamin industry". This seems to me to be perfectly fair comment. It should not have been censored by the Conversation.

The recent memorandum of understanding signed between supplement company Swisse and La Trobe University to establish a Complementary Medicine Evidence Centre (CMEC) looks to me like the latest effort by a corporation to cloak their business interests in a veil of science. Unlike the UTS Sydney Australian Research Centre in Complementary and Integrative Medicine (ARCCIM), which at least has significant NHMRC funding, the La Trobe version will undertake “independent research” into complementary and alternative medicine (CAM) products that are made by the major (and so far only) donor to the Centre. Southern Cross University also has a very close relationship with the Blackmores brand of CAM products, due to the personal interest of Marcus Blackmore, the company Chairman. Blackmores claims to spend a lazy couple of million a year on their branded research centre. The Blackmores Research Centre studies Blackmores products. Presumably this situation (so similar to the proposed La Trobe model) is a coincidence since the research centre is providing completely “independent” research.

The conflict of interest in such research centres is so laughably obvious that A/Prof Ken Harvey, a leading campaigner against shonky health products, a life member of Choice and a contributor to The Conversation, has resigned his appointment at La Trobe in protest. Ken clearly points out in his letter of resignation that, by accepting the money from Swisse, he believes La Trobe has unacceptably compromised its integrity. His letter cites multiple instances of non-compliance with TGA regulations by Swisse, as well as their disrespect for the regulatory process that governs corporate truth-telling in their industry. This story from last year gives a bit of background to the quixotic battle Harvey has fought against the massive coffers and unscrupulous business practices of Big Supplement. He has been more effective than the TGA itself at hindering the rampant gaming of the TGA Register of Therapeutic Goods by supplement and vitamin manufacturers.

Clearly as a man of principle, he could not be expected to continue his association with a university that has a close relationship to a company with such a history of regulatory infringements. The untenability of Ken’s position is underlined by the fact that La Trobe itself republished on their website one of his TC articles about Swisse’s regulatory tapdancing only the previous year!

Ken has been sued, traduced and generally railed against by a multi-billion dollar industry for the hideous crime of insisting that they tell the truth about their products and not mislead consumers. We need another hundred like him. That his own university has decided to take the money on offer from Swisse must be a bitter blow to him. It would be interesting to know whether any other universities were approached by Swisse in a similar way and had the courage to decline the offer.

The infiltration of academia by privately funded CAM institutes is old news in the United States. The Science Based Medicine blog has christened the phenomenon “quackademic medicine” and written about it at some length. It seems the Australian CAM industry has no need to hide behind astroturfing organisations like the American group the Bravewell Collaborative to get its agenda attended to. Companies like Blackmores and Swisse can seemingly just offer to fund research institutes and cash-strapped tertiary institutions can’t resist. Friends of Science in Medicine and others have had a bit to say about the irresponsibility of educational institutions lending credibility to pseudoscience and how this practice damages universities’ standing as exemplars of scholarship and intellectual leaders within their communities.

I can say without qualification that none of the much-maligned Big Pharma companies have their own fully-funded research centres at any university. Let alone a branded one where the studies are restricted to a single company’s products. It would be utterly unacceptable for the integrity of any university for such an outrageously conflicted institution to be given any support. What would it be like if GSK or Pfizer founded a research institute at a university and forced the researchers to only study their own products?

Imagine the outrage. Imagine what a laughing stock such a research centre would be. That’s medical research in clown shoes. That’s academic credibility in a cheap suit trying to sell you steak knives.

Vitamin and supplement companies will always be profitable because their sales pitch is based on psychological flaws that everyone has. Just ask the gaming, alcohol and tobacco companies. All of them are massively profitable. Sometimes their cash can even do good, but there’s always an angle by which they profit.

Look at these guys up close, and the warts appear. All of them seek to improve their image by splashing money on hanging around with the glamorous, the successful, the smart and the credible. They hope that the magic dust of celebrity and academia will disguise the stench of the swamp they crawled out from. La Trobe Uni has just decided to join the ranks of the spivs and hucksters of the vitamin industry, and they will now have to live with having a research centre with the academic and professional credibility of the Ponds Institute. Sadly for La Trobe, they won’t have Ken Harvey to keep things reality-based.

### Follow-up

8 February 2014. Deputy vice-chancellor, Keith Nugent, tried to defend the university’s decision to take money from the "spivs and hucksters of the vitamin industry" in The Age. I sent the following letter to The Age. Let’s hope they publish it.

Keith Nugent, deputy vice-chancellor of La Trobe University, has offered a defence of the university’s decision to take a large amount of money from the vitamin and herb company, Swisse. He justifies this by saying that we need to know whether or not the products work. Nugent seems to be unaware that we already know. There have been many good double-blind randomized trials, and they have just about all shown that dosing yourself with vitamins and minerals does most people no good at all. Some have shown that high doses actually harm you. Perhaps the university should have checked what’s already known before taking the money.

Perhaps Nugent is also unaware that trials with industry sponsorship tend to come out favourable to the companies’ products. For that reason, the results are treated with scepticism by the scientific community. If the research is worth doing, then it will be funded from the normal sources. There should be no need to take money from a company with a very strong financial interest in the outcome.

D. Colquhoun FRS
Professor of Pharmacology
University College London

This discussion seemed to be of sufficient general interest that we submitted it as a feature to eLife, because this journal is one of the best steps into the future of scientific publishing. Sadly the features editor thought that "too much of the article is taken up with detailed criticisms of research papers from NEJM and Science that appeared in the altmetrics top 100 for 2013; while many of these criticisms seems valid, the Features section of eLife is not the venue where they should be published". That’s pretty typical of what most journals would say. It is that sort of attitude that stifles criticism, and that is part of the problem. We should be encouraging post-publication peer review, not suppressing it. Luckily, thanks to the web, we are now much less constrained by journal editors than we used to be.

Here it is.

### Scientists don’t count: why you should ignore altmetrics and other bibliometric nightmares

David Colquhoun1 and Andrew Plested2

1 University College London, Gower Street, London WC1E 6BT

2 Leibniz-Institut für Molekulare Pharmakologie (FMP) & Cluster of Excellence NeuroCure, Charité Universitätsmedizin, Timoféeff-Ressowsky-Haus, Robert-Rössle-Str. 10, 13125 Berlin, Germany.

Jeffrey Beall is a librarian at Auraria Library, University of Colorado Denver. Although not a scientist himself, he, more than anyone, has done science a great service by listing the predatory journals that have sprung up in the wake of the pressure for open access. In August 2012 he published “Article-Level Metrics: An Ill-Conceived and Meretricious Idea”. At first reading that criticism seemed a bit strong. On mature consideration, it understates the potential that bibliometrics, and altmetrics especially, have to undermine both science and scientists.

Altmetrics is the latest buzzword in the vocabulary of bibliometricians.  It attempts to measure the “impact” of a piece of research by counting the number of times that it’s mentioned in tweets, Facebook pages, blogs, YouTube and news media.  That sounds childish, and it is. Twitter is an excellent tool for journalism. It’s good for debunking bad science, and for spreading links, but too brief for serious discussions.  It’s rarely useful for real science.

Surveys suggest that the great majority of scientists do not use Twitter: estimates of uptake range from 7 to 13%. Scientific works get tweeted about mostly because they have titles that contain buzzwords, not because they represent great science.

What and who is Altmetrics for?

The aims of altmetrics are ambiguous to the point of dishonesty; they depend on whether the salesperson is talking to a scientist or to a potential buyer of their wares.

At a meeting in London, an employee of altmetric.com said: “we measure online attention surrounding journal articles”; “we are not measuring quality …”; “this whole altmetrics data service was born as a service for publishers”; and “it doesn’t matter if you got 1000 tweets . . . all you need is one blog post that indicates that someone got some value from that paper”.

These ideas sound fairly harmless, but in stark contrast, Jason Priem (an author of the altmetrics manifesto) said that one advantage of altmetrics is speed: “Speed: months or weeks, not years: faster evaluations for tenure/hiring”. Although conceivably useful for disseminating preliminary results, such speed isn’t important for serious science (the kind that ought to be considered for tenure), which operates on a timescale of years. Priem also says “researchers must ask if altmetrics really reflect impact”. Even he doesn’t know, yet altmetrics services are being sold to universities before any evaluation of their usefulness has been done, and universities are buying them. The idea that altmetrics scores could be used for hiring is nothing short of terrifying.

The problem with bibliometrics

The mistake made by all bibliometricians is that they fail to consider the content of papers, because they have no desire to understand research. Bibliometrics are for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it, or in the case of software or databases, by using them.   The use of surrogate outcomes in clinical trials is rightly condemned.  Bibliometrics are all about surrogate outcomes.

If instead we consider the work described in particular papers that most people agree to be important (or that everyone agrees to be bad), it’s immediately obvious that no publication metrics can measure quality. There are some examples in How to get good science (Colquhoun, 2007). It is shown there that at least one Nobel prize winner failed dismally to fulfil the arbitrary bibliometric productivity criteria of the sort imposed in some universities (another example is in Is Queen Mary University of London trying to commit scientific suicide?).

Schekman (2013) has said that science

“is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best.”

Bibliometrics reinforce those inappropriate incentives.  A few examples will show that altmetrics are one of the silliest metrics so far proposed.

The altmetrics top 100 for 2013

The superficiality of altmetrics is demonstrated beautifully by the list of the 100 papers with the highest altmetric scores in 2013. For a start, 58 of the 100 were behind paywalls, and so were unlikely to have been read except (perhaps) by academics.

The second most popular paper (with the enormous altmetric score of 2230) was published in the New England Journal of Medicine. The title was Primary Prevention of Cardiovascular Disease with a Mediterranean Diet. It was promoted (inaccurately) by the journal in its own tweet.

Many of the 2092 tweets related to this article simply gave the title, but inevitably the theme appealed to diet faddists, who produced plenty of enthusiastic tweets of their own.

The interpretations of the paper promoted by these tweets were mostly desperately inaccurate. Diet studies are, in any case, notoriously unreliable. As John Ioannidis has said:

"Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome."

This sad situation comes about partly because most of the data comes from non-randomised cohort studies that tell you nothing about causality, and also because the effects of diet on health seem to be quite small.

The study in question was a randomized controlled trial, so it should be free of the problems of cohort studies.  But very few tweeters showed any sign of having read the paper.  When you read it you find that the story isn’t so simple.  Many of the problems are pointed out in the online comments that follow the paper. Post-publication peer review really can work, but you have to read the paper.  The conclusions are pretty conclusively demolished in the comments, such as:

“I’m surrounded by olive groves here in Australia and love the hand-pressed EVOO [extra virgin olive oil], which I can buy at a local produce market BUT this study shows that I won’t live a minute longer, and it won’t prevent a heart attack.”

We found no tweets that mentioned the finding from the paper that the diets had no detectable effect on myocardial infarction, death from cardiovascular causes, or death from any cause.  The only difference was in the number of people who had strokes, and that showed a very unimpressive P = 0.04.
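Why is an isolated P = 0.04 so unimpressive? One reason is that the trial reported several endpoints, and under the null hypothesis the chance that at least one of them comes out “significant” grows quickly with the number tested. A minimal sketch (assuming, for simplicity only, that the endpoints are independent, which in a real trial they are not):

```python
# Chance of at least one P < 0.05 among k endpoints when no treatment
# has any real effect. Illustrative sketch: assumes independent tests.
k = 4  # e.g. myocardial infarction, cardiovascular death, all-cause death, stroke

p_at_least_one = 1 - 0.95 ** k
print(f"Chance of >= 1 'significant' endpoint under the null: {p_at_least_one:.3f}")
# about 0.185
```

So even before considering anything else, a nearly one-in-five chance of a spurious “hit” makes a single P = 0.04 among four endpoints weak evidence indeed.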

Neither did we see any tweets that mentioned the truly impressive list of conflicts of interest of the authors, which ran to an astonishing 419 words.

“Dr. Estruch reports serving on the board of and receiving lecture fees from the Research Foundation on Wine and Nutrition (FIVIN); serving on the boards of the Beer and Health Foundation and the European Foundation for Alcohol Research (ERAB); receiving lecture fees from Cerveceros de España and Sanofi-Aventis; and receiving grant support through his institution from Novartis. Dr. Ros reports serving on the board of and receiving travel support, as well as grant support through his institution, from the California Walnut Commission; serving on the board of the Flora Foundation (Unilever). . . “

And so on, for another 328 words.

The interesting question is how such a paper came to be published in the hugely prestigious New England Journal of Medicine.  That it happened is yet another reason to distrust impact factors.  It seems to be another sign that glamour journals are more concerned with trendiness than quality.

One sign of that is the fact that the journal’s own tweet misrepresented the work. The irresponsible spin in this initial tweet from the journal started the ball rolling, and after this point, the content of the paper itself became irrelevant. The altmetrics score is utterly disconnected from the science reported in the paper: it more closely reflects wishful thinking and confirmation bias.

The fourth paper in the altmetrics top 100 is an equally instructive example.

 This work was also published in a glamour journal, Science. The paper claimed that a function of sleep was to “clear metabolic waste from the brain”.  It was initially promoted (inaccurately) on Twitter by the publisher of Science.  After that, the paper was retweeted many times, presumably because everybody sleeps, and perhaps because the title hinted at the trendy, but fraudulent, idea of “detox”.  Many tweets were variants of “The garbage truck that clears metabolic waste from the brain works best when you’re asleep”.

But this paper was hidden behind Science’s paywall. It’s bordering on irresponsible for journals to promote on social media papers that can’t be read freely. It’s unlikely that anyone outside academia had read it, and therefore few of the tweeters had any idea of its actual content, or of the way the research was done. Nevertheless it got “1,479 tweets from 1,355 accounts with an upper bound of 1,110,974 combined followers”, and an altmetric score of 1848, the highest of any paper in October 2013.

Within a couple of days, the story fell out of the news cycle.  It was not a bad paper, but neither was it a huge breakthrough.  It didn’t show that naturally-produced metabolites were cleared more quickly, just that injected substances were cleared faster when the mice were asleep or anaesthetised.  This finding might or might not have physiological consequences for mice.

Worse, the paper also claimed that “Administration of adrenergic antagonists induced an increase in CSF tracer influx, resulting in rates of CSF tracer influx that were more comparable with influx observed during sleep or anesthesia than in the awake state”. But nobody seemed to notice the absurd concentrations of antagonists that were used in these experiments: “adrenergic receptor antagonists (prazosin, atipamezole, and propranolol, each 2 mM) were then slowly infused via the cisterna magna cannula for 15 min”. The binding constant (the concentration needed to occupy half the receptors) for prazosin is less than 1 nM, so infusing 2 mM means working at a million times the concentration that should be effective. That’s asking for non-specific effects. Most drugs at this sort of concentration have local anaesthetic effects, so perhaps it isn’t surprising that the effects resembled those of ketamine.
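The mismatch between the infused concentration and prazosin’s binding constant is easy to check with simple arithmetic. A back-of-envelope sketch, taking the Kd as 1 nM (the upper bound quoted above) and assuming simple one-site binding:

```python
# Simple one-site receptor binding: fractional occupancy = c / (c + Kd).
kd = 1e-9       # mol/L: approximate binding constant (Kd) for prazosin
infused = 2e-3  # mol/L: concentration infused in the experiments

excess = infused / kd                      # how far above Kd the dose is
occupancy = infused / (infused + kd)       # fraction of receptors bound

print(f"Infused concentration exceeds Kd by a factor of {excess:.1e}")
print(f"Fractional occupancy at 2 mM: {occupancy:.7f}")
```

A concentration two million times the Kd already saturates the receptors essentially completely, so the extra drug can only be acting through non-specific mechanisms.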

The altmetrics editor hadn’t noticed these problems, and none of them featured in the online buzz. That’s partly because, to find them, you had to read the paper (the antagonist concentrations were hidden in the legend of Figure 4), and partly because you needed to know the binding constant for prazosin to see the warning sign.

The lesson, as usual, is that if you want to know about the quality of a paper, you have to read it. Commenting on a paper without knowing anything of its content is liable to make you look like a jackass.

A tale of two papers

Another approach that looks at individual papers is to compare some of one’s own papers.  Sadly, UCL shows altmetric scores on each of your own papers.  Mostly they are question marks, because nothing published before 2011 is scored.  But two recent papers make an interesting contrast.  One is from DC’s side interest in quackery, one was real science.  The former has an altmetric score of 169, the latter has an altmetric score of 2.

The first paper was “Acupuncture is a theatrical placebo”, which was published as an invited editorial in Anesthesia and Analgesia [download pdf]. The paper was scientifically trivial, and took perhaps a week to write. Nevertheless, it got promoted on Twitter, because anything to do with alternative medicine is interesting to the public, and it got quite a lot of retweets. The resulting altmetric score of 169 put it in the top 1% of all articles Altmetric has tracked, and the second highest ever for Anesthesia and Analgesia. As well as appearing on the journal’s own website, the article was posted on the DCScience.net blog (May 30, 2013), where it soon became the most viewed page ever (24,468 views as of 23 November 2013), something that altmetrics do not seem to take into account.

Compare this with the fate of some real, but rather technical, science.

My [DC] best scientific papers are too old (i.e. before 2011) to have an altmetrics score, but my best score for any scientific paper is 2. This score was for Colquhoun & Lape (2012), “Allosteric coupling in ligand-gated ion channels”, a commentary with some original material. The altmetric score was based on two tweets and 15 readers on Mendeley. One tweet was from me (“Real science; The meaning of allosteric conformation changes http://t.co/zZeNtLdU”). The only other tweet was an abusive one from a cyberstalker who was upset at having been refused a job years ago. Incredibly, this modest achievement got the paper rated “Good compared to other articles of the same age (71st percentile)”.

Bibliometricians spend much time correlating one surrogate outcome with another, from which they learn little.  What they don’t do is take the time to examine individual papers.  Doing that makes it obvious that most metrics, and especially altmetrics, are indeed an ill-conceived and meretricious idea. Universities should know better than to subscribe to them.

Although altmetrics may be the silliest bibliometric idea yet, much of this criticism applies equally to all such metrics. Even the most plausible metric, counting citations, is easily shown to be nonsense simply by considering individual papers. All you have to do is choose some papers that are universally agreed to be good, and some that are agreed to be bad, and see how the metrics fail to distinguish between them. This is something that bibliometricians fail to do (perhaps because they don’t know enough science to tell which is which). Some examples are given by Colquhoun (2007) (a more complete version is at dcscience.net).

Eugene Garfield, who started the metrics mania with the journal impact factor (JIF), was clear that it was not suitable as a measure of the worth of individuals. He has been ignored, and the JIF has come to dominate the lives of researchers, despite decades of evidence of the harm it does (e.g. Seglen (1997) and Colquhoun (2003)). In the wake of the JIF, young, bright people have been encouraged to develop yet more spurious metrics (of which ‘altmetrics’ is the latest). It doesn’t matter much whether these metrics are based on nonsense (like counting hashtags) or rely on counting links or comments on a journal website. They won’t (and can’t) indicate what is important about a piece of research: its quality.

People say: I can’t be a polymath. Well, then don’t try to be. You don’t have to have an opinion on things that you don’t understand. The number of people who really need the kind of overview that altmetrics purport to give (those who have to make funding decisions about work with which they are not intimately familiar) is quite small. Chances are, you are not one of them. We review plenty of papers and grants. But it’s not credible to accept assignments outside your own field and then rely on metrics to assess the quality of the scientific work or the proposal.

It’s perfectly reasonable to give credit for all forms of research outputs, not only papers. That doesn’t need metrics. It’s nonsense to suggest that altmetrics are needed because research outputs are not already valued in grant and job applications. If you write a grant for almost any agency, you can include your CV, and if you have a non-publication-based output, you can always include that too. Metrics are not needed. If you write software, cite the number of downloads; in any case, software normally garners citations if it’s of any use to the greater community.

When AP recently wrote a criticism of Heather Piwowar’s altmetrics note in Nature, one correspondent wrote: "I haven’t read the piece [by HP] but I’m sure you are mischaracterising it". This attitude exemplifies the too-long-didn’t-read (TL;DR) culture that is increasingly becoming accepted amongst scientists, and which the comparisons above show to be a central component of altmetrics.

Altmetrics are numbers generated by people who don’t understand research, for people who don’t understand research. People who read papers and understand research just don’t need them and should shun them.

But all bibliometrics give cause for concern, beyond their lack of utility. They do active harm to science.  They encourage “gaming” (a euphemism for cheating).  They encourage short-term eye-catching research of questionable quality and reproducibility. They encourage guest authorships: that is, they encourage people to claim credit for work which isn’t theirs.  At worst, they encourage fraud.

No doubt metrics have played some part in the crisis of irreproducibility that has engulfed some fields, particularly experimental psychology, genomics and cancer research. Underpowered studies with a high false-positive rate may get you promoted, but they tend to mislead both other scientists and the public (who, in general, pay for the work). The public money wasted on following up badly done work that can’t be reproduced, but that was published for the sake of “getting something out”, has not been quantified; it must nonetheless count against bibliometrics, and it sadly outweighs any advantage of rapid dissemination. Yet universities continue to pay publishers to provide these measures, which do nothing but harm. And the general public has noticed.

It’s now eight years since the New York Times brought to the attention of the public that some scientists engage in puffery, cheating and even fraud.

Overblown press releases written by journals, with the connivance of university PR wonks and of the authors themselves, sometimes go viral on social media (and so score well on altmetrics). Yet another example, from the Journal of the American Medical Association, involved an overblown press release from the journal about a trial that allegedly showed a benefit of high doses of vitamin E for Alzheimer’s disease.

This sort of puffery harms patients and harms science itself.

We can’t go on like this.

What should be done?

Post-publication peer review is now happening, in comments on published papers and through sites like PubPeer, where it is already clear that anonymous peer review can work really well. New journals like eLife have open comments after each paper, though authors do not yet seem to have got into the habit of using them constructively. They will.

It’s very obvious that too many papers are being published, and that anything, however bad, can be published in a journal that claims to be peer reviewed. To a large extent this is just another example of the harm done to science by metrics: the publish-or-perish culture.

Attempts to regulate science by setting “productivity targets” are doomed to do as much harm to science as they have done to the National Health Service in the UK. This has been known to economists for a long time, under the name of Goodhart’s law.

Here are some ideas about how we could restore the confidence of both scientists and of the public in the integrity of published work.

• Nature, Science, and other vanity journals should become news magazines only. Their glamour value distorts science and encourages dishonesty.
• Print journals are overpriced and outdated. They are no longer needed.  Publishing on the web is cheap, and it allows open access and post-publication peer review.  Every paper should be followed by an open comments section, with anonymity allowed.  The old publishers should go the same way as the handloom weavers. Their time has passed.
• Web publication allows proper explanation of methods, without the page, word and figure limits that distort papers in vanity journals. This would also make it very easy to publish negative results, thus reducing publication bias, a major problem (not least for clinical trials).
• Publish or perish has proved counterproductive. It seems just as likely that better science will result without any performance management at all. All that’s needed is peer review of grant applications.
• Providing more small grants rather than fewer big ones should help to reduce the pressure to publish that distorts the literature. The ‘celebrity scientist’, running a huge group funded by giant grants, has not worked well. It has led to poor mentoring and, at worst, fraud. Of course huge groups sometimes produce good work, but too often at the price of the exploitation of junior scientists.
• There is a good case for limiting the number of original papers that an individual can publish per year, and/or total funding. Fewer but more complete and considered papers would benefit everyone, and counteract the flood of literature that has led to superficiality.
• Everyone should read, learn and inwardly digest Peter Lawrence’s The Mismeasurement of Science.

A focus on speed and brevity (cited as major advantages of altmetrics) will help no-one in the end. And a focus on creating and curating new metrics will simply skew science in yet another unsatisfactory way, and rob scientists of the time they need to do their real job: generate new knowledge.

It has been said

“Creation is sloppy; discovery is messy; exploration is dangerous. What’s a manager to do?
The answer in general is to encourage curiosity and accept failure. Lots of failure.”

And, one might add, forget metrics. All of them.

### Follow-up

17 Jan 2014

This piece was noticed by the Economist. Their ‘Writing worth reading‘ section said

"Why you should ignore altmetrics (David Colquhoun) Altmetrics attempt to rank scientific papers by their popularity on social media. David Colquohoun [sic] argues that they are “for people who aren’t prepared to take the time (or lack the mental capacity) to evaluate research by reading about it.”"

20 January 2014.

Jason Priem, of ImpactStory, has responded to this article on his own blog. In Altmetrics: A Bibliographic Nightmare? he seems to back off a lot from his earlier claim (cited above) that altmetrics are useful for making decisions about hiring or tenure. Our response is on his blog.

23 January 2014

The Scholarly Kitchen blog carried another paean to metrics, and a vigorous discussion followed. The general line that I’ve followed in this discussion, and in those mentioned below, is that bibliometricians won’t qualify as scientists until they test their methods, i.e. show that they predict something useful. In order to do that, they’ll have to consider individual papers (as we do above). At present, articles by bibliometricians consist largely of hubris, with little attention to their potential to cause corruption. They remind me of articles by homeopaths: their aim is to sell a product (sometimes for cash, but mainly to promote the authors’ usefulness).

It’s noticeable that all of the pro-metrics articles cited here have been written by bibliometricians. None have been written by scientists.

28 January 2014.

Dalmeet Singh Chawla, a bibliometrician from Imperial College London, wrote a blog post on the topic. (Imperial, at least in its Medicine department, is notorious for its abuse of metrics.)

29 January 2014 Arran Frood wrote a sensible article about the metrics row in Euroscientist.

2 February 2014 Paul Groth (a co-author of the Altmetrics Manifesto) posted more hubristic stuff about altmetrics on Slideshare. A vigorous discussion followed.

5 May 2014. Another vigorous discussion on ImpactStory blog, this time with Stacy Konkiel. She’s another non-scientist trying to tell scientists what to do. The evidence that she produced for the usefulness of altmetrics seemed pathetic to me.

7 May 2014 A much-shortened version of this post appeared in the British Medical Journal (BMJ blogs)