Peter A. Lawrence of the Department of Zoology, University of Cambridge, and the MRC Laboratory of Molecular Biology, Cambridge, has written a beautifully argued article, The Mismeasurement of Science. It appeared in Current Biology, August 7, 2007: 17 (15), R583. [Download pdf]
It should be read by every scientist. Even more importantly, it should be read by every vice chancellor and university president, by every HR person and by every one of the legion of inactive scientists who, increasingly, tell active scientists what to do.
Here are some quotations.
“The use of both the H-index and impact factors to evaluate scientists has increased unethical behaviour: it rewards those who gatecrash their names on to author lists. This is very common, even standard, with many people authoring papers whose contents they are largely a stranger to.”
“. . . trying to meet the measures involves changing research strategy: risks should not be taken . . .”
“. . . hype your work, slice the findings up as much as possible (four papers good, two papers bad), compress the results (most top journals have little space, a typical Nature letter now has the density of a black hole), simplify your conclusions but complexify the material (more difficult for reviewers to fault it!) . . . it has become profitable to ignore or hide results that do not fit with the story being sold – a mix of evidence tends to make a paper look messy and lower its appeal.”
“These measures are pushing people into having larger groups. It is a simple matter of arithmetic. Since the group leader authors all the papers, the more people, the more papers. If a larger proportion of young scientists in a larger group fail, as I suspect, this is not recorded. And because no account is taken of wasted lives and broken dreams, these failures do not make a group leader look less productive.”
“It is time to help the pendulum of power swing back to favour the person who actually works at the bench and tries to discover things.”
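Part of the appeal of the metrics Lawrence criticises is that they reduce a career to a trivial calculation. For readers unfamiliar with it, here is a minimal sketch of the h-index calculation in Python (the citation counts are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has enough citations
        else:
            break
    return h

# Invented example: five papers with these citation counts
print(h_index([25, 8, 5, 3, 3]))  # prints 3
```

A number this easy to compute tells you nothing about whether any of the papers discovered anything, which is precisely Lawrence's point.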
The position of women
Lawrence argues eloquently a point that I too have been advocating for years. It is well known that, in spite of an increased proportion of women entering biomedical research as students, there has been little, if any, increase in the representation of women at the top. This causes much hand-wringing among university bureaucrats, who fail to notice that one reason for it is the very policies that they themselves advocate. Women, I suspect, are less willing to embrace the semi-dishonest means that are needed to advance in science. As Lawrence puts it:
“Gentle people of both sexes vote with their feet and leave a profession that they, correctly, perceive to discriminate against them. Not only do we lose many original researchers, I think science would flourish more in an understanding and empathetic workplace.”
The success of the LMB
It is interesting that Peter Lawrence is associated with the Laboratory of Molecular Biology, one of the most successful labs of all time. In an account of the life of Max Perutz, Daniela Rhodes said this.
“As evidenced by the success of the LMB, Max had the knack of picking extraordinary talent. But he also had the vision of creating a working environment where talented people were left alone to pursue their ideas. This philosophy lives on in the LMB and has been adopted by other research institutes as well. Max insisted that young scientists should be given full responsibility and credit for their work. There was to be no hierarchy, and everybody from the kitchen ladies to the director were on first-name terms. The groups were and still are small, and senior scientists work at the bench. Although I never worked with Max directly, I had the great privilege of sharing a laboratory with him for many years. The slight irritation of forever being taken to be his secretary when answering the telephone – the fate of females – was amply repaid by being able to watch him work and to talk with him. He would come into the laboratory in the morning, put on his lab-coat and proceed to do his experiments. He did everything himself, from making up solutions, to using the spectrophotometer and growing crystals. Max led by example and carried out his own experiments well into his 80s.”
Max Perutz himself, in a history of the LMB, said:
“Experience had taught me that laboratories often fail because their scientists never talk to each other. To stimulate the exchange of ideas, we built a canteen where people can chat at morning coffee, lunch and tea. It was managed for over twenty years by my wife, Gisela, who saw to it that the food was good and that it was a place where people would make friends. Scientific instruments were to be shared, rather than being jealously guarded as people’s private property; this saved money and also forced people to talk to each other. When funds ran short during the building of the lab, I suggested that money could be saved by leaving all doors without locks to symbolise the absence of secrets.”
That is how to get good science.
Now download a copy of Lawrence’s paper and send it to every bureaucrat in your university.
- The Times Higher Education Supplement, 10 Aug 2007, had a feature on this paper. Read it here if you have a subscription, or download a copy.
- In the same issue, Denis Noble and Sir Philip Cohen emphasise the importance of basic research. Cohen says:
“In 1994, after 25 years in the relative research wilderness, the whole thing changed.”
“Suddenly I was the best thing since sliced bread,” Sir Philip said. “We set up the Division of Signal Transduction Therapy, which is the largest-ever collaboration between the pharmaceutical industry and academia in the UK.”
But the present research funding culture could prevent similar discoveries. “In today’s climate that research would not have been funded,” Sir Philip said. “The space programme hasn’t allowed us to colonise the universe, but it has given us the internet – a big payoff that industry could never have envisaged.” (Download a copy.)
- Comments from Pennsylvania at http://other95.blogspot.com
- How to slow down science. Another reference to Lawrence’s paper from a US (but otherwise anonymous) blog, BayBlab.
How to select candidates
I have, at various times, been asked how I would select candidates for a job, if not by counting papers and impact factors. What follows is a slightly modified version of a comment that I left on a blog, which describes roughly what I’d advocate.
After a pilot study, the Research Excellence Framework (which attempts to assess the quality of research in every UK university) made the following statement:
“No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs.”
It seems that the REF is paying attention to the science, not to the bibliometricians.
It has been the practice at UCL to ask candidates to nominate their best papers (2–4 papers, depending on age). We then read the papers and ask the candidates hard questions about them (not least about the methods section). It’s a method that I learned a long time ago from Stephen Heinemann, a senior scientist at the Salk Institute. It has often been surprising to learn how little some candidates know about the contents of the papers that they themselves selected as their best. One aim of this is to find out how far the candidate understands the principles of what they are doing, as opposed to following a recipe.
Of course we also seek the opinions of people who know the work, and preferably know the person. Written references have suffered so much from ‘grade inflation’ that they are often worthless, but a talk on the telephone with someone who knows both the work and the candidate can be useful. That, however, is now banned by HR, who seem to feel that any knowledge of the candidate’s ability would lead to bias.
It is not true that the use of metrics is universal, and thank heavens for that. There are alternatives, and we use them.
Incidentally, the reason that I have described the Queen Mary procedures as insane, brainless and dimwitted is that their aim of increasing their ratings is likely to be frustrated. No person in their right mind would want to work for a place that treats its employees like that, if they had any other option. And it is very odd that their attempt to improve their REF rating uses criteria that have been explicitly ruled out by the REF itself. You can’t get more brainless than that.
This discussion has been interesting to me, if only because it shows how little bibliometricians understand how to get good science.