Research quangos lead to mediocrity is the headline of a letter to The Times that appeared on 6 December 2010. It is reproduced below for those who can’t (or won’t) pay Rupert Murdoch to see it.

The letter is about the current buzzword, "research impact", a term that trips off the lips of every administrator and politician daily. Since much research is funded by the taxpayer, it seems reasonable to ask if it gives value for money. The best answer can be found in St Paul’s Cathedral.

The plaque for Christopher Wren bears the epitaph

LECTOR, SI MONUMENTUM REQUIRIS, CIRCUMSPICE.

Reader, if you seek his memorial – look around you.

Much the same could be said for the impact of any science. Look at your refrigerator, your mobile phone, your computer, your central heating boiler, your house. Look at the X-ray and MRI machines in your hospital. Look at the aircraft that takes you on holiday. Look at your DVD player and laser surgery. Look, even, at the way you can turn a switch and light your room. Look at almost anything that you take for granted in your everyday life. They are all products of science; products, eventually, of the Enlightenment.

BUT remember also that these wonderful products did not appear overnight. They evolved slowly over many decades or even centuries, and they evolved from work that, at the time, appeared to be mere idle curiosity. Electricity lies at the heart of everyday life. It took almost 200 years to get from Michael Faraday’s coils to your mobile phone. At the time, Faraday’s work seemed to politicians to be useless. Michael Faraday was made a fellow of the Royal Society in 1824.

. . . after Faraday was made a fellow of the Royal Society[,] the prime minister of the day asked what good this invention could be, and Faraday answered: “Why, Prime Minister, someday you can tax it.”

Whether this was really said is doubtful, but that hardly matters. It is the sort of remark made by politicians every day.

In May 2008, I read a review of “The Myths of Innovation” by Scott Berkun. The review seems to have vanished from the web, but I noted it in my diary. These words should be framed on the wall of every politician and administrator. Here are some quotations.

“One myth that will disappoint most businesses is the idea that innovation can be managed. Actually, Berkun calls this one ‘Your boss knows more about innovation than you’. After all, he says, many people get their best ideas while they’re wandering in their bathrobes, filled coffee mug in hand, from the kitchen to their home PC on a day off rather than sitting in a cubicle in a suit during working hours. But professional managers can’t help it: their job is to control every variable as much as possible, and that includes innovation.”

“Creation is sloppy; discovery is messy; exploration is dangerous. What’s a manager to do?
The answer in general is to encourage curiosity and accept failure. Lots of failure.”

I commented at the time "What a pity that university managers are so far behind those of modern businesses. They seem to be totally incapable of understanding these simple truths. That is what happens when power is removed from people who know about research and put into the hands of lawyers, HR people, MBAs and failed researchers."

That is even more true two years later. The people who actually do research have been progressively disempowered. We are run by men in dark suits who mistake meetings for work. You have only to look at history to see that great discoveries arise from the curiosity of creative people, and that, rarely, these ideas turn out to be of huge economic importance, many decades later.

The research impact plan has now been renamed "Pathways to Impact". It means that scientists are being asked to explain the economic impact of their research before they have even got any results.

All that shows is how science is being run by dimwits who simply don’t understand how science works. This amounts to nothing less than being compelled to lie if you want any research funding. And, worse still, the pressure to lie comes not primarily from government, but from that curious breed of ex-scientists, failed scientists and non-scientists who control the Research Councils.

[Image: the RCUK logo]
How much did RCUK pay for the silly logo?

We are being run by people who would have told Michael Faraday to stop messing about with wires and coils and to do something really useful, like inventing better leather washers for steam pumps.

Welcome to the third division. Brought to you by Research Councils and politicians.

Here is the letter in The Times. It is worded slightly more diplomatically than my commentary, but will, no doubt, have just as little effect. What would the signatories know about science? Several of them don’t even wear black suits.

[Image: the headline of the letter as it appeared in The Times]

Sir,

The governance of UK academic research today is delegated to a quangocracy comprising 11 funding and research councils, and to an additional body – Research Councils UK. Ill considered changes over the past few decades have transformed what was arguably the world’s most creative academic sector into one often described nowadays as merely competitive.

In their latest change, research councils introduce a new criterion for judging proposals – “Pathways to Impact” – against which individual researchers applying for funds must identify who might benefit from their proposed research and how they might benefit. Furthermore, the funding councils are planning to begin judging researchers’ departments in 2014 on the actual benefits achieved and to adjust their funding accordingly, thereby increasing pressure on researchers to deliver short-term benefits. However, we cannot understand why the quangocracy has ignored abundant evidence showing that the outcomes of high-quality research are impossible to predict.

We are mindful of the need to justify investment in academic research, but “Pathways to Impact” focuses on the predictable, leads to mediocrity, and reduces returns to the taxpayer. In our opinion as experienced researchers, few if any of the 20th century’s great discoveries and their huge economic stimuli could have happened if a policy of focussing on attractive short-term benefits had applied because great discoveries are always unpredicted. We therefore have an acutely serious problem.

Abolishing “Pathways to Impact” would not only save the expense of its burgeoning bureaucracy; it would also be a step towards liberating creativity and indicate that policy-makers have at last regained their capacity for world-class thinking.

Donald W Braben
University College London,
And the following scientists who also sign in a personal capacity: 

John F Allen, Queen Mary, University of London;
William Amos, University of Cambridge;
Michael Ashburner FRS, University of Cambridge;
Jonathan Ashmore FRS, University College London;
Tim Birkhead FRS, University of Sheffield;
Mark S Bretscher FRS, MRC Laboratory of Molecular Biology, Cambridge;
Peter Cameron, Queen Mary, University of London;
Richard S Clymo, Queen Mary, University of London;
Richard Cogdell FRS, University of Glasgow;
David Colquhoun FRS, University College London;

Adam Curtis, Glasgow University;
John Dainton FRS, University of Liverpool;
Felipe Fernandez-Armesto, University of Notre Dame;
Pat Heslop-Harrison, University of Leicester;
Dudley Herschbach, Harvard University, Nobel Laureate;
Herbert Huppert FRS, University of Cambridge;
H Jeff Kimble, Caltech, US National Academy of Sciences;
Sir Harry Kroto FRS, Florida State University, Tallahassee, Nobel Laureate;
James Ladyman, University of Bristol;
Michael F Land FRS, University of Sussex;

Peter Lawrence FRS, University of Cambridge;
Sir Anthony Leggett FRS, University of Illinois at Urbana-Champaign, Nobel Laureate;
Angus MacIntyre FRS, Queen Mary, University of London;
Sotiris Missailidis, Open University;
Philip Moriarty, University of Nottingham;
Andrew Oswald, University of Warwick;
Lawrence Paulson, University of Cambridge;
Iain Pears, Oxford;
Beatrice Pelloni, University of Reading;
Douglas Randall, University of Missouri, US National Science Board member;

David Ray, BioAstral Limited;
Sir Richard J Roberts FRS, New England Biolabs, Nobel Laureate;
Ian Russell FRS, University of Sussex;
Ken Seddon, Queen’s University of Belfast;
Steve Sparks FRS, University of Bristol;
Harry Swinney, University of Texas, US National Academy of Sciences;
Iain Stewart, University of Durham;
Claudio Vita-Finzi, Natural History Museum;
David Walker FRS, University of Sheffield;
Glynn Winskel, University of Cambridge;

Lewis Wolpert FRS, University College London;
Phil Woodruff FRS, University of Warwick.

Now cheer yourself up by reading Captain Cook’s Grant Application.

Follow-up

Scientists should sign the petition to help humanities too. See the Humanities and Social Sciences Matter web site.

Nobel view. 1. Andre Geim’s speech at Nobel banquet, 2010

"Human progress has always been driven by a sense of adventure and unconventional thinking. But amidst calls for “bread and circuses”, these virtues are often forgotten for the sake of cautiousness and political correctness that now rule the world. And we sink deeper and deeper from democracy into a state of mediocrity and even idiocracy. If you need an example, look no further than at research funding by the European Commission."

Nobel view. 2. Ahmed Zewail won the 1999 Nobel Prize in Chemistry. He serves on Barack Obama’s Council of Advisors on Science and Technology. He wrote in Nature

“Beware the urge to direct research too closely, says Nobel laureate Ahmed Zewail. History teaches us the value of free scientific inquisitiveness.”

“I have emphasized that without solid investment in science education and a fundamental science base, nations will not acquire the ground-breaking knowledge required to make discoveries and innovations that will shape their future.”

“Preserving knowledge is easy. Transferring knowledge is also easy. But making new knowledge is neither easy nor profitable in the short term. Fundamental research proves profitable in the long run, and, as importantly, it is a force that enriches the culture of any society with reason and basic truth.”

How many more people have to say this before the Research Councils take some notice?

43 Responses to Nonsense about “research impact”. The Research Councils are as much a problem as the government

  • CrewsControl says:

    Is the present situation not a variant of the Rothschild Principle of the early 1970s, in which the customer-contractor principle operated? It relied on customers (e.g. MAFF) knowing what they wanted and paying for the research from a contractor (Agricultural Research Units). The present variant seems to be one in which the contractor (University Researcher(s)), like some angler, has to speculatively select juicy bait in order to snare the Research Council’s funds. I suspect your ire is correctly directed at those with a scientific background who now manage science. To use your analogy, if they can produce a leather washer with twice the lifespan and half the cost, that is an outcome amenable to a cost-benefit analysis (in the short term). Funding dozens of present-day Faradays to produce a significant benefit in the long term cannot/will not be part of the equation if promotions/bonuses are at stake. We seem no longer to value the blue-sky boffin(s) beavering away in the basement because we can’t (or don’t want to) measure their worth with present accounting systems.

  • TheModernUbermensch says:

    It’s interesting to see this from a science point of view. I should declare that I work in Research Management – not in applications, but in drafting and negotiating contracts between a University and external industrial sponsors, other Universities and government bodies. I find the application process quite mysterious, and the ‘Pathways to Impact’ idea a bit bizarre.

    The spiel from those above me in the Research Office is that Pathways to Impact is not supposed to ‘predict’ anything, but to look at effects – it is, in a way, an attempt to stop compartmentalisation of research, an over-emphasis on specialisation and encourages academics to look at the broader influence of their research. That influence can be within academia – it can lead to exciting blue skies research – it does not have to be “My Research = £££ or X which you can sell for £££”.

    It’s important to note that the Research Councils have a finite resource, so they must use SOME criteria to focus that resource on where it does the most good. For every Faraday there are a thousand (or more?) researchers working on research that will lead to an article that is read by few. I’m a great believer in knowledge for knowledge’s sake (and believe the Research Councils should have more money to distribute), but in the context they work in, what works better?

    Criticism is easy (and criticism of Pathways to Impact as management fluff is possibly TOO easy).

    What would you have in its place?

    I’m genuinely interested; this is a complex problem and it’s people like you who should be providing more of the answers, and people like me who should take a step back and listen more.

    What criteria should Research Councils use to distribute their finite funds?

  • @TheModernUbermensch

    You ask what criteria should Research Councils use to distribute their finite funds?

    How about giving the money to the best research projects that are submitted to them, as judged by peer review?

    It seems obvious that the people who are doing the work are in the best position to know what’s likely to give results, or what’s the best novel, but risky, idea. It makes no sense to me to delegate that job to a committee of suits who aren’t doing research themselves.

    You say “that influence can be within academia”. It’s true that, under a barrage of criticism from the research community, the Research Councils have continually wriggled to adjust the definition of “impact” until the term is now almost meaningless.

    You say “Pathways to Impact is not supposed to ‘predict’ anything, but to look at effects”. What sense does it make to look at the effects of research before the research has even been started?

    I have never, in a lifetime of research, come across an academic who does not hope that their work will have academic interest, and who does not hope that one day it will turn out to be useful. What on earth is the point in making us write that down, no doubt with a good dose of hyperbole? It’s teaching your grandmother to suck eggs.

    In fact, shortage of funds is not the immediate problem (though it will become so as inflation bites). The problem is that the Research Councils are not spending enough of their funds on responsive mode grants. The funding rate at a recent MRC panel was 7 percent! With that sort of chance of success you might just as well give up.

    So what are the Research Councils spending the money on? One thing is a bloated bureaucracy writing “management fluff” about impact. But they have also earmarked a lot of their funds for ‘buzzword’ research: for example, an enormous amount has been spent on systems biology. Certainly it makes sense to spend something in this area. Eventually the aim has to be to understand complex systems in a quantitative way. The problem is that nowhere near enough information exists yet to do the calculations. Throwing money at the problem won’t generate the information; in fact it will slow down its generation by diverting funds from the job. That’s what happens when you allow science to be run by people who aren’t scientists.

  • I understand the frustration most researchers have with the requirement to produce a prospective account of the impacts proposed research could have. The physicist John Ziman famously called the peer review of research not yet performed – the stock-in-trade of the research councils after all – a “higher form of nonsense”. Adding ‘impact’ considerations into the mix when evaluating research proposals could be seen as compounding nonsense with further nonsense.

    Yet acknowledging the absurdity of crude prospective impact assessment does not invalidate the concern with impact itself, and the letter in the Times does seem to acknowledge this. And I think this is my problem with the tone of the blog post. Much blame is heaped on science administrators for seeking to manage the unmanageable but this is done without any acknowledgement of the context, particularly the historical context.

    I think two inter-linked contextual factors are particularly important. First, the modern publicly-funded scientific enterprise has grown massively in scale and scope since its mid-20th century origins. At least since the 1960s, increasing attention has been devoted by science administrators and by thoughtful scientists themselves to the management problems that stem from the fact that growth in the demand for funding will always outstrip growth in the supply of funding. Notable names here would include Alvin Weinberg and Ziman himself.

    Second, as science has grown, so have society’s expectations of it. And scientists themselves have not been shy of promising fabulous impacts in return for public funding. Many critics, including a number from within the research community, have argued that there is a tendency to over-promise and that this could have dangerous longer-term consequences.

    [The tendency to over-promise is neatly satirised in this from xkcd: http://xkcd.com/678/ ]

    The combination of a large, complex and ever more demanding (in terms of funding) scientific enterprise and the tendency to over-promise, especially when the funding settlement is felt to be in some jeopardy, has led us to a situation where prioritisation on the basis of likely ‘impact’ is all but inevitable. It is scientists themselves who have cemented in the minds of public and policy-makers alike the idea that science has, and should have, ‘impact’. And it is not crazy to seek to incorporate ‘impact’ into funding considerations (even if it is crazy to try to do it in the way the ‘pathways to impact’ model works). David’s blog post mentions research-based companies. Such companies actually do a great deal of filtering and prioritising of projects, ruthlessly cutting projects with little promise, whilst of course trying to promote diversity of ideas at the earliest stages, so that there is something to select from. The research funded by research charities is carefully directed in the hope of achieving maximum impact in line with the wishes of the donors and the mission of the charity.

    Taxpayer funded basic research funders already balance a wide range of considerations in allocating research funding, including maintaining / underpinning research capacity in particular areas and, yes, maximising research and other impacts. These trade-offs are often an implicit consideration in the review process. Impact should clearly never be the sole criterion but there is surely nothing to be lost from making it a more explicit consideration, and we urgently need to find better ways to do this than the ‘pathways to impact’ model.

  • TheModernUbermensch says:

    @ David

    Thank you for replying. It’s clear there’s some anger towards the Pathways to Impact model – and I can understand why. Having said that, it does make sense to explain to a non-specialist the possibilities of the research. I think there’s also a parallel problem with most universities’ in-house media teams, who sometimes distort the results of research to create headlines.

    I’m interested in your idea of Peer Reviewed applications – how would this work in practice (particularly where there are so many specialisms and, perhaps, not all areas should be treated equally with regard to public funding)?

    In particular, how would you balance transparency and consistency in such a model?

    It sounds like you believe that only those currently active as researchers can identify applications with the most worth, which is interesting. Do you think this would lead to politicising those who peer review (as it seems like publicly funded research should at least address economic and political concerns, e.g. the focus on low carbon fits nicely with current policy)?

  • @kieron.flanagan

    Thanks for your contribution to the discussion. We can certainly agree on one thing, and that is that the tendency of scientists themselves to exaggerate the significance of their findings has done a good deal of harm. That is the topic of a fair proportion of the entries on this blog. There is much truth in the cartoon to which you link, but the problem is much more public than that. Take, for example, the biting satire of an item that appeared, very publicly, in the New York Times in 2006. The typography makes it obvious that it’s a spoof of what might appear in the journal Science.

    [Image: the spoof “imaginary genomics” item]

    One reason for this harmful exaggeration is the vanity of scientists themselves. Another culprit is sometimes the PR departments of universities, who see their job as promoting the university rather than conveying realistic science. Yet another is the Research Councils themselves, who wish to promote their importance to politicians.

    When we published an article in Nature in 2008, we had to veto the press release (written by a hired arts graduate) because a sufficiently realistic wording couldn’t be agreed. Less creditably, while I was director of the Wellcome Lab for Molecular Pharmacology, the Trust commissioned a journalist to write a puff piece about the lab which I thought at the time was exaggerated. Sadly I didn’t have the guts to say so at the time – we wanted (and got) renewal.

    One of my objections to "Pathways to Impact" is precisely that it is a direct incentive to this sort of bad conduct. It makes it almost compulsory.

    I can’t agree that shortage of funds justifies or excuses a venture like "Pathways to Impact". As I said in response to the previous comment, the biggest problem is the reduction in spending on responsive mode grants (partly because funds are being spent on bureaucratic nonsense like "impact").

  • @TheModernUbermensch.

    I’m astonished by your comment “I’m interested in your idea of Peer Reviewed applications – how would this work in practice”. That is precisely how the grant system has always worked and still does for responsive mode grants. I can only suppose that you are so young that you don’t recall the time before the Research Councils got themselves bogged down in managerialism, bureaucracy and top-down management of science. That’s how it worked at a time when we got more Nobel prizes than we do now.

    Of course, peer review is an imperfect process, but, I think, less imperfect than top down management by black suits. Nobody has thought of anything better. It does depend critically on the research council staff who handle the grant having the expertise to pick appropriate referees. I have the impression that rapid turnover of staff has harmed this process.

    The biggest improvement in the grant awarding process that I can remember was when, at least for 5 year grants, applicants were allowed to reply to comments of referees before a final decision was taken. This made it possible to rebut malicious or ill-informed referees.

    All of this is useless, though, if the Research Councils don’t fund enough responsive mode grants. The fact that they don’t is the greatest harm they are doing to good science.

  • stephenemoss says:

    I couldn’t agree less with Kieron.Flanagan’s assertion that “it is scientists themselves who have cemented in the minds of public and policy-makers alike the idea that science has, and should have, ‘impact’”. Who are these cement-fuelled scientists? Having been an active researcher for more than two decades, I can admit to never once having given serious thought as to whether or not the research I was doing would at some future point have ‘impact’. I simply wanted to answer what I believed were interesting or curious biological questions. If I uncovered something that might later be deemed to have ‘impact’, e.g. to contribute to the UK economy, or to improvements in human health, I would consider this a bonus rather than an objective in itself. In fact, and largely by chance, one strand of our blue skies research did uncover something that led to a patent, and may even take us to the development of a new therapeutic.

    Scientists like me, and like David Colquhoun and those who signed the letter to The Times, are clearly at pains to distance themselves from the idea that there can be any meaningful assessment of impact for research that has not yet been done. Trying to predict ‘impact’ is no more than so much fanciful crystal ball-gazing, and is frankly so alien to the scientific way of thinking that it can only have come from policy makers far removed from the lab, and certainly not from scientists themselves. TheModernUbermensch asks how we are to judge grant applications without ‘pathways to impact’ or something equivalent. Well, as David pointed out, we seemed to manage for years by simply evaluating the quality of the science.

    A couple of months ago I had to grapple for the first time with writing the ‘Pathways to Impact’ section in an RC grant application. But, curse my fascination with how cells work, this project was yet more blue skies research. I wanted to do some experiments for no other reason than I thought they would be interesting, and I found myself wrestling with the problem of producing a text that would capture the future impact of the work. In the end I produced half a side of caveat-loaded conjecture in which I suggested that one set of imaginary data might spawn another imaginary line of research, that may in some unpredictable way shed light on the cellular basis of a disease, which could in an equally undefined way lead to the alleviation of human suffering. By flirting with the realms of science fiction, ‘Pathways to Impact’ obliges one to stretch both credulity and sincerity, but this is what we are asked to do.

  • TheModernUbermensch says:

    “I can only suppose that you are so young that you don’t recall the time before the Research Councils got themselves bogged down in managerialism, bureaucracy and top-down management of science.”

    I’m happy to say you’re right – I am young and have been part of the Research Office for less than a year, having just completed a Master’s degree at the same University, so seeing things from the administrative point of view is still new to me. Even then, most of my interaction is with industrial sponsors of research and other universities rather than research councils.

    There seem to be two issues: the first, that it is impossible to judge the impact a piece of research will have; the second, that research should not be subject to a cost/benefit analysis. The first is an epistemological problem; the second, that research (specifically scientific research) has an inherent value – it is simply good to ‘know’.

    I can’t disagree with either.

    I also think that you’re right, that by emphasising ‘impact’ there’s a tendency to exaggerate, to over-sell. I still struggle, though, to see how a system that does not ask these questions, but spends millions of pounds of public money, can be justified. Impact should not be the deciding criterion, but it doesn’t seem absurd for it to form part of the decision-making process. Or does it?

  • @David – I agree that encouraging researchers to talk up uncertain or unlikely benefits is stupid and potentially dangerous. I think the tendency to over-promise very much predates the current fashion for science by press release, however. I take the point about responsive mode versus priority programmes. This is a difficult one. On the one hand, it’s part of the balancing act required because research funding is expected to contribute towards multiple, sometimes conflicting policy goals. This is inevitable, it’s just important to get those discussions out in the open. On the other hand, I do worry that big, fashionable and expensive priorities and over-promising go hand-in-hand (c.f. Human Genome Programme).

    @stephen – I’m not blaming individual scientists for the phenomenon of ‘over-promising’ – it’s an emergent property if you like of the way the research community acts (@david has described some of the possible mechanisms in his response). On the other point, I’m not so convinced we did ‘manage’ all those years depending on the ‘higher form of nonsense’. Clearly there can be no foolproof way of allocating money to research that hasn’t yet been conducted and many of the indirect impacts which we know come from research will also come from ‘failed’ research – maybe even more so. But if we know peer review isn’t perfect, then why the sensitivity about tweaks to the system? This unfortunately creates the impression that this is all about retaining control within an ‘elite’ community.

    Personally, rather than having some brain-dead ‘user’ committee decide which A+ proposals to select based on fashionable or self-interested ‘priorities’, I’d like to try an experiment whereby a jury of randomly selected members of the general public allocate a small proportion of research funding according to the best cases (scientific, impact, curiosity, whatever) made by the scientists themselves.

  • davenewman says:

    I did my Ph.D. in the 1970s, when there was a mass movement among engineers and scientists to make their work relevant – specifically to solving the world’s worst problems: freedom from hunger, freedom from thirst, freedom from poverty and freedom from ignorance. One thing that came out of my research was a new industry in Kenya, in which artisans made charcoal stoves that used a third less charcoal – important when trees were cut down 150 km outside Nairobi to sell to people in towns who spent 30% of their income on charcoal for cooking. Now there was an impact you could measure.

    I am disappointed at the way so many current researchers and research councils have lost the mission to improve the world through practical, interdisciplinary research. Instead, so many have retreated to their tiny silos, where they work on the next minor step to something they wrote last year, a process akin to mental masturbation. We get more mass-produced “normal science” and less of Kuhn’s “revolutionary science”.

  • @TheModernUbermensch

    The reason you can’t do a cost/benefit analysis is not so much a consequence of abstract ideas like “inherent value of knowledge”. It is because, although you can measure the cost at the time it is done, you can’t measure the value until much later.

    You say “it doesn’t seem absurd for it to form part of the decision making process. Or does it?” It would not be absurd if you could measure it. The reason it is absurd is that you can’t measure it until decades later. It makes no sense to use made-up, arm-waving statements and pious hopes as part of the decision-making process.

    @kieron.flanagan
    It is hard to see what contribution the general public could make to judging the quality of a grant application that was concerned with particle physics, or even a grant concerned with single molecule behaviour. To do that would simply result in yet more exaggerated statements, designed to impress the public members of the panel.

    You say “creates the impression that this is all about retaining control within an ‘elite’ community”. I’d say that it isn’t a matter of ‘elite’, just a matter of knowledge and understanding. Would you like the public to have a vote on the methods used by your surgeon to remove your tumour? I know I wouldn’t.

    @davenewman
    Your research on better charcoal stoves sounds admirable. Work of that sort should certainly be done, and it is. But if all research were done on that basis we’d have no electricity, no cars, no mobile phones. The world would stand (almost) still.

    Your second paragraph seems to contradict the first. You suggest we have too much “mass-produced ‘normal science’” and not enough of “Kuhn’s ‘revolutionary science’”.

    What, I wonder, do you mean by revolutionary science? The first discovery of electricity perhaps? Newton’s laws? Then later relativity? The discovery of the evolution of species? These are precisely the sort of things that would have been stopped dead in their tracks if we’d been ruled by “paths to impact” and we’d all worked on better charcoal stoves. Imagine Einstein writing a grant application that said

    “I want to invent relativity. When I discover it, it will have enormous impact because it will (in almost 100 years time) enable your carriage to have a GPS”

    I think you have disproved your own case.

  • @David – the surgeon analogy is a good one. A good surgeon not only has the technical skill and judgement required to be able to perform the procedure but would attempt to explain in terms that I can understand what the procedure entailed, what risks were involved and for what promise.

    And basic research is not life-and-death surgery. I don’t see why the public can’t discuss these issues with researchers – and indeed there’s some evidence to suggest they can.

  • Both surgeon and scientist should, and do, discuss things with the public. That is the aim of this and many other blogs.

    That does not mean that the public should decide on the merits of grant applications on subjects that are too technical for them to have an informed opinion about their merits.

    One outcome of doing that would, I suspect, be the funding of grants that promise the impossible. There is no point in spending large amounts of money on problems you would like to be solved when the methods for solving them don’t exist. See, for example, my comments about systems biology, above.

  • kausikdatta says:

    @kieron.flanagan
    I am afraid you misread David’s intent with the surgeon analogy. It has less to do with ‘life and death situations’ and more to do with the specialized knowledge, borne out of years of training and experience, that the surgeon utilizes in order to arrive at a reasonable medical decision. Surely you understand that it is not possible for the uninformed public, lacking that specialized knowledge, to pronounce a reasonable judgment on the said decision, don’t you?

    Agreed that basic research is not surgery (I hope not!), but can you say with certainty that it doesn’t possess any life-and-death potential? Basic research of today lays the foundation stone for the applications of tomorrow, and if you look around you, it is the breakthroughs in the basic research (which appears to hold no importance for you) that have provided the modern marvels of science and technology, including, yes, the contrivances that are applicable to life-and-death situations.

  • TheModernUbermensch says:

    @David – thank you for taking the time to respond to my comments. I’m going to go away and think about this – particularly how University research support can help and respond to this without simply increasing the size and scope of our own departments by, for example, hiring people to effectively ghost write parts of applications. But thank you for explaining your thoughts.

  • stephenemoss says:

    @kieron.flanagan
    Are you suggesting that the public might have a role in assessing grant applications? I’m all for engaging the public in the work we do; however, ‘basic’ research is frequently anything but. Often, the proposed work is so specialised that even among scientists there may be only a few with the knowledge required to make an informed evaluation.

    The ‘lay summary’ that appears on virtually all grant applications may summarise the work in broadly understandable terms, but there is still a need to assess the feasibility of the experiments, the track record of the investigator, etc., and this requires knowledge and experience.

  • Margaret says:

    @TheModernUbermensch

    Wasn’t it the case that Mendel’s work on peas lay forgotten, insignificant, until one day…

    I spent years in a research lab because I wanted to know the answer to ‘What does that do? How does that work, then? How the hell am I supposed to figure that out?’

    If I’d wanted to be a ‘predictor’ I’d have got myself a job at the local rag as the Horoscope writer or a fiction writer.

    If I’d wanted to tell porkies to convince people to go along with me, I’d have become a self-employed scam artist and made a mint.

    Was Mendel a ‘weak’ scientist because he failed to predict what impact his work would have 10, 20 or even 200 years later?

    The whole point about research is you don’t know. If we did, we’d do something else.

    At this point my strongest skill in predictions is this: Mum will call tomorrow night, telling me she either
    a) burnt a dinner again in the past week (assuming no broken bones etc)
    b) my brother still hasn’t phoned her but never mind, his wife did
    or
    c) both

    I predict she’ll do it but I love her so I let her tell me all about it rather than doing something else.

    Predictions for anything else – best whistle while you watch to see what happens.

  • martinbudden says:

    Here are a couple of timelines that show why focusing on the short-term benefits of science is, well, short-sighted:

    For microelectronics we have had:
    1900 quantum hypothesis by Max Planck
    1925 transistor patented
    1947 transistor first demonstrated
    1956 formation of Fairchild Semiconductor
    1959 integrated circuit first demonstrated
    1971 first microprocessor
    1981 IBM PC introduced

    For biotechnology we have had:
    1865 publication of “Experiments on Plant Hybridization” by Gregor Mendel
    1930s convergence of biochemistry and genetics into molecular biology
    1953 discovery of the structure of DNA
    1961 cracking of the genetic code
    1972 invention of recombinant DNA
    1976 Formation of Genentech
    1982 synthetic human insulin approved by FDA

  • @kausikdatta @stephenemoss – every day juries of the ‘uninformed public’ make complex decisions in courtrooms and this is seen as one of the key bases of our ancient freedoms. The issue of whether they are technically competent or not simply does not arise – it is seen as right and proper.

    The ‘uninformed public’ pay for the research. I don’t see any problem at all with allowing a small proportion of research funding (that’s what I suggested) to be allocated on the basis of decisions made by a randomly selected jury. I certainly don’t see that it would automatically deliver ‘worse’ science than the ‘higher form of nonsense’ method.

  • @martinbudden – your neat timelines where one discovery leads to another leads to another are worthy of a primary school textbook but 60 years of empirical research into the complex relationships between scientific research, technological development and industrial innovation clearly show that it rarely happens like that.

    Attempts to trace back the scientific antecedents of innovation are inevitably arbitrary and incomplete. We often can’t know how crucial a particular discovery was (we often can’t even establish whether person x was aware of prior discovery y when they did their work).

    All the evidence shows that most technology is not directly dependent on scientific research; it is developed out of previous technology. Scientific research is certainly crucial, but as a kind of background underpinning and as the principal means by which ‘research capacity’ is built and maintained.

  • martinbudden says:

    @kieron.flanagan – I think you’ve misunderstood my post completely. It was not to show how one discovery neatly leads to another, rather it was to show precisely the opposite, namely that the path taken by science is unpredictable.

    Who would have anticipated that research into black body radiation would even indirectly be related to computing?

    Who would have thought that Mendel’s research (which funnily enough was funded by the food and drink “industry” of the time) would even indirectly benefit diabetics?

  • davenewman says:

    @David Colquhoun – we know what often produces the types of revolutionary science you mention: people with skills in one discipline working on problems from another discipline. The classic example is the founding of psychology in Germany, when neurophysiologists took chairs in philosophy, and started working on problems at the boundaries of two disciplines.

    More generally, those who try to solve interesting problems often come up with innovative ideas as well as solutions to the problem. So the question should neither be “can you foretell the impact?” nor “is it something scientists in your narrow field approve of?”, but is it an interesting and important problem for someone somewhere?

  • @TheModernUbermensch
    I do hope you weren’t serious about hiring people to ghostwrite grant applications. That was satire, right?

    @davenewman
    In the examples that I produced of revolutionary science, not a single one depended on interdisciplinary collaboration. Sometimes it is sensible to work with people in other disciplines (I’ve collaborated with mathematicians all my life). But for Research Councils, “interdisciplinary” has become yet another buzzword to be enforced on researchers. It is for scientists to decide when to collaborate, not Research Council bureaucrats.

  • @kieron.flanagan

    Having juries made up of non-specialists might seem to you to be “right and proper”, but that’s probably more a reflection of you being used to that arrangement than fact.

    I live in England but am not native to the country. I find the idea of juries quite bizarre and I would rather be judged by professionals than some random strangers. I have debated this often with the natives, and my feeling is that many of them are in favour of the system simply because it is the current one. I have never seen or heard any proper evidence that it produces better results than jury-less systems.

    My uninformed opinion is that there are probably cases where juries can do the job and other cases where the nature of the evidence would be too much for a jury to handle. It does, however, seem to be a very expensive way to do things for no obvious benefit. I fear that would also be true if we applied the same system to grant applications.

  • @eiduralfredsson
    Thanks for your comment, but perhaps we should stay on the topic of how best to fund good science.

  • RFC says:

    I remember when this issue of impact and beneficiaries first reared its head back in 2006 – the BBSRC were amongst the first to embrace it in their online application version. In the online version, you have to write a paragraph describing the likely beneficiaries of the work proposed.

    I gave this some considerable thought for a project we were submitting and eventually filled in the box truthfully “None obvious. This is a basic research proposal”.

    Within 5 minutes of me submitting electronically I had the research office on the phone, panicked and explaining I couldn’t say that. So despite it being the truth, I ended up writing some exquisite BS about potential rational drug target identification down the line.

    The project in that form didn’t get funded, but it was eventually funded by the Wellcome Trust, essentially doing the same research. Out of interest I looked at what did get funded in that round and at what the likely beneficiaries were in each successful grant. Every single explanation was pure, unadulterated BS. I could see it, the people writing the grants must have known it, the scientists doing the peer review would have known it and the awarding committee would have certainly known it. So why bother? Four or five layers of educated, intelligent people all paying lip service and pretence to an issue everyone knows is a game. Presumably to tick some box in the BBSRC research plan that involves ‘public engagement’?

    But this does link back to David’s issue of over-hyping the expectations and outcomes of basic research. If I’d read my beneficiary proposal as a lay member of the public I would have thought my project was going to revolutionise the world. Who knows, maybe it will in 50 years? The point is, nobody can just sell their project as a really interesting, good quality research project, because under this type of beneficiary and impact agenda, the response to interesting and good quality work is “so what?”

    If you build a wall of knowledge, which brick is the most important? Which brick has the most beneficiaries? Which brick is likely to have a key supporting role? Can anybody tell me?

  • @RFC
    Thanks. I think that what you say is very much what every working scientist says. It is deeply offensive, and quite contrary to the ethos of science, to be compelled to say things that are made-up nonsense.

  • @kieron.flanagan
    Anyone following Twitter on the evening of Wednesday 8th December (or the Twitterfeed in the left sidebar) might be quite surprised to find you, as a science administrator, apparently being an apologist for postmodernism, and critical of Sokal. It certainly shook me a bit.

  • Dr Aust says:

    I guess Kieron F would be by eventually to say this himself, but he is not a science administrator (“The Modern Ubermensch” is one, I think). Kieron is an academic in what I guess one might call “science policy studies”. Though I too am a little surprised by the strength of his defence of postmodernism.

  • @kieron.flanagan, @DrAust
    Thanks very much for your correction. I’d missed the distinction between being a science administrator and being an expert in science administration. For that, I apologise.

    I do find it odd, though, that science administration has become an academic subject and, it seems, one that you can become an expert in without ever having done science in any depth.

    In fact perhaps that is the problem.

  • @DrAust @David – Sorry I didn’t reply to this earlier. It was only today’s Twitter discussions that prompted me to come back here, to find this discussion of my expertise/lack of expertise, and an attack on my alleged defence of ‘postmodernism’.

    Taking the latter first, I do not defend all cultural or social studies any more than I think you would defend all work that has ever been published in scientific journals.

    What I actually said was that scattergun attacks on wide ranges of social and cultural studies made by the likes of Sokal and Bricmont betray clear anti-intellectual, anti-Enlightenment prejudices. That should be self-evident.

    It is suggested above that one cannot ‘know’ anything about science without being a ‘scientist’. This is effectively to say that science itself is not amenable to systematic analysis.

    My knowledge of ‘science’ builds on sixty years of systematic, published, peer reviewed research conducted by economists, political scientists, sociologists, philosophers and – yes – scientists. In principle that knowledge should be more ‘reliable’ than the partial, personal perspective of any individual scientist. Science policy based on partial, personal perspectives, anecdotal evidence and blind faith in convenient myths would be what you would call ‘woo’, would it not?

    You may believe that researchers like me are not qualified to talk about science – but wanting to believe something does not make it true.

  • @kieron.flanagan
    You say
    “scattergun attacks on wide ranges of social and cultural studies made by the likes of Sokal and Bricmont betray clear anti-intellectual, anti-Enlightenment prejudices.”

    That makes me wonder if you have ever read the book carefully. If you have, and your conclusion is that stated above, I could only infer that you have no real understanding of science whatsoever.

  • aka_kat says:

    Excellent piece! It has been argued that one reason why you need to earmark funding for some research areas – as the research councils increasingly do for current fads – is that the research in these areas is too poor to compete in the responsive mode grant process. That is, you draw money away from excellent research to fund the mediocre but fashionable. Systems biology has already been mentioned as an example; from the last decade we could also mention proteomics and structural genomics – both have made valuable contributions to science, but not proportional to the amount of funding that has been thrown at them by the research councils. Money which could have been spent on responsive mode grants, decided by peer review, aiming to fund the best science around. There was an article in Nature News Features recently on peer review of grant applications – I hope it is not behind a pay wall: http://www.nature.com/news/2010/100922/full/467383a.html
    One of the points made is that peer review works well when about a third of grants are funded. As already stated in the comments here, the current funding rate is much, much lower, and then it really does become a lottery as to who gets funding. The conclusion to be drawn should be that the amount of funding going to responsive mode grants must be increased, at the expense of funding fashionable, so-called high-impact but quite average science.

  • @aka_kat
    Thanks for getting the comments back to point.

    I’d missed the Nature News Feature. It isn’t behind a paywall, and it is very much to the point. When funding rates for responsive mode grants fall to 7%, the system is as good as dead. You’d have to write 14 applications to get a single grant. You might just as well give up science. Some of the best people undoubtedly will prefer to leave rather than join the bullshit brigade.
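
    To spell out the arithmetic behind that figure (a rough sketch, on the simplifying assumption that each submission is an independent trial with the same 7 per cent chance of success), the expected number of applications per funded grant is simply the reciprocal of the success rate:

    \[ \mathrm{E}[\text{applications per grant}] = \frac{1}{p} = \frac{1}{0.07} \approx 14 \]

    In reality the success probability varies between applicants and between rounds, so 14 is only indicative.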

  • Dr Aust says:

    Ah, but never mind, the elite will all be working in the new UKCMRI, or owning their share of a multi-site EU Framework 16 Super-programme Grant, or something.

    *sigh*

    Perhaps at this point I should plug my discourse on Funding the Elite is Not the Problem again.

  • Simon says:

    Just stumbled on this thread, thought I’d share my thoughts. One thing that hasn’t been mentioned: the Pathways to Impact case is one way in which unscrupulous scientists can knock down a perceived competitor, precisely because it is so vague and ill-defined. This can happen both at the peer review stage and at the panel stage. I’ve recently had a grant proposal rejected at an RC panel stage purely on the basis of my Pathways to Impact document not being “acceptable”; this is despite the referees judging the proposed work as having strong scientific merit and promise – this did not seem to matter at all. It is particularly sad since the majority of people sitting on these panels are working scientists. If a proposal is judged as being scientifically weak or uninteresting then this is something I can accept, or at least argue against in a meaningful way. With criticism of impact cases there isn’t a meaningful comeback – it is heaping bullshit upon bullshit to argue against a different kind of bullshit. Rules of the game, of course, but this is the consequence in practice.

  • @Simon

    Thanks for that comment. I certainly find it worrying if your grant was rejected solely on the basis of the impact section. When I was reviewing grants I’d barely look at the impact section, on the grounds that it would just consist of a load of made-up guff.

  • Simon says:

    According to EPSRC, it is “an essential component of a research proposal and a condition of funding”, and “If a proposal is ranked high enough to be funded but does not have an acceptable Pathways to Impact it will be returned”:

    https://www.epsrc.ac.uk/funding/howtoapply/preparing/impactguidance/

    I shudder to think what some of my colleagues in say Mathematical Logic make of this.
