Wednesday, 27 September 2017

Goldilocks Grants

The first, second and third reviewers disagree about the merits of Goldilocks's research methodology
Earlier this year Michael Lauer, the National Institutes of Health’s (NIH) Deputy Director for Extramural Research, wrote an interesting blog post examining the productivity of its funding. He looked for a correlation between the amount of funding a project had received and the number of citations it got. This he described as “citations per dollar”.

Such a stark metric had many rolling their eyes, and the comments that followed questioned the underlying supposition. “Numbers of citations rarely correlate with greatest discoveries”, wrote one commentator. “Citation numbers are strongly biased towards fast-moving areas of inquiry”, said another. A third noted that “the number of citations a paper receives is an extremely error-prone measure of scientific merit.”

Nevertheless, I think it’s worth exploring Lauer’s work further for two reasons. First, because it is entirely appropriate for funders to try to assess the most effective use of their limited resources. Second, because the conclusion he hints at runs counter to the current drive by the majority of funders to back larger and longer projects.

Lauer’s analysis suggested that there was some correlation between the amount of funding a project received and the resultant citations, up to a point. For non-human NIH-funded studies, this was around the $1-million mark. After that it tailed off markedly. Given this, and the fact that it was almost impossible to predict where breakthroughs in science were going to happen, Lauer concluded that “the best way to maximise the chance of such extreme transformative discoveries is…to do all we can to fund as many scientists as possible.”

This is in line with what many have been suggesting for some time. In 2013, Fortin and Currie looked at grants given by the Natural Sciences and Engineering Research Council of Canada and the US National Science Foundation. They found either no correlation or even an inverse correlation between size of grant and impact: “Impact is a decelerating function of grant size.”
They continue: “Our results are inconsistent with the hypothesis that concentrating research funds on ‘elite’ researchers in the name of ‘excellence’ increases total impact of the scientific community. Quite the opposite: impact per dollar remains constant or decreases with grant size. Highly cited studies are no more likely to result from large grants than from spreading the same funds among multiple researchers.”
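The arithmetic behind a “decelerating function” is worth making concrete. As an illustrative sketch only (the square-root relationship below is an assumption chosen for demonstration, not Fortin and Currie's fitted model), if impact grew with the square root of grant size, then quadrupling a grant would only double its impact, and impact per dollar would halve:

```python
# Illustrative assumption: impact grows with the square root of grant size.
# This is a stand-in decelerating function, not a model from the cited study.
def impact(grant_dollars: float) -> float:
    return grant_dollars ** 0.5

for grant in (250_000, 1_000_000, 4_000_000):
    per_dollar = impact(grant) / grant
    print(f"${grant:>9,}: impact {impact(grant):7.1f}, "
          f"impact per dollar {per_dollar:.6f}")
```

Under this toy assumption, a $4-million grant yields twice the impact of a $1-million grant but only half the impact per dollar, which is the shape of the argument for spreading the same funds among more researchers.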

Jon Lorsch, director of the US National Institute of General Medical Sciences, echoed this two years later. “NIGMS conducted an analysis of the productivity and scientific impact of the research the institute funds as a function of each NIGMS investigator’s total direct-cost support from NIH. This study showed that, on average, these metrics increase only shallowly with funding above a moderate level (approximately $250,000–300,000) and then actually decrease above about $750,000.”

He concluded: “These considerations suggest that we should be very selective in allowing principal investigators to accumulate high funding levels and that, in general, funding more investigators at a moderate level, rather than a few at a high level, will yield the best payoffs.”

The recognition that big science isn’t necessarily good—or efficient—science is nothing new. More than 30 years ago Bruce Alberts wrote that “several factors combine to make large research groups inefficient. As the size of a research group increases, more and more of its leader’s time has to be spent on such nonintellectual endeavors as helping with job applications, finding and accounting for funds, and other organisational matters. Less and less time is left for thinking about science, let alone keeping up with a voluminous research literature.”

Why, then, are so many funders giving out larger grants? Bloch and Sorensen (2015) “identify three interrelated objectives behind increases in funding size: the creation of a critical mass of research competences; linking research to social and economic impacts; and concentrating resources towards excellence”.

In other words, most funders believe that concentrating funding in the hands of the few creates economies of scale and provides the necessary resources so that research superstars can crack open the mysteries of the universe that will, in turn, help society. It’s a beguiling notion, and intuitively it feels right. Surely, by giving people with a proven track record more resources, they will have the freedom to really push forward the boundaries of science? Right?

Well, not really. Not only is there the issue of inefficiency bloat (to paraphrase Alberts, above), but putting all your eggs in a very limited number of baskets doesn’t allow for the breakthrough science that most funders are seeking.

“It is impossible to know where or when the next big advances will arise,” writes Lorsch, “and history tells us that they frequently spring from unexpected sources. It is also impossible to know what threads of foundational knowledge will be woven together to produce a new breakthrough. Supporting a wide variety of lines of inquiry will improve the chances of important discoveries being made.”

This is made worse by the fact that large, multi-million pound projects may become ‘too big to fail’. There is too much at stake, and subsequently PIs become less willing to take risks. Instead, they concentrate on ‘small steps’ research, where the outcome is more predictable.

More insidiously still, concentrating funding among the few does not allow the majority to even get on the grant-winning ladder. Many younger researchers are becoming increasingly frustrated and disillusioned by what they feel are lottery-level success rates. A whole generation of future Nobel prize winners is being put off pursuing careers in research, as the message seems to be that science is only for the elite few.

Interestingly, concentrating funding on the greybeards goes counter to what history suggests is a scientist’s most productive period. Andy Harris, writing in The New York Times, noted that a study of 2,000 twentieth-century Nobel prize winners and other notable scientists found that the age at which most had their breakthrough ideas was between 34 and 39. In stark contrast, the average age of first-time recipients of NIH R01 grants is 42, and the average for all recipients is 52. Indeed, more over-65-year-olds get R01 grants than under-35s.

As well as having an adverse effect on younger scientists, concentrating funding on larger, longer grants has an adverse effect on female scientists. A study quoted by Bloch and Sorensen showed that, in the Scandinavian countries, the percentage of centre directors who were women was some 6 to 12 percentage points lower than women’s share of the professoriate overall.

While the wealth of evidence against ‘centre-isation’ seems compelling, one shouldn’t move too far in the other direction. There is an equal danger of salami-slicing the available funding into irrelevance and ineffectiveness. Funders should aim for a Goldilocks grant: not too big, not too small.
A grant needs to be large enough to enable the science to happen, but small enough that the overall pot can be shared more widely and used more efficiently, allowing diversity to thrive. Exactly where this Goldilocks zone is will depend on the discipline, but all funders should stop the headlong rush for centres, and instead fund the small-scale projects from which the disruptive science—and breakthroughs—are more likely to come.

This article first appeared in Funding Insight in September 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
