[Image: A previously funded large project.]
Amos articulated what many of us have been feeling for some time. In recent years there has been an inexorable move by the Research Councils and the Wellcome Trust to concentrate funding on longer, larger grants. The logic is impeccable: in order to answer the big questions that will make a big difference to society, we need big teams of big hitters doing big science. Or big social science. Or big humanities. Moreover, big projects don't require proportionately more administration, and by cutting back on all those pesky small grants, funders will save a lot of time and money in selecting and monitoring them.
What has particularly rankled in this is a sense that the funders are shooting themselves in the foot. I always believed that the funders got many, many more bangs for their bucks by funding small grants. Principal investigators tended to make sure that every pound, every penny counted. Better still, these small grants offered early career researchers a first step on the ladder of research funding. Sure, some funders do offer dedicated routes for ECRs, but these still tend to be substantial, and the success rate as a result is negligible. Small grants with a high success rate allowed funders to light the blue touch paper and watch as the subsequent careers fizzed and whirred, glittering in a thousand different directions. To many, this initial light has been extinguished.
However, up until now this view was, for me, nothing more than a hunch. This changed last week when I stumbled across 'Big Science vs Little Science: How Scientific Impact Scales with Funding' via Twitter (thank you, Open Access). Its authors, Fortin and Currie, looked at whether it was 'more effective to give large grants to a few elite researchers, or small grants to many researchers.'
Focusing on the Natural Sciences and Engineering Research Council of Canada and the US National Science Foundation, Fortin and Currie start by asking what the goal of the funder is. Is it to maximise major discoveries (sometimes criticised as 'photo opportunity science'), or to maximise the 'summed impact' of the community? The authors looked at four measures of 'impact': total number of papers published, number of citations, number of citations for the most highly cited paper, and the number of very highly cited papers.
Interestingly, they found either no correlation or even an inverse correlation between size of grant and impact: 'impact is a decelerating function of grant size'. They continue: 'our results are inconsistent with the hypothesis that concentrating research funds on 'elite' researchers in the name of 'excellence' increases total impact of the scientific community. Quite the opposite: impact per dollar remains constant or decreases with grant size. Highly cited studies are no more likely to result from large grants than from spreading the same funds among multiple researchers.'
Furthermore, 'if maximising the total impact of the entire pool of grantees is the goal, then the 'few big' strategy would be less effective than the 'many small' strategy...funding more scientists increases the diversity of fields of research, and the range of opportunities available to students.'
The time has come to reconsider this passion for large 'ribbon cutting' projects. Whilst funding many small projects might mean that you can't blow your trumpet about unpicking the human genome, it will have a huge effect on a wide, diverse range of research and careers, which might actually have a more positive effect on the health of your discipline - and consequently your profile as a funder - in the long run. Funders, think again.