Alex Hulkes settles down for the winter
There’s something wonderfully autumnal about Alex Hulkes, the Strategic Lead for Insights at the ESRC. He has a comforting fireside tone, somewhere between Mr Kipling and JR Hartley. You can imagine sitting with him as the logs crackle, gazing, entranced, as he strokes his mutton chops, flicks crumbs from his smoking jacket, and talks about his exceedingly good cakes or fly-fishing escapades.
His latest report is littered with sublimely arcane phraseology: ‘laudable curiosity,’ ‘one may conclude,’ ‘it is incumbent upon [us],’ ‘let us return briefly to the question posed at the beginning,’ and my favourite: ‘[it] pulls a thread that is weaved discretely into most of the analyses presented thus far.’
Hulkes is a national treasure, and not just for his cakes. He opens up what he himself admits is ‘the black box...of [the] Research Councils’ to reveal ‘the wiring [that] is hidden.’ As with modern cars, most of us are happy that they just work, but Hulkes wants to show you the wonder of the internal combustion engine, and revels in the interlocking genius of the carburettor, distributor, spark plugs and camshaft.
I’ve discussed his reports before, and in particular the ‘conversion rate’ between those applications that are judged fundable and those that are actually funded. This report touches on that again, but covers much more besides. It contextualises the outcomes of applications, explaining the distribution, value and success rates of applications and awards, as well as the age, ethnicity and gender of investigators. It’s not for the faint-hearted. It’s 169 pages long. It talks about Gini coefficients.
However, the picture it presents is fascinating. In the six years he focusses on (2011-2017) the ESRC received £3.4bn of requests, but could only give out £880m of grants and fellowships. The applications are quite broadly - and thinly - distributed, with ‘the median number of applications per year...usually being two or three, and the mode (the most commonly occurring number of applications) being one in all years.’
Looking on an individual level, ESRC applicants are - perhaps unsurprisingly - very different beasts from their EPSRC counterparts. ‘Multiple applications from a given researcher in one year are rare, [whereas for] EPSRC...multiple applications in a given year are markedly more common...applicants apply [to the ESRC] once every six or seven years on average.’
Once every six or seven years! Social scientists do like to save themselves.
Over the last six years, the size of applications and awards has increased. Half of all ESRC funding now goes to grants worth £1m or more. Hulkes breaks down the applications sought into bands, and notes that ‘the most significant change...has been the dramatic reduction in [those worth] £50,000 to £99,999. This was primarily the result of a decision to discontinue several small grant schemes which were open before mid-2011. Over the same period larger grants...have become much more common. About 25% of all applications now request at least £500,000.’ Thus, the average grant is now worth twice what it was in 2011, and responsive-mode grants are nearly 50% larger.
This has, inevitably, had a knock-on effect on success rates. The ESRC budget hasn’t increased significantly (and between 2010 and 2015 not at all), but requests are larger. At the same time the quality of applications has improved, meaning that many more applications are graded as ‘fundable’, but fewer of these actually get funded, as I discussed before.
The age of the applicant seems to make a slight difference to their chances of success. As with its sister research councils the BBSRC, MRC and STFC, success rates tend to increase with age, and, as at the AHRC, success for applicants in their twenties appears to be ‘meaningfully lower’.
For me, the report came into its own when it highlighted the considerable concentration of applications and grants. I mentioned before that the distribution of applications was quite broad. However, the majority of funding is clustered toward the top end. ‘Each year about half the funding decisions made relate to a group of around 20 ROs [research organisations]...Only a few ROs are consistently in the top 10 applicant organisations, while the top 30 sees a great deal of flux.’
So a small minority of organisations apply for - and get - a majority of the funding. Hulkes suggests that ‘in any one year the top 10% (by value) of ROs applying...will request around 40% of the funding between them. 80% of funding will be requested by just 40% of ROs and the top 50% of applicant ROs will request up to 90% of the total.’
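This is where the report’s Gini coefficients come in: they are simply a way of summarising this sort of concentration in a single number. As a rough sketch of the idea, using made-up figures rather than anything from the report, here is how you might compute a Gini coefficient and a ‘top 10% share’ for a set of requested amounts:

```python
# Minimal sketch (illustrative figures, not ESRC data): summarising
# funding concentration with a Gini coefficient and a top-10% share.

def gini(values):
    """Gini coefficient: 0 = perfectly even, 1 = everything held by one RO."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Standard discrete formula: rank-weighted sum of the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def top_share(values, fraction=0.1):
    """Share of the total requested by the top `fraction` of ROs by value."""
    xs = sorted(values, reverse=True)
    k = max(1, round(len(xs) * fraction))
    return sum(xs[:k]) / sum(xs)

# Hypothetical requested amounts (in £m) for ten ROs.
requests = [120, 60, 40, 30, 20, 15, 10, 8, 5, 2]
print(f"Gini: {gini(requests):.2f}")            # higher = more concentrated
print(f"Top 10% share: {top_share(requests):.0%}")
```

The higher the Gini coefficient, the more the funding requested (or awarded) is clustered in a handful of organisations, which is exactly the pattern Hulkes describes.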
What Hulkes calls the ‘50% list’ has, over the last six years, decreased by around one RO per year. Yes, things are slowly becoming more concentrated, although Hulkes is quick to say that this is ‘an observed fact rather than a meaningful trend’.
Nevertheless, it is noteworthy. Perhaps more interesting is the further analysis Hulkes does of the distribution between the ‘golden triangle’ (i.e. Oxbridge and the large London universities), the 24 members of the Russell Group, and everyone else. The golden triangle universities make up 5% of the 121 organisations that applied to the ESRC; the Russell Group 20%. And yet both groups punch well above their weight: the golden triangle gets a third of all ESRC funding, the rest of the Russell Group another third, and the remaining universities share out what’s left.
Even within this there is considerable concentration. ‘The organisation with the largest portfolio was awarded about ten times the funding of the 20th largest recipient, and more than 1000 times the funding of the RO with the smallest ESRC portfolio,’ suggests the report.
But is this concentration, this inequality, due to bias? On balance I think not. It’s a matter of logistics and ambition. As Hulkes says, there are three main reasons for this: larger organisations submit more proposals, they ask for more money, and they tend to have a higher success rate due to ‘higher quality proposals.’
I think this is fair enough, given the current parameters within which the ESRC operates. It is, in theory, blind to the host institution, and wishes only to fund the most excellent (and impactful) research, wherever it is found. However, I think it’s reasonable to suggest that there is an unavoidable structural bias towards larger, more research-intensive universities. They have the resources, the expectations and the expertise to ‘play the system’ and benefit from it.
In addition, I think there is a prejudice, conscious or otherwise, amongst reviewers faced with an application from a little-known or less research-intensive university. The applicant has to work harder to convince the reviewer that their research is plausible, that their methodology is viable, and that their support framework is appropriate for the research to succeed.
Does the ESRC need to address this concentration, or is it a natural reflection of the imbalance of the UK HE landscape? It’s certainly the latter, and I don’t think there’s the appetite to really tackle the concentration at a time when there is so much uncertainty around the future of the research councils, and UK funding as a whole.
It’s understandable, but worrying. It will mean that the slow ticking of the ever-shrinking ‘50% list’ will continue to be ‘an observed fact’, and more and more funding will go to fewer and fewer institutions. It will be up to Mark Walport and the UKRI Board to decide whether this manifestation of the Matthew effect is an acceptable price to pay for the continuing pursuit of chimerical ‘excellence’, or whether the time has come to think more seriously about sustainability and capacity.