Tuesday, 24 June 2014

Confusion, Damned Confusion and ESRC Statistics

[Image caption: In the heart of Death Star House, the Unifying Calendar of Everything]
I recently received a little gift from the Economic and Social Research Council’s chief executive, Paul Boyle.

“I am pleased to attach a report,” he wrote, “which summarises the general progress that has been made under the council’s demand management strategy, along with a digest of statistics for your own institution, which I hope you will find useful.”

For me, a data-fixated research manager, this was a very welcome present. Here, I hoped, was evidence of the value of our internal peer-review system. Here were hard facts. Here, I hoped, was something to crow about.

Unfortunately, the data presented by the ESRC obfuscated as much as they illuminated. Of course, that’s partly the Disraelian nature of statistics, but I don’t think the ESRC helped its own cause. For a start, the data covered a 14-month period, benchmarked against a similar period immediately before the introduction of the demand management policy.

Now call me old-fashioned, but I’m a 12-month kind of a guy. I’m not sure what Mayan-calendar-style time frame the ESRC is working to. I assume that there is some logic in play, and that perhaps it is down to the timing of the Grant Assessment Panel meetings. Or maybe it’s the cycles of the moon.

More puzzling still is how the ESRC has divided its figures. Rather than presenting the number of applications submitted and then the success rates and grades for that same cohort, it reports each measure as a separate snapshot of the 14-month period. The application count does not map on to the success rates and quality, and vice versa.

Thus, you could be told that your university submitted 50 applications in the 14-month window, but the success rates and quality are calculated on the outcomes of the three proposals that happen to have been decided in that period.

Wouldn’t it be better for the ESRC to say that 50 applications were submitted, of which only five were successful, rather than that 50 applications were submitted and, at around the same time, three unrelated ones were successful?
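To see why the distinction matters, here is a minimal sketch in Python, using made-up records and hypothetical field names rather than anything from the ESRC’s actual systems. It contrasts the snapshot rate (whatever decisions happen to fall in the window) with the cohort rate (the eventual fate of the applications submitted in the window):

```python
from datetime import date

# Made-up records with hypothetical field names -- an illustration of the
# two calculations, not the ESRC's data model.
applications = [
    # submitted before the window, decided inside it
    {"submitted": date(2011, 6, 1),  "decided": date(2012, 3, 15), "funded": True},
    # submitted and decided inside the window
    {"submitted": date(2012, 2, 1),  "decided": date(2012, 12, 1), "funded": True},
    # submitted inside the window, decided after it
    {"submitted": date(2012, 10, 1), "decided": date(2013, 6, 1),  "funded": False},
    # submitted inside the window, still awaiting a decision
    {"submitted": date(2013, 1, 15), "decided": None,              "funded": None},
]

window = (date(2012, 1, 1), date(2013, 2, 28))  # a 14-month window

def in_window(d):
    return d is not None and window[0] <= d <= window[1]

# Snapshot rate (what the report appears to give): outcomes decided in the
# window, regardless of when those applications were submitted.
decided_in_window = [a for a in applications if in_window(a["decided"])]
snapshot_rate = sum(a["funded"] for a in decided_in_window) / len(decided_in_window)

# Cohort rate (what this article asks for): the eventual fate of the
# applications submitted in the window, pending ones set aside.
cohort = [a for a in applications
          if in_window(a["submitted"]) and a["decided"] is not None]
cohort_rate = sum(a["funded"] for a in cohort) / len(cohort)

print(f"snapshot success rate: {snapshot_rate:.0%}")  # 100% on these records
print(f"cohort success rate:   {cohort_rate:.0%}")    # 50% on these records
```

On the same four records the two calculations give very different answers, which is precisely the mismatch described above: the snapshot counts a proposal submitted before the window ever opened, while ignoring one submitted inside it whose decision arrived late.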

Squaring the circle of these statistics might not be an easy task. The ESRC’s data system might not allow it, and the laborious process of peer review means that the outcomes of applications are a long time coming. However, the report appeared seven months after the end of the reporting period; I’m sure most outstanding applications had been mopped up by the time of its publication.

The ESRC should be commended for its commitment to transparency, but making the link between submissions, success rates and grades in its published statistics would allow us to map the effect of our internal peer review and (I hope) offer evidence of its efficacy to sceptical academics.

And, while you’re at it, drop the whole 14-month thing. It didn’t really work for the Mayans, and I don’t think it’s working for you.

This article first appeared in Funding Insight on 15 May 2014 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com
