'We've got a record of the number of bodies. Who cares about the original bodies?'
On many levels this makes sense, and RCUK aren't shy of crowing about the benefits:
- It saves time. RCUK suggest the saving could be as much as 80% compared with the previous system.
- It can be used at any time during or after the grant.
- Multiple users can view the reports.
And so on. All very good. But what struck me the other day is that the new system seems to miss out a crucial element of the End of Award Reports: monitoring and assessment. Whilst RCUK annually monitors compliance with ROS, there doesn't appear to be anyone checking the quality of the outputs, or the success of the project measured against its original aims and objectives.
Or am I missing something?
This seems to be a considerable oversight. When I worked at the AHRC, getting academic reviewers to assess the success of a project was a laborious but crucial element of the grant cycle. If a project was deemed to have failed to produce the intended goods, or had veered too far from its original path, it could be labelled as 'unsatisfactory', and would have a limited time to come good. If it still didn't, the investigators would be barred from applying to the Council again.
Equally important, if the project had been seen to succeed beyond anyone's wildest dreams, it could be labelled as 'outstanding', and the investigators could bask in the glow of approbation.
Okay, so the old system may have smacked of the schoolroom, but I think it is important that investigators are held to account - for better or worse. The new system just seems to be a bit, well, quantitative. As long as you give them a title or two, it doesn't really matter beyond that. The spaces are filled, the box is ticked, the numbers are counted.
As I say, I could be wide of the mark on this, and I'd welcome any thoughts from those in Death Star House as to the level of assessment that goes on post-project.