EARMA2013 was metrics. Maybe that's an inevitable rule of the sector: whenever more than two research managers meet in one place they shall talk about metrics.
That's not to say it's not interesting. This time we had two sessions, one from Prof John Green of Cambridge talking about Snowball Metrics, and one from Dan Nordquist of Washington State University and Martin Kirk of the University of British Columbia talking about how they used Thomson Reuters and SciVal respectively.
Metrics, for anyone who hasn't met with a research manager recently, are the figures used to assess the performance of your university. They are potentially very useful. They can help you to back up or dispel hunches as to how you're doing. They can help to identify what you should or should not be prioritising. In a word, they can help you strategise.
This was certainly the claim of all four speakers across the two sessions. Snowball Metrics had been developed by eight UK Russell Group universities which recognised that the figures they were currently using were not robust: they relied on shaky data that had been harvested, generally, by third parties trying to sell their league table software. What was needed was an agreed set of parameters, and an agreement between all those involved to share the results.
Green, in his usual bluff and robust style, said that Snowball had achieved everything they had hoped for. As ever, he gave the impression that anyone who didn't agree with him was, frankly, deluded. Like a cross between Robert Winston and a terrier, he bounded around the conference hall, prodding us all with a metaphorical finger, and asking us to rate his product on a scale of 1-10, with 1 being 'quite brilliant' and 10 being 'brain-meltingly brilliant'.
Even allowing for ego, what he was presenting did appear to be very good, and a sensible step forward. There is too much 'hunchwork' in the sector, too many fingers in the air, too few hard comparables. He suggested that the next step for Snowball would be to extend its reach internationally. But what, I wondered, about the rest of the UK sector?
The sense I got was that they really weren't worth dealing with. The eight partners already accounted for some 45 per cent of research funding in the UK, and roughly the same percentage of highly rated outputs. Why should they bother with the 150 or so other UK institutions when the gentlemen from Harvard, the Sorbonne and Max Planck were waiting in the ante room?
Dan and Martin, by contrast, were a little more ambivalent. Whilst both were glowing about the potential of their off-the-shelf products, they questioned the price that the suppliers were charging for them, and I couldn't help but wonder, post-Green, whether the underlying data on which they relied were robust enough.
Still, what they did show were graphics that demonstrated clearly and quickly what the relative strengths of their institutions were. Using a series of blobs within a circle, which would have brought a tear to the eye of Damien Hirst, they showed by their relative position and size the intensity and interdisciplinarity of the university's research portfolio.
So how did I feel after a couple of days in the metrics matrix? I felt that playing with metrics will be increasingly important for the sector, and that soon we'll all be studying complicated, elaborate graphs and scatter charts, if we're not doing so already. We'll be setting our course to benefit from the currents and eddies within our territorial waters, as well as avoiding the Scylla and Charybdis. In the current climate, with a shrinking funding pot and increasing concentration, we've all got to be more savvy about our strengths, and any tool that can genuinely help with that should be embraced.
Now where's John Green's phone number..?