Saturday, 15 September 2018

What You Need to Know: How other Countries Assess Research

Two ends of the South African research assessment scale: kak and lekker.
Photo by Greg Bakker on Unsplash
In the UK the REF has become an unavoidable feature of the research landscape. But how do other countries assess research? We look at one example - South Africa - and its very different solution to the same challenge.

Here in the UK the Research Excellence Framework is like the key to the kingdom of Middle Earth in Tolkien’s saga, the one ring that rules us all. I won't revisit the question of its efficacy—I've written about that in the past—but suffice to say that not all of us think the game is necessarily worth the candle.

South Africa has a much smaller higher education sector—just 26 universities compared to around 130 in the UK—and yet it has two systems of assessment.

Research outputs policy

The first is the Research Outputs Policy. This policy serves as a tool for the distribution of research subsidy to public higher-education institutions in South Africa. So far, so REF. However, the policy is more instrumental than that. As well as being used to determine the distribution of a ‘block’ grant, it also aims “to encourage research productivity by rewarding quality research output at public higher-education institutions”.

The system differs further in being both annual, and much simpler in operation. It uses a predetermined list of “approved scholarly journals”. If your article appeared in a journal on the list, then your university is credited with one ‘unit’. Books, however, are worth between one and 10 units depending on the length, in a wonderfully prescriptive tariff: a chapter is one unit, a short book of 60 to 90 pages is worth two units, and so on until you get to a 300-plus page book, which is worth 10 units. 

In addition, a university gets credit for the number of postgraduates who have completed their studies: half a unit for a taught master’s student, one unit per research master’s student, and three units for each PhD student.

Each year, a university has to submit a claim before 15 May listing all the students, and all of the books or outputs appearing in the approved journals. There is no limit to the number of outputs you can submit. While the claims for students and articles are fairly straightforward and transactional, those for books, book chapters and conference proceedings are assessed by a Research Output Evaluation Panel, to which you have to submit a hard copy of the book, together with a 500-word explanation of why it's worth consideration. 

The monetary worth of a unit varies depending on the annual budget the government has allocated for higher education, and on the number of outputs and students submitted that year. However, a unit is broadly worth around R120,000, or about £7,148.

This funding is usually split between the institution and the individual. For example, the University of Fort Hare keeps R100,000 centrally and passes R20,000 to the individual. If the academic wrote with co-authors, that R20,000 has to be split with them too. Some have suggested, therefore, that the system disincentivises collaboration, encourages quantity over quality and self-plagiarism, and actually penalises highly cited articles: “It has been argued that the current system may lead to ‘non-virtuous practices in research’, such as writing short ‘salami-sliced’ papers, targeting low-tier journals with high acceptance rates, and avoiding collaboration to increase subsidy” (Harley et al, 2016).
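To make the arithmetic concrete, here is a rough sketch of how a claim adds up, using the tariffs and figures quoted above. The function names are invented for illustration, the unit value of R120,000 is the article’s approximation (the real value varies with the annual budget and the size of the national claim), and the institution/author split is the Fort Hare example, not a universal rule:

```python
# Illustrative sketch of the Research Outputs Policy arithmetic,
# using the tariffs quoted in the article. The real unit value
# varies year to year with the national budget and total claim.

UNIT_VALUE_RAND = 120_000       # approximate worth of one unit

# Tariff per output type (units)
JOURNAL_ARTICLE = 1.0           # article in an approved journal
BOOK_CHAPTER = 1.0
SHORT_BOOK = 2.0                # 60-90 pages
LONG_BOOK = 10.0                # 300-plus pages

# Tariff per completed postgraduate (units)
TAUGHT_MASTERS = 0.5
RESEARCH_MASTERS = 1.0
PHD = 3.0

def claim_value(units: float) -> int:
    """Rand value of a claim of the given number of units."""
    return int(units * UNIT_VALUE_RAND)

def author_share(output_value: int, institution_keeps: int, n_authors: int) -> float:
    """Hypothetical per-author payout under a Fort Hare-style split:
    the institution retains a fixed amount and co-authors divide
    the remainder equally."""
    return (output_value - institution_keeps) / n_authors

# A department claiming 3 articles, 1 long book, 2 PhDs and 1 research master's:
units = 3 * JOURNAL_ARTICLE + LONG_BOOK + 2 * PHD + RESEARCH_MASTERS
print(claim_value(units))       # 20 units -> 2400000 (R2.4m)

# A single approved article written by 3 co-authors, with R100,000
# retained centrally: each author's share of the remaining R20,000.
print(round(author_share(claim_value(JOURNAL_ARTICLE), 100_000, 3), 2))
```

On the Fort Hare figures, three co-authors would each see roughly R6,667 from a R120,000 article, which is the arithmetic behind the collaboration disincentive quoted above.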

NRF rating system

As well as the Research Outputs Policy, individual academics can put themselves forward to be 'rated' by the main research funder in South Africa, the National Research Foundation. While there is little financial benefit in doing so (rated researchers get some R60,000 or £3,600), the majority of academics put themselves forward for judgement, despite the system being—as one of my visitors put it—“hectic, lengthy and cumbersome”. There’s a wonderful flow chart which does nothing to dispel this view. 

The original intention of the scheme was to allow for international benchmarking, despite the fact that no other country in the world uses the system. However, it is broadly based on a number of widely used metrics, such as citations, publications in high-impact journals, and impact activity. 

Once your academic life has been assessed, you're given one of the following ratings: 
  • A-rating Unequivocally acknowledged as a leading international researcher
  • B-rating An internationally acclaimed researcher 
  • C-rating An established researcher with a sustained recent record of research marked by coherence 
  • P-rating President’s awardee; a young researcher with a PhD; considered a future leader in the specific field 
  • Y-rating A young researcher with a PhD; expected to become an acknowledged researcher within five years 
  • L-rating A late-entrant into research, who is expected within five years to become an acknowledged researcher
Within the top three ratings there is a further numerical sub-classification, from 1 to 3. There had been some dissatisfaction with the C rating, which some saw as “stigmatised”, but a survey in 2013 found little substance to this claim. After all, even if you got a relatively lowly rating, you were still rated. Some don't even get that, and are then not allowed to apply again for a five-year period.

So, if the benchmarking is somewhat nominal and could be done by, say, citation count or other measures of esteem understood in other countries, what's the point in the rating system? Well, there’s some currency for career progression: if you’re an A-rated academic it puts you in a good position both within your university (for promotion) and in the wider market for recruitment. However, my visitors summed up the real worth very succinctly as “bragging rights”.

You can just imagine the scene at conferences and in senior common rooms, where details of your rating—or, let's be frank, your label—get dropped nonchalantly into the conversation. “Oh, you're B-2 rated? I'm A-1, but honestly, the system doesn't mean anything to me...”

Both systems have their critics, and while they are simpler and broadly more instrumental than the REF, they are still a considerable bureaucratic headache. And, ultimately, like other rankings and ratings, their overall benefit is a matter of opinion. Although they're as addictive to vice-chancellors as anything Breaking Bad’s Walt White might cook up, they come with considerable potential dangers and unintended consequences. Perhaps we should thank our lucky stars that we have just the one REF, and that, like some rare comet or meteor shower, it only passes through our atmosphere every few years.

A version of this article first appeared in Funding Insight in August 2017 and is reproduced with kind permission of Research Professional. For more articles like this, visit
