Showing posts with label internal peer review. Show all posts

Sunday, 31 December 2017

Fundermentals Top Ten of 2017

As we stumble towards the end of 2017, our heads spinning with fake news and fake news about fake news, it's time to look back and think: well, we've got Trump and May, but at least Fundermentals is still doing lookalikes.

Yes, readers, the world may be a bizarre place at the moment but there are certain things you can rely on. And so, as 2017 shudders to a halt, we take a look back at what's tickled your fancy in the year of covfefe.

Thursday, 5 January 2017

Reviewing Peer Review

Internal peer review has become increasingly prevalent in universities across the UK. The trend is the result of a push by the research councils for institutions to manage the quality of their applications better, but also of an implicit need to give academics as much advantage as possible in the increasingly competitive world of grant-winning.

In some ways, an internal peer-review system is a no-brainer. Showing your application to others for comment prior to submission is an obvious step, right? Well, yes and no.

Monday, 20 January 2014

Building an Internal Peer Review System

ARMA training: The Royal York Hotel
I took part in an ARMA event today. The focus was on how we, as research administrators and managers, could help to improve the quality of applications. I talked about the Grants Factory programme and Kent's internal peer review system. I shared the platform with Linsey Dickson from Heriot-Watt and Sue Coleman from Edinburgh, and it was interesting to hear what they did at their institutions, and how their approaches compared and contrasted with ours.

However, for me the most interesting part of the day was when the delegates talked amongst themselves about what an ideal peer review system would look like, and what challenges they would face in introducing one. Common themes emerged:
  • Timeliness is key. Applicants need feedback as early as possible. Of course, this isn't always possible: funder deadlines might be too tight, or applicants' collaborators may not give access to proposals early enough. Or (whisper it) the applicants themselves might just do things last minute.
  • Feedback has to be useful. Well, durr. Perhaps I need to rephrase: reviewers have to be forced to give feedback which can be used. It's not enough to say, 'fine', or 'needs more work.' What applicants need is detailed feedback: what needs changing, and how?
  • There needs to be more than one reviewer. This is something that Kent's system includes (our proposals have to be seen by a disciplinary reviewer and a funder reviewer), but some of the suggestions around the room included a user reviewer, or applicants having the opportunity to nominate their own, but having no guarantee that they'd be used. Additionally, reviewers should be compensated for their time, either through some form of (annual) fee, or perhaps factoring in their review work to the workload allocation model.
  • There needs to be buy-in. This is crucial: any new system has to have the backing of the PVC, the Heads of School and the Directors of Research. If it comes just from the centre, or is seen to be nothing more than a bureaucratic burden, then it's doomed. Moreover, it needs to be seen by the applicants as relevant and 'on their side'. Which brings me on to the final point:
  • It should be Faculty/School based. This surprised me, and I don't think I agree with it. However, I understand the point being made: it needs to have ownership by the academics. If it's university wide there's a danger that this will be lost. This may be true at bigger universities, but I feel that at Kent the academic base is small enough for this not to be a problem. In addition, I tried to incorporate some of the school systems that already existed before our peer review system was launched. Nevertheless, we need to be alert to this as a potential issue.
And it's that kind of thing which makes involvement in these kinds of events worthwhile. Although I'm at the front pontificating and pretending I have all the answers, I've got as much to learn as anyone.

Thursday, 12 September 2013

'The Aggregation of Marginal Gains'

Bradley Wiggins: what's easier, the Tour de France or winning a Research Council award?
When you think about the world of research funding, Bradley Wiggins might not be the first person that comes to mind. However, I couldn't help conjuring up Wiggo as I reviewed the first year of the Kent Peer Review (KPR) system.

Tuesday, 26 February 2013

AHRC - View from the Committee Room


I took part in the AHRC’s inaugural ‘Developing Better Applications’ event yesterday. It was a great event, and a good opportunity to chat to others doing a similar job to me in a wide range of different institutions.
Prof Roberta Mock gave a really useful talk based on her experience as a peer review panellist for the AHRC. Amongst the points she raised were:
  • The AHRC is not a cabal. It is not us and them. They rely on academics reviewing other academics. We are all part of it, and necessary for its successful running.
  • Applicants should not run before they can walk. Having a commensurate track record is crucial for getting an appropriate grant.
  • You should write with potential reviewers in mind, and imagine the 'nightmare critic'. Preempt their criticism, but don't be defensive. Reviewers smell fear.
  • Choose keywords wisely. These are used for choosing your reviewers, so do think about what specialism you want your reviewer to have.
  • Talk to colleagues, and share your application. Get some tough love. Better still, take it through internal peer review. It was always clear at panel which applications hadn't been.
  • Take time over preparing it. A good application takes at least two months (and at least 40 hours of intense writing) to draft.
  • The standard has changed over recent years. What worked five years ago will not work now. You have to keep getting better just to stand still.
  • Grammar, spelling and clear formatting all make a difference.
  • Use the sections, and write what they ask for in the appropriate sections. Doing otherwise makes you appear arrogant.
  • Don't over-inflate claims for impact. The panel is not necessarily looking for the most impactful project, just for reassurance that you've got an effective strategy in place.
  • Don't hide or disregard ethical elements of your research. If you blank this, or claim not to have any, the panellists will look all the harder for them.
  • It's all in the detail. Be specific about such issues as which journals you intend to publish in or which conferences you plan to attend. Give them a sense of how you arrived at your costs.
  • Have a realistic work plan that takes account of having a life beyond your research – i.e. factor in holidays, recruitment, potential illness, etc. You are not a robot, and neither is your RA.
  • Value for money is important. That's not just a case of offering the lowest price. Rather, it's asking for the money necessary to achieve the objectives and answer the research question, and offering research that has both reach and significance.
  • Include information about monitoring of the project. This is often left out, but is really important. It needs to be built into the workplan. It demonstrates institutional buy-in and shows that the stewardship of the award is taken seriously.
  • A 'super-critical' review is not the end of the world, but a convincing right to reply is crucial. You need to be very gracious, but be aware that the panel sees everything. You don't need to repeat praise from the reviewers, and don't use one reviewer's comments against another. The panel sees all the paperwork, and can see if any of the reviewers are out of line.
  • Yes, there is an element of luck. However, there is usually agreement about the first and second ranked applications. The grey area – and the luck – comes further down the list. So give yourself as much of a helping hand as possible. If there's an early career researcher card, play it. If there's a highlight notice you can latch on to, do it.

The questions that followed flushed out a final, interesting point: not all reviewers read applications in the same order. Roberta, for instance, flicks to the CV first. All the more reason to do as the Grants Factory suggests, and make sure that key messages are written through the application like words through Brighton rock. Wherever you bite into it, you can see what the research question is, why it’s important, why it offers value for money, and why you’re competent to handle the project.

The training event runs again in London on 8 March. I'm not sure if it's booked up, but get in touch with the AHRC if you want to go along.


Thursday, 26 January 2012

NERC Introduces Demand Management

NERC has become the third Research Council - after EPSRC and ESRC - to explicitly state that they want universities to 'self regulate' their applications. This announcement was triggered by worries about success rates in some of their schemes falling to 16%.

Whilst NERC already has in place some measures to 'manage demand' - eg limiting the number of applications an investigator can submit per call and restricting resubmissions - this hasn't stopped the success rates from sliding in recent years. They're hoping to reverse this by encouraging institutions to strip out applications which NERC would define as 'uncompetitive' (defined by them as scoring 6/10 or below at panel).

So what are they going to do?
  • firstly, ask institutions to nominate a point of contact for demand management;
  • secondly, in the summer, provide data on past performance to them. This will be repeated annually from autumn 2013. The data will apply to Urgency, Large and Standard Grants, but not Fellowships or outlines. It will include: success rates for all schemes; distribution of grades for funded and unfunded proposals by scheme; final moderated grades for all proposals from institution/department; relative performance of institution/department.
  • thirdly, from autumn 2012 NERC will (ahem) 'engage in a strategic dialogue' with institutions to provide information and advice in support of demand management, including setting targets for changes in submission behaviours. They can't meet with everyone in the first year, so those with the most applications, or with black marks in the NERC copy book, will be the first to get a visit from 'the management.'
So, at the moment, it looks to be relatively light touch: more ESRC than EPSRC. However, there will be the expectation that all research organisations will have their own internal quality control systems in place.

Who will be the next Research Council to fall in to line? Given the recent rumblings from Death Star House, my money's on the AHRC...

Tuesday, 25 October 2011

Autumn Newsletter Out Now

Right! Time to stop whatever you're doing, and make for the newsstands: the autumn edition of our newsletter, ResearchActive, is out.

It's a bumper edition this term, and includes:
  • details of the shiny new Peer Review system that was launched at the beginning of October;
  • a snapshot of a selection of recent awards;
  • information about the new Grants Factory programme;
  • all the funding gossip from Brussels and Swindon;
  • an overview of the research interests of the 25 new academics who have joined the University recently;
  • pass notes on the REF;
  • contact details for everyone in Research Services.
So check your pigeon holes, and drop me a line if that familiar yellow pamphlet isn't there; I'll send you a pdf of it by return.

Friday, 30 September 2011

Kent Peer Review Goes Live


The University will be introducing an internal peer review system from 1 October.

Kent Peer Review (KPR) comes in response to the stated intentions of the Research Councils to introduce ‘demand management’ systems. The EPSRC has already introduced a ‘blacklisting’ system for individuals; the BBSRC has introduced a grading system that may lead in time to a ‘triage of grant proposals based on referee scores, in order to eliminate lower-scoring applications before the committee meeting’; and the AHRC is suggesting ‘introducing sanctions...if self-management proves ineffective’. The ESRC has recently consulted on different options for limiting the numbers of proposals it receives, and has stated that
‘the Research Councils, where possible, will harmonise their demand management strategies. There is general agreement that HEIs should be encouraged to self regulate with a particular emphasis on structured peer review aimed at the submission of significantly fewer but better quality applications. This self regulation will be underpinned by the regular supply of performance data to institutions alongside better applicant guidance.’
The new system has been developed in consultation with Directors of Research over the past six months. It is intended to be supportive rather than oppressive, and is targeted at three specific types of applications:
  • Research Council applications;
  • First substantial external grant applications;
  • Large grant applications.
If your proposal fits one of these categories, it will be seen by two reviewers: one will have a knowledge of your discipline, one a knowledge of the funder. More detail of the new system is available on the Research Services website.

If you'd like to talk about KPR do get in touch with your Faculty Funding Officer, who will be able to answer any questions, and guide you through what you need to do.

Wednesday, 22 June 2011

ResearchActive Newsletter: Summer Edition Available

The summer edition of the Research Services Newsletter, ResearchActive, is now available. A hard copy has been sent to all staff, but if you've not received it do get in touch and we can send you an electronic version. It's a bumper six pager this term, and includes:
  • Details of the University's new Internal Peer Review system;
  • Information on the research interests of new staff;
  • Highlights of some recent awards;
  • REF update;
  • Changes to RCUK equipment costs;
  • Details of how we use the data that EPSRC sends us on 'blacklisting';
  • Notes from the Leverhulme visit, and Grants Factory events;
  • and, of course, some choice cuts from the Blog.
Get it while it's hot!

Thursday, 16 June 2011

Changes to the ESRC - Part 2

Last week I wrote about imminent changes to the ESRC funding schemes. Yesterday I went along to their regional event at Brighton, and got further clarification and detail about how they see these changes being implemented:
  • Risky research. As mentioned before, they will be introducing a new mechanism into their grants scheme for risky research, with a 'breakpoint' midway through at which the success or otherwise of the pilot project will be assessed. However, they made it clear that, in effect, this would mean the reintroduction of the small grants scheme, but with a very specific remit of encouraging risky, innovative, ambitious research.
  • Advanced Sifting. As well as outline applications, the ESRC would introduce 'advanced sifting' of their Research Grants. In practice, this would mean that applications go through an initial peer review by 2-3 academics. If it's an application for a small amount, this might be all the assessment it gets: depending on the outcome, the application will get funded or rejected. If it's for a larger amount, this peer review will decide whether the application goes to full panel.
  • Outline Applications. The ESRC will simplify the JeS form for outline applications, including the costing element. They will shortlist approximately three times as many applications as they can fund, so that the success rate for the second stage (full applications) will be around 33%.
  • Right to Reply. A right to reply would be built into all their funding schemes.
  • Demand Management. The deadline for the consultation process on the demand management options closed on 16 June. They will now consider the responses. It is hoped that they will not have to introduce any of the more draconian measures. They will allow a year to see how the 'interim measures' have worked - eg outline applications, no uninvited resubmissions, encouraging HEIs to implement 'quality assurance' procedures. They admitted that a year might not be enough for these to have a real effect, but it should be enough to see the 'direction of travel'. If they are happy with the 'direction', they will allow more time for them to have further effect.
  • Statistics. To support the demand management measures, the ESRC will provide HEIs with stats on their comparative performance. These will be provided three times a year and, it is hoped, they will be more nuanced than just presenting simple success rates. For example, they should show the relative position of an institution's applications in the peer review panel's prioritisation list, so that HEIs can get a sense of the quality of their applications.
  • Working with the Other Research Councils. The ESRC made clear that they will be working with their sister councils to implement a common form of demand management. Of course, this will have to allow for variance that arises from the culture and patterns of those working within a council's disciplines. Thus, what works for the EPSRC wouldn't necessarily work for the ESRC. However, as far as possible they hoped for a consistency across RCUK.

Tuesday, 24 May 2011

AHRC Follows ESRC to Manage Demand

Disturbing - but, let's be frank, unsurprising - news from the Faculty of Arts & Humanities on Death Star Avenue. They're planning to follow their sisters in the Faculty of Social & Economic Sciences and will introduce some form of demand management.

As I say, unsurprising. The clues have all been there. It was clear in their Delivery Plan that they wanted to 'manage demand' (Key Point 10, first bullet point). However, I had assumed that this would be the more touchy-feely end of demand management, and sometime in the future. Why? Because they talked (4.7.1) about the relatively low-level steps they'd taken already, such as rolling deadlines. Elsewhere (4.7.2), they said that they would 'systematically collect, analyse and disseminate to HEIs' success rates and trends, and, following this, will have stern, headmasterly 'strategic discussions with key HEIs...falling below the average, to develop self-management of demand and quality control of proposals.'

As far as I'm aware this hasn't happened yet. However, the word on the street is that they're rushing to follow the ESRC into a more Darth Vaderish understanding of management. As I'm sure you know, the ESRC is currently consulting on possible forms this will take, including quotas, sanctions, and paying to apply.

Whatever comes out of that process will probably end up as AHRC policy sooner rather than later. As the ESRC said in the Consultation Document: 'the Research Councils, where possible, will harmonise their demand management strategies.' I just didn't expect them to be so quick, as their pace in other areas can seem glacially slow. I mean, have you had an application reviewed recently?

Anyway, it's all the more reason for us to crack on with the Internal Peer Review System, which is close to its final draft form, and will hopefully be in place by the time Darth comes knocking.

Monday, 16 May 2011

Peer Review - and Understanding Feedback

We're currently working on a proposal for introducing University-wide internal peer review. I'll talk more about it in future posts, but I'd be interested to hear from people - both at Kent and elsewhere - about their experiences of peer review. What works? What doesn't?

One difficulty with peer review is, of course, interpreting the feedback. It's particularly an issue when reading feedback that you've received from a British colleague. Here's a handy cut out and keep guide to interpreting Brit-speak. Note in particular the following:
  • 'I only have a few minor comments'
  • 'This is a very brave proposal'
  • 'I would suggest...'