Showing posts with label ESRC. Show all posts

Tuesday, 15 October 2013

Notes from Leverhulme Visit, October 2013

A Soap Opera: the source of the Leverhulme millions
The new Director of the Leverhulme Trust, Prof Gordon Marshall, visited the University last week. Whilst we've had visits from his predecessor, Prof Sir Richard Brook, and I've heard Marshall speaking at the LSE, this was the first time he's come to Kent. Anticipation was high, and was shown in the strong turnout: there was standing room only at the back.

Gordon Marshall is a sociologist by training, and has had a long and illustrious career in senior management within higher education. He taught at Bath, Essex and the LSE, was Chief Executive of the ESRC and VC at Reading. After climbing to these dizzy heights, Marshall is enjoying life at the helm of what he described as a 'small peer review shop off Fleet Street'.

The Trust gives out less than half the value of awards that the ESRC does (£80m, compared to some £200m), but processes five times as many applications (4,000 pa, compared to the ESRC's 800). Moreover, it does so with just 14 people, compared to the ESRC's 125. With such a small team, 'we can't generate much bureaucracy', said Marshall.

96% of their awards are responsive mode. The exception is the Programme Grants, which offer substantial funding (up to £1.75m) in areas identified by the Trustees. The disciplinary distribution of awards generally follows applications: they get more science applications, so tend to give out more science awards. The divide is roughly as follows:

  • Sciences: 50%
  • Humanities: 30%
  • Social Sciences: 20%
However, these figures should be treated with caution: Leverhulme encourages interdisciplinary work, so it's sometimes hard to pin down exactly which discipline any project belongs to.

The Trust does not 'manage demand', as many of the Research Councils have had to do. 'If your quality is acceptable we fund you,' said Marshall. Whilst the Trustees have the ultimate say on who gets funded, they rely for advice on reviews and on a small group of academic advisors. The Trustees, claimed Marshall, 'were the last group of people in the country who have respect for the academy'. They recognised the worth of good research, and wanted to fund it. In terms of what shape this should take, Leverhulme was very open. It covers all disciplines except:

  • clinical medical research (which is already well covered by Wellcome); 
  • policy-driven research, which should be funded by the Government;
  • 'advocacy' projects;
  • those with immediate commercial applications, which should be funded by industry.
The Trust wanted to fund the best, but didn't want to be in competition with the Research Councils or, worse still, be the 'funder of last resort' for those who have already tried the Research Councils. However, if your work is exciting, ground-breaking and robust, but can't get funding with the Research Councils (perhaps because you're emeritus, or you're seeking studentships, or the project's too risky, or too interdisciplinary), then the Trust would be interested.

Marshall finished by highlighting some common failings of unsuccessful applications. These included:

  • An overly detailed review of the literature. Whilst the Trustees need some context, you should concentrate on the specifics of what you are actually going to do. This leads on to the second failing:
  • Under-specified research design;
  • Claims of scholarship. Leverhulme isn't interested in H-Index, REF scores, or any other indication of prestige. They look solely at the potential of the project and your ability to undertake the research;
  • Supposition of a hidden agenda. There is no agenda. Leverhulme just wants to fund the best research, wherever it is found;
  • Incremental work. They don't provide funding for 'empire building', or work that doesn't lead to a step change in understanding. Excite them.
  • Claims of impact. They have no interest in this agenda.
I will make a recording of Marshall's talk and his slides available on the Research Services website shortly. If you would like me to email these to you, drop me a line.

Thursday, 15 August 2013

RCUK Scrabble

It's holiday season, even for the good folks at Death Star Avenue. Now read on.

A holiday cottage on the west coast of Wales. Rain is lashing against the windows, and the casements are rattling. It's August. A group of RCUK apparatchiks are huddled around a game of Scrabble. It's clear they're playing an unusual variant of this family favourite in which only acronyms can be used.

Thursday, 25 April 2013

Notes from ESRC Regional Meeting, April 2013

Prof Paul Boyle
The ESRC held a Regional Meeting at the LSE on Tuesday.

Introduction 

Prof Paul Boyle (CEO, ESRC) started by giving a ‘state of the union’ summary of the position of the ESRC. It currently gave out £200m in funding, of which £180m came from BIS and £20m from cofunding. This was distributed as follows:

  • Training & Skills £53m (26%) 
  • Strategic/Collaborative £51m (25%) 
  • Responsive Mode £45m (22%) 
  • Methods & Infrastructure £33m (16%) 
  • Other £23m (11%) 
Following publication of its Delivery Plan (2009-15), it had cut Small Grants, but still provided small scale funding (such as for the Secondary Data Analysis initiative). It firmly believed in international collaboration (providing up to 30% funding for overseas Co-Is), and was embedded in all six of RCUK’s cross-council programmes (global food security; energy; global uncertainties; lifelong health and wellbeing; digital economy; living with environmental change). It saw engagement with the private sector as a key priority for the future, particularly in financial services, green business and retail (see Strategic Priorities, below). It had also acted quickly on ad hoc priorities recently, such as initiatives on the Future of UK and Scotland, and Transformative Research.

Strategic Priorities 

ESRC had recently reviewed its three strategic priorities (economic performance and sustainable growth; influencing behaviour and informing interventions; vibrant and fair society), but had decided not to change them. However, they had recognised that there were gaps within these, and that further ‘urgent but predictable scientific opportunities’ had arisen since the priorities were first formed. Moreover, looking at the funding trend towards 2016/17, there was ‘investment headroom’ as current grants tailed off. Thus, the ESRC would be looking to provide more funding, or facilitate further networks, frameworks and events, in the following areas:

Evidence:

  • ‘Big data’ 
  • ‘What Works’ (in which the ESRC aims to embed the use of evidence in policy and practice. Whilst on a different scale, Boyle likened this to NICE – i.e. to synthesise evidence robustly, recommend interventions and monitor their success) 
  • Macroeconomics 

Economic Performance:

  • Business innovation 
  • Financial markets 
  • Cities (ESRC had looked at what sister social science funders globally were focusing on, and recognised cities as a major area of interest. Will possibly hold a town hall meeting about this) 
  • Green economy 

Influencing Behaviour:

  • Epigenetics and educational neuroscience (Boyle described this as ‘frontier science’. It’s a small but growing area looking at how genes can be influenced by environment) 
  • Innovation in health and social care (ESRC would look to cofund with other health sponsors) 
  • Higher education (there’s a sense that, despite the recent big upheavals, there hasn’t been enough work done on HE recently. Might also cofund with DFID on primary/secondary education in developing countries) 

Fair & Vibrant Society:

  • Civil society and social innovation 
  • Social media 
  • Work (such as the impact of recession. There might be a new call around this, but would examine what work had already been done in this area first). 

Demand Management 


The ESRC had consulted the sector a couple of years ago, following a 33% increase in applications between 2006 and 2011 with no corresponding rise in funding. Following this, it had cut the Small Grants scheme, and had encouraged universities to implement internal peer review. As a result there had been a 37% decrease in application volume, and success rates had risen from 17% to 24% across its schemes. The decrease in volume had also led to a 20% decrease in peer review activity. Boyle encouraged individual institutions to mirror the kind of activity that the ESRC undertook: for example, using anonymous reviews, as it had done for the Transformative Research scheme.

Was he still considering introducing tougher measures for demand management? It was still under review, he said, and he had the options ‘in his back pocket’ if needed. However, he thought that any measures would be more nuanced. If a university was not considered to be ‘playing the game’ there may be specific sanctions that would not affect the rest of the sector. He suggested that universities should test the effectiveness of an internal peer review system by questioning how many projects had actually been rejected as a result of it. This was not black and white, however: he clarified that by ‘rejected’ he meant ‘rejected in its current state’, i.e. that they had been encouraged to reframe and redraft their application following feedback.

Doctoral Training Centres 

ESRC was currently assessing the lessons to be learnt from the DTC programme. They had used data from the RAE to help assess it; as similar data from the REF would not be available when the current programme came to an end, the ESRC would probably delay a new round by a year. They wanted to encourage:

  • A good balance between 1+3 and +3 studentships, and didn't like universities trying to get more for less by only advertising +3. 
  • Cofunding with external partners. 

There was a move to harmonise DTCs across the Research Councils, and ESRC was considering whether it should expand or contract its DTC provision. However, it would always want to fund excellence, and would therefore wish to avoid quotas.

Friday, 8 February 2013

ESRC Peer Review Process

As I said in the previous post, I took part in a PGCHE mock panel today. In preparation I did some background reading on the ESRC process, and thought it would be interesting to set down the way that they assess applications.

  • First Stage – ESRC receives the application
      • Roughly 10% of applications get rejected at this stage on technicalities, such as not having the right attachments, sections not being filled in, or the format not being adhered to.

  • Second Stage – External Reviewers
      • Each application gets sent to at least 3 academic reviewers, and a user reviewer (if relevant). These are identified using keywords.
      • Reviewers use a scale of 1 (low) to 6 (high) to rate the application.
      • Applications are assessed on:
          • Originality and potential contribution to knowledge;
          • Research design and methods;
          • Value for money;
          • Outputs, dissemination and impact;
          • Scheme-specific criteria (not relevant to responsive mode).
      • Any application with an average score of less than 4/6 will be rejected at this stage; this applies to about 30% of applications.

  • Third Stage – Introducers
      • The remaining applications are allocated to Grant Assessment Panel (GAP) members, who act as 'introducers' at the panel meeting. There are usually two introducers per application.
      • The ESRC tries to match each application to the GAP member with the most relevant experience. However, there are only three GAPs, covering a wide range of disciplines, so applications may well be introduced by someone with limited knowledge or understanding of the discipline. The disciplines covered by the three GAPs are:

GAP A
Education
Psychology
Linguistics

GAP B
Sociology
Social Work
Social Policy
Socio-Legal Studies
Area Studies
Anthropology
Statistics and Methods
Politics and International Studies
Science and Technology Studies

GAP C
Economics
Management
Demography
Environmental Planning
Geography
History

      • If it is felt that no one on the GAP has relevant experience, the application can be cross-referred to another GAP, or even to another Research Council.
      • Each introducer gets around 7-10 proposals to assess per meeting, and 4-5 weeks to write their assessments. These highlight key strengths and identify any weaknesses that need to be addressed.
      • Based on this assessment, they rate the application using a scale of 1 (low) to 10 (high).
      • The ESRC analyses these scores and works out approximately what score an application needs in order to go forward to the GAP meeting. Roughly 30% of applications get rejected at this stage, and realistically only those scoring 6 or above are likely to be funded.

  • Fourth Stage – The Panel Meeting
      • Each application is introduced by its two GAP members. It is not always clear beforehand who will be the leading introducer and who will be the seconder.
      • The panel works through the applications in order of the introducers' scores: those with the highest scores are discussed first, those with the lowest last.
      • The panel works with PDFs, which save on paper but make it difficult to refer back, check, and follow the discussion of applications easily.
      • ESRC officers are present, and do have input:
          • by saying roughly how many applications can be funded in that round;
          • by highlighting any problems with applications that have not been picked up before (e.g. they have been submitted to the Council previously).
      • Whilst the panel will take a steer from the introducers, the discussions allow proposals to be pulled up or down the rankings. Most of the discussion is around marginal or controversial proposals.
      • The Chair is key. S/he summarises the discussion and, if there is no consensus, has the final say.

Notes from a Mock Panel

'Save me from my friends'. Never truer than in the bearpit
of academic peer review
I took part in the University's PGCHE research funding module today, which took participants through a 'mock panel'. This is always a useful exercise as it gives potential applicants a feel for the issues that peer review panellists are having to grapple with in real life.

There were about thirty participants, and we divided them into five panels: one of humanities academics (looking at AHRC applications), two of social scientists (looking at ESRC applications)  and two of scientists (looking at EPSRC applications).

In preparation I did some reading around the ESRC process, which I'll write up in a post shortly, but in the meantime it's worth noting some of the key points that came out of the session:

  • Firstly, no matter what your discipline, you can spot the weaknesses in the applications. This is interesting: I chaired one of the social science panels, and all those who took part were in disciplines that were very different from those of the applicants. Nevertheless they picked up on a lot of the weaknesses, and were pretty accurate in the ranking of the applications.
  • Secondly, seniority matters. We pretend it doesn't, but if a PI has an impressive CV, we're more likely to let some vagueness in the application slide. This was a bit disheartening for the ECRs present. However, I pointed out that two of the best applications under consideration were from ECRs who had overcome this difficulty by either having a strong, robust and well thought through project design, or by bringing more senior co-investigators on board to give it gravitas and reassurance. 
  • Thirdly, time is short. You imagine the panellists have all the time in the world to consider your application, but time's snapping at their heels. Decisions need to be made, compromises struck, and the agenda moved on. You - as the applicant - have to help the introducers by giving them the information they need to support your application in a format and place where they can grab it quickly. Cut to the chase: what's your research question, why's it important, why's it timely, why are you the person to answer it, how are you going to do it, and how are you going to disseminate it?
  • Fourthly, confidence shines through. If you believe in yourself and your research, it really helps. Don't be tentative, uncertain or - let's be frank - academic. You need to sell your proposal, and to do so you've got to believe in both its worth, but also in its achievability.

Monday, 28 January 2013

A Sierra-Fuelled Dawn

The Shuttlebus, in R2D2 livery, awaits its first customers
Following the commendable example of the Universities of Bristol, Bath, Exeter and Cardiff in forming the GW4 collaboration, we are pleased to announce that Fundermental Towers University will be forming an alliance with its sister institutions in the wider Rochester area.

The research intensive universities of Fundermental Towers, Ebbsfleet White Horse (Unrampant), and Snodhurst, together with Deangate Ridge Golfing Academy, will form a strategic alliance to explore and identify opportunities where their combined research strength can more effectively address global challenges, as well as getting an impressive discount on stationery costs.

Professor Gymslip Plimsole, Vice-Chancellor of Fundermental Towers University and former contestant on ESRC's T Factor, welcomed the news. 'This is an exciting day for the wider Rochester area, and the impact will be felt right the way from The Esplanade in the West to Tinker's Alley in the East. Together the 'R2D2' collaboration of universities is more than the sum of its parts. With a total turnover in excess of £7.83(sterling), these four royal and ancient seats of learning will create an unstoppable critical mass of ground breaking research.'

'We have already commissioned a 'shuttle service' between the universities', continued Plimsole. 'I have had my son Kevin, our Transport Officer, source and purchase a 1983 Ford Sierra which is ideal for the job. We hope to paint it in the colours of all four institutions. When combined these come out as a rather fetching puce-brown. Fortunately most Sierras of that age came in that colour, so there may be no need for a repaint. See? This collaboration is already saving us money!'

'Whilst we are all excited about the bright new Sierra-fuelled dawn ahead of us,' concluded Plimsole, 'there are still a number of details to iron out. The most important of these is the biscuit selection for our intra-R2D2 meetings. I know Snodhurst have a penchant for jammie dodgers, but we at Fundermental Towers have more refined tastes. For us, it's custard creams or nothing.'

Thursday, 6 December 2012

The 'T' Factor

Plimsole, Doomberger and O'Leary
The ESRC recently launched a radical new scheme for funding 'transformative research'. Not only will the projects they fund be groundbreaking; the way they assess the applications will be too. Applicants will put in a bare-bones outline to the ESRC by 24 Jan, and those who tickle their fancy will be shortlisted and invited to an - if you will - 'pitch to peers' workshop.


What exactly is a 'pitch to peers' workshop? I'm glad you asked me that. Imagine the X Factor for social science academics, with the other competitors as the judges. 

What could possibly go wrong? 

Yes, we're all very much looking forward to the televising of this innovation in peer review. We've invested in a widescreen plasma TV to watch the spectacle, and will be ready to vote for our favourites, frittering away our block grant on premium rate phone calls. Put your feet up, crack open the popcorn and let the spectacle begin!

Dermot O'Leary: 'So, Prof Plimsole, what did you think of Dr Doomberger's project?'
Prof Plimsole (sniffily): 'To be honest, Dermot, it lacked cohesion. I admire her bravery in exploring the sociology of Lego, but her research questions were all over the place, her objectives were unrealistic, and her outputs were frankly negligible.' 
(boos from the audience)
Dermot O'Leary: 'Hmm, I'm not sure the audience agrees with you, Prof Plimsole...' (squeals from the audience. O'Leary turns to Doomberger) And Dr Doomberger: what do you think of Prof Plimsole's project?'
Dr Doomberger (angrily): 'I've never seen such routine, dull... incremental research (gasps of horror from the audience) masquerading as transformative in my life! His project couldn't transform its way out of a paper bag!' 
(the audience explodes)

Wednesday, 10 October 2012

RCUK Decent Success Rate Shock!

'100% you say? Well I'm sure
I could make my Chaucer
project fit  'Environmental Change''
Now here's a good news story that the Research Councils have been remarkably slow to crow about: success rates for most are looking surprisingly healthy. Looking at the latest published stats (RCUK has gathered links to them all here, including a broken AHRC one. *sigh* Best go direct, here), they give the lie to the current feeling that RCUK success rates have fallen so low that you may as well buy a ticket to EuroMillions. The success rates for the five councils that have produced them for 2011-12 range from 21-41%, and average a healthy 31%.

Of course, there are a number of factors which have buoyed the figures:

  • firstly, some Councils have either introduced, or are strongly encouraging, some form of demand management. This is almost certainly the reason for EPSRC's 41% success rate.
  • secondly, other success rates are skewed by 100% success rates for some managed programmes, such as the 6 out of 6 that the AHRC's Researching Environment Change Follow Up scheme garnered. This helped the AHRC achieve an eye-popping 40% success rate last year.
  • finally, many have seen application numbers drop, not necessarily because of demand management, but perhaps because applicants have become demoralised by rejection, or their attention has been diverted elsewhere, such as on the REF, or teaching, or (perish the thought!) impact activities.

I've collated the figures I can find, and the full table of application numbers and success rates can be found (for Kent staff only, I'm afraid) on our SharePoint site.

Thursday, 27 September 2012

ESRC Publishes Photofits of 'Knowledge Navigator'

I was very excited to see that the ESRC is advertising for a Knowledge Navigator. The successful candidate will be expected to 'scope' and 'explore', as well as fend off tigers, cut through virgin jungle, and survive on a diet of grubs and boiled scorpions. In a radical change from usual protocol, the ESRC has decided not to provide Further Particulars, but instead to issue a series of photofit images of 'the Ideal Candidate'. I've reproduced these below. Don't even consider going for it if you don't match these.

You have been warned.


Thursday, 5 July 2012

ESRC & NIHR Call for 'Oven-Ready' Dementia Interventions

The ESRC and NIHR whetted the sector's appetite for their forthcoming Dementia Initiative by holding a workshop for prospective applicants on Tuesday. The Call Specification is now on the ESRC's website, but there were some interesting points that came out of the workshop:

  • Stakeholder involvement is key;
  • Projects need to be ambitious and bold;
  • They are especially interested in looking at 'oven-ready' interventions that change behaviour and can prevent dementia;
  • Social care research is seen as a particularly rich seam that has been under-researched;
  • They would welcome the wider involvement of social sciences. 
We're planning to hold a meeting in a couple of weeks' time to discuss this call. If you want more information about it, or are considering putting in a bid, drop me or Brian Lingley a line.

Saturday, 9 June 2012

RCUK Publishes Impact Case Studies

Last week RCUK published impact case studies. The intention of this was, I think, twofold: to highlight to external bodies (such as the Treasury) that public money is being well spent, and to highlight to potential applicants examples of best practice. Having read them all now I feel that they might have succeeded at the latter, but not necessarily the former. Which is a shame, as I do think that it's crucial to make the case for the importance of research to government and the wider world.

The case studies are split into four categories: ‘policy’, ‘business’, ‘public engagement’ and ‘voluntary and charitable’. Some appear in more than one category. RCUK had a huge pool of projects to choose from for their fourteen examples: the Research Councils give out grants for some 2,500 projects each year. As such, you would expect them to be spectacular examples of their kind, but I wasn't convinced.

Unsurprisingly, the business-related ones have the clearest impact and it is easy to make the case with these: licensing agreements, patents, and new technology with applications that will benefit society. All good. Things become a bit less clear in the other three categories which, to my mind, are somewhat weaker. Or rather, I think it’s very difficult to make the case. Many talk about ‘stakeholder engagement’, which of course is good, and of feeding into the development of policies and working practices. Which is also good. Others talk about their tweets and blogs, their public lectures and even their jazz compositions. Okay, so I know it's very hard to try and quantify the effect that these activities have had, but I would have thought that RCUK might have been able to provide more hard evidence as to the effect their funded research is having on society.

Nevertheless, I think the positive that potential applicants should take from this is that expectations are low and broad. If you can demonstrate that you are able and willing to engage with end users, to talk to school children and put on public events, then you will easily have met the expectations of your future paymasters. And, you never know, with your conceptual art spin-offs and children’s books sub-projects, you could well be up there on the RCUK website in years to come.

Thursday, 26 April 2012

Beware the Poets

I'm worried about those clever people at Polaris House. Our great and glorious research leaders, those academic taste makers who hold UK funded research in the palms of their hands, seem to be entering the world of self parody.

A couple of weeks ago I devised the Research Council Priority Generator. This randomly mashed together abstract nouns to create strategic priorities that sounded edgy and thoughtful, but were ultimately empty and meaningless.

Whilst it highlighted how randomness could produce apparent profundity, I thought it was too exaggerated and  stupid to really bear any resemblance to reality. How wrong I was. Within hours of launching the Generator, the AHRC had produced its latest 'emerging theme': 'Care for the Future: Thinking Forward through the Past'.

Beautiful. I couldn't have invented a better nonsense programme myself. But, oh, it got better. The AHRC wove together a fine piece of poetic prose to explain the rationale of the theme: it was, they gushed, 'an opportunity for researchers...to generate new novel understandings of the relationship between the past and the future, and the challenges and opportunities of the present through a temporally inflected lens'.

'New novel'? Really? 'A temporally inflected lens'? If I had a temporally inflected lens I'd be sure to take it down to Jessops to have it looked at.

But the muse is upon them, and they continue in a stream of consciousness that would make Molly Bloom blush:
'...these include questions around what is meaningful about continuity and change, and the role that narratives, experiences, visualisations, performances and stories have to play in these processes. Issues around understanding modes of cultural learning and intergenerational equity, as well as questions relating to authority, ownership and justice within and across time, may help inform understanding of current and future global challenges faced by society today. Technological development, alternative lifestyle movements, and the nature of ideological and philosophical, ethical and creative, historicised and imagined perspectives jostle for attention and require a diversity of approaches and disciplinary engagements for the theme to reach its full potential.'
It's like a postmodern disciplinary shopping list, complete with an unreliable narrator. It's all there, but it's up to the reader to try and make sense of it.

However, the AHRC is not alone in bowing to the creative urge. Following swiftly on this is EPSRC's announcement that it will be running a 'creativity greenhouse'. They've already had us playing in 'sandpits', and the TSB is encouraging us to develop 'catapaults'. What analogy, metaphor or simile will they reach for next? The ESRC Trouser Press? The NERC Hostess Trolley? The BBSRC Kenwood Mixer? Now there's an idea for a new generator...

But should we welcome all this creativity? After all, other great leaders have succumbed to the inner poet. Barack Obama has written poetry, as has Jimmy Carter. But then, apparently, so has Stalin, Mao, Pol Pot, Ivan the Terrible and Goebbels.

Hmm. On second thoughts perhaps the Research Councils should stick to their day jobs before they take UK research any further into this weird parallel universe.

Wednesday, 15 February 2012

Everything's ROS-y at NERC

NERC has decided to join AHRC, BBSRC, EPSRC and ESRC in using the Research Outcomes System (ROS). This will take over from its Research Outputs Database (ROD), which it has been using for nearly a decade.

The Council makes the case that this will:
  • Reduce the reporting burden by reducing the number of equivalent systems;
  • Simplify submission by moving to a more standardised questionnaire;
  • Improve how publications are handled;
  • Share information better between systems, reducing data entry and reducing transcription errors; and
  • Improve the quality of performance information available to support the case for public investment in the environmental sciences.
What's not to like? Well, as reported here a few months back, the new system isn't without glitches. However, NERC isn't adopting it immediately. Oh no. As in the Life of Brian, this calls for immediate...discussion. NERC will bound into action by:
  • Completing the current collection exercise on the existing system;
  • Establishing a project to manage the process of adopting ROS for future years collection with Centre participation;
  • Engaging with users through the project to ensure that user requirements are identified and met;
  • Adapting ROS where necessary to address NERC requirements, including coverage of grants and Centre programmes; and
  • Migrating, as necessary, historic data.
I love this, the snail-like progress of bureaucracy in motion. It is a thing of beauty. Now if you want more information as to where we are in the programme of NERC migration to ROS - and who doesn't? - it can be found here.

Thursday, 26 January 2012

NERC Introduces Demand Management

NERC has become the third Research Council - after EPSRC and ESRC - to explicitly state that they want universities to 'self regulate' their applications. This announcement was triggered by worries about success rates in some of their schemes falling to 16%.

Whilst NERC already has in place some measures to 'manage demand' - eg limiting the number of applications an investigator can submit per call and restricting resubmissions - this hasn't stopped the success rates from sliding in recent years. They're hoping to reverse this by encouraging institutions to strip out applications which NERC would define as 'uncompetitive' (defined by them as scoring 6/10 or below at panel).

So what are they going to do?
  • firstly, ask institutions to nominate a point of contact for demand management;
  • secondly, in the summer, provide data on past performance to them. This will be repeated annually from autumn 2013. The data will apply to Urgency, Large and Standard Grants, but not Fellowships or outlines. It will include: success rates for all schemes; distribution of grades for funded and unfunded proposals by scheme; final moderated grades for all proposals from institution/department; relative performance of institution/department.
  • thirdly, from autumn 2012 NERC will (ahem) 'engage in a strategic dialogue' with institutions to provide information and advice in support of demand management, including setting targets for changes in submission behaviours. They can't meet with everyone in the first year, so those with the most applications, or with black marks in the NERC copy book, will be the first to get a visit from 'the management.'
So, at the moment, it looks to be relatively light touch: more ESRC than EPSRC. However, there will be an expectation that all research organisations have their own internal quality control systems in place.

Who will be the next Research Council to fall into line? Given the recent rumblings from Death Star House, my money's on the AHRC...

Wednesday, 18 January 2012

ESRC Opens Secondary Data Initiative

Last June I wrote about how the ESRC was going to implement its Delivery Plan. Yesterday news came through about one arm of this implementation: the Secondary Data Initiative. As you will doubtless remember, the ESRC scrapped its Small Grants scheme, with the exception of this Initiative. Backed by £10.8 million of funding, it will offer grants of up to £200k to exploit major data resources that the ESRC and other agencies have already created.

These include cohort studies, the British Household Panel Survey, Understanding Society, census datasets, and the European Social Survey. In this first phase they will only be funding 20 or so projects, but if your research uses these or other sources of existing data, it's worth considering putting together an application. The deadline is 19 April; get in touch if you want any help with putting together an application.

Monday, 9 January 2012

ESRC Seeks New Panel Members

The ESRC’s peer review panels, which assess grant applications, are seeking new members in the following areas:

• Sociology, particularly sociology of health
• Socio-legal studies
• Science and technology studies
• Management and business studies, including accounting and finance
• Economics, particularly micro-economics

I would strongly encourage you to consider putting yourself forward, or to suggest this to members of your School. The insights you will get into the decision-making process, as well as into work going on at other universities, are invaluable. The deadline is 1 February 2012. More information is available here.

Wednesday, 7 December 2011

What Are the ESRC Strategic Priorities for?

In the thick of the back-slapping love-in that was the ESRC Open Meeting last night, I felt a little like Banquo's ghost. I'm not saying that Paul Boyle's murdered anyone recently to be Thane of Swindon, or anything, it's just that I felt a little out of place. Don't get me wrong: I love the ESRC and admire all who sail in her, and I was made to feel very welcome, but I was taken aback by how uncritical the audience seemed to be. The questions were, generally, along the lines of, 'Paul, could I just agree with the previous questioner by saying how brilliant you are?' The toughest questions were saved for government departments (Boo! Hiss!) which, it was generally agreed, weren't pulling their weight in (a) using ESRC-sponsored research, and (b) telling the world how brilliant the ESRC was.

So, like the oik that I am, I waded in with an everso, everso slightly critical question. Feeling a little like a naughty schoolboy before the headmaster, I asked - em - what did the panel think of Sir Paul Nurse's comments last week, when he took a side swipe at the EPSRC by attacking the concept of funders as 'sponsors'? After all, the ESRC's three 'strategic priorities' seemed to be a move in this direction.

Paul Boyle chortled like an indulgent Dumbledore, 'I certainly wouldn't want to comment on a sister research council,' he began, before explaining how the ESRC was cleverly treading the tightrope between shepherding the sector and giving them the space to do whatever they wanted via their responsive mode schemes.

Well, yes and no. You see, my problem with the ESRC priority areas is that I just don't get the point. For all its faults, the EPSRC is at least putting its money where its mouth is. You may disagree with the policy of 'shaping' its remit, but it's obviously decided what is important, and is now steaming ahead with putting into practice the changes necessary. Their priority areas do, at least, have some value – for better or worse.

The ESRC, on the other hand, has consulted widely, and has produced a 'bottom up' list that is so broad as to be almost meaningless:
  • economic performance and sustainable growth;
  • influencing behaviour and informing interventions;
  • a vibrant and fair society.
Well, that's pretty much the ESRC's remit covered, then.

But what are social scientists meant to make of – or do with – this list? It was made clear that the priorities wouldn’t play a part in responsive mode funding; indeed, at the ESRC Study Day in September Michelle Dodson said that the ESRC would ‘only exceptionally’ provide ‘new investments’ in these areas.

So they don’t want to railroad the sector with the priorities, nor do they want to provide much funding for them. What's left? What are they for, and what will they do? Dodson did say that the priorities would be fulfilled by ‘enhancing impact from existing investments’ and ‘encouraging investments to work together.’

Make of that what you will. Of course, if you don’t like them, you needn’t worry, because there might well be a new set along in due course. Whilst they don’t want to revise them each year, they might be (ahem) ‘refreshed annually.’

Okay, so I may joke about these priorities, but I do think there is an important point to make here. There’s been a lot of heat and light generated by these: as Boyle suggested, there was a long consultation process, involving 'taskforces', 'frameworks', 'discussions', and 'comment', to arrive at these fairly anodyne aspirations. The ESRC should now either back the priorities by committing wholeheartedly to them [*shudder*], or, preferably, drop the pretence at being directive and allow the sector to decide for itself – through the peer review and funding of excellent research – what its priorities are.

Thursday, 27 October 2011

From Me to You: ESRC Panellists' Advice to Reviewers

The ESRC has recently held workshops - or 'masterclasses' - for members of its peer review college. These involved some of the Grants Assessment Panel (GAP) members talking about their experience, about what they have to bear in mind when assessing applications, and on the importance of the reviewers in the process.
As you can imagine, this provided some interesting insights. The GAP members were generally grateful to the reviewers, and recognised their reliance on the reviewers' knowledge to make their decisions. Some points to highlight:

Firstly, the process itself:


  • Each application gets sent to at least 3 academic reviewers, and a user reviewer (if relevant).

  • If the average score for these is above 4/6, they get sent to GAP members who will act as 'introducers' at the panel meeting. Introducers usually get 7-10 proposals to assess per meeting, and 4-5 weeks to write their assessments.

  • Each application will have 2 introducers assessing it. If the average introducers' score is above 4/10, it goes to panel. Only those scoring 6 or above are likely to be funded. Thus, as I've said before, it's worth noting how important the introducers are.

  • However, the panel discussions allow for proposals that fall below this to be pulled up the rankings.

  • Most of the discussion is around marginal or controversial proposals.
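To make the thresholds above concrete, here is a minimal sketch of the triage as I understand it (the function and return strings are my own illustration, not anything the ESRC publishes): reviewer scores are out of 6, introducer scores out of 10.

```python
def triage(reviewer_scores, introducer_scores=None):
    """Sketch of the ESRC GAP triage described above.

    reviewer_scores: scores out of 6 from the (3+) academic reviewers.
    introducer_scores: scores out of 10 from the 2 introducers, if reached.
    """
    stage1 = sum(reviewer_scores) / len(reviewer_scores)
    if stage1 <= 4:                 # averages of 4/6 or below stop here
        return "rejected at review stage"
    if introducer_scores is None:   # above 4/6: passed to two introducers
        return "sent to two introducers"
    stage2 = sum(introducer_scores) / len(introducer_scores)
    if stage2 <= 4:                 # 4/10 or below: not taken to panel
        return "not taken to panel (may be pulled up in discussion)"
    if stage2 >= 6:                 # 6+ is likely to be funded
        return "discussed at panel: likely fundable"
    return "discussed at panel: marginal"
```

So a proposal averaging, say, 4.7/6 from reviewers and 6.5/10 from introducers would be in the fundable zone, whereas one averaging 4.5/10 from introducers is exactly the sort of 'marginal' case that takes up most of the panel's discussion.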

What are the core assessment criteria for reviewers?

  • Scientific quality and intellectual contribution;

  • originality and potential contribution to knowledge;

  • timeliness;

  • robustness of research design and methods;

  • value for money;

  • outputs, dissemination and impact.

So what should reviewers bear in mind when assessing an application?

  • Give yourself enough time to properly assess the application.

  • Judge only what is written, not what you imagine the project to be. If the applicant hasn't made clear what they're going to do, that is their fault.

  • Base your judgement on the research question the applicant asks. Is it interesting/important? Are the methods appropriate for answering the question?

  • Evaluate, don't advocate. Not every proposal can be funded. Be frank, and indicate risk versus benefit.

  • Justify your arguments, and provide constructive criticism. It's important that (a) the panellists understand why you have scored as you have, and that (b) your score matches your comments. And, of course, it's useful for the applicant if they are rejected.

And what should they not do?

  • be personal or aggressive;

  • be too brief or too verbose;

  • be ambiguous;

  • make inappropriate, irrelevant or polemic remarks;

  • forget that reviewing research proposals is different from reviewing papers: here, the research is speculative, so you have to evaluate the likely results;

  • forget to draw attention to ways in which the proposal meets specific assessment criteria particularly well, and to point to any major logical flaws, contradictions or omissions;

  • forget what the point of the review is, namely to weigh up the positive aspects of the proposal against the negative ones.

Following the workshops, the ESRC has updated its website with FAQs and checklists (short and long). It's worth having a look at these to get an idea of what the reviewers will be considering when they start to read your proposal.

Friday, 21 October 2011

Finally! Europe Gets What It Deserves

Great news today from Europe. Our best beloved King of the Social Sciences, ESRC chief Paul Boyle, has been elected President of Science Europe.

Thank heavens for Science Europe. Some may question the worth of a European supranational quango with a complex organisational structure, vague and aspirational mission and vision statements, and no identifiable powers. But not us.

For us, we can only thank the powers that be that they have finally recognised that the continent that brought us the Renaissance and the Enlightenment, Democracy and the Industrial Revolution, Shakespeare, Einstein, Newton, Galileo, Planck, Darwin and Mozart, that discovered heliocentrism, penicillin and the circulation of blood, needs some help in developing its potential.

Yes, this is a great step forward. What European research needs is more committees, more policy statements and more plans of action. My only worry is that Boyle, who is already ESRC CEO and RCUK International Champion, will be stretched too thin. Surely something will have to give if Science Europe is to fulfil its mission statement and genuinely deliver 'a broad based forum...to inform discussions on ERA and related policy matters'?

Thursday, 20 October 2011

Automatic for the People

The second set of notes from yesterday's ESRC Seminar Series on Impact looks at the efforts made by the USA's National Science Foundation (NSF) to automate the collection of science metrics.

Julia Lane, Program Director for the Science of Science and Innovation Policy, gave an overview of the background and development of the 'Star Metrics' system. As in the UK, the 17 federal funding agencies were asked to justify the investment the government had made in science.

Refreshingly, rather than offloading this burden on to individual researchers (as is currently happening with the RCUK ROS system), the NSF decided that:
  • the information should be harvested automatically and electronically;
  • the system should be voluntary.
I know. What were they thinking? What we need is mandatory forms, and lots of them! Do they know nothing about research funding management?

But no, they were thinking very logically. After all, in the twenty-first century, when the internet allows us to order our groceries, book our holidays and pay our road tax, why can't it be used to gather information on impact automatically?

Thus, they created a system that does the following:
  1. Follows the trail of grants through individual HEIs' financial systems. This can tell them: who is funded (via the HR system), including PIs, Co-Is, RAs and students; where the money is being spent (via the procurement system); and who they are subcontracting to or collaborating with (via the finance system);
  2. Follows the trail of outputs, by linking with the patent office and publication databases;
  3. Follows the individual via various CV systems, such as Vivo, Harvard Catalyst and Eurocris;
  4. Analyses the areas funded by the federal agencies by scanning and machine reading the applications, and doing a key word analysis.
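Step 4, the keyword analysis, is conceptually the simplest part. As a rough illustration (my own sketch, not the NSF's actual pipeline, which will be far more sophisticated), machine-read application texts can be reduced to term frequencies to map what an agency is funding:

```python
import re
from collections import Counter

# A token stoplist; a real system would use a much larger one.
STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for", "on", "with"}

def keyword_counts(application_texts):
    """Count non-stopword terms across a set of application texts,
    as a crude stand-in for the machine-reading step described above."""
    counts = Counter()
    for text in application_texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS and len(word) > 2:
                counts[word] += 1
    return counts
```

Run over two hypothetical abstracts about ocean and climate research, `keyword_counts` would rank 'climate' and 'ocean' highest; aggregated across thousands of applications, counts like these are what let a funder see where its disciplinary spend is concentrated.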
As a result Star Metrics can be used to identify:
  • What expertise there is in a particular area, or at a particular site;
  • Where there is a shortage of expertise;
  • How much funding has been put into any discipline area;
  • Areas of overlap between funders;
  • What has been funded in any geographical location (such as a state);
  • What has been funded in any institution, or for an individual;
  • What local or national businesses have benefited from the funding;
  • The outputs and outcomes from any funded project;
  • The career development of anyone associated with the project.
And all, as Lane said, without the academic having to lift a pen. Now how refreshing is that?