Tuesday, 8 June 2010

Lessons Learnt from the Impact Pilot Study

As many of you will be aware, HEFCE have been running a pilot study to understand how the assessment of impact will work in the REF. It has involved 29 institutions. Here at the ARMA Conference, both HEFCE and some of the participants have been speaking about their experiences of the exercise, and what lessons have been learnt from it.

First, some basic parameters for the pilot study:
  • The pilot focussed on five units of assessment (UoAs):
  1. Clinical medicine
  2. Physics
  3. Earth Systems
  4. Social Work and Social Policy
  5. English Literature and Language
  • The assessment was based on a narrative and case studies, with one case study required for every ten Category A staff submitted in RAE2008;
  • There was a standard template for the case studies;
  • Impacts were expected to have been felt between January 2005 and December 2009, based on ‘underpinning’ research of 2* quality or above that took place as far back as 1993;
  • Impact was understood to mean ‘any identifiable benefit to or positive influence on the economy, society, public policy or services, culture, the environment or quality of life’.
From the universities’ point of view, the experience was as follows:
  • There was ‘minimal initial interest’ from academics, and some of those who did express an interest had no relevant experience of impact;
  • The guidance was ambiguous, and it was unclear what the boundaries between inputs (‘which are not the focus of the assessment’) and outcomes were;
  • The templates were unclear;
  • It was difficult to gauge how some activities would be assessed or weighted by specific panels;
  • There was a tendency in case studies for the attribution of impact to be stressed more than its significance;
  • There was a desire to avoid claiming ‘mere knowledge transfer’, which led to an overly inhibited account of the contribution that the research had made to any impact;
  • There was a relatively low level of appreciation of what counts as impact for the REF; many academics talked of high-impact journals, esteem indicators, etc.;
  • There was a tendency to focus on recent activity by current members of staff with strategic potential. Were staff missing the opportunity to use ‘profitable’ previous research, which had had impact, but whose areas had subsequently become dormant?
  • It was difficult to access external impact indicators, e.g. figures for attendance at events in which academics participated.
From HEFCE’s perspective, some initial questions raised by the exercise included:
  • How can the template be improved? For example, by asking for information in a different order;
  • How can claims be corroborated?
  • How should the impact narrative and case studies be weighted?
  • How should public engagement and public benefit be differentiated?
  • How can interim impact be assessed?
HEFCE have also been looking at the nuts and bolts of the systems for collecting REF data. Whilst most in the sector recognise that the 2008 system was better than its predecessors, there is still plenty of room for improvement, and they are looking at ways of getting around problems with, for example, formatting text, possibly by asking submitters to upload PDFs.
HEFCE will report back on the exercise formally in October 2010, when it will issue:
  • A sub-profile for each of the institutions that took part in the exercise;
  • A report from each panel;
  • A report on the lessons learnt from the HEIs;
  • A report from the impact workshops that HEFCE is undertaking.
