First, some basic parameters for the pilot study:
- Five units of assessment (UoAs) were focussed on:
  - Clinical medicine
  - Physics
  - Earth Systems
  - Social Work and Social Policy
  - English Literature and Language
- The assessment was based on a narrative and case studies, with 1 case study for every 10 Category A staff from RAE2008;
- There was a template for the case studies, which can be viewed here;
- Impacts were expected to have been felt between January 2005 and December 2009, based on ‘underpinning’ research of 2* quality or above that took place as far back as 1993;
- Impact was understood to mean ‘any identifiable benefit or positive influence on economic, social, public policy or services, cultural, environmental or quality of life’.
The pilot identified a number of problems:
- There was ‘minimal initial interest’ from academics, and some of those who did express an interest had no relevant experience of impact;
- The guidance was ambiguous, and it was unclear what the boundaries between inputs (‘which are not the focus of the assessment’) and outcomes were;
- The templates were unclear;
- It was difficult to gauge how some activities would be assessed or weighted by specific panels;
- There was a tendency in case studies for the attribution of impact to be stressed more than its significance;
- There was a desire to avoid claiming ‘mere knowledge transfer’, which led to an overly inhibited account of the contribution the research had made to any impact;
- There was relatively little appreciation of what counts as impact for the REF, and many academics talked of high-impact journals, esteem indicators, etc.;
- There was a tendency to focus on recent activity by current members of staff with strategic potential. Were staff missing the opportunity to use ‘profitable’ previous research, which had had impact, but whose areas had subsequently become dormant?
- It was difficult to access external impact indicators, e.g. figures for attendance at events in which academics participated.
The pilot also raised several open questions:
- How can the template be improved, for example by asking for information in a different order?
- How can claims be corroborated?
- How should the impact narrative and case studies be weighted?
- How should public engagement be differentiated from public benefit?
- How can interim impact be assessed?
HEFCE will report back on the exercise formally in October 2010, when it will issue:
- A sub-profile of each of the institutions that took part in the exercise;
- A report from each panel;
- A report on the lessons learnt from the HEIs;
- A report from the impact workshops that HEFCE is undertaking.