Monday 31 October 2011

The Mists Are Clearing...I See a Project of National Importance...

As ever, EPSRC is blazing a trail in developing new hurdles for potential applicants. Hot on the heels of controversially 'managing' its remit and 'managing' demand via its blacklist, the Council has just announced that, as of 15th November 2011, all applicants will have to identify the national importance of their research.

Yes, it's both Scientific Excellence AND national importance that will now be the primary assessment criteria. Impact, track record, resources and management will be secondary assessment criteria.

Reviewers and panel members have been issued with new assessment criteria/forms to reflect these changes. For applicants, the national importance section must now be included in the case for support.

Is that a collective groan I can hear from the sector? Having just got used to predicting the potential impact of their research, applicants will now have to predict its national importance too.

I do like the amount of faith that EPSRC has in the visionary abilities of the scientists within its remit. I'm surprised, however, that they haven't put these abilities to better use and got them to predict the numbers for the National Lottery. It would be an excellent use of the collective brain power of EPSRC scientists if they could rustle up the EuroMillions necessary to meet the 10% funding cut that resulted from the CSR flat settlement.

Thursday 27 October 2011

From Me to You: ESRC Panellists' Advice to Reviewers

The ESRC has recently held workshops - or 'masterclasses' - for members of its peer review college. These involved some of the Grants Assessment Panel (GAP) members talking about their experience, what they have to bear in mind when assessing applications, and the importance of the reviewers in the process.
As you can imagine, this provided some interesting insights. The GAP members were generally grateful to the reviewers, and recognised their reliance on the reviewers' knowledge to make their decisions. Some points to highlight:

Firstly, the process itself:


  • Each application gets sent to at least three academic reviewers, and a user reviewer (if relevant).

  • If the average of these scores is above 4 (out of 6), the application gets sent to GAP members, who act as 'introducers' at the panel meeting. Introducers are usually given 7-10 proposals to assess for each meeting, and 4-5 weeks to write their assessments.

  • Each application will have two introducers assessing it. If the average introducer score is above 4 (out of 10), it goes to panel. Only those scoring 6 or above are likely to be funded. Thus, as I've said before, it's worth noting just how important the introducers are (the whole funnel is sketched in code below).

  • However, the panel discussions allow for proposals that fall below this to be pulled up the rankings.

  • Most of the discussion is around marginal or controversial proposals.
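For the flow-chart minded, here's that funnel in a few lines of Python. This is just a paraphrase of what the panellists described - not an official ESRC algorithm - with the cut-offs as reported above:

```python
# A paraphrase of the ESRC triage described above - not an official algorithm.
# Reviewer scores are out of 6; introducer scores are out of 10.

def triage(reviewer_scores, introducer_scores=None):
    """Return a rough status for a proposal at each stage of the funnel."""
    if sum(reviewer_scores) / len(reviewer_scores) <= 4:
        return "sifted out before reaching the introducers"
    if introducer_scores is None:
        return "with the introducers"
    average = sum(introducer_scores) / len(introducer_scores)
    if average <= 4:
        # Panel discussion can still pull marginal proposals up the rankings.
        return "below the line, unless the panel pulls it up"
    return "at panel: likely fundable" if average >= 6 else "at panel: marginal"

print(triage([5, 4, 5], [6, 7]))  # at panel: likely fundable
```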

What are the core assessment criteria for reviewers?

  • scientific quality and intellectual contribution;

  • originality and potential contribution to knowledge;

  • timeliness;

  • robustness of research design and methods;

  • value for money;

  • outputs, dissemination and impact.

So what should reviewers bear in mind when assessing an application?

  • Give yourself enough time to properly assess the application.

  • Judge only what is written, not what you imagine the project to be. If the applicant hasn't made clear what they're going to do, that is their fault.

  • Base your judgement on the research question the applicant asks. Is it interesting/important? Are the methods appropriate for answering the question?

  • Evaluate, don't advocate. Not every proposal can be funded. Be frank, and indicate risk versus benefit.

  • Justify your arguments, and provide constructive criticism. It's important that (a) the panellists understand why you have scored as you have, and that (b) your score matches your comments. And, of course, it's useful for the applicant if they are rejected.

And what should they not do?

  • be personal or aggressive;

  • be too brief or too verbose;

  • be ambiguous;

  • make inappropriate, irrelevant or polemic remarks;

  • forget that reviewing research proposals is different from reviewing papers: here, the research is speculative, so you have to evaluate the likely results;

  • forget to draw attention to ways in which the proposal meets specific assessment criteria particularly well, and to point to any major logical flaws, contradictions or omissions;

  • forget what the point of the review is, namely to weigh up the positive aspects of the proposal against the negative ones.

Following the workshops, the ESRC has updated its website with FAQs and a checklist (in short and long versions). It's worth having a look at these to get an idea of what the reviewers will be considering when they start to read your proposal.

Tuesday 25 October 2011

2011-12 Grants Factory Programme Announced

A new programme of Grants Factory events is now available for 2011/12. Each of the themed workshops addresses a different aspect of the research funding process and is led by a senior Kent academic with a track record in winning (and awarding) research grants.

Events are scheduled across the Autumn, Spring and Summer Terms. All are suitable for researchers of any discipline and at any career stage, and you can find more information here.

Please get in touch if you would like more information or want to reserve a place.

Autumn Newsletter Out Now

Right! Time to stop whatever you're doing, and make for the newsstands: the autumn edition of our newsletter, ResearchActive, is out.

It's a bumper edition this term, and includes:
  • details of the shiny new Peer Review system that was launched at the beginning of October;
  • a snapshot of a selection of recent awards;
  • information about the new Grants Factory programme;
  • all the funding gossip from Brussels and Swindon;
  • an overview of the research interests of the 25 new academics who have joined the University recently;
  • pass notes on the REF;
  • contact details for everyone in Research Services.
So check your pigeonholes, and drop me a line if that familiar yellow pamphlet isn't there; I'll send you a PDF of it by return.

Friday 21 October 2011

Finally! Europe Gets What It Deserves

Great news today from Europe. Our best beloved King of the Social Sciences, ESRC chief Paul Boyle, has been elected President of Science Europe.

Thank heavens for Science Europe. Some may question the worth of a European supranational quango with a complex organisational structure, vague and aspirational mission and vision statements, and no identifiable powers. But not us.

For us, we can only thank the powers that be that they have finally recognised that the continent that brought us the Renaissance and the Enlightenment, Democracy and the Industrial Revolution, Shakespeare, Einstein, Newton, Galileo, Planck, Darwin and Mozart, that discovered heliocentrism, penicillin and the circulation of blood, needs some help in developing its potential.

Yes, this is a great step forward. What European research needs is more committees, more policy statements and more plans of action. My only worry is that Boyle, who is already ESRC CEO and RCUK International Champion, will be stretched too thin. Surely something will have to give if Science Europe is to fulfil its mission statement and genuinely deliver 'a broad based forum...to inform discussions on ERA and related policy matters'?

Thursday 20 October 2011

Automatic for the People

The second set of notes from yesterday's ESRC Seminar Series on Impact looks at the efforts made by the USA's National Science Foundation (NSF) to automate the collection of science metrics.

Julia Lane, Program Director for the Science of Science and Innovation Policy, gave an overview of the background and development of the 'Star Metrics' system. As in the UK, the 17 federal funding agencies were asked to justify the investment the government had made in science.

Refreshingly, rather than offloading this burden on to individual researchers (as is currently happening with the RCUK ROP system), the NSF decided that:
  • the information should be harvested automatically and electronically;
  • the system should be voluntary.
I know. What were they thinking? What we need is mandatory forms, and lots of them! Do they know nothing about research funding management?

But no, they were thinking very logically. After all, in the twenty-first century, when the internet allows us to order our groceries, book our holidays and buy our road fund tax, why can't it be used to gather information on impact automatically?

Thus, they created a system that does the following:
  1. Follows the trail of grants through individual HEIs' financial systems. This can tell them who is funded, including PI, Co-I, RAs and students (via the HR system); where the money is being spent (via the procurement system); and who they are subcontracting to or collaborating with (via the finance system);
  2. Follows the trail of outputs, by linking with the patent office and publication databases;
  3. Follows the individual via various CV systems, such as Vivo, Harvard Catalyst and Eurocris;
  4. Analyses the areas funded by the federal agencies by scanning and machine-reading the applications, and doing a keyword analysis.
As a result Star Metrics can be used to identify:
  • What expertise there is in a particular area, or at a particular site;
  • Where there is a shortage of expertise;
  • How much funding has been put into any discipline area;
  • Areas of overlap between funders;
  • What has been funded in any geographical location (such as a state);
  • What has been funded in any institution, or for an individual;
  • What local or national businesses have benefited from the funding;
  • The outputs and outcomes from any funded project;
  • The career development of anyone associated with the project.
And all, as Lane said, without the academic having to lift a pen. Now how refreshing is that?
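For the curious, here's a toy sketch of the kind of record linkage that makes this possible: joining HR, procurement and publication feeds on a grant number. All the field names and records below are invented for illustration; the real Star Metrics schema will differ.

```python
# Toy illustration of harvesting outcomes by joining administrative records
# on a grant number, rather than asking researchers to fill in forms.
# All field names and records here are invented for illustration.

hr_records = [
    {"grant": "NSF-0001", "person": "A. Researcher", "role": "PI"},
    {"grant": "NSF-0001", "person": "B. Student", "role": "PhD student"},
]
procurement_records = [
    {"grant": "NSF-0001", "supplier": "Acme Instruments", "amount": 25000},
]
publication_records = [
    {"grant": "NSF-0001", "doi": "10.1234/example", "year": 2011},
]

def grant_profile(grant_id):
    """Assemble a profile for one grant by joining the three feeds."""
    return {
        "people": [r for r in hr_records if r["grant"] == grant_id],
        "spend": [r for r in procurement_records if r["grant"] == grant_id],
        "outputs": [r for r in publication_records if r["grant"] == grant_id],
    }

print(grant_profile("NSF-0001"))
```

Scale that join up across 17 agencies' worth of feeds and you get the queries listed above, with nobody re-keying anything.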

Impact: the Wellcome View

I went along to the ESRC Seminar Series on Impact at UCL yesterday. It was a very interesting event, and wasn't what I had feared: either an evangelical sermon by the funders, or a diatribe by academics. Instead, we were presented with four different takes on impact: the first from the Wellcome Trust, the second from the American funder the National Science Foundation (NSF), the third from the Netherlands on research assessment, and the fourth from the coalface, by a practising scientist. I'll cover two of these – the Wellcome Trust and NSF – in this and the next post.

The Wellcome Trust spends £650m per year, roughly equivalent to the MRC. Liz Allen, the Senior Evaluator Adviser at the Trust, outlined the challenges faced and tools used in understanding and quantifying the effect of this funding.

Wellcome, of course, does not have to justify itself to government; however, it does have a duty to report to the Charity Commission, which gives it tax breaks, and to understand what does and doesn’t work.

She highlighted a number of challenges in assessing impact in the biosciences:
  • the time frame involved: for instance, it took Robert Edwards 18 years from first developing the technique for IVF to the first successful 'test tube baby';
  • the serendipity of science: for instance, Alec Jeffreys has been quoted as saying ‘our discovery of DNA fingerprinting was of course totally accidental...but at least we had the sense to realise what we had stumbled upon.’
  • attribution and contribution: there is often a long and diverse list of people and organisations involved in the evolution of a piece of research. For instance, John Todd had funding from Wellcome, MRC, JDRF and NIH. In addition, there is the 'ripple effect', the value of negative findings, and the 'counterfactual' question: what would have happened if x hadn't discovered y?
So pinning down impact is not easy. However, Allen, who was keen on quotes, quoted Charles Babbage: ‘errors using inadequate data are much less than those using no data at all.’ It’s better to do something with limited data than to do nothing at all. Wellcome’s attempt to ‘do something’ includes:
  • Understanding impact in terms of ‘progress’ rather than ‘success’;
  • With this in mind, the Trust looks for traditional indicators of progress, including: publications, people and training, products and interventions, software and databases, engagement with communities, policy, advocacy and influence, funding, and awards and prizes.
  • In addition, it has explored ‘new’ indicators, such as article-level metrics (through software such as PLoS), post-publication peer review (such as F1000), clinical guidelines (NICE has recently digitised all citations used for clinical guidelines, and the Trust can check who has been involved), and the separate contributions of all members of a research team (via the ORCID initiative).
  • The Trust is also making use of 'softer' indicators, such as case studies. However, it recognises the dangers inherent in these. Allen turned to another quote, this time from John Allen Clements: 'when you ask the memory to reconstruct daily events from forty years ago, you've got to be appropriately cautious.' The Trust needs to let award holders know what it might be interested in right from the start, so that they can note and monitor it.
  • Wellcome Trust Career Checker. This is a recent development, and aims to track the career choices and progression of cohorts of individuals after their funding has finished. What have they gone on to do?
Allen finished by highlighting the work that others had done in trying to quantify the effect that medical research has had. The Lasker Foundation had commissioned research that suggested a 20-fold return, so that for every $1 spent there was $2.17 in health benefits. A similar report in the UK showed that 'for every £1 of public money invested in cardiovascular disease and mental health research, a stream of benefits is produced equivalent to earning 39p and 37p respectively each year in perpetuity.'
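In case the 'perpetuity' phrasing seems opaque, here's a quick unpacking (my arithmetic, not the report's). A constant annual benefit of b pounds per £1 invested, discounted at rate r, has a present value of b/r, so breaking even on the £1 means:

\[
1 \;=\; \sum_{t=1}^{\infty} \frac{b}{(1+r)^{t}} \;=\; \frac{b}{r}
\qquad\Longrightarrow\qquad r = b .
\]

In other words, 39p a year in perpetuity is just another way of stating an internal rate of return of 39% a year.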

Which is quite compelling. Sure, assessing the effect of biomedical research is a tough call, but Allen rounded off with a quote from Mary Lasker: ‘if you think research is expensive, try disease.’

Tuesday 18 October 2011

Getting Involved with Funders

A number of funders are inviting people to become involved in their decision making. These include the following:

Do consider putting yourself forward for these vacancies. Being involved with funders not only raises your profile, but also gives you the opportunity to influence their policy and direction.

Friday 14 October 2011

RCUK's New System ROP-ey

Following my previous post, I've heard on the grapevine about some - ahem - 'discontent' with RCUK's Research Outcomes Project (ROP), which is due to go live on 14 November.

Whilst RCUK is still whistling in the dark about its new system, both the Russell Group and the 1994 Group have written to RCUK asking them to think again about implementing it in its current form. One person I spoke to said that working with the system 'will make you cry'. Whilst they had issues with MRC's e-Val system, it was user-friendly in comparison to the ROP. Apparently the ESRC-based system was chosen in preference to the MRC one because it was the cheapest to develop.

The main issue with ROP is that it does not allow 'bulk uploads'. Thus, universities will not be able to centrally load details of all the outcomes from all their RCUK-funded projects. Each academic will have to enter details of the projects separately. Worse still, the new system does not currently have the capacity to link to an institution's own repository - such as the Kent Academic Repository (KAR) - so academics will end up having to enter the data twice.
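To see what's being asked for, here's a hypothetical sketch of the bulk upload universities want. Everything below - the record layout, submit_batch(), all of it - is invented, because no such ROP endpoint exists; that's precisely the complaint.

```python
# Hypothetical sketch of a bulk upload: one central export of all outcomes,
# instead of per-academic re-keying into ROP. The record layout and
# submit_batch() are invented - ROP has no such endpoint, which is the point.

kar_outcomes = [  # imagine these pulled from the Kent Academic Repository
    {"grant_ref": "EP/X00001/1", "type": "publication", "doi": "10.1234/a"},
    {"grant_ref": "ES/Y00002/1", "type": "dataset", "doi": "10.1234/b"},
]

def submit_batch(outcomes):
    """Stand-in for the bulk endpoint that ROP currently lacks."""
    for record in outcomes:
        print(f"uploading {record['type']} for {record['grant_ref']}")

submit_batch(kar_outcomes)  # one pass, no double entry
```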

The result will be that research offices will spend a lot of time chasing academics to fill in their details - individually - and that academics will spend twice as long as necessary providing their data - individually. And, if push comes to shove, academics might decide to provide only the obligatory ROP data rather than (say) the requested HEI data, which may foul up institutional preparations for the REF.

There's still a possibility that RCUK will find a way to allow bulk uploads, and they may even find a way of linking to individual institutional repositories. But it's unlikely to do so by 14 November.

Thursday 13 October 2011

RCUK Research Outcomes Project Goes Live

RCUK has been developing a system to collect data on outcomes from all their funded research. I highlighted it in January last year, and its development has not been without controversy. In June last year Research Fortnight reported on disquiet within universities about the additional work that would arise from inputting data on to the system. There was also some concern about how it would fit in with or replace the MRC's existing e-Val system, which had only recently replaced the Output Data Gathering Tool (ODGT).

Anyway, it all seems to have been settled now, and we've just had word through that it is finally going to go live. Details are available here, with some FAQs here (pdf). It will be released to research offices in the week commencing 24 October 2011, to give us time to become familiar with the system and feed back any queries or concerns to RCUK.

It will then be released to all Research Council grant holders in the week commencing 14 November 2011. The system will collect details of the following:
  • Publications
  • Other Research Outputs
  • Collaboration
  • Communication
  • Exploitation
  • Recognition
  • Staff Development
  • Further Funding
  • Impact
If you have any questions about this, do get in touch with me.

Monday 10 October 2011

Horizon 2020: Reading the Runes

UKRO, the UK Research Office in Brussels, has summarised where we're at with the development of Horizon 2020. If you belong to a subscribing institution, you can access this summary here.

It makes interesting reading, as much for reading between the lines as for the lines themselves. Whilst there's still plenty of gestation time for the EC's new baby, you can get a sense of how it's developing. I've talked about the overall shape of Horizon 2020 elsewhere, but some recent developments that UKRO has highlighted include:
  • EIT: there's no separate provision for the European Institute of Innovation and Technology. To me, this suggests that they want closer integration with other parts of the Framework Programme, but does it also mean that the EIT is quietly being sidelined or shelved, that it is being reabsorbed back into the body from which it emerged?
  • ERC: the latest proposals don't specify the different schemes as 'objectives'. This suggests that the EC wants to allow the ERC room to develop and introduce new schemes as and when necessary. Which, in turn, suggests that the EC has confidence in the Council, and is going to allow it a little more independence.
  • Societal Challenges: this is based around six multidisciplinary areas. These are evolving as we speak, but there have been some interesting developments recently. They include 'Smart, Green & Integrated Transport,' for which the EC has added a new section on 'evidence-based transport policy for the long term.' So the EC wants to expand future transport beyond the scientific and technical to include socio-economic policy implications. 'Resource Efficiency & Climate' has changed its name to 'Climate Action & Resource Efficiency including Raw Materials', which makes the overall aim and direction of the challenge clearer, and ecosystems have been made more of a priority within it. 'Inclusive, Innovative & Secure Societies', which is closest to the current 'Socioeconomic Sciences & Humanities', has been simplified, and appears to be moving away from the FP7 theme from which it emerged. Whilst the Humanities were never a huge player in FP7, it looks like they will be even less so under Horizon 2020.
  • Marie Curie: the EC obviously wants to drum home exactly what each of the Marie Curie schemes will do. They've ditched the original headings, and gone for ones that provide more explanation of what each scheme is intended for. For example, 'Research staff exchange' becomes 'stimulating innovation through cross-fertilisation of knowledge.' Got it?
  • Industrial Leadership & Competitiveness Frameworks: not a lot of change here, although in the most recent drafts the EC is emphasising the need for these underlying technologies to link more explicitly to the societal challenges.
I would encourage you to read the UKRO document in full, and sign up for alerts that will keep you up to date with developments.

Thursday 6 October 2011

How to Fail at FP7

The workshop title was like a thrown gauntlet: 'How to Fail at FP7.' Anyone can succeed at FP7, said the workshop leader, Melvyn Askew; it takes real determination to fail. He was being facetious, but there was an undercurrent of truth. After all, the EC tells you exactly what it wants, and how: it publishes voluminous guidance which, if followed, should lead to success. It's when you disregard this that you come unstuck: when you assume your project can be shoehorned into the call, or that all costs are eligible, or that you can invite all your chums along to do separate projects under a vague umbrella.

So if it's that easy, why do so many people fail? Askew suggested it was down to time. You need time not only to draft the application but, way before you set pen to paper, to lay the foundations. So here's a quick rundown of what you should be doing, now, to prepare.


  • Think. Askew singled out one of the hapless workshop participants and asked, 'what's your strategy for getting European funding?' Like an embarrassed schoolboy the participant mumbled and looked at his shoes. As would the rest of us if he'd picked on us. The truth is most universities have a laissez-faire attitude to applying. Askew, however, suggested that we should all be thinking strategically: what are our strengths? What are our weaknesses? What are our connections? Where should our focus be? Identify those strengths, those networks, and build on them. Don't leave it to chance, or to those on the periphery of European research, to play the tune.

  • Talk. Once you've established a European strategy, you need to lay the foundations for your consortium. Who are the best people working in your area? Who should you approach to be part of a consortium? Not everyone need be an equal partner, but equally there should be no makeweights or padding: each partner should have a clear purpose. Once identified, you need to sound them out and set ground rules for the collaboration. If you're coordinating, you will be the one held responsible, and you don't want to be left to pick up the bill should a partner renege on a collaboration agreement.
    You also need to talk to the Commission. Get a sense of what's on the horizon. Now is the time to start establishing contacts with Commission officials and project officers. They are there to help. Later, as the bid develops, they can clarify the intentions of the call, so that you don't end up pushing your project down a false trail. Your relationship should continue once your project's off the ground and you need to provide progress reports. Don't be scared of picking up the phone to the Commission (or, in fact, to the NCPs) to get an insight into their thinking. Better still, spend the money on a Eurostar ticket to Brussels for an informal talk.

  • Plan. So you've identified your strengths and you have your partners in place. Now is the time to think about the project itself. One person - preferably with English as their native language - needs to pull it together and draft the application. It must be coherent and unified, not like some kind of clippings album with pieces taken from a selection of different newspapers. Each work package should interlink and interweave with the others; it should be interdependent and integral to the whole. It should be written in plain English, with acronyms spelt out and explained where necessary, and any jargon or slang cut out. Spell out everything, and don't assume anything. Just because you think you've got a global reputation, or your university is the toast of the UK, that doesn't mean that a Latvian evaluator will have heard of you or your institution.

  • Write. As you draft your application, you should keep in mind the assessment criteria that the EC will use. There are three elements, each of which gets a score out of five (there's a sketch of the scoring logic after this list):
    - Science and Technology
    - Management
    - Impact
    The first of these is usually well met by applicants, albeit with a little too much context. The second is often so-so, and the third is frequently dire. Recent signals from the Commission are that they are tiring of poor impact programmes, so think seriously about how you will disseminate the findings of your research, and how you will engage with stakeholders. As with Research Council applications, it is a good idea to have an 'advisory group' that includes end users who can guide you in your research, and ensure that you are meeting the needs of those who may benefit from the research.
    The evaluation itself is, in the eyes of Askew, fair, balanced and objective. There is no truth in the belief that lobbying has any effect, or that the EC expects consortia to be balanced and equal, with members from north and south Europe, or from new and old member states. The consortium has to 'make sense' (see above), and that's it. In the peer review meeting there is a member of the Commission on hand to ensure fair play, and to object if they sense that the rapporteur is not heeding the views of all panellists, or being too partisan.
    As well as the evaluation criteria, you should bear in mind that, if your application is successful, you will have to go through a gruelling negotiation. At this stage the Commission will meet with you to discuss the nitty gritty of your project. They might present you with questions and queries raised by the evaluators, such as unnecessary costs or an unbalanced consortium. They may ask you to strip these out, and this may well affect your overall project. So pre-empt this by checking both the eligibility and the necessity of all components of your project.

  • Submit. Submit early, and often. Each time you submit via the EPSS system it overwrites what has already been submitted. Don't leave it until the last moment, only to find the software crashes, leaving you out in the cold.
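And here, as promised, is the scoring logic sketched in Python. The thresholds - 3/5 per criterion and 10/15 overall - were typical of FP7 calls, but they're my assumption here: always check the thresholds in your own call's work programme.

```python
# Sketch of FP7-style threshold scoring. The thresholds (3/5 per criterion,
# 10/15 overall) were typical of FP7 but vary by call - check the work
# programme rather than relying on these defaults.

CRITERIA = ("science_and_technology", "management", "impact")

def passes_thresholds(scores, per_criterion=3.0, overall=10.0):
    """Each criterion is scored out of 5; a proposal must clear every
    per-criterion threshold and the overall threshold."""
    if any(scores[c] < per_criterion for c in CRITERIA):
        return False
    return sum(scores[c] for c in CRITERIA) >= overall

# Strong science cannot rescue a dire impact section:
print(passes_thresholds({"science_and_technology": 5,
                         "management": 4,
                         "impact": 2.5}))  # False
```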
Think. Talk. Plan. Write. Submit. Sounds so easy, doesn't it? Of course it isn't, and you'll face plenty of frustrations, barriers, hurdles and dead ends along the way. But if you give yourself time then you have a much better chance of succeeding - and not failing - at FP7.

Wednesday 5 October 2011

REF: a Few Thoughts on Drafting Impact Case Studies

I took part in an interesting workshop on REF impact case studies yesterday. We were looking at some initial drafts and, whilst there were some great ideas about possible impact, there were a few key points to bear in mind when thinking about your case study.
  • There has to be a strong link between the impact and the research upon which it is based. It's not enough to be working generally in that area; you need to highlight the project, and the findings of the project, and make clear how these led on to the resultant impact.
  • The research has to have been undertaken whilst you were at the University. It's fine if it began elsewhere, but at least part of it has to have happened after you arrived at Kent.
  • It helps to have quantifiable indicators of impact. Whilst HEFCE define impact very broadly (note their definition in the checklist here), it will help you to objectively demonstrate your impact if you are able to show some figures to back up your claims.
  • The impact has to have happened already. Unlike RCUK's understanding of impact, HEFCE's is backward looking. It's past, not potential. You have to be describing impact that has already been felt.
  • It is better to write in the third person. This adds to the sense of an objective, impersonal analysis of the impact (as does having quantifiables, see above), which will help give your case study substance and credibility.
If you're working on your case study, do have a look at the checklist that my colleague Clair Thrower has prepared. If you would like some feedback on your draft, do get in touch with her directly.

Tuesday 4 October 2011

Call for Nominations to AHRC Peer Review College

The AHRC is seeking nominations for its Peer Review College. I would encourage all research staff in relevant areas to consider putting themselves forward. Assessing applications for a funder will help to raise your profile nationally, as well as being a useful way of getting an insight into how the funder works, keeping abreast of what work is being done in your discipline, and gaining an understanding of what it takes for an application to get funded. More details of the call for nominations are available here.

Applications are sought from academics at all stages of their career and, if chosen, you will serve a four-year term. If you want to be nominated, do get in touch with your CV. I will pass your details on to Prof John Baldock, the PVC Research, who will put forward nominations on behalf of the University.

Monday 3 October 2011

There's a Battle Outside and It's Raging

[Image: the 'Bourne Graph', EPSRC's visual representation of the relative value of subjects within its remit]
There's a fascinating storm raging at the moment around the walls of the EPSRC citadel. Some of you outside the Engineering and Physical Sciences might not be aware of it, but it has the potential to affect all of your disciplines, because what EPSRC does first, the other Research Councils tend to follow.

This time the EPSRC is 'engineering' their sector. Or, as they would have it, 'shaping our portfolio.' Basically, through their 'Shaping Capability' agenda, they've had a good look at the disciplines within their remit and have decided which should be backed, and which should be quietly shelved.
Now in some ways you can see the logic of this. In strained financial times it may be better to prioritise important, high-quality work that has the potential to make a difference to science globally, as well as bolstering the UK's competitiveness nationally.
However, as you can imagine, those who are adversely affected by this prioritisation are angry about what they see as the fairly arbitrary algorithm by which it has been decided. Have a look at the sub-GCSE graph above. This is known as the 'Bourne Graph'. It is a visual representation of how EPSRC see the relative value of subjects within its remit.
But how were these relative positions decided? What scale is being used along the X and Y axes? Hmm. It's not clear, and EPSRC's reticence on this is not helping. People are thinking the worst. As a York-based organic chemist comments on his blog, 'if one didn't know better you may be forgiven for thinking it had been thought up on the back of a fag packet over a pint in the pub after work.'
Helpfully, he provides a similarly inane graph for his relationship with fruit and vegetables, as follows:
[Image: the blogger's spoof graph plotting his relationship with fruit and vegetables]
I think the methodology's clear, don't you?
Prof Timothy Gowers, another blogger, has tried to work out where they're coming from by deconstructing the newspeak pronouncements coming out of EPSRC.
More seriously, the sector's disquiet has resulted in letters from the chemists, statements from the mathematicians, and articles from the physicists, as well as a call from, well, everybody (in the shape of the Royal Society) to 'pause' the strategy. David 'Smalls' Delpy responded, specifically to the chemists, saying that he felt their pain, but ultimately it was their own fault for getting too much of the budget recently. Or words to that effect. Elsewhere he's poured oil on the troubled waters by saying that the complaints were an 'overreaction', backed up by 'relatively little' evidence.
The storm has been rumbling on since July, and there's no sign of it abating any time soon. If anything, it's growing in strength, and there's hope that, as Dylan said, 'the loser now will be later to win.' Whilst I have sympathy for EPSRC, and believe it acted in good faith, I think this kind of engineering is dangerous and ultimately fruitless.
Remember Robert Edwards, who was awarded the Nobel Prize for Medicine this time last year? He had developed in vitro fertilisation, which has led, since 1978, to millions of 'test tube babies.' Well, when he turned to his sector's funder, the MRC, in 1971, it turned him down. At the time his discipline wasn't of interest, as the politics of the day suggested the world was heading for Malthusian destruction. If there hadn't been a private funder on hand, his research might well have withered on the vine.
His is a cautionary tale. The allocation of research funding shouldn't be left up to politicians and apparatchiks (like me): it should be up to peers and contemporaries to decide what should be prioritised. Only then will the best, bravest and brightest have an equal chance - from whatever discipline.