I was recently involved with a study of evidence use in Parliament, a piece of research led by UCL STEaPP and POST. One of the main conclusions from this work was that policy-makers weren't interested in the latest shiny new academic papers. Since time is precious, they wanted accessible summaries of what the body of evidence said; in other words, nicely summarised syntheses of the evidence. I've already written about this here, and similar points are made in this blog post written by a member of the Science in Policy group at the University of Sheffield.
Imagine my surprise, then, in my first ever REF progress meeting as a new permanent member of Faculty. When asked to propose my top papers, I immediately suggested a review that I did in Methods in Ecology and Evolution, which I thought was probably my most useful paper. The dismissal was instant: 'sorry, that's a review paper, that doesn't make an "original, novel, significant contribution" to knowledge, so that can never be 3* or 4*'. What nonsense is this? Policy-makers are asking for syntheses of evidence and criticising academia for not providing evidence in a usable format, while at the same time the powers-that-be shaping the REF are actively discouraging evidence synthesis. And as Neal Haddaway said to me after reading this, the novel part is the synthesis! Madness!
Now you might tell me that the impact part of the REF rewards such activity. Well yes, maybe it does. But academic career progression is still mainly linked to your ability to write a 4* paper. As I've written here, this seems to motivate academics to spend a good deal of time making up fancy words, upping the 'bull****' in a paper, and chasing novelty. The consequence: we end up swimming in an ocean of new information without being able to make the most of what we already have.
The way to make the most of the evidence we have is to synthesise it. I've had some experience of working with Bill Sutherland's Conservation Science group at Cambridge, who are real pioneers of synthesising, and then summarising, conservation evidence in a user-friendly way. This takes great skill and effort. Occasionally I've been met with slightly disappointed comments from colleagues when I say that I've written a review paper, delivered in a patronising tone that seems to say 'you did a review paper, that's nice, you weren't clever enough to come up with something novel, but well done anyway, that will be policy relevant'. This perception that systematic reviews and summaries of evidence are somehow easier than other types of academic papers is nonsense. It is exactly the kind of snobbish attitude that means great review papers can't be 4*.
So, answers in the comments section please everyone – why can’t a great review paper be worth 4*? It takes great skill, and it is what policy-makers want.