Impact sometimes requires long decades of research
You can't create a one-size-fits-all formula for research impact. Best instead to provide environments in which creativity and innovation can flourish, writes Les Field.
OPINION: In today's environment of intense competition for research funding worldwide, universities are being asked to justify publicly funded investment in research across all disciplines. Part of that process is to catalogue the wider benefits to the community of the range of research projects undertaken and so identify the various returns on investment for funding.
Yet it usually takes more than 15 years for a discovery in a biomedical lab to translate into day-to-day clinical practice and make a difference in outcomes for patients. So it's an interesting and difficult question to ask: at what point along the lengthy pathway from discovery to application does it become clear that a research project will make an impact beyond the lab?
In Britain, research impact is about to become a significant driver of the funding allocated to universities, so many minds are attuned to the issue. Last year, several Australian universities participated in the first Excellence in Innovation for Australia (EIA) trial, which used a series of case studies to tease out research impacts. The resulting report concluded it was possible to capture the effect university research has had on society, technology, industry, culture and the economy, and called the EIA the first step "in an essential national conversation about research impacts".
Governments, universities and the wider community are keen to direct research dollars in ways that maximise the consequent public benefit. And, given the increasing importance of innovation in securing economic prosperity in a globalised knowledge economy - and the fundamental role of research and development in innovation - such impact measures may seem eminently sensible.
The EIA trial produced an excellent portfolio of stories about the benefits of research but also highlighted the difficulties in trying to measure something as nebulous as impact.
First, there is the problem of retrospectivity: most of the work we showcased was done some time ago. That's because the fruits of research need to be taken up, accepted and applied in ways that are meaningful to society. Achieving "high impact status" takes time.
It doesn't necessarily follow that we can join the dots for research being done today. Ian Frazer's work at the University of Queensland, which led to the development of Gardasil, the vaccine that reduces the risk of cervical cancer for young women, was undertaken about 30 years ago. As the vaccine is rolled out, the global impact of this work will be felt 50 years on from the first key discoveries.
Second, there is the vexed issue of attribution. If we use impact as an assessment tool, we need to attribute the research to an institution, or at least to a manageable number of researchers. High-impact outcomes typically evolve over time with many contributors, many of whom move on and off the various projects. "Success has many fathers", and there will always be a queue to take partial credit for any successful venture, so attributing the credit is not an easy exercise.
Third, there is the question of proof - it's easy to tell the story of a great outcome but sometimes difficult to produce the paper trail that can verify the link from source to outcome. This is particularly true when foundation work was done a long time ago and the key individuals who could verify the link between the original research and the eventual impact have moved on.
So it's not difficult to find good examples of high-impact university research; the tough part is converting these into an impact assessment, and particularly an assessment relevant to the sector today, or to where we want it to be tomorrow.
At an international meeting I recently attended, discussion among university leaders about impact measures turned to what an institution might do to position itself for a regular impact assessment regime, with the next assessment in, say, five years. This was a particularly interesting question, given that almost none of the research projects that had made the "significant impacts list" in the EIA trial were planned around impact metrics. Specifically positioning the research for impact was never a consideration.
We also discussed the reality that some areas of research mature more quickly than others. Research in information technology, for example, is taken up relatively quickly compared with the very long lead times in biomedical research. One risk in incorporating impact metrics into the research funding mix is the perverse incentive to concentrate the research effort in areas with the potential for quicker returns.
Then there was the seedier side of how to drive the impact agenda: controlling and manipulating public profile. Some universities, feeling the pressure to boost their profile, were bringing in spin doctors, PR companies and experts in exploiting social media to amplify the perceived impact of their research.
Call me old-fashioned, but I am a firm believer that you can't create a one-size-fits-all formula for innovation, creativity or impact.
We can best deliver real, far-reaching benefits to our societies and economies by focusing on creativity and excellence, and by providing environments in which creativity and innovation can flourish.
Professor Les Field is Deputy Vice-Chancellor (Research) at UNSW.
This opinion piece was first published in The Australian.