OPINION: Last Friday, the results for this year’s round of applications for National Health and Medical Research Council (NHMRC) grants were released. Because headlines focus on success and rankings, universities and medical research institutes look for positive things to say about the outcomes. Unfortunately, this means there is little public comment about the inconsistency and unreliability of the assessment system for awarding project grants.

Imagine a student achieves a mark of 65% for the first submission of an assignment. She seeks feedback about how to improve in future assignments, confident that she is capable of scoring 75% or better for this type of work. Then she resubmits, having taken on board advice from various sources. But her assignment goes to a different examiner and is awarded a final score of 50% with no further explanation given. When she inquires, she is firmly told that none will be provided.

Such an assessment system would cause uproar in any high school or university. Yet medical scientists continue to tolerate exactly this sort of outcome from the panel review system used for assessment of NHMRC project grants.

This year, I know of many medical researchers – myself included – who invested literally hundreds of hours to enhance, update and resubmit applications initially judged to be close to the cut-off for funding, only to receive final scores lower than the original. A number of colleagues' applications, which last year were awarded near-miss scores (five out of seven), were this year rejected out of hand as non-competitive (with a score of category three or below).

This makes no sense, but is an almost inevitable result of the way the NHMRC organises its panel system. Panels comprise researchers who are asked to review more than 100 applications during a week-long meeting, and to act as either primary or secondary spokesperson for 16-20 of these. This sounds good in principle, but in practice is unsatisfactory.

Firstly, because the panel members are volunteers and the workload is huge, almost no one reads all the applications allocated to their panel. So the opinions of the two spokespersons largely determine the outcome. It’s not uncommon for a spokesperson to be unfamiliar with the particular area of research, or to hold strongly differing personal views about how research in that area should be conducted.

Secondly, because the composition of each panel changes from year to year as the NHMRC seeks to allow as many researchers as possible to participate, many spokespersons are inexperienced.

Thirdly, not only is a revised and improved application very likely to go to different spokespersons the following year, but there is no reference to any earlier version of an application.

Worse yet, while the scoring system requires all panel members to provide for each application a category score from one to seven for scientific quality (50%), significance and innovation (25%) and the track record of the investigators (25%), the ranking system then treats these ordinal category scores as if they were continuous numerical data.

With more than 30 panels across multiple different areas of research, ranking involves comparing the apples and oranges of one panel against the steak and chicken of another, using averaged scores reported to three decimal places, as if this conferred precision.

Thus the assessment of project grants fails every requirement for reliability and validity.

So what can be done?

A system of long-term appointments to assessment panels – and a requirement to return resubmitted applications to the same panel and spokespersons – might achieve more reliable assessment of relative merit and improvements in response to feedback.

Yes, this may concentrate power and there would be difficulties managing conflicts of interest. But despite the best efforts of the NHMRC, conflicts are difficult to manage even now because the total pool of researchers in Australia is small. At least the assessment would no longer be a mere lottery.

Of course, the unreliability of outcomes is exacerbated by the utter inadequacy of funding for medical research. This year, the success rate for project grants declined to just over 20%, as the average funding commitment per grant increased and the total pool of research funds remained static.

The recently released discussion paper of the Strategic Review of Health and Medical Research in Australia (the McKeon review) rightly recommends increasing research funding while embedding research within the health-care system, and moving towards an increased proportion of five-year grants funded by the NHMRC. One can only hope that these recommendations are implemented, and soon.

Meanwhile, the assessment system needs urgent reform so that whatever funds are available for project grants are sensibly and fairly distributed.

Rakesh Kumar is a Professor of Pathology at UNSW.

This opinion piece first appeared in The Conversation.