Ithaka Life Sciences - Blog

Ithaka Life Sciences Ltd (Ithaka) is a provider of business advisory and interim management services to the life sciences sector.

Wednesday 5 October 2016

Why is so much medical research wasted?


Medical research is an activity of global importance and, not surprisingly, is heavily funded by governments. In the US the NIH invests nearly $32.3 billion annually in medical research, and in the UK the MRC's gross research expenditure in 2015/16 was £927.8 million. Medical research is the most popular target of charitable giving in the UK, where medical research charities invest £1.3 billion a year. Philanthropists such as Bill and Melinda Gates have also been active funders of medical research, and recently the Facebook founder Mark Zuckerberg and his wife Priscilla announced that they are donating $3 billion towards "ending all disease".
However, Iain Chalmers and Paul Glasziou (in a blog post published on the British Medical Journal website) suggest that about 85% of medical research is wasted; globally, this amounts to about $170 billion every year. They have identified four main causes of waste:
1. Researchers ask the wrong question. Much research addresses questions to which the answer may already be known; fewer than half of medical trials appear to be informed by all the previous relevant research. Often research asks questions that are of little or no interest to patients, and frequently the most researched problems are not the most severe or prevalent.
2. The research methods are unreliable. Many trials fail to compare an intervention with anything, or are conducted on too few patients. The average trial tests an intervention on only 36 people, which is very unlikely to produce statistically robust results (a rough power calculation below illustrates why).
3. Research is not clearly reported. Publications don't always adequately explain what the intervention being examined actually was, or who received it. For example, was the surgery carried out on frail old ladies or on strong young men? You would expect their survival rates to be quite different.
4. Often research is not reported at all. About half of all clinical trials are never published. Health systems and taxpayers pay dearly for this. For example, in 2009 many governments stockpiled the drug Tamiflu due to concerns about a flu epidemic; the UK government alone spent £500 million on it. However, 8 of the 10 clinical studies on Tamiflu had not been published. When data from those unpublished trials were eventually released, it was found that Tamiflu may not reduce deaths and does not even reduce hospitalisations.
Tamiflu wasn't an isolated case; trials with negative outcomes tend not to get published. For example, published studies funded by pharmaceutical companies are about four times less likely to report a negative result than independently funded studies. Studies with negative findings also tend to take a year longer to reach publication than those with positive findings.
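Returning to the second cause above, the claim that 36 patients is too few can be checked with a back-of-the-envelope power calculation. The sketch below is illustrative rather than taken from the blog: it assumes the 36 patients are split into two arms of 18, that the intervention has a moderate standardised effect (Cohen's d of 0.5), and that a conventional two-sided 5% significance level is used.

```python
# A rough power calculation (illustrative assumptions, not from the blog):
# a two-arm trial with 18 patients per arm looking for a moderate effect.
from math import sqrt
from scipy.stats import norm

alpha = 0.05        # two-sided significance level
effect_size = 0.5   # assumed "moderate" standardised difference (Cohen's d)
n_per_arm = 18      # 36 patients split evenly across two arms

# Normal-approximation power for a two-sample comparison of means:
# power ~= Phi(d * sqrt(n/2) - z_{1 - alpha/2})
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(effect_size * sqrt(n_per_arm / 2) - z_crit)
print(f"power to detect d = {effect_size} with {2 * n_per_arm} patients: {power:.2f}")
```

The answer is roughly 32% power, far below the conventional 80% target: a trial of that size would miss a genuine effect of that magnitude about two times out of three.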

Fixing all this is difficult. The behaviour of health researchers is driven by incentives, infrastructure and information. The incentives for academics and pharmaceutical companies do not always align with those of patients and the public. A good example of this is provided in a recent paper, published in Royal Society Open Science, by Paul Smaldino and Richard McElreath.

They focused on incentives within science that might lead even honest researchers to produce poor work unintentionally. To this end, they built an evolutionary computer model in which 100 laboratories competed for "pay-offs" representing prestige or funding that result from publications. They used the volume of publications to calculate these pay-offs because the number of papers is a known proxy for professional success.

Some labs were better able to spot new results than others. Yet these labs also tended to produce more false positives; their methods were good at detecting signals in noisy data but also often mistook noise for a signal. More thorough labs took time to rule these false positives out, but that slowed down the rate at which they could test new hypotheses. This, in turn, meant they published fewer papers.
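This dynamic is easy to reproduce in a toy simulation. The sketch below is not Smaldino and McElreath's model itself; it is a minimal illustration under assumed parameters (the base rate of true hypotheses, the 80% detection rate, the link between "effort" and false positives, and the copy-the-most-published selection rule are all invented for the example), and it omits replication entirely.

```python
# A minimal sketch (not the authors' actual model) of the selection dynamic
# described above: labs that spend less effort per hypothesis publish more,
# and publication count alone decides which labs get copied.
import random

random.seed(1)

N_LABS = 100          # labs competing for publication-based pay-offs
GENERATIONS = 200     # rounds of publish -> select -> mutate
BASE_RATE = 0.1       # fraction of tested hypotheses that are actually true
MUTATION = 0.05       # noise added when a successful lab's effort is copied

def run_generation(efforts):
    """One round: each lab tests hypotheses and counts its publications."""
    publications = []
    for effort in efforts:
        # Low effort lets a lab test more hypotheses per round...
        n_tests = int(100 * (1.0 - effort)) + 1
        # ...but raises its false-positive rate.
        false_pos_rate = 0.05 + 0.5 * (1.0 - effort)
        pubs = 0
        for _ in range(n_tests):
            is_true = random.random() < BASE_RATE
            # Assume true effects are detected 80% of the time; positive
            # results, true or false, are the ones that get published.
            if is_true and random.random() < 0.8:
                pubs += 1
            elif not is_true and random.random() < false_pos_rate:
                pubs += 1
        publications.append(pubs)
    return publications

efforts = [random.random() for _ in range(N_LABS)]  # effort in [0, 1]

for gen in range(GENERATIONS):
    pubs = run_generation(efforts)
    # Selection: the least productive lab adopts (a noisy copy of) the
    # effort level of the most productive lab.
    worst = pubs.index(min(pubs))
    best = pubs.index(max(pubs))
    efforts[worst] = min(1.0, max(0.0, efforts[best] + random.gauss(0, MUTATION)))

print(f"mean effort after {GENERATIONS} generations:",
      round(sum(efforts) / N_LABS, 2))  # drifts downward: corner-cutting wins
```

Run it and mean effort drifts steadily towards zero. No lab in the model is dishonest; the reward structure alone is enough to select for corner-cutting.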

Smaldino and McElreath concluded that when the ability to publish copiously in journals determines a lab's success, "top-performing laboratories will always be those who are able to cut corners", and that this holds regardless of the supposedly corrective process of replication.
Ultimately, therefore, the way to end the proliferation of bad science may not be to nag people to behave better, or even to encourage replication, but for universities and funding agencies to stop rewarding researchers who publish copiously over those who publish fewer, but perhaps higher-quality, papers. This is easier said than done, but the consequences for science, and ultimately for health systems and taxpayers, are clear.
