My mum has used the same potato peeler for 57 years. Okay, my dad has replaced the wooden handle, but the blade is the blade of a peeler bought in preparation for their wedding and life together. I know, flashy. My parents are practical people. Other potato peelers have appeared, only to be quickly exiled to the back of that drawer of random kitchen implements that we all have. Potato peelers are not created equal.
And so it is with randomised trials. To think they are all as good as one another would be careless. While some trials are very good indeed, others are, well, much less so. In 1994 the British medical statistician Doug Altman called the commonplace occurrence of low-quality health research a scandal. Strong words. But is it still a scandal?
Doug did not put numbers on the size of the scandal. My colleagues and I at the University of Aberdeen, University College Cork and Queen’s University Belfast thought numbers might focus minds. We asked: how many trials are bad, how many people take part in them, and how much money is spent on them?
Our results were recently published in Trials. Our starting point was very recent Cochrane systematic reviews, studies that look at large numbers of trials to see what the combined evidence says about the effect of a particular treatment. Cochrane reviews also record review authors’ judgements about what is called the ‘risk of bias’ of each trial in the review. What this means is that they assess in a standardised way the extent to which a trial’s findings can be believed. Risk of bias can be whittled down to three flavours: high, uncertain and low. Low risk of bias is good; high risk of bias is bad. Uncertain risk of bias is exactly that: uncertain. It could be good, it could be bad, but there isn’t enough information to be sure.
To cut to the chase, we looked at 1640 randomised trials spread across 96 reviews from 49 of the 53 clinical Cochrane Review Groups. The 96 included reviews involved 546 review authors. Trials from 84 countries, as well as 193 multinational trials, were included.
Of those 1640 trials, 1013 (62%) were at high risk of bias – bad, in other words. Only 133 trials (8%) were at low risk of bias, or good. The rest (494, or 30%) were uncertain. Bad trials were spread across all clinical areas and all countries. Well over 220,000 people (56% of all trial participants) were in bad trials. Our low estimate of the cost of bad trials was £726 million; our high estimate was over £8 billion. That sort of money would fund the UK’s biggest public funder of trials, the National Institute for Health and Care Research’s Health Technology Assessment Programme, for between a decade and a century.
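To make the arithmetic behind that range explicit: assuming, purely for illustration, an annual HTA Programme budget of roughly £80 million (an assumed figure, not one from our paper), £726 million ÷ £80 million per year ≈ 9 years, while £8 billion ÷ £80 million per year ≈ 100 years; hence between a decade and a century.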
This all points to the scandal being in rude health. Based on our work and that of others, we made five recommendations:
- trials should not be funded unless they have a statistician and methodologist in their team.
- trials should not be given ethical approval unless they have a statistician and methodologist in their team.
- trialists should use a risk of bias tool at the design stage.
- more statisticians and methodologists should be trained and supported.
- there should be more funding for applied methodology research and infrastructure.
We think acting on these could make a difference quite quickly. Recommendations 1 to 3 are almost free to implement.
There is no excuse for a bad trial. When we meet one, we should consign it to the kitchen drawer of irrelevant and annoying oddities. When we design a trial, we need to think about what we can do to make sure it has all the beauty and quality of a trusted device that can do its job to the complete satisfaction of users for years and years and years.
To quote Doug Altman: ‘We need less research, better research, and research done for the right reasons’. Quite so.
This work was part of the Trial Forge initiative to improve trial efficiency.