When I began researching this article, I wondered if the doubters were being unfair. Sure, occasional studies with unusual results get seized on by the media, but maybe they are unrepresentative of the wider field. I discovered this is the first response of nutrition scientists when a journalist tries to ask them, tactfully, if their field is broken.
“You have to be careful about not taking one study and saying that’s the be-all and end-all,” says Louis Levy, head of nutrition at Public Health England, an agency of Britain’s Department of Health. “You have to look at the broader evidence.”
Yet the more I dug into the subject, the more it became clear that, while misleading media coverage is part of the problem, this field’s flaws run much deeper. There are huge amounts of research on diet published every year, a lot of it funded by governments concerned about rising levels of obesity and diabetes.
But even in the pages of respected science journals we find conflicting results relating to much of what we eat and drink, including potatoes, dairy products, bacon, fruit juice, alcohol, even water. And this isn’t just quibbling over details: there is a major fault line dividing the field over whether we should eat food that is low in fat or low in carbohydrates, for example.
Many of the problems stem from the fact that the majority of food studies are of a certain kind that makes them easier to carry out but more likely to lead to false conclusions. To understand their weakness, consider the better kind of research, the randomised controlled trial. Here, doctors ask a random half of their subjects to take a new medicine, while the rest take dummy pills that look just like the real ones so no one knows who is taking what. If those who take the real drug end up in better health, there is a good chance the medicine was responsible.
That kind of study is hard to do for food. Few would agree to change their diet for years based on the roll of a die, and it would be hard to keep secret what they were eating. So instead, nutrition scientists usually observe what people eat by asking them to fill out food diaries, and then track the health of participants.
The big problem with these “observational” studies is that eating certain foods tends to go hand in hand with other behaviours that affect health. Those who eat what is generally considered an unhealthy diet – with more fast food, for instance – tend to have lower incomes and unhealthy lifestyles in other ways, such as smoking and taking less exercise. Conversely, eating supposed health foods correlates with higher incomes, with all the benefits they bring.
These other behaviours are known as confounders, because in observational studies they can lead us astray. For example, even if blueberries do not affect heart attack rates, those who eat more of them will have fewer heart attacks, simply because eating blueberries is a badge of middle-class prosperity.
Researchers use statistical techniques to try to remove the distorting effects of confounders. But no one knows for certain which confounders to include, and picking different ones can lead to different results.
To show just how much conclusions can vary based on choice of confounders, Chirag Patel, an assistant professor in the Department of Biomedical Informatics at Harvard Medical School, examined the effects of taking a vitamin E supplement. He used a massive data set from a respected United States study called the National Health and Nutrition Examination Survey. Depending on which mix of 13 possible confounders is used, taking this vitamin can apparently either reduce death rates, have no effect at all or even raise them.
Patel says this shows researchers can get any result they want out of their data, by plugging into their analysis tools whatever confounders give an outcome that fits their favoured diet, be it low-fat or low-carbohydrate, vegetarian or Mediterranean. “We have large studies that measure all things simultaneously – it’s more possible than ever to cherry-pick,” he says.
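The mechanism behind this is easy to demonstrate. Here is a minimal simulation, using entirely synthetic data and hypothetical numbers (income levels, blueberry habits and heart-attack risks are invented for illustration): a food with no true effect looks protective in a crude comparison, and the apparent effect vanishes once you stratify by the confounder.

```python
# Toy simulation (synthetic data, hypothetical rates): a confounder such as
# income can make a food look protective when it has no effect at all.
import random

random.seed(42)
N = 100_000

def rate(records):
    """Heart-attack rate within a list of (eats, attack) records."""
    return sum(attack for _, attack in records) / len(records)

people = []
for _ in range(N):
    high_income = random.random() < 0.5
    # Blueberry eating tracks income, not health.
    eats = random.random() < (0.7 if high_income else 0.2)
    # Heart-attack risk depends only on income in this toy model.
    attack = random.random() < (0.05 if high_income else 0.15)
    people.append((high_income, eats, attack))

# Crude (unadjusted) comparison: blueberry eaters look protected.
eaters     = [(e, a) for _, e, a in people if e]
non_eaters = [(e, a) for _, e, a in people if not e]
crude_diff = rate(eaters) - rate(non_eaters)
print(f"crude difference in risk: {crude_diff:+.3f}")  # clearly negative

# Adjusted comparison: stratify by the confounder and the "effect" vanishes.
adjusted = {}
for stratum in (True, False):
    sub = [(e, a) for h, e, a in people if h == stratum]
    adjusted[stratum] = (rate([p for p in sub if p[0]])
                         - rate([p for p in sub if not p[0]]))
    label = "high" if stratum else "low"
    print(f"income={label}: adjusted difference = {adjusted[stratum]:+.3f}")
```

Stratifying on the one true confounder rescues this toy example; Patel's point is that in real data sets no one knows which of the many candidate confounders to include, so the adjusted answer moves with the analyst's choices.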
Another source of error is known as publication bias: studies that show interesting results are more likely to be published than those that do not. So if two studies look at red meat and cancer, for instance, and only one shows a link, that one is more likely to be published.
This bias is present at nearly every stage of the long process from the initial research to publication in a scientific journal and ultimately to news stories, if journalists like me write about it. “What you see published in the nightly news is the end result of a system where everyone is incentivised to come up with a positive result,” says Vinay Prasad at Oregon Health and Science University.
Prasad is an oncologist who has highlighted the lack of evidence behind certain cancer medicines. But, he says, nutrition research is in a worse state than his own field. “And they don’t seem to want to improve themselves.”
It is impossible to quantify exactly how much confounders and publication bias are distorting the field. But they are enough of a problem that we should be sceptical of all dietary advice, says data scientist John Ioannidis at Stanford University, in California.
Out of the roughly 1 million papers that have been published on nutrition, only a tiny fraction, perhaps a few hundred, are large, good-quality randomised trials, says Ioannidis. The rest are mainly observational studies, small or poorly designed trials, opinion pieces, or reviews that summarise the results of other papers, with all their potential flaws. Even national dietary guidelines are based on this kind of work.
And what do the few hundred decent-sized, randomised trials find? Here is the clincher: when the trials test the dietary recommendations based on observational studies, the strategies almost never succeed in extending lifespan. The trials either find no effect, or one that is much smaller than predicted by observational studies – so small as to be practically meaningless.
By Clare Wilson
March 30, 2020
Source: SCMP.com