BTN Explainers
11 Jan 2022

How to Read Medical Research Without a PhD



Why understanding science has never mattered more, and why AI may be the biggest test yet.

If you feel as though you’re living in a world where every day brings a new “breakthrough,” you are not alone. Coffee lengthens life. Coffee shortens life. Wine is good. Wine is bad. Some mornings it feels as though the only consistent finding in modern health research is that everything is simultaneously curing and killing us.

“Some days it feels as though everything is both curing and killing us.”

Meanwhile, if you’ve ever tried to read an actual scientific paper, not the news headline, but the PDF tucked behind the link, you may have felt the sudden urge to lie down. The abstract is already dense. The methods section reads like a cross between a legal contract and an electrical engineering manual. And the conclusion often appears to contradict the headline that brought you there.

Yet understanding medical research has never been more important. The NHS is stretched, private wellness is exploding, and AI now writes health advice as confidently as a Harley Street consultant with a double-barrelled surname. The result is a public trying to make sense of claims that are louder, faster and more contradictory than ever before.

So the question is simple: How can an intelligent, non-specialist reader make sense of it all? The answer is simpler still: by learning how to read research the way scientists wish everyone would.

But before we get to that, let’s start with a story.

The Problem With Headlines

Some years ago, Dr Ben Goldacre, a physician, epidemiologist, and professional thorn in the side of bad research, pointed out something painfully obvious: most scientific breakthroughs reported in the news are nothing of the sort. Goldacre made a career (and several very good books) out of showing how poor studies become sensational headlines. A trial on eight undergraduates is described as “Scientists prove.” A biochemical mechanism observed in a petri dish is reported as “Cure discovered.” A correlation between two lifestyle habits suddenly becomes a prescription for national behaviour change.

He called it, bluntly: bad science. And it is everywhere. Newsrooms, understandably, are drawn to stories with punch. “Promising signal in small pilot study” does not sell as many papers as “New cure for anxiety discovered.”

But the result is a public, all of us, trying to make informed decisions in a fog of enthusiasm and exaggeration.

Why Reading Research Looks Hard (But Isn’t)

The obstacle for most people is not intelligence. It’s orientation. Medical studies use unfamiliar language, operate on unfamiliar rules and measure outcomes in ways that do not always translate easily to daily life.

But look a little closer and you notice something liberating. Nearly every study, no matter how complex, follows the same structure. And nearly every misunderstanding comes from the same handful of pitfalls Goldacre wrote about repeatedly:

  • confusing correlation with causation

  • mistaking small studies for big truths

  • ignoring the control group

  • elevating mechanisms to miracles

  • cherry-picking the one favourable study among many neutral ones

Once you see these patterns, you can’t unsee them. Scientific papers stop being intimidating and start being readable, even interesting.

“Once you learn the patterns, even dense papers start making sense.”

The Evidence Pyramid: Your Quiet Secret Weapon

Imagine standing back from the noise and looking at research not as a thousand competing voices but as a hierarchy. At the top are systematic reviews and meta-analyses, the kind of papers that combine results from many trials, smoothing out the flukes of individual studies. Below that are randomised controlled trials, the best way we have of showing cause and effect. Then come cohort studies, case series, mechanistic papers and expert opinion.

“A mechanistic rat study is a hypothesis — not a guarantee.”

This is known as the evidence pyramid, and once you absorb it, everything becomes clearer. If an article tells you that “Scientists have proven X,” but the study was based on 17 volunteers in a university basement with no control group, you know something is off. If a product’s marketing rests on a mechanistic paper showing what happens to a molecule in a rat brain, you know you’re looking at a hypothesis, not a guarantee.

Goldacre’s plea was consistent: ask what kind of study you’re being shown, and the rest will follow.

Real People, Real Outcomes

One of the most important questions you can ask of any health study is also one of the simplest: What did they measure? Some trials measure what people actually care about: symptoms, pain, sleep, mobility, quality of life. Others measure something more obscure: a protein shift in the bloodstream or a marker that may or may not translate into actual benefit. The former matters. The latter is interesting, but rarely decisive. This distinction, between meaningful outcomes and surrogate outcomes, might be the single most important thing the public never gets told.

“What did they measure? If it isn’t meaningful to real people, it isn’t meaningful.”

And Then Came AI

If Goldacre thought bad science spread quickly in the age of newspapers, he may be relieved he published Bad Science before AI hit the mainstream.

AI can synthesise information at a speed no human can match. It can produce explanations that sound polished, clear and authoritative. It can turn a mechanistic mouse study into “strong evidence” with a single misplaced adjective. It can misinterpret an observational study as causation simply because the training data behind it did too. And, on a bad day, it may even fabricate a study entirely, a sin Goldacre would surely have enjoyed dissecting with alarming precision.

“AI doesn’t lie — it just repeats the internet’s confusions with incredible confidence.”

AI doesn’t mean to get things wrong, of course. It simply reflects patterns. If the internet is full of overclaim, AI produces overclaim. If the internet confuses mechanism with outcome, AI will do the same. If the internet cannot tell the difference between a pilot study and a phase III trial, AI may struggle too.

And this is why understanding research yourself is no longer optional. AI can help you read faster. It cannot help you think better without guidance.

The Good News: You Don’t Need to Be a Scientist

Once you understand the shape of research, the hierarchy, the common traps, the difference between causation and correlation, something remarkable happens. You start reading studies with the same calm detachment as someone checking the weather forecast.

You see the patterns. You recognise the limitations. You understand when a headline is justified and when it’s a stretch.

You also become better at spotting when something is genuinely promising. Not because a newspaper told you so. But because the evidence is strong, consistent, repeated and meaningful.

The Future: Clarity as a Form of Safety

Medical research is accelerating. Innovation is accelerating. AI is accelerating. But humans still need the same thing they always did: clarity. Goldacre’s message, sharpened now by the arrival of AI, is not that science is untrustworthy. It is that science must be read properly. And reading it properly is not about expertise. It is about asking the right questions.

Not: “Is this new?” But: “Is this meaningful?”

Not: “Is this exciting?” But: “Is this strong?”

Not: “Does this fit my bias?” But: “Where does this sit in the evidence pyramid?”

“Science isn’t untrustworthy — it’s just often read badly.”

When you can answer those questions, headlines stop being confusing. Wellness claims stop being overwhelming. AI-generated health advice stops being intimidating.
And you discover, as Goldacre intended, that the world is full of good science; you just need to know how to see it.
