Way back in government class, we all learned how a bill becomes a law. (If it’s been a while, this video provides a quick refresher.) But legislators are not alone in their task to turn intriguing ideas into meaningful documents that benefit the public; scientists and doctors do it, too. And just like Bill, thumbing a ride to Capitol Hill, a scientist’s idea has a long way to go before it becomes what we call “evidence.”
In the past few weeks, I’ve heard some pretty ridiculous accusations about scientific evidence and the people who make it. And it’s funny, because they seem to come mostly from people who have no idea how the process works. Give me a couple hours, and I can have a website up and running that says anything. It could say that consuming carrots causes chlamydia (it doesn’t), or that my new detox system will make you look ten years younger. I can write anything I want. I can make up testimonials from fake customers, generate my own 5-star reviews, and make whatever ridiculous claims I want to try to sell you my product. And I could make a lot more money doing that than by practicing medicine or doing research. Strangely, it’s people with websites like these who assert that physicians are just minions of “Big Pharma” (whatever that means), or that doctors are closed-minded, kickback-motivated, toxin-pushing racketeers. But that’s not how science works.
The process of turning an idea into evidence starts with a question. But lots of people have questions, and most of them are fleeting notions that never go anywhere; it takes a lot of effort to turn them into outcomes. (I’m going to focus on medical literature, but the same principles apply to questions about why all the bees are disappearing or whether green fire trucks are safer than red ones.) Many times, the question has already been studied—but sometimes the studies weren’t very good, weren’t large enough to show a result, looked at a different population of people, or didn’t address a specific aspect of the idea. In these cases, it may be worth answering the same question again. For now, let’s assume a scientist has an idea for a new medicine that he hopes will help people with a certain disease. In order for that medicine to be approved, we need evidence to show that it is both safe and effective for the illness it is designed to treat.
Once someone decides to commit to testing a question in the form of a scientific study, they start by developing a hypothesis—essentially an educated guess at the answer to the question. “Educated” is important, because an understanding of the field helps to design a study that answers the question well. Trying to make an educated guess about a topic you don’t understand is like playing darts on a roller coaster. The hypothesis could be that a new lotion will make wrinkles disappear, or that it won’t make a bit of difference. It doesn’t really matter which side you choose because science isn’t about proving what you already believe; it’s about testing to see whether the thing that you hypothesized is true–and being open to disproof. A good hypothesis should be testable (otherwise, it’s useless). It should be relevant (otherwise, it won’t matter). It should be plausible (otherwise, you probably wouldn’t bother doing a scientific study).
The next step in the process is to find funding for the study—and it’s an important step. The funding can come from a university, a corporation, the government, or an individual—but without funding, your question will remain a question forever. Many critics of science point to the fact that pharmaceutical companies fund research studies as “proof” that science can’t be trusted. Sure, it’s a conflict of interest, but the company that stands to make a lot of money from a new drug is also probably the entity most likely to front millions of dollars for the clinical trial. It’s a good idea for someone to go back and verify their results, but the fact that a study was funded by a drug company certainly isn’t a sufficient reason to reject it.
In those instances in which the question involves a brand new drug, someone has to make it. Depending on the circumstances, that may involve harvesting it from a natural source, modifying a naturally occurring substance, altering an existing drug molecule, or synthesizing an entirely new molecule. No matter how you go about it, it’s not cheap. Once a new drug is created, it typically goes through multiple rounds of testing in test tubes and animals. Maybe it kills cancer cells in test tubes; so do bleach and flamethrowers. Maybe it makes little white mice even skinnier; that doesn’t mean it will do the same for you. As my medical school immunology professor used to say, “People aren’t mice. We don’t have tails.” Animal studies don’t always translate directly into human results. At some point, we have to test new drugs on the people they are intended to treat.
Any study that involves human participants must be approved by an Institutional Review Board—a group tasked with ensuring that the study meets certain ethical requirements. Participation in a clinical trial has to be voluntary, informed, and confidential. There are a lot of studies that would provide interesting and useful information, but that would be unethical to perform. We’ve gotten this wrong in the past, and this step helps to keep us from doing it again.
The first trials are usually small and safety-related. “Normal” people (whatever that means) are given different doses of the medication and observed for side effects. If terrible reactions happen at a high frequency, that’s usually where it ends. But if the research volunteers seem to do alright, it’s time to test the medication on people who have the disease it’s intended to treat.
This is typically done in the form of a randomized controlled trial (RCT). There’s a lot that goes into designing one, and they can be ridiculously expensive and time-consuming. I’m about to over-simplify things. Essentially, the researchers design a study that compares two or more things (new drug vs. existing drug, new drug vs. placebo, etc.).
Each volunteer is randomly assigned to either group A or group B (more groups are possible, but I’ll keep it simple). The process of randomization is designed to ensure that the two groups are as similar as possible in every respect except for the difference being tested. Ideally, the two groups would be identical with regard to average age, sex, racial/ethnic distribution, income, existing health problems, and any other factors that may influence the results. This is crucial for eliminating confounding factors: variables that could explain the results just as well as the treatment does. (You can imagine, if a group of elementary school kids taking drug A were compared to a group of nursing home patients taking placebo, it would be hard to tell which factor really made the difference.)
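That assignment step can be sketched in a few lines of code. This is a toy illustration, not real trial software; the participant names and the simple shuffle-and-split scheme are assumptions made up for the example.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to group A or group B.

    Shuffling the whole list and splitting it in half keeps the two
    arms the same size, so chance imbalances in group size don't
    muddy the comparison.
    """
    rng = random.Random(seed)
    shuffled = participants[:]  # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"A": shuffled[:half], "B": shuffled[half:]}

volunteers = [f"participant-{i}" for i in range(1, 101)]
groups = randomize(volunteers, seed=42)
print(len(groups["A"]), len(groups["B"]))  # 50 50
```

Real trials often use fancier schemes (stratified or block randomization) to guarantee balance on known factors, but the principle is the same: chance, not the researcher, decides who gets what.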
“Controlled” refers to the one variable that is intentionally different. The “control” group typically gets either a placebo or an already-tested medication that serves as a comparison. The “treatment” group receives the experimental drug (or whatever intervention is being tested).
Another key concept of RCTs is that they should, whenever possible, be “double-blinded.” This means that neither the patient nor the physician treating them knows which group they are in. Instead, another person or team is tasked with giving them the treatment for the group to which they were assigned. Drug companies go to great lengths to make placebo pills that look/taste/feel/smell exactly the same as the actual medication. However, some situations make it difficult or impossible to design a double-blinded trial—for instance, it wouldn’t be hard for a patient with anxiety to figure out whether they had received weekly counseling or a pill. [Hopefully.]
At the end of the trial, the data are compiled and some number-nerd runs the statistical analysis (again, I’m over-simplifying). If everything was done properly, we can be quite confident that any difference between the two groups was due to the intervention.
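To make that statistics step a little less hand-wavy, here is one simple way to check whether a difference is bigger than chance: a permutation test. The outcome scores below are invented for the example, and real trial analyses are considerably more sophisticated, but the core question is the same: how often would a difference this large show up if the drug did nothing?

```python
import random

def permutation_p_value(treatment, control, trials=10_000, seed=0):
    """Estimate how often a difference in group means at least as
    large as the observed one would arise by chance alone."""
    rng = random.Random(seed)
    observed = (sum(treatment) / len(treatment)
                - sum(control) / len(control))
    pooled = treatment + control  # new list; inputs are untouched
    n = len(treatment)
    extreme = 0
    for _ in range(trials):
        # Reshuffle the pooled scores into two arbitrary groups.
        rng.shuffle(pooled)
        diff = (sum(pooled[:n]) / n
                - sum(pooled[n:]) / (len(pooled) - n))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / trials

# Invented outcome scores (say, symptom improvement) for illustration.
drug = [8, 9, 7, 10, 9, 8, 9, 10]
placebo = [5, 6, 4, 6, 5, 7, 5, 6]
p = permutation_p_value(drug, placebo)
print(p)
```

With these numbers, a random reshuffling almost never produces a split as lopsided as the real one, which is the statistician’s way of saying the difference probably isn’t luck.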
If a new medication makes it this far through the process, the clinical trials are submitted for publication. The journals that publish these studies are “peer-reviewed,” meaning that every article is meticulously examined by a panel of experts in the specific field to ensure that the study was well-designed, that ethical principles were not violated, and that the research methodology and statistical analysis were sound.
If the trials show the new drug to be safe and effective, and the results survive the peer-review process and are published, the pharmaceutical company can apply to the FDA for approval to produce and market the drug. If the evidence is determined to be convincing, the drug will be approved. Maybe. But it’s not over. Monitoring for safety continues after a drug’s approval as well, because some adverse effects are so uncommon that they don’t show up until a medication is used on a large scale. If, at any time, a medication is felt to be unsafe, it may be studied further or pulled from the market. That’s another thing about science–it’s always open to proving past conclusions wrong.
So that’s how a pill becomes a peer-reviewed double-blinded randomized controlled trial. Does that make science infallible? Absolutely not. But it’s a heck of a lot better than my 2-hour website to sell you my new 21-day detox program.
I have never participated in a randomized controlled trial or published a peer-reviewed article. (I’m glad other people do, but it’s just not my thing. I do have an understanding of how they work, and that they are just a bit more complex than I described above.) I have never received any payments from pharmaceutical companies. I am currently employed as a resident physician, making just barely over the relatively meager national average salary. I have received $12.08 (at the time of publication) from Amazon.com as compensation when people click on links on this site. I don’t really have a detox program to sell you.