Scientific Research, Migraine Prevention & the Decline Effect
The more migraine prevention medications I’ve tried and failed, the more I’ve wondered why researchers get good results with these drugs, yet I (and many other chronic migraineurs) don’t. Of course it’s possible they just aren’t addressing the particular process in my brain that brings about a migraine attack. But it has increasingly seemed like something else might be going on. While reading an article in the New Yorker, I came upon a phenomenon called the decline effect that seems to provide an explanation. The whole idea behind the scientific method is to produce outcomes that can be replicated. But the decline effect is the observation that the more a finding is studied, the harder it becomes to replicate.
In the 1980s, psychology researcher Jonathan Schooler was a graduate student when he made an exciting scientific discovery about language and memory. Like most scientists, he continued conducting experiments to replicate his findings, expecting the result to become easier to prove in subsequent studies. But something unexpected happened. Schooler realized it was actually becoming harder to prove his hypothesis, and that the initial result was becoming less statistically significant with each replication. Instead of keeping quiet and continuing to promote his initial findings, which had been cited more than 400 times in highly respected research journals, he spoke up and initiated a public discussion.
One important aspect of the decline effect is publication bias: the tendency of peer-reviewed journals to favor positive results. This is a problem for those of us turning to medical research for possible treatments, because the studies we can actually read are skewed toward success. Once an idea has become widely accepted, anything disproving it is unlikely to find a home in a journal. We have no real way of knowing how many negative or neutral results go unreported that might give us a clearer, more complete picture.
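For readers who like to see the mechanism in action, the idea above can be sketched with a small simulation. This is only an illustration under invented assumptions (a small true drug effect, an arbitrary cutoff standing in for statistical significance), not a model of any real migraine trial: if journals only publish the studies that happened to show a big effect, the published literature overstates the truth, and later replications will, on average, "decline" back toward it.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # assumed small real benefit, in standard-deviation units
N = 30              # patients per study arm (hypothetical)
STUDIES = 2000      # number of simulated studies

def run_study():
    """Simulate one placebo-controlled study; return the observed effect."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [run_study() for _ in range(STUDIES)]

# Publication bias: only "impressive" positive results make it into print.
# The 0.5 cutoff is arbitrary, standing in for a significance threshold.
published = [e for e in effects if e > 0.5]

print(f"true effect:               {TRUE_EFFECT}")
print(f"mean across all studies:   {statistics.mean(effects):.2f}")
print(f"mean of published studies: {statistics.mean(published):.2f}")
```

The average over all simulated studies lands near the small true effect, while the average of the "published" subset is several times larger. Anyone who replicates one of those published studies should expect a weaker result, which looks exactly like a decline.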
Yet publication bias is an incomplete explanation for this phenomenon. It does not account for results that fail to stand up to scrutiny but are never even submitted for publication. One scientist suggests this could stem from scientists’ own bias against documenting anything but positive data. It’s not that they intentionally discard negative or neutral results, but rather that the culture of science is to work toward proving something; anything that does not prove the hypothesis is considered of little importance. Some believe scientists unconsciously find a way to prove their hypotheses, scouring their data in hopes of finding something that supports their beliefs. Like the rest of us, scientists enjoy being right and intensely dislike being wrong.
A more insidious explanation is that when pharmaceutical companies sponsor clinical research, they want to call attention only to studies showing their products are effective and safe. They have a strong incentive to bury everything else.
Randomness has to be considered an additional contributing factor to the decline effect. Unfortunately it’s something that can’t be overcome with more openness or stricter research parameters. In the 1990s, a neuroscientist conducted a rigorously controlled experiment at three different sites to see if he could produce similar results by controlling every variable that might lead to differences. He ordered mice from one company and had them shipped by the same method on the same day. He fed the mice the same food and used the same bedding in their cages. He even made sure the handlers at all three sites used the same brand and type of surgical gloves. Yet the results in the three locations varied wildly. If someone goes to that effort to ensure reliability and still can’t get consistent results, what does that say about most scientific experiments and whether their results are anything more than happenstance?
We don’t really have many options for gathering information about possible migraine prevention medications aside from reading the medical research. However, it’s important to view the results with skepticism and to remember that just because a medication is thought to bring about a certain result doesn’t mean it actually will.