How Biased Research and Incorrect Data Hurt the Chronic Illness Community
A recent Harvard University study into how weather — in this case rain — affects chronic pain, published on December 13, 2017, concluded that there was little or no correlation between inclement weather, with its air pressure and temperature changes, and pain flares. While the study's conclusions allowed that more research would be valuable, they also carried the suggestion that patients might be perceiving “patterns (e.g., an association between rainfall and joint pain) where none exist.”
On the other hand, anyone with chronic pain knows that clouds rolling in and the air pressure shifting are signals to prepare for a bad day. For some, myself included, overcast conditions are generally accompanied by worsening headaches or a migraine; for others, back and joint pain, cramps and fatigue might increase. This is widely recognized in the chronic illness community. So why is there such a disparity between patients’ testimony and the findings of studies such as this?
This can be answered by examining the metric the study used: the researchers gathered their results by comparing the number of outpatient appointments related to back and joint pain on rainy days with the number on days with fairer weather.
The assumption here, of course, is that people experiencing heightened pain would visit the doctor or hospital. However, this assumption betrays a lack of knowledge on the part of the researchers as to the habits and routines of chronic patients. What it doesn’t take into account is that the vast majority of chronic patients have devised their own coping patterns, in lieu of meaningful medical intervention, to deal with flares. If anything, chronic patients are less likely to seek medical attention during flares*. Rather, the study was undertaken on the back of able-bodied assumptions about seeking medical care and without the involvement of chronic patients themselves.
(*Curiously, the results suggested this too: “the difference in the proportion of patients with joint or back pain between rainy days and non-rainy days was significant, but the difference was in the opposite anticipated direction.”)
Unhappily, this speaks of a wider issue in the mainstream study of chronic illness. This kind of study could have been conducted under different conditions: how many appointments are cancelled on rainy days, whether there is an increase in appointments for chronic conditions in the week following precipitation, or actually speaking to patients rather than reducing them to numbers through a door. But it still would have been testing something we already know.
Rather, it seems, thanks to a rigid set of assumptions and biases on the part of the researchers, that studies are often undertaken with the intention of disproving the assertions of those with chronic pain rather than getting to the root of the problem. This creates an obstacle to objective results, as researchers are less open to methods and results that contradict their original assumption.
When the results of the study proved contrary to the original hypothesis, the researchers concluded that those with pain must be wrong instead of questioning whether, in fact, their metric was flawed.
“No patients were involved in setting the research question or the outcome measures, nor were they involved in developing plans for the design or implementation of the study. No patients were asked to advise on interpretation or writing up of results. There are no plans to disseminate the results of the research to study participants or the relevant patient community.” – Harvard University Study, patient involvement
This kind of thinking is nowhere more obvious than in the 2011 PACE study, a publicly funded research project into viable treatments for chronic fatigue. Except it wasn’t. Now revealed to be fraudulent — though the results are still accepted by many doctors — the whole study was rife with conflicts of interest, improper practices and biases. Headed by medical psychologists, the study set out to measure the efficacy of cognitive behavioral therapy (CBT) and graded exercise therapy, two much-maligned “treatments” in the chronic illness community. These treatments, championed by the leaders of the study, were put into practice on the assumption that chronic illness was all in the mind — the result of psychological disorders and deconditioning.
When the results didn’t bear this out, the study drastically changed its criteria midway through. By significantly lowering the threshold for recovery, the study allowed its success criteria to overlap with its entry criteria. This meant that patients could decline significantly over the course of the study and still be counted as having improved.
Much as the Harvard study failed to recognize that the assumptions driving it were incorrect, the leaders of the PACE study scrambled to alter the parameters in such a way that they could maintain their own beliefs — a fact that was kept hidden from public view until late 2016. In the meantime, this formed the basis of treatment in the UK for years, damaging thousands of patients, all the while dismissing concerns over its legitimacy as mere cries for help from patients. This kind of confirmation bias and groupthink is hampering progress on a still deeply misunderstood group of illnesses, as well as holding the UK back while other countries slowly begin to accept chronic conditions as physiological.
It wasn’t just the study and its acceptance by NICE (the UK’s National Institute for Health and Care Excellence) that have been damaging, however. That The Lancet circulated the findings and also dismissed calls for further investigation into the study has helped to reinforce the already problematic attitudes held by doctors in the UK.
By comparison, the Harvard University study is relatively small, but that doesn’t mean it won’t have an effect on patients. It is yet another flawed source of information that can empower doctors to ignore the physiological foundations of chronic illness, giving them more room to label us, inappropriately, as mentally ill instead. And as more funds are directed toward flawed studies built on arrogant assumptions, these attitudes become ever more ingrained in medical practice. The more you empower ignorance, the harder it becomes to dislodge.
This all highlights the need for unbiased research into chronic illness and for better funding for those doing meaningful research into the host of diseases the medical industry places under the umbrella of “chronic fatigue syndrome.” The Harvard University study ends with a note that “A relation may still exist, and therefore larger, more detailed data… would be useful.” But the problem with the study wasn’t a lack of data; it was that the wrong data was collected in the first place.
It’s true more research is needed, but while studies continue to look in the wrong place for answers and continue to exclude patients and patient testimony, the big questions are never going to be answered. Until researchers can enter into studies with a real idea of what chronic patients go through, the results are unlikely to be groundbreaking. After all, how can you make a measurement if you don’t know what you’re measuring?