Between Big Sugar's duping of the American public, shocking moral fact-checks on some of history's most groundbreaking experiments, and the failure of study after study we'd accepted as truth, science is teetering on the verge of being, well, fucked.
Actually, it might be too late.
Those fears drove University of Oregon psychologist Sanjay Srivastava, Ph.D., to design the fictitious course “Everything Is Fucked.” The imagined ten-week seminar, written up as a syllabus on his blog, is Srivastava’s version of a reality check for young scientists: Do you seriously think you’re going to cure the world’s diseases, engineer away climate change, and elevate humanity through technology? You won’t — because everything is fucked.
But that’s the problem with science, Srivastava argues: We’re so intent on the results that we don’t scrutinize the path that produced them, and that’s fucked up. Would CRISPR have become 2015’s biological buzzword if academic journal editors hadn’t decided gene editing papers were worth publishing? What else did we find out (or not find out) about Mars that wasn’t deemed publishable? Can we even trust the data showing that pandas and whales are thriving? We don’t know, and not only is it fucked up, it raises questions about the foundations of modern science altogether.
The “everything” in “Everything Is Fucked,” Srivastava explains, refers to scientific methods, the ways researchers acquire funding, the universities that support research, and our channels for publishing and disseminating scientific knowledge.
He defines something as fucked if “it presents hard conceptual challenges to which implementable, real-world solutions for working scientists are either not available or routinely ignored in practice.” For Srivastava, the way we discover and share scientific knowledge is so riddled with these problems that there’s no way we can fully trust what we believe to be true.
Take week eight’s theme, “Scientific publishing is fucked.” In the science community, published papers in academic journals are professional currency; everyone’s striving to collect them, and anyone who doesn’t have any is professionally broke. Academic journals — big-name, respected publications like Nature, Science, and the New England Journal of Medicine — are academia’s banks. In Srivastava’s view, this is troublesome, because it means they’re in control. “The problem is, there’s this incentive to find things that meet the publishable standards,” he says, referring to science that shows positive results, like a drug that promises a cure. If journals only pay attention to the “success” stories, it means they’re ignoring all of the research that doesn’t produce sexy results. In turn, the temptation to falsify results and fudge graphs only grows stronger.
Scientists behind the unsexy studies are screwed: If they don’t get published, they don’t get tenure at research institutions or grants to continue their work, and their scientific pursuits inevitably come to a standstill. But does their work deserve to come to an end? Most of those studies are not “wrong” or “useless,” because negative results are often just as useful for advancing scientific discovery as positive ones. But they aren’t treated that way because they don’t make for good publishing. “That’s really where it starts: Publication decisions becoming hinged, not on whether it’s a good question, but on whether the answer comes out, one way or another,” Srivastava says. The cycle continues, and fucked-ness persists.
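The dynamic Srivastava describes can be made concrete with a toy simulation (an illustrative sketch of my own, not anything from his syllabus): run thousands of studies of an effect that is actually zero, let journals "publish" only the ones that clear the conventional p < 0.05 bar, and look at what the published record then claims.

```python
import random
import statistics

def run_study(n, true_effect=0.0):
    """Simulate one study: sample n observations, run a two-sided z-test against zero."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    se = 1.0 / n ** 0.5                  # sigma assumed known = 1, so a simple z-test
    significant = abs(mean) > 1.96 * se  # two-sided p < 0.05
    return mean, significant

random.seed(42)
results = [run_study(n=20) for _ in range(10_000)]
published = [mean for mean, sig in results if sig]  # journals keep only the "hits"

# Roughly 5% of these null studies come out "significant" by chance alone...
print(f"fraction published: {len(published) / len(results):.3f}")
# ...and those published studies report a sizable average effect that does not exist.
print(f"mean |effect| among published: {statistics.mean(abs(m) for m in published):.2f}")
```

Even though the true effect is exactly zero, the filtered literature still "finds" a healthy-looking average effect, which is the inflation that publish-the-positives incentives feed on.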
He has other, finer bones to pick. In week six, he discusses the issue of replicability, which is the idea that an experiment needs to be repeatable. If it’s not, the data it produces is, statistically, a one-off, and therefore pretty much useless. And yet, there are an alarming number of scientific studies out there that are not replicable, cited by everyone from Big Pharma to the federal government. He claims that the scientific profession as a whole is fucked in week ten because the pressure on scientists to publish more papers, faster, leads to smaller sample sizes, which in turn lead to weaker, less statistically sound results. The scientists who design better, more thorough experiments may be doing better science, but because they aren’t publishing as many studies, they’re ultimately viewed as less accomplished. “Where do truth and publishability depart?” Srivastava ponders. “When it comes time to hire people, we’re still going to hire someone with more papers.”
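The sample-size point also lends itself to a short simulation (again my own sketch, not from the syllabus): give small and large studies the same modest true effect and count how often each detects it at the conventional p < 0.05 threshold.

```python
import random

def power(n, true_effect, trials=2_000):
    """Fraction of simulated studies that reach p < 0.05 (two-sided z-test)."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n
        if abs(mean) > 1.96 / n ** 0.5:  # sigma assumed known = 1
            hits += 1
    return hits / trials

random.seed(0)
print(f"power at n=20:  {power(20, true_effect=0.3):.2f}")   # well under half
print(f"power at n=200: {power(200, true_effect=0.3):.2f}")  # close to 1.0
```

A tenfold-larger sample takes far longer to collect, but it turns a worse-than-coin-flip chance of detecting a real effect into near certainty; the pressure to publish fast pushes scientists toward the small-n end of that trade-off.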
If it sounds like a massive clusterfuck, then Srivastava has made his point. Does that mean the pursuit of science in 2016 is a completely futile endeavor?
No, Srivastava surprisingly argues. Look to week seven — “Interlude” — for a scientific salvation, right about the time when his fictional students consider switching to a liberal arts degree, which can be distilled to this: Sure, everything is fucked, but it’s up to the next generation of scientists to make science less so. There’s hope, and the replicability crisis might ironically offer it by serving as a cautionary tale about the arrogance of science. The rest of us should at least be calmed by the fact that sexy results are garnering more raised eyebrows and head-scratching than ever before.