Last year, Nick Franks published a paper that kicked off an almighty row over the biology of what happens to the brain during sleep (Nat. Neurosci., DOI: 10.1038/s41593-024-01638-y).
Previous studies in mice had consistently shown that the brain flushes out harmful proteins, such as those related to dementia, during sleep. But Franks, a biophysicist at Imperial College London, and his colleagues concluded the opposite; their data suggested that this process of brain clearance is in fact reduced during sleep. It’s an important detail to get right because the answer could greatly influence the success of dementia drug development.
But Franks’s findings were not welcomed by Maiken Nedergaard, a neuroscientist at the University of Rochester and a lead proponent of the brain clearance theory.
“His is the only paper that has looked at sleep and claimed that there's more clearance in wakefulness,” she says. “It’s from someone who doesn’t know the field well.” She added that Franks hasn’t properly understood the previously published work. “He is citing the literature wrong,” she says. “It’s plain wrong.”
Franks says that Nedergaard has publicly and vociferously condemned his work in an unnecessarily personal way.
She cowrote an open letter to the editors of Nature Neuroscience, which the journal published, to criticize them for publishing his work. In it, she and her colleagues wrote that Franks’s paper has “several important flaws” and that “consequently, the data fail to support meaningful conclusions.”
Franks, for his part, rejects her critiques and has offered plausible responses to them, which Nature Neuroscience has also published. Crucially, though, he remains open to the possibility that he is wrong.
At a certain point, however, the question is not just about who is right and who is wrong—as fascinating as that question may be, scientifically speaking. Instead, it morphs into a much broader and existential question about how scientists should behave when their work is challenged by the results of another researcher.
Do they remain open to the possibility that they’re wrong? Are they engaging with rival findings in good faith? How can they prevent themselves from becoming too entrenched in their own theories? How can they genuinely engage with alternative theories without feeling like they’ve abandoned their life’s work?
Perhaps most important, Franks says, is the question of whether they are staying within the confines of polite argument.
“There are certainly ethical issues at play here,” says Thomas Powers, who heads up the Center for Science, Ethics, and Public Policy at the University of Delaware. “No matter the case in question, we can assume that every scientist is self-interested, at least in part. They would never want their study to be retracted. They don’t want to be proven wrong in front of their peers. There are so many social elements to consider, and sometimes people forget that scientists are still human beings and they have our normal faults.”
When scientists such as Franks and Nedergaard fundamentally disagree on what they think the science is telling them, they come head to head with what Powers terms the “trust-skepticism paradox.”
“Every time you read a paper in a scientific journal, you trust that the experiment was truly done and the results were accurately recorded. You trust that people aren’t making things up,” he says. “Don’t get me wrong, falsification does sometimes happen, but by and large, you trust that people have done what they’ve said they’ve done to get the data.”
On the other hand, “the other virtue of science is of course skepticism. You ought to be skeptical of your own results and anybody else’s. And so scientists live in this strange world where they are expected to be trusting and at the same time skeptical, and that can be where some confrontation creeps in.”
Franks learned to be skeptical of his own data and those of his colleagues as a young researcher in the 1970s at King’s College London, where he collaborated with another scientist, William Lieb. It was generally accepted at the time that anesthetic drugs somehow influenced the structure of lipids. Franks thought the lipids would get thicker, and Lieb thought they’d get thinner. “Lieb and I worked together for 30 years, until his death, and there was not one day when we didn’t have an argument,” Franks says. “Everything he said, I would disagree with, and everything I said, he would disagree with. We’d argue it out and challenge each other with respect.”
That kind of atmosphere, in which both collaborators feel comfortable questioning each other, is an important one for scientists to foster, says Franks. Their initial experiments were inconclusive. “That came as quite the blow,” he says. They spent another 2 years refining their methods to finally figure out which of them was correct. Eventually, they realized that their experiments were not flawed; neither of them was right. And so they published a paper that contradicted the prevailing theory that lipids were affected by anesthetics.
Franks remembers receiving letters from peers around the world telling him that he and Lieb were wrong. That was a lot to deal with as an early-career researcher, he says. But he took comfort from the fact that he and Lieb had been bickering with each other long before their manuscript went to press. “There was hardly a single argument that somebody raised that we hadn’t played out ourselves,” says Franks. In the end, the prevailing wisdom shifted to accept that anesthetics don’t change the structures or properties of lipids.
That’s why it’s so important to surround yourself with colleagues who you respect and who are willing to disagree with you but stay friends, Franks says. That becomes more difficult as you become more senior because it’s harder for junior researchers to tell their bosses that they’re wrong. “Dominant scientists who work on their own, or with acolytes, have a huge disadvantage, and they might not even realize it,” he says.
If you can climb the academic ladder without forfeiting authentic scrutiny from the people in your laboratory, you’re also less likely to become entrenched in your own theories, says Franks.
“I have always resisted having my research work towards one theory or another because I notice that when people do that, they have confirmation bias, and the way they defend their theories becomes very personal,” he says. “It can almost be like defending an article of religious faith rather than a calm presentation of facts.”
Part of the problem, Franks says, is that the way we fund science has created a set of perverse incentives whereby people are encouraged to defend their theory at all costs and to shun rival ideas.
“In the past, people would break open a bottle of champagne when they had an exciting result. Now, they break open a bottle when they get a grant. Science has become more expensive, especially animal studies. Once you’ve got a grant, it’s a precious thing, and it allows you to do science,” Franks says. “If that’s threatened by somebody stepping up and saying that they think you’re wrong, it’s almost like they’re threatening your lab and career. It feels like they’re threatening more than your opinion. It goes way beyond the dimension of who is right or wrong and becomes a threat to your livelihood.”
Adding to that, Powers says, is the fact that it’s almost impossible to get a paper out of negative results. “There are no journals that publish negative results,” he says. That makes it harder for researchers to be truly dispassionate in analyzing their own results; they have something to gain from a positive result, but not from a negative result.
Powers isn’t suggesting that Nedergaard or Franks is thinking like this, but he says the consequences of these incentives contribute to a general atmosphere in which it can be hard to accept opposing theories. It would be helpful, Powers says, if governments, charities, or universities provided venues devoted to publishing negative results. Such a paper wouldn’t carry the prestige of one in a major journal, but it would take some of the sting out of getting a negative result.
Franks thinks the recent attacks on his science stem from a multitude of such factors; his findings threaten to undermine a consensus that has taken 10 years to build. And, quite understandably, people want to push back against that. That’s perfectly within their rights, Franks concedes. But if the spirit of the scientific method is to be preserved, we also need to build incentives for scientists to engage with people who produce data that conflict with their previous work.