The organic synthesis journal Synlett has been conducting an editorial experiment called “intelligent crowd reviewing” that could shake up the traditional peer-review process. The results “are stunning,” says Benjamin List, the journal’s editor-in-chief and director of homogeneous catalysis at the Max Planck Institute for Kohlenforschung.
Peer review for scientific journals typically involves three anonymous referees from the same research area who judge the quality of a manuscript sent to them by a journal editor. List and his graduate student and editorial assistant Denis Höfler instead worked with an information technology company to create a protected online forum for reviewers. With authors’ permission, they posted 10 submitted papers on the forum and gave a stable of about 100 of the journal’s reviewers 72 hours to respond anonymously to papers of their choosing and to reply to fellow reviewers’ comments. In parallel, some of the manuscripts were also examined through the traditional review process (Nature 2017, DOI: 10.1038/546009a).
List points out that the approach is not what is sometimes referred to as crowdsourced reviewing, in which anyone can comment on an openly posted manuscript. Compared with the traditional method, the crowd review was much faster—three days versus weeks—and collectively provided more comprehensive feedback on the merits of the manuscripts, List says. The paper authors reacted positively, saying they appreciated the thoroughness of the crowd’s comments and the speedy turnaround. The authors of the one manuscript that was rejected did not complain. As editors, List and Höfler did have more to read, but they say their workload has not increased massively. So far, no referees have dominated the discussions, and there have been no breaches of confidentiality in the limited experiment.
Peer review, long held as essential but imperfect, has been challenged over the years by all sorts of experiments and evaluation studies. Yet many editors and researchers, finding no better alternatives, tend to accept the default approach and soldier on.
When scientific peer review works, and that’s much of the time, it’s “a wondrous thing,” List says. “But all too often, it can be an exercise in frustration for all concerned.” Authors are on tenterhooks awaiting potentially career-changing decisions, generous peer reviewers are overwhelmed, and editors are condemned to doggedly sending reviewers reminders weeks after deadlines pass, he adds. When the evaluation finally arrives, it might be biased, inaccurate, or otherwise devoid of insight.
List acknowledges that assembling a suitable reviewer crowd is easier for a specialized journal such as Synlett catering to a small research community. The journal’s experiment continues, but List says Synlett is in the early stages of switching to crowd reviewing full-time. “My conclusion so far is clear: crowd reviewing works.”
“The Synlett pilot is interesting, looks to be well-conducted, and the initial responses sound positive,” observes James Milne, senior vice president for publishing at the American Chemical Society. Journal editors and publishers routinely evaluate how peer review can be improved, Milne says, and effectiveness at scale is paramount to any approach that might be adopted. “How crowd reviewing might scale to successfully handle many tens of thousands of submissions is probably this new approach’s greatest challenge.”