Volume 92 Issue 2 | p. 3 | Editor's Page
Issue Date: January 13, 2014

Unintended Impact

Department: Editor's Page
Keywords: impact factor, ethics, publishing

As someone who knows little about impact factors, I was riveted by Donald R. Paul’s talk at the 2014 conference of American Chemical Society journal editors. Paul is the Ernest Cockrell Sr. Chair in Engineering at the University of Texas, Austin. From 1986 to 2013, he was editor-in-chief of Industrial & Engineering Chemistry Research (IECR). The journal is published by ACS, which also publishes C&EN.

A journal’s impact factor plays a key role in many decisions affecting scientists. The conventional wisdom is that high impact factor means high-quality science. Scientists covet publication in journals with high impact factors, such as Nature and Science. Publication in such journals can reap monetary and other benefits for authors.

What I learned from Paul is that overemphasizing impact factors can have insidious consequences. Consider, for example, this notice that Paul found on the website of a young chemical scientist: “To prospective group members: A minimum necessary requirement for graduation with a PhD from this group is to accumulate 20 IF (impact factor) points, and at least 14 of which should be earned from first-author publications. For example, a student earns X points if s/he publishes a paper as the first author in a journal with IF of X (X/2 and X/3 points as the second and third author respectively).”
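The scoring scheme quoted in that notice can be sketched as a short calculation. This is only an illustration of the arithmetic the notice describes; the function name and the sample publication record are hypothetical.

```python
# Sketch of the "IF points" scheme quoted above: a paper in a journal
# with impact factor X earns X points for the first author, X/2 for the
# second, and X/3 for the third. Names and numbers are hypothetical.

def if_points(journal_if, author_position):
    """Points earned for one paper under the quoted scheme."""
    if author_position not in (1, 2, 3):
        return 0.0  # the notice assigns points only to the first three authors
    return journal_if / author_position

# Hypothetical record: first author on papers in IF-10 and IF-4 journals,
# second author on a paper in an IF-6 journal.
papers = [(10.0, 1), (4.0, 1), (6.0, 2)]
total = sum(if_points(impact, pos) for impact, pos in papers)
first_author = sum(if_points(impact, pos) for impact, pos in papers if pos == 1)
print(total, first_author)  # 17.0 14.0
```

Under the quoted rule, this hypothetical student would still be 3 points short of the 20-point threshold, despite already meeting the 14-point first-author requirement.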

Developed by the Institute for Scientific Information, the impact factor of a journal for any given year is a ratio, A/B. A is the number of citations in that year to items the journal published in the previous two years. B is the number of articles the journal published in those same two years.
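The ratio can be written out as a one-line calculation. The sketch below just restates the A/B definition above; the journal counts are hypothetical examples, not real data.

```python
# The two-year impact factor for year Y, per the definition above:
#   A = citations in year Y to items published in years Y-1 and Y-2
#   B = articles published in years Y-1 and Y-2

def impact_factor(citations_to_window, articles_in_window):
    """Impact factor = A / B for the two-year citation window."""
    if articles_in_window == 0:
        raise ValueError("journal published no articles in the window")
    return citations_to_window / articles_in_window

# Hypothetical journal: 1,200 citations in 2013 to items from 2011-2012,
# against 400 articles published in 2011-2012.
print(impact_factor(1200, 400))  # 3.0
```

Note that, as Paul observed, the numerator and denominator do not count the same kinds of items, which is one route to inflation.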

The math is simple, but the nuances are illuminating. For example, the ratio is subject to inflation. The denominator is straightforward. An article is a full paper, a note, or a review, Paul said. The numerator, however, is not restricted to articles, he explained. Citations can be mentions in editorials, letters to the editor, and general news media. Inflation can also occur when journal editors “encourage” authors to cite articles published in their journals. Called coercive citation, the practice is common in non-hard-science fields, according to a 2012 study in Science (DOI: 10.1126/science.1212540).

Impact factors are highly variable across fields, Paul noted. In 2004, for example, the weighted average impact factor of journals in molecular and cell biology was 4.76. The corresponding values were 2.90 for chemistry and 0.56 for mathematics. These variations have many causes. For example, different fields grow at different rates, and citation practices vary by field. “I had been concerned about the disparity of impact factors across fields,” Paul told me. The problem, he explained, arises when impact factors are used to evaluate people or programs in different fields by administrators who are not aware of these variations.

People assume that publication in a high-impact-factor journal reflects well on the author, Paul said. But the data do not support that assumption. In fact, he explained, studies show that 90% of Nature’s 2004 impact factor is due to only 25% of its papers. Papers that are most downloaded are not the same as the papers that are most cited, and publishing in a high-impact journal does not guarantee that a given paper will be highly cited. Interestingly, Paul also noted a positive correlation between the incidence of retractions and impact factors.

“At best,” Paul said, “impact factors reflect on the journal as a whole and not on individual papers or researchers.” Rather than focusing on impact factors, Paul said, researchers may be better served by heeding his seven simple rules of how to have scientific impact: (1) Do good work. (2) Work in areas where others are interested, work, and publish. (3) Publish where those folks are likely to see it. (4) Have something significant to say. (5) Write clearly and interestingly, never too long or too short; use eye-catching figures. (6) Use your published work to teach; do not conceal. (7) Time your activity in a given field well; do not jump in too early or too late.

But impact factors are now so deeply entrenched in scientific publishing that no amount of data may convince researchers to ignore them.


Views expressed on this page are those of the author and not necessarily those of ACS.

Chemical & Engineering News
ISSN 0009-2347
Copyright © American Chemical Society
José C. Conesa (February 16, 2014 5:36 PM)
Concerning the fourth of Paul's seven simple rules, one could say, modifying somewhat a well-known saying by Plato:
"The good scientist makes a paper because (s)he has something to publish; the bad one, because (s)he has to publish something"
In general, citation statistics of the scientist's own works are a much better measure of his/her quality than impact factors. But even the bad scientist might be much cited, e.g. as an example of results that could not be reproduced by others... So in the end one must read the papers to see if they are good...
