
Journal Impact Factor: a usefully flawed metric


Source: http://blogs.biomedcentral.com/bmcblog


The Journal Citation Reports are out. I have just read a nice commentary on the utility of the journal impact factor (1), and I would like to chip in a few words. Let me state my view: to me, the journal impact factor (JIF) is a usefully flawed metric. The JIF, as the name implies, is designed for assessing the impact of a journal, not for evaluating an individual scientist. Unfortunately, many people, committees, institutes and universities have been using the JIF to judge a scientist's performance, and that is obviously wrong. The inventor of the JIF, Eugene Garfield, never intended it to be applied to individual scientists; he devised it to help librarians decide which journals were worth subscribing to.
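For readers who have not seen it spelled out, the standard two-year JIF is just a ratio: citations received in a given year to a journal's papers from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch in Python (the journal and all numbers below are invented for illustration):

def two_year_jif(citations_this_year, citable_items_prev_two_years):
    # Two-year JIF for year Y: citations received in Y to items
    # published in Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2.
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2018 to its 2016-2017 papers,
# of which there were 400 citable items.
print(two_year_jif(1200, 400))  # 3.0

Note that nothing in this calculation refers to any individual author, which is precisely why the metric says so little about a single scientist's work.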


Even when the JIF is used for its intended purpose of assessing the impact of a journal, it is flawed, because it is based on the mean number of citations per paper. We all know that the distribution of citations is highly skewed (with the majority of papers receiving no citations even five years after publication), so using the mean to describe citations is both intellectually simplistic and dangerously biased. A median-based JIF would capture a journal's typical impact much better.
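A toy example makes the point (the citation counts below are invented): imagine a journal whose ten papers in the JIF window are cited as follows. One blockbuster paper drags the mean far above what a typical paper in the journal achieves, while the median barely moves.

import statistics

# Hypothetical, highly skewed citation counts for ten papers:
# most are cited rarely or never, one is a blockbuster.
citations = [0, 0, 0, 0, 1, 1, 2, 3, 5, 120]

print(statistics.mean(citations))    # 13.2 -- inflated by the one outlier
print(statistics.median(citations))  # 1.0  -- what a typical paper gets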

The main determinants of the JIF are time and the number of citations, with the latter being easily manipulated. In the bone research field, a relatively new journal has recently achieved an impact factor of 9.XX after only five years of operation! Yet very few people in the field, if any, would consider that journal more influential than the "old horse" JBMR. So these days, with so many emerging journals and predatory journals around, expert insight still counts when judging a piece of scientific communication or a journal.


Although flawed, the JIF is a useful metric, because it can yield a moderate signal of scientific quality. There are essentially three ways to evaluate the impact and utility of an individual paper: the number of citations, post-publication peer review, and the JIF. It takes time (usually more than five years) to accumulate a reliable citation count, so the number of citations is not practically feasible for short-term evaluation. Post-publication peer review is biased and error-prone, because assessors agree poorly on the merit of any given paper. It is a paradox that scientists are pretty good at doing science, but pretty bad at judging it!

The authors of a PLOS Biology study (2) found that the JIF seems to be the most satisfactory of the three (compared with post-publication peer review and total citations) for measuring the scientific merit of a paper. However, they warned that the JIF is prone to error when it comes to qualitative assessment. This reinforces my view that the JIF is a usefully flawed metric.


===


(1) https://www.nature.com/articles/d41586-018-05467-5

(2) http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001675
