For anyone involved in academic research, understanding how to evaluate and compare journals is of paramount importance. Publishing in journals regarded as “prestigious”, as assessed by ranking metrics such as impact factor or h-5 index, has the potential to influence not only the reception of one’s research by the broader scientific community but also decision-making regarding future applications for grant funding, faculty positions, and other elements of a successful career in science.
Impact factor is one of the most well-known metrics used to rank scientific journals. The metric was developed in the mid-1960s by The Institute for Scientific Information (ISI), now known as Clarivate Analytics, as a computer-aided method for assessing the output of journals based on the citation frequency of their publications. Since 1975, the company has maintained a database of current journal impact factors, Journal Citation Reports.
What is impact factor?
According to Clarivate Analytics, impact factor is “a measure of the frequency with which the ‘average article’ in a journal has been cited in a particular year or period.” The annual impact factor is calculated by dividing the number of citations a journal receives in a given year (to content published in the prior two years) by the number of citable items it published in those two years.
For example, if 500 publications produced by a journal over the course of 2017 and 2018 were cited 1000 times in 2019, the journal would have an annual impact factor of 2. If 800 publications from 2014 to 2018 were cited 2400 times in 2019, the five-year impact factor of the journal would be 3.
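The calculation above amounts to a single division. As a minimal sketch (using the hypothetical numbers from the example, not real journal data):

```python
def impact_factor(citations, citable_items):
    """Impact factor: citations received in a given year to content
    published in the prior window, divided by the number of citable
    items published in that window."""
    return citations / citable_items

# Two-year impact factor: 500 items published in 2017-2018,
# cited 1000 times in 2019
print(impact_factor(1000, 500))  # → 2.0

# Five-year impact factor: 800 items published in 2014-2018,
# cited 2400 times in 2019
print(impact_factor(2400, 800))  # → 3.0
```

Note that the window only changes which publications and citations are counted; the formula itself is the same for the two-year and five-year variants.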
The equation, by design, reduces the importance of publication quantity, publication frequency, and journal age in the determination of comparative journal quality. Publications from years prior to the given range are not considered in the calculation of impact factor, limiting the advantage of older, more established journals.
Because impact factor weighs total citations against the number of publications, journals that publish lower-quality articles more often or in greater volume earn no brownie points.
Impact factor is not immune to bias, however: artifacts and manipulation can skew the metric into a misleading representation of journal quality. Since review articles often serve as surrogate citable references for prior publications (and thus accrue many citations), journals that exclusively publish review articles can have aberrantly high impact factors (Sharma et al 2014).
Journals may also encourage self-citation, whether directly or indirectly (for example, editorial boards favoring manuscripts whose reference lists cite their own journal heavily). This manipulative practice inflates journal impact factor and has grown more prevalent in recent years (Chorus and Waltman 2016).
Although there are considerable advantages to using impact factor as a metric for comparing two or more journals, the metric should not be used in isolation.
The comparative quality of a scientific journal should be assessed holistically, taking into account multiple quantifiable metrics (such as h-5 index, SCImago score, and impact factor) as well as non-quantifiable factors such as the nature of the editorial board and its practices and the robustness of the journal’s publication policies and criteria.
Nobel Prize-winning science has not been published exclusively in the journals with the highest impact factors; the publication process always involves some degree of subjectivity. Don’t be discouraged if your article is not accepted by a “top journal,” and may luck be on your side as you move forward in your research career.
Interested in learning about the rankings of journals from a variety of fields in science and medicine? Consider checking out our journal rankings.
Learn about the latest immunology research from ImmunoFrontiers.