The impact factor (IF) or journal impact factor (JIF) of an academic journal is a measure reflecting the yearly average number of citations to recent articles published in that journal. It is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factors are often deemed to be more important than those with lower ones. The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information. Impact factors are calculated yearly starting from 1975 for journals listed in the Journal Citation Reports.
In any given year, the impact factor of a journal is the number of citations, received in that year, to articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during the two preceding years:

IF(2014) = Citations(2014) / (Publications(2013) + Publications(2012))

where Citations(2014) is the number of times articles published in 2012 and 2013 were cited in indexed journals during 2014.
For example, a journal with a 2014 impact factor of about 41 published papers in 2012 and 2013 that received, on average, roughly 41 citations each in 2014. Note that 2014 impact factors are reported in 2015; they cannot be calculated until all of the 2014 publications have been processed by the indexing agency.
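The two-year calculation can be sketched in a few lines of code. The figures below are hypothetical, chosen only to produce an impact factor in the neighborhood of 41; real counts come from Journal Citation Reports.

```python
def impact_factor(citations_in_year, articles_prev_two_years):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_in_year / articles_prev_two_years

# Hypothetical journal: 1,798 citable items published in 2012-2013
# drew 74,090 citations in 2014.
print(round(impact_factor(74090, 1798), 3))  # 41.207
```

Note that the numerator counts citations to the journal as a whole, while the denominator counts only items that the indexer deems "citable", a distinction discussed further below.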
New journals, which are indexed from their first published issue, receive an impact factor after two years of indexing; in this case, the citations to, and the number of articles published in, the year prior to Volume 1 are known to be zero. Journals that are indexed starting with a volume other than the first will not receive an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with fewer than two years of indexing, based on partial citation data. The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, which affects the count. The impact factor relates to a specific time period, and it can be calculated for any desired period. For example, the Journal Citation Reports (JCR) also includes a five-year impact factor, calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal during the previous five years.
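The five-year variant follows the same pattern as the two-year formula, only with a wider publication window. A minimal sketch, with hypothetical per-year article counts:

```python
def five_year_impact_factor(citations_in_year, articles_by_year):
    """Five-year impact factor: citations received in year Y to items
    from years Y-5 through Y-1, divided by the number of citable items
    published in those five years (articles_by_year holds the five
    yearly counts)."""
    return citations_in_year / sum(articles_by_year)

# Hypothetical journal: 1,000 citable items over five years drew
# 5,000 citations in the measurement year.
print(five_year_impact_factor(5000, [200, 210, 190, 205, 195]))  # 5.0
```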
It is possible to examine the impact factor of the journals in which a particular person has published articles. This use is widespread, but controversial. Garfield warns about the "misuse in evaluating individuals" because there is "a wide variation from article to article within a single journal".
Some companies produce false impact factors. According to an article indexed by the United States National Library of Medicine, these include Global Impact Factor (GIF), Citefactor, and Universal Impact Factor (UIF).
Numerous criticisms have been made regarding the use of impact factors. For one thing, the impact factor might not be consistently reproducible in an independent audit. There is also a broader debate on the validity of the impact factor as a measure of journal importance, and on the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on the behavior of scholars, editors, and other stakeholders. Still others argue that the emphasis on impact factors results from the negative influence of neoliberal policies on academia, and that what is needed is not just the replacement of the impact factor with more sophisticated metrics for science publications, but also a discussion of the social value of research assessment and the growing precariousness of scientific careers in higher education.
It has been stated that impact factors and citation analysis in general are affected by field-dependent factors which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The percentage of total citations occurring in the first two years after publication also varies widely among disciplines, from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. Thus impact factors cannot be used to compare journals across disciplines.
Because citation counts have highly skewed distributions, the mean number of citations is potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself. For example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles. Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally.
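The gap between the mean and the typical article can be demonstrated with a small, hypothetical citation distribution in which most papers receive a handful of citations and a few receive hundreds, roughly mirroring the skew described above:

```python
from statistics import mean, median

# Hypothetical journal: 90 lightly cited papers plus a heavily cited tail.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6] * 9 + [
    150, 200, 250, 300, 320, 340, 360, 380, 400, 450,
]

print(round(mean(citations), 2))  # 33.93 -- the "impact factor" view
print(median(citations))          # 3     -- the typical article

# Share of all citations contributed by the top 10% of papers:
tail_share = sum(sorted(citations)[-10:]) / sum(citations)
print(round(tail_share, 2))       # 0.93
```

Here the mean is more than ten times the median, and the top tenth of papers accounts for over 90% of citations, which is the same pattern reported for Nature's 2004 impact factor.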
Indeed, impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, more than Nature (at 31.434) and Science (at 28.103). The second-most cited article in Acta Crystallographica Section A in 2008 had only 28 citations. Also, the impact factor is a journal metric and should not be used to assess individual researchers or institutions.
A.E. Cawkell, sometime Director of Research at the Institute for Scientific Information, remarked that the Science Citation Index (SCI), on which the impact factor is based, "would work perfectly if every author meticulously cited only the earlier work related to his theme; if it covered every scientific journal published anywhere in the world; and if it were free from economic constraints."
A journal can adopt editorial policies to increase its impact factor. For example, journals may publish a larger percentage of review articles, which are generally cited more than research reports. Review articles can thus raise the impact factor of a journal, and review journals often have the highest impact factors in their respective fields. Some journal editors set their submission policy to "by invitation only", inviting exclusively senior scientists to publish "citable" papers that will increase the journal's impact factor.
Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography, in the hope that Journal Citation Reports will not deem the item "citable"). As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. Items considered uncitable—and thus not incorporated into impact factor calculations—can, if cited, still enter the numerator of the equation, despite the ease with which such citations could be excluded. This effect is hard to evaluate, because the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may fall into either class.
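The asymmetry described above—excluded items leave the denominator, but citations to them stay in the numerator—is easy to quantify. A sketch with hypothetical figures:

```python
def impact_factor(total_citations, citable_items):
    """Citations in year Y to items from Y-1 and Y-2, divided by the
    number of items the indexer counts as 'citable'."""
    return total_citations / citable_items

citations = 1200        # all citations, including those to letters etc.
items_all = 600         # every published item counted as citable
items_negotiated = 300  # letters and editorials ruled "uncitable"

print(impact_factor(citations, items_all))         # 2.0
print(impact_factor(citations, items_negotiated))  # 4.0
```

In this toy case, reclassifying half of the published items as uncitable doubles the reported impact factor without a single additional citation; the 300% variations mentioned above arise from the same mechanism.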
Another, less insidious, tactic is for a journal to publish a large portion of its papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal, which will increase the journal's impact factor.
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal Folia Phoniatrica et Logopaedica, with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 Journal Citation Reports.
Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. However, cases of coercive citation have occasionally been reported for other scientific disciplines.
Because "the impact factor is not always a reliable instrument", in November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes".
In July 2008, the International Council for Science (ICSU) Committee on Freedom and Responsibility in the Conduct of Science (CFRS) issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions—for example, considering only a limited number of publications per year for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20).
In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines stipulating that only articles, and no bibliometric information on candidates, be evaluated in all decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor". This decision followed similar ones by the National Science Foundation (US) and the Research Assessment Exercise (UK).
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology, together with a group of editors and publishers of scholarly journals, created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions, including, in March 2015, the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe), all of which have endorsed the document on the DORA website.
Some related values, also calculated and published by the same organization, include:
Additional journal-level metrics are available from other organizations. Unlike author-level metrics such as the h-index, the measures above apply only to journals, not to individual scientists. Article-level metrics measure impact at the article level instead of the journal level. Other, more general alternative metrics, or "altmetrics", may include article views, downloads, or mentions in social media.
Fake impact factors are produced by companies not affiliated with Journal Citation Reports. These are often used by predatory publishers; Jeffrey Beall maintained a list of such misleading metrics. Consulting Journal Citation Reports' master journal list can confirm if a publication is indexed by Journal Citation Reports, which is a necessary (but not sufficient) condition for obtaining an IF. Use of fake impact metrics is considered a "red flag".
Immediacy Index: a measure of the speed at which content in a particular journal is picked up and referred to.
The Immediacy Index is the average number of times an article is cited in the year it is published. The journal Immediacy Index indicates how quickly articles in a journal are cited. The aggregate Immediacy Index indicates how quickly articles in a subject category are cited.
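The Immediacy Index uses the same ratio structure as the impact factor, but with both numerator and denominator restricted to the publication year itself. A minimal sketch with hypothetical counts:

```python
def immediacy_index(same_year_citations, articles_published):
    """Citations received in year Y to articles published in year Y,
    divided by the number of articles published in Y."""
    return same_year_citations / articles_published

# Hypothetical journal: 500 articles published in a year drew
# 250 citations within that same year.
print(immediacy_index(250, 500))  # 0.5
```

The aggregate version is computed the same way over all journals in a subject category rather than a single title.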