
Journal Metrics

Knowing the impact and importance of a journal can help you decide where to submit your articles. With bibliometric databases you can compare and evaluate journals, see the most productive research areas, download data, follow journals, and create robust data visualizations.

Journal Impact Factor (JIF) is one way to consider the relative stature of a publication venue. The JIF is a measure of how often the journal is cited; it is not a reflection of the stature of an individual author or article.

Journal Impact Factor is calculated by dividing the number of citations (in the current year) to articles published during the last two years by the number of articles published in the journal during those two years. The quotient is the JIF for a particular year.
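The two-year calculation described above can be sketched in a few lines of Python (the function name and figures are illustrative, not from any official source):

```python
def journal_impact_factor(citations_this_year, pubs_prev_two_years):
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the number of items published in Y-1 and Y-2."""
    if pubs_prev_two_years == 0:
        raise ValueError("no citable items published in the two-year window")
    return citations_this_year / pubs_prev_two_years

# A journal that published 80 citable items over the two prior years and
# received 240 citations to them this year has a JIF of 3.0.
print(journal_impact_factor(240, 80))  # → 3.0
```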

The most established source for journal rankings is Journal Citation Reports (JCR), though Scopus, Google Scholar, and other indexers also provide journal metrics. Alternative metrics that can be used to measure the relative importance of a journal include the h-5 index, SCImago Journal Rank (SJR), Source Normalized Impact per Paper (SNIP), and the Eigenfactor Score.

Journal Citation Reports

JCR uses the Journal Impact Factor to rank journals by how often their articles are cited. The JIF is calculated by dividing the total number of citations to a journal's publications over a two-year period by the total number of publications in that period. For example, a journal that published 100 articles over two years and received 5,000 citations to those articles the following year would have an impact factor of 50. JCR covers over 11,000 scholarly and technical journals and conference proceedings from more than 3,000 publishers in science, social sciences, arts, and humanities.

JCR bases its calculations on Web of Knowledge data and can be accessed through subscription to Web of Science or Journal Citation Reports.

Alternatives to Journal Citation Reports

SCImago Journal and Country Rank (SJR)

SCImago is a free online portal based on citation data tracked in Elsevier’s Scopus database. The ranking system incorporates citation counts as well as relationships among journals (via citations). The SJR algorithm weights citations from journals according to how highly cited the citing journal itself is; this differs from the JCR (Journal Citation Reports), which weights all citations equally, regardless of their source.

The SJR (SCImago Journal Rank) indicator expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years: i.e., weighted citations received in year X (2019) to documents published in the journal in years X-1 (2018), X-2 (2017), and X-3 (2016).
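The prestige-weighting idea can be illustrated with a toy calculation. This is not the actual SJR algorithm (which iterates PageRank-style over the full Scopus citation network); the function, journal names, and weights below are invented for illustration:

```python
# Toy illustration of prestige-weighted citation counting.
def weighted_citations(citations, prestige):
    """citations: {citing_journal: citations it gave to our journal}
    prestige:  {citing_journal: prestige weight of that journal}"""
    return sum(count * prestige[j] for j, count in citations.items())

cites = {"J_high": 10, "J_low": 10}       # identical raw citation counts...
weights = {"J_high": 2.0, "J_low": 0.5}   # ...but different source prestige
print(weighted_citations(cites, weights))  # 10*2.0 + 10*0.5 = 25.0
```

Under an unweighted (JIF-style) count, both sources would contribute equally; here the citation from the more prestigious journal counts four times as much.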

Journals can be compared or analyzed separately, as can country rankings. Journals can be grouped by subject area, subject category, or country. Citation data are drawn from more than 34,100 titles from over 5,000 international publishers, and country performance metrics cover 239 countries worldwide.

SNIP (Source Normalized Impact per Paper)

Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. Unlike the journal impact factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact.

SNIP was created by Professor Henk F. Moed at Centre for Science and Technology Studies (CWTS), University of Leiden. SNIP values are available from CWTS Journal Indicators or in the Scopus database.

CWTS Journal Indicators currently provides four indicators:

  1. P – The number of publications of a source in the past three years.
  2. IPP – The impact per publication, calculated as the number of citations given in the present year to publications in the past three years divided by the total number of publications in the past three years. IPP is fairly similar to the well-known journal impact factor. Like the journal impact factor, IPP does not correct for differences in citation practices between scientific fields. IPP was previously known as RIP (raw impact per publication).
  3. SNIP – The source normalized impact per publication, calculated as the number of citations given in the present year to publications in the past three years divided by the total number of publications in the past three years. The difference with IPP is that in the case of SNIP citations are normalized in order to correct for differences in citation practices between scientific fields. Essentially, the longer the reference list of a citing publication, the lower the value of a citation originating from that publication.
  4. % self cit – The percentage of self citations of a source, calculated as the percentage of all citations given in the present year to publications in the past three years that originate from the source itself.

For more details, see: https://www.journalindicators.com/methodology.
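The normalization idea behind SNIP can be sketched as follows. This is a simplified toy, not CWTS's exact method (which normalizes against the citation potential of the journal's whole subject field); the function name and numbers are invented:

```python
# Toy sketch of SNIP-style field normalization: each citation is
# down-weighted by the citing paper's reference-list length relative
# to the field average, so citation-dense fields don't dominate.
def snip_like(citing_ref_list_lengths, field_avg_refs, n_publications):
    weighted = sum(field_avg_refs / length for length in citing_ref_list_lengths)
    return weighted / n_publications

# Two citations to a journal with one paper, in a field averaging
# 40 references per paper: one citing paper has 20 refs, the other 80.
print(snip_like([20, 80], 40, 1))  # 40/20 + 40/80 = 2.5
```

The citation from the short-reference-list paper counts for more (2.0) than the one from the long-reference-list paper (0.5), matching the rule stated above.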

Eigenfactor Score

A journal's Eigenfactor score is a measure of the journal's total importance to the scientific community. Eigenfactor ranks the overall impact of a journal, not the impact of individual articles within that journal. Eigenfactor scores are scaled so that the scores of all journals listed in Clarivate's Journal Citation Reports (JCR) sum to 100. Thus, if a journal has an Eigenfactor score of 1.0, it has 1% of the total influence of all indexed publications. In 2013, the journal Nature had the highest Eigenfactor score, with a value of 1.603.
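The scaling described above is a simple normalization. The raw influence values below are invented for illustration; real Eigenfactor scores come from an iterative calculation over the whole citation network:

```python
# Rescale raw influence values so all journals' scores sum to 100,
# as Eigenfactor scores are scaled across the JCR.
def scale_to_100(raw_scores):
    total = sum(raw_scores.values())
    return {journal: 100 * v / total for journal, v in raw_scores.items()}

raw = {"A": 0.04, "B": 0.01, "C": 0.05}   # hypothetical raw influence
scaled = scale_to_100(raw)
print(scaled["A"])  # 40.0 — journal A holds 40% of the total influence
```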

CiteScore

A journal's CiteScore is the total number of citations in a year to articles published in the three previous years, divided by the total number of articles published in those three years. CiteScore is limited to only scholarly articles, conference papers and review articles and does not consider citations from trade publications, newspapers, or books. CiteScore is similar to the JCR Impact Factors but uses Scopus data rather than Web of Science data and three years rather than two as the publication period.

Google Scholar Metrics

h5-index = The h5-index is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2015-2019 have at least h citations each. For example, a publication with five articles, only three of which have at least three citations each, would have an h5-index of 3.

h5-median = The h5-median for a publication is the median number of citations for the articles that make up its h5-index.
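Both definitions can be computed directly from a list of per-article citation counts. A minimal sketch (function names are mine; the citation counts are invented):

```python
import statistics

def h5_index(citations):
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h5_median(citations):
    """Median citation count of the articles that make up the h5-index."""
    counts = sorted(citations, reverse=True)
    return statistics.median(counts[:h5_index(citations)])

cites = [12, 9, 7, 5, 2]   # five articles from the last 5 complete years
print(h5_index(cites))     # 4 articles have at least 4 citations → 4
print(h5_median(cites))    # median of [12, 9, 7, 5] → 8.0
```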

Controversy over Journal Metrics

While the JIF and other journal metrics can help you to identify the most widely cited journals in a discipline, there are several limitations to these metrics:

  • Journal metrics vary among disciplines
  • Journal metrics do not consider the nature of the citation (positive or negative)
  • JIFs can change from year to year
  • Some metrics can exclude smaller, niche journals

Note that journal-level metrics such as Impact Factor are a controversial measure of the importance of scholarly contributions. As one example of such criticisms, see “Escape from the Impact Factor,” by Philip Campbell, editor-in-chief of Nature. He recommends evaluating individual scholarly contributions on their own merit rather than using the journal as a surrogate for quality.

In a more recent Nature article, "Beat it, impact factor! Publishing elite turns against controversial metric," Ewen Callaway discusses the turn against the impact factor and its outsized influence on science. In the article “The Impact Factor Game,” the editors of PLoS Medicine explore the implications of the Impact Factor for publishers and authors, concluding “If authors are going to quote the impact factor of a journal, they should understand what it can and cannot measure. The opening up of the literature [through various open access developments] means that better ways of assessing papers and journals are coming—and we should embrace them.”