
The most influential journals: Impact Factor and Eigenfactor

April 28, 2009
106 (17) 6883-6884
Progress in science is driven by the publication of novel ideas and experiments, usually in peer-reviewed journals but nowadays increasingly on the internet as well. We all have our own ideas of which are the most influential journals, but is there a simple statistical metric of a journal's influence? Most scientists would immediately say Impact Factor (IF), which is published online in Journal Citation Reports® as part of the ISI Web of Knowledge℠ (www.thomsonreuters.com/products_services/scientific/Journal_Citation_Reports). The IF is the average number of citations received in a year by the papers a journal published in the previous 2 years. But which, for example, is the most influential of the following three journals: A, which publishes just 1 paper a year and has a stellar IF of 100; B, which publishes 1,000,000 papers per year and has a dismal IF of 0.1 but 100,000 citations; or C, which publishes 5,000 papers a year with an IF of 10? Unless there is a very odd distribution of citations in B, or A has a paradigm-shifting paper like the Watson and Crick DNA structure, C is likely to be the most influential journal. Clearly, neither IF nor total number of citations is, per se, the metric of the overall influence of a journal.
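As a rough check on this comparison, multiplying each journal's IF by its annual output approximates the citations its recent papers attract per year. This is a hypothetical back-of-envelope calculation, not the official JCR computation:

```python
# Back-of-envelope comparison of the three hypothetical journals A, B, and C.
# IF is citations in one year to papers from the previous 2 years, so
# IF * annual_papers gives a rough estimate of yearly citations earned.
journals = {
    "A": {"papers_per_year": 1, "impact_factor": 100.0},
    "B": {"papers_per_year": 1_000_000, "impact_factor": 0.1},
    "C": {"papers_per_year": 5_000, "impact_factor": 10.0},
}

for name, j in journals.items():
    approx_citations = j["impact_factor"] * j["papers_per_year"]
    print(f"{name}: IF = {j['impact_factor']}, "
          f"~{approx_citations:,.0f} citations/year")
```

On this crude estimate, B and C both dwarf A in total citations despite A's stellar IF, which is exactly why neither number alone captures a journal's overall influence.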
Bibliometricians have introduced various scales for ranking journals: some based on publication and citation counts, others also on usage data, including internet usage analyzed with social-network methods. Bollen et al. (1) recently concluded that no single indicator adequately measures impact and that the IF lies at the periphery of the 39 scales analyzed. But there is a new parameter, the Eigenfactor™, which attempts to rate the influence of journals (www.eigenfactor.org). The Eigenfactor™ ranks journals in a manner similar to that used by Google for ranking the importance of Web sites in a search. To quote from www.eigenfactor.org/methods.htm:
The Eigenfactor™ algorithm corresponds to a simple model of research in which readers follow chains of citations as they move from journal to journal. Imagine that a researcher goes to the library and selects a journal article at random. After reading the article, the researcher selects at random one of the citations from the article. She then proceeds to the journal that was cited, reads a random article there, and selects a citation to direct her to her next journal volume. The researcher does this ad infinitum.
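The random walk described above has a stationary distribution: the long-run fraction of time the imaginary reader spends at each journal. The following is a minimal sketch of that core idea on a toy citation matrix, not the full Eigenfactor™ algorithm, which adds refinements such as a 5-year citation window, exclusion of self-citations, and handling of journals that cite nothing:

```python
import numpy as np

# Toy citation matrix for three hypothetical journals:
# C[i, j] = citations from journal j to journal i.
# The diagonal is zero (self-citations excluded).
C = np.array([
    [0, 40, 10],
    [30, 0, 20],
    [5, 15, 0],
], dtype=float)

# Column-normalize: each column j becomes the probability that the
# reader's next randomly chosen citation leads from journal j to journal i.
P = C / C.sum(axis=0)

# Power iteration: apply one random-walk step repeatedly until the
# journal-visit frequencies settle into the stationary distribution.
v = np.full(3, 1 / 3)
for _ in range(100):
    v = P @ v
v /= v.sum()
print(v)  # long-run fraction of time spent at each journal
```

The resulting vector plays the role of the Eigenfactor: a journal scores highly not merely by being cited often, but by being cited by journals that are themselves frequently visited.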
The Eigenfactor™ is now listed by Journal Citation Reports®. In practice, there is a strong correlation between Eigenfactors and the total number of citations received by a journal (2). A plot of the 2007 Eigenfactors for the top 200 cited journals against the total number of citations shows some startling results (Fig. 1). Three journals have far and away the most overall influence on science: Nature, PNAS, and Science, closely followed by the Journal of Biological Chemistry. So, publish in PNAS with the full knowledge that you are contributing to one of the most influential drivers of scientific progress.
Fig. 1.
Plot of the 2007 Eigenfactor rating against total number of citations listed in the Journal Citation Reports®.
The terrible legacy of IF is that it is being used to evaluate scientists rather than journals, which has become of increasing concern to many of us. Judgment of individuals is, of course, best done by in-depth analysis by expert scholars in the subject area. But some bureaucrats want a simple metric. My experience on international review committees is that members take more notice of IF when they lack the knowledge to evaluate the science independently.
An extreme example of such behavior is an institute in the heart of the European Union that evaluates papers from its staff by assigning a weighting factor of 0 to all papers published in journals with IF < 5 and only a small one to those with 5 < IF < 10. So, publishing in the Journal of Molecular Biology counts for naught, despite its being at the top for areas such as protein folding.
All journals have a spread of citations, and even the best have some papers that are never cited plus some fraudulent papers and some excruciatingly bad ones. So, it is ludicrous to judge an individual paper solely on the IF of the journal in which it is published.
Fortunately, PNAS has both a good IF and high reliability because of its access to so many expert National Academy of Sciences member–editors. If a paper has to be judged by a metric, then it should be by the citations to it and not to the journal in which it appears. The least evil of the metrics for individual scientists is the h-index (3), which ranks the influence of a scientist by the number of citations to a significant number of his or her papers; an h of 100 means that 100 of his or her publications have been cited at least 100 times each. In terms of a "usage" metric, Hirsch's h-index paper (3) is exceptional in its number of downloads (111,126 downloads versus 262 citations since it was published in November 2005).
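The h-index calculation itself is simple enough to sketch: sort a scientist's citation counts in descending order and find the largest rank h at which the count is still at least h.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then walk down the list
    # while each paper's count is at least its (1-based) rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers cited >= 4 times each)
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

Note how the second example illustrates the index's character: one heavily cited paper (25 citations) cannot raise the h-index on its own, which is why the metric rewards sustained output over a single hit.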
While new measures of scientific impact are being developed, it is important not to rely solely on any one standard. After all, science is about progress, which is ultimately assessed by human judgment.

Acknowledgments.

I thank Philip Davis for pointing me toward the relevant literature.

References

1. J Bollen, H Van de Sompel, A Hagberg, A principal component analysis of 39 scientific impact measures. arXiv e-Print Archive, http://xxx.lanl.gov/abs/0902.2183 (2009).
2. PM Davis, Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts? J Am Soc Inf Sci Technol 59, 2186–2188 (2008).
3. JE Hirsch, An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 102, 16569–16572 (2005).

Published in
Proceedings of the National Academy of Sciences
Vol. 106 | No. 17
April 28, 2009
PubMed: 19380731

Authors

Alan Fersht, [email protected]
Medical Research Council Centre for Protein Engineering, Cambridge CB2 0QH, United Kingdom