Abstract
In this paper, scientific performance is identified with the impact that journal articles have through the citations they receive. In 15 disciplines, as well as in all sciences as a whole, the EU share of total publications is greater than that of the U.S. However, as soon as the citations received by these publications are taken into account, the picture is completely reversed. Firstly, the EU share of total citations is greater than that of the U.S. in only seven fields. Secondly, the mean citation rate in the U.S. is greater than in the EU in every one of the 22 fields studied. Thirdly, since standard indicators—such as normalized mean citation ratios—are silent about what takes place in different parts of the citation distribution, this paper compares the publication shares of the U.S. and the EU at every percentile of the world citation distribution in each field. It is found that in seven fields the initial gap between the U.S. and the EU widens as we advance towards the more cited articles, while in the remaining 15 fields—except for Agricultural Sciences—the U.S. always surpasses the EU when it counts, namely, at the upper tail of citation distributions. Finally, for all sciences as a whole the U.S. publication share becomes greater than that of the EU for the top 50% of the most highly cited articles. The data refer to 3.6 million articles published in 1998–2002, and the more than 47 million citations they received in 1998–2007.
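The percentile comparison described above can be sketched in a few lines of code. The snippet below is an illustrative toy, not the authors' actual procedure: the citation counts and region labels are synthetic, and the function simply computes each region's publication share among the articles at or above a given percentile of the citation distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field: citation counts for world articles, each tagged with a
# region ("US", "EU", or "Other"). Real data would come from the Thomson
# Scientific dataset described in the paper; here we draw a skewed,
# citation-like distribution at random.
citations = rng.negative_binomial(1, 0.1, size=10_000)
regions = rng.choice(["US", "EU", "Other"], size=10_000, p=[0.3, 0.35, 0.35])

def share_above_percentile(citations, regions, region, p):
    """Share of `region` articles among those at or above the p-th
    percentile of the world citation distribution."""
    threshold = np.percentile(citations, p)
    top = citations >= threshold
    return np.mean(regions[top] == region)

# Compare U.S. and EU shares as we move towards the upper tail.
for p in (0, 50, 90, 99):
    us = share_above_percentile(citations, regions, "US", p)
    eu = share_above_percentile(citations, regions, "EU", p)
    print(f"top {100 - p:>3}%: US share = {us:.3f}, EU share = {eu:.3f}")
```

At p = 0 the shares coincide with the overall publication shares; as p grows, the comparison isolates the upper tail of the citation distribution, which is where the paper finds the U.S. advantage.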
Notes
- NAFTA includes the U.S., Canada, and Mexico.
- For later reference, highly cited papers as a percentage of the total number of scientific publications amount to 1.64 in the U.S. and 0.25 in the EU.
- On the other hand, Katz (2000) adjusts relative citation impact indicators to take into account a strong, non-linear relationship between the number of citations a collection of papers receives and the collection size. As a consequence, there is a dramatic reversal of positions in many sub-fields between the U.S. and some European and non-European countries (see EC 2003a, pp. 443–444 for a large reversal between large and small countries). However, a discussion of Katz's approach is beyond the scope of this paper.
- Archambault et al. (2006) have recently established that there is a 20–25% overrepresentation of English-language journals in TS's databases compared to the list of journals in Ulrich's International Periodicals Directory.
- Albarrán and Ruiz-Castillo (2009) contains a discussion of the characteristics shared by these two social sciences and the remaining broad scientific fields.
- It should be noted that when the 1998–2002 dataset is partitioned into the U.S., the EU, and a third geographical area consisting of the rest of the world, the total number of articles in such an extended count is 13.6% greater than in the standard count, in which each article is counted once. Similarly, the total number of citations in the extended sample is 20.2% greater than in the standard dataset. For further details, see Albarrán et al. (2009).
- For the simultaneous measure of outputs and inputs to the scientific and innovation process, as well as a discussion of productivity indicators, see May (1997, 1998), EC (2003a), King (2004), and Shelton and Holdridge (2004). The latter also includes a review of qualitative methods for the measurement of science and technology, consisting of studies of the international stature of research centers in the U.S. and the EU conducted by experts in the corresponding disciplines. For a general discussion of the evolution and shortcomings of science and technology indicators and their use in national policy, see Grupp and Mogee (2004).
- There are two types of average-based measures: the impact measures rebased against the world baseline, used inter alia in May (1997), Adams (1998), King (2004), EC (2003a), and Shelton and Holdridge (2004), and the relational charts in Glänzel et al. (2002) that use information—unavailable in our database—about the journals where each country's articles are published.
- The Leiden group also constructs its average-based indicators using information about the journals where each country's articles are published. This allows them to compare a research group's observed mean citation rate with the expected citation rate of the set of journals in which the group is known to publish. The ratio of that expected rate to the mean citation rate of all journals in the field constitutes another interesting indicator.
- See also Batty (2003) for a study of the pattern of spatial concentration of highly cited scientists.
- Since the total number of extended articles is greater than the actual number, the sum of the shares in (i) and (ii) over the partition of the world into geographical areas would add up to more than one.
- This is also the method followed in the construction of the top 1% of the most highly cited articles in the Web of Science's Essential Science Indicators.
- As before, the sum of these shares at every percentile will not add up to one.
- As economists and/or members of Economics Departments, we believe that members of the European Economic Association and many other colleagues in Economics accept the information in the SCI and the SSCI as valid in our field.
References
Adams, J. (1998). Benchmarking international research. Nature, 396, 615–618.
Adler, R., Ewing, J., & Taylor, P. (2008). Citation statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS).
Albarrán, P., & Ruiz-Castillo, J. (2009). References made and citations received by scientific articles. Working Paper 09-81, Economics Series 45, Universidad Carlos III.
Albarrán, P., Crespo, J., Ortuño, I., & Ruiz-Castillo, J. (2009). A Comparison of the Scientific Performance of the U.S. and Europe at the Turn of the XX Century. Working Paper 09-55, Economics Series 34, Universidad Carlos III (http://www.eco.uc3m.es/personal/cv/jrc.html).
Anderson, J., Collins, P., Irvine, J., Isard, P., Martin, B., Narin, F., et al. (1988). On-line approaches to measuring national scientific output: A cautionary tale. Science and Public Policy, 15, 153–161.
Archambault, E., Vignola-Gagne, E., Côté, G., Larivière, V., & Gingras, Y. (2006). Benchmarking scientific output in the social sciences and humanities: The limits of existing databases. Scientometrics, 68, 329–342.
Batty, M. (2003). The geography of scientific citation. Environment and Planning A, 35, 761–770.
Dosi, G., Llerena, P., & Sylos Labini, M. (2006). Science-technology-industry links and the ‘European Paradox’: Some notes on the dynamics of scientific and technological research in Europe. Research Policy, 35, 1450–1464.
EC. (1994). First European report on science and technology indicators. Luxembourg: Directorate-General XII, Science, Research, and Development, Office for Official Publications of the European Community.
EC. (1997). Second European report on science and technology indicators. Luxembourg: Directorate-General XII, Science, Research, and Development, Office for Official Publications of the European Community.
EC. (2002). Key Figures. Towards a European research area. Science, technology, and innovation. Luxembourg: Research Directorate General, Office for Official Publications of the European Community.
EC. (2003a). Third European report on science and technology indicators. Directorate-General for Research. Luxembourg: Office for Official Publications of the European Community. http://www.cordis.lu/rtd2002/indicators/home.htlm.
EC. (2003b). From ‘European Paradox’ to declining competitiveness? Snapshots, 4. In Key Figures 2003/2004, Directorate-General for Research. Luxembourg: Office for Official Publications of the European Community, http://cordis.europa.eu/indicators/publications.htm.
Glänzel, W. (2000). Science in Scandinavia: A bibliometric approach. Scientometrics, 48, 121–150.
Glänzel, W. (2001). National characteristics in international scientific co-authorship relations. Scientometrics, 51, 69–115.
Glänzel, W., Schubert, A., & Braun, T. (2002). A relational charting approach to the world of basic research in twelve science fields at the end of the second millennium. Scientometrics, 55, 335–348.
Grupp, H., & Mogee, M. E. (2004). Indicators for National Science and Technology Policy: How robust are composite indicators? Research Policy, 33, 1373–1384.
Katz, J. S. (2000). Scale-independent indicators and research evaluation. Science and Public Policy, 27, 23–36.
King, D. (2004). The scientific impact of nations. Nature, 430, 311–316.
Leydesdorff, L., & Wagner, C. (2009). Is the United States losing ground in science? A global perspective on the world science system. Scientometrics, 78, 23–36.
May, R. (1997). The scientific wealth of nations. Science, 275, 793–796.
May, R. (1998). The scientific investments of nations. Science, 281, 879–880.
Moed, H. F., & van Raan A. F. J. (1988). Indicators of research performance. In A. F. J. van Raan (ed.), Handbook of quantitative studies of science and technology (pp. 177–192). North Holland.
Moed, H. F., Burger, W. J., Frankfort, J. G., & van Raan, A. F. J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14, 131–149.
Moed, H. F., De Bruin, R. E., & van Leeuwen, Th. N. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators, and first applications. Scientometrics, 33, 381–422.
Shelton, R., & Holdridge, G. (2004). The US-EU race for leadership of science and technology: Qualitative and quantitative indicators. Scientometrics, 60, 353–363.
Tijssen, R., & van Leeuwen, T. (2003). Bibliometric analysis of world science. Extended technical annex to chapter V of EC (2003a).
Van Leeuwen, T., Moed, H., Tijssen, R., Visser, M., & van Raan, A. (2001). Language biases in the coverage of the science citation index and its consequences for international comparisons of national research performance. Scientometrics, 51, 335–346.
Van Leeuwen, T., Visser, M., Moed, H., Nederhof, T., & van Raan, A. (2003). The holy grail of science policy: Exploring and combining bibliometric tools in search of scientific excellence. Scientometrics, 57, 257–280.
Acknowledgements
The authors acknowledge financial support from the Spanish MEC through grants SEJ2007-63098, SEJ2006-05710, SEJ2007-67135, and SEJ2007-67436. The database of Thomson Scientific (formerly Thomson-ISI; Institute for Scientific Information) has been acquired with funds from Santander Universities Global Division of Banco Santander. This paper is part of the SCIFI-GLOW Collaborative Project supported by the European Commission’s Seventh Research Framework Programme, Contract number SSH7-CT-2008-217436. Suggestions by a referee helped to improve a previous version of the paper.
Cite this article
Albarrán, P., Crespo, J.A., Ortuño, I. et al. A comparison of the scientific performance of the U.S. and the European union at the turn of the 21st century. Scientometrics 85, 329–344 (2010). https://doi.org/10.1007/s11192-010-0223-7