
Abstract

The faculty job market plays a fundamental role in shaping research priorities, educational outcomes, and career trajectories among scientists and institutions. However, a quantitative understanding of faculty hiring as a system is lacking. Using a simple technique to extract the institutional prestige ranking that best explains an observed faculty hiring network—who hires whose graduates as faculty—we present and analyze comprehensive placement data on nearly 19,000 regular faculty in three disparate disciplines. Across disciplines, we find that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality. Furthermore, doctoral prestige alone better predicts ultimate placement than a U.S. News & World Report rank, women generally place worse than men, and increased institutional prestige leads to increased faculty production, better faculty placement, and a more influential position within the discipline. These results advance our ability to quantify the influence of prestige in academia and shed new light on the academic system.

INTRODUCTION

Faculty hiring is a ubiquitous feature of academic disciplines, the result of which—who hires whose graduates as faculty—shapes nearly every aspect of academic life, including scholarly productivity, research priorities, resource allocation, educational outcomes, and the career trajectories of individual scholars (1–4). Despite these fundamental roles, a clear and systematic understanding of the common patterns and efficiencies of faculty hiring across disciplines is lacking.
From the institutional perspective, faculty hiring is an implicit assessment: when an institution u hires as faculty the graduate of another institution v, u makes a positive assessment of the quality of v’s teaching and research programs. Similarly, when an individual accepts a job offer from u, he or she makes a positive assessment of u’s quality. As a collection of such pairwise assessments, a discipline’s faculty hiring network (Fig. 1) represents a collective assessment (5) of its own educational and research outcomes. When institutions are unequally successful in faculty placement, achieving more placements at other successful institutions implies a more positive collective assessment of that institution’s outcomes.
Fig. 1 Prestige hierarchies in faculty hiring networks.
(Top) Placements for 267 computer science faculty among 10 universities, with placements from one particular university highlighted. Each arc (u,v) has a width proportional to the number of current faculty at university v who received their doctorate at university u (≠v). (Bottom) Prestige hierarchy on these institutions that minimizes the total weight of “upward” arcs, that is, arcs where v is more highly ranked than u.
Differential success rates in such competitions are a hallmark of social hierarchy, which may emerge from either physical dominance or social prestige mechanisms (6). Among academic institutions, physical dominance may be neglected, leaving social prestige, in which less prestigious institutions seek to emulate the successful behaviors of more prestigious institutions in an effort to bolster their own prestige (7, 8). In this context, prestige in faculty hiring is an operational variable that encompasses differences in both scholastic merit and nonmeritocratic factors such as social status or geography. If such factors are irrelevant, then prestige is equivalent to merit. More realistically, nonmeritocratic factors play a role, and the greater their importance, the lesser the correlation between prestige and merit.
Objectively measuring institutional prestige is complicated by the fact that it depends on interactions between institutions and on subjective evaluations, among other factors. Classic approaches, such as the authoritative rankings by the U.S. News & World Report and the National Research Council (NRC) (9), quantify institutions independently, omitting the impact of interactions like joint initiatives, research collaborations, graduate admissions, or faculty hiring. Such rankings are also widely criticized (10, 11) for emphasizing educational inputs, like reputation, wealth, and “selectivity,” rather than educational outputs. In contrast, faculty hiring networks simultaneously represent interactions and expert assessments of outcomes, which enables an effective, quantitative approach by which to characterize the impact of prestige, identify large-scale patterns in hiring, and shed light on the relative roles of merit and status.
Here, we investigate the structure of faculty hiring networks using complete and hand-curated data on the placements of nearly 19,000 tenure-track or tenured faculty, among 461 North American departmental or school-level academic units, in the disciplines of computer science, business, and history (see Supplementary Materials and table S1). These disciplines represent highly distinct scholastic traditions, which provide a broad basis for characterizing general patterns in faculty placement in academia. Institutions in our sample were selected from comprehensive lists of Ph.D.-granting academic units within each discipline. To be present in our data, a faculty member must have received his or her doctorate from an in-sample institution and, at the time of sampling, held a faculty position at an in-sample institution. Of the faculty sampled, 86% met these criteria, indicating a nearly closed doctoral ecosystem among these institutions.
To these data, we apply a novel network-based technique for extracting a prestige hierarchy that best explains the observed hiring decisions. Across disciplines, we show that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality among institutions. Furthermore, we show that (i) doctoral prestige alone better predicts ultimate placement than authoritative rankings from the U.S. News & World Report and the NRC, (ii) female graduates generally place worse than male graduates from the same institution, and (iii) increased institutional prestige leads to increased faculty production, better faculty placement, and a more influential position within a discipline. These results advance our ability to quantify and understand the systematic structure of academia, shed new light on the factors that shape individual career trajectories, and identify a novel connection between faculty hiring and social inequality.

RESULTS

Across the sampled disciplines, we find that faculty production (number of faculty placed) is highly skewed, with only 25% of institutions producing 71 to 86% of all tenure-track faculty (table S2; this and subsequent ranges indicate the range of a given quantity across the three disciplines, unless otherwise noted). The number of faculty within an academic unit (number of faculty hired, that is, the unit’s size) is also skewed, with some units being two to three times larger than others. Business schools are especially large, generally containing several internal departments, with a mean size of 70 faculty members who received their doctorates from other within-sample units, whereas computer science and history have mean sizes of 21 and 29, respectively (see Supplementary Materials). The differences in size within a discipline, however, cannot explain the observed differences in placements. If placements were simply proportional to the size of a unit, then the placement and size distributions would be statistically indistinguishable. A simple test of this size-proportional placement hypothesis shows that it may be rejected out of hand [Kolmogorov-Smirnov (KS) test, P < 10⁻⁸; Fig. 2, B and C], indicating genuine differential success rates in faculty placement.
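As a minimal sketch of this comparison (not the authors’ exact procedure), the snippet below contrasts the distribution of faculty produced (out-degree) with the distribution of faculty hired (in-degree) using a two-sample KS test; under size-proportional placement, the two distributions should be statistically indistinguishable. The edge list and institution names here are hypothetical illustrations, not the study data.

```python
# Sketch: test the size-proportional placement hypothesis with a two-sample KS test.
from collections import Counter
from scipy.stats import ks_2samp

# Hypothetical edge list: (doctoral institution, hiring institution), one entry per
# faculty member. The real networks are given in the supplementary datasets.
edges = [("U1", "U2"), ("U1", "U3"), ("U1", "U4"), ("U2", "U3"),
         ("U2", "U4"), ("U3", "U4"), ("U4", "U1")]

out_degree = Counter(u for u, v in edges)  # faculty produced by each institution
in_degree = Counter(v for u, v in edges)   # faculty hired by each institution

institutions = sorted(set(out_degree) | set(in_degree))
produced = [out_degree.get(x, 0) for x in institutions]
hired = [in_degree.get(x, 0) for x in institutions]

# Under size-proportional placement, these two samples come from the same distribution.
stat, p_value = ks_2samp(produced, hired)
print(f"KS statistic = {stat:.3f}, P = {p_value:.3g}")
```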
Fig. 2 Inequality in faculty production.
(A) Lorenz curves showing the fraction of all faculty produced as a function of producing institutions. (B and C) Complementary cumulative distributions for institution out-degree (faculty produced) and in-degree (faculty hired). The means of these distributions are 21 for computer science, 70 for business, and 29 for history.
The Gini coefficient, a standard measure of social inequality, is defined as the mean relative difference between a uniformly random pair of observed values. Thus, G = 0 denotes strict equality, and G = 1 maximal inequality. We find G = 0.62 to 0.76 for faculty production (Fig. 2, A and B), indicating strong inequality across disciplines [cf., G = 0.45 for the income distribution of the United States (12)].
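The definition above translates directly into a short computation. The sketch below implements the Gini coefficient as the mean absolute difference between pairs of values, normalized by twice the mean; the production counts are hypothetical and chosen only for illustration.

```python
import numpy as np

def gini(values):
    """Gini coefficient: mean absolute difference between a uniformly random pair
    of observed values, normalized by twice the mean (G = 0 strict equality,
    G = 1 maximal inequality)."""
    x = np.asarray(values, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()  # average over all pairs
    return mean_abs_diff / (2.0 * x.mean())

# Hypothetical faculty-production counts, for illustration only
production = [120, 85, 40, 18, 9, 4, 2, 1, 0, 0]
print(f"G = {gini(production):.2f}")
```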
Strong inequality holds even among the top faculty producers: the top 10 units produce 1.6 to 3.0 times more faculty than the second 10, and 2.3 to 5.6 times more than the third 10. For such differences to reflect purely meritocratic outcomes, that is, utilitarian optimality of total scholarship (13), differences in placement rates must reflect inherent differences in the production of scholarship. Under a meritocracy, the observed placement rates would imply that faculty with doctorates from the top 10 units are inherently two to six times more productive than faculty with doctorates from the third 10 units. The magnitude of these differences makes a pure meritocracy seem implausible, suggesting the influence of nonmeritocratic factors like social status.
If faculty placement overall followed a perfect social hierarchy, then no faculty would be hired at an institution more prestigious than their doctorate (6). The extent to which a particular hiring network exhibits this pattern may be determined by identifying the minimum violation ranking (14, 15), which is a hierarchy that is maximally close to this extreme.
Within faculty hiring networks, each vertex represents an institution, and each directed edge (u, v) represents a faculty member at v who received his or her doctorate from u. A prestige hierarchy is then a ranking π of vertices, where π_u = 1 is the highest-ranked vertex. The hierarchy’s strength is given by ρ, the fraction of edges that point downward, that is, with π_u ≤ π_v, maximized over all rankings (14). Equivalently, ρ is the rate at which faculty place no better in the hierarchy than their doctorate. When ρ = 1/2, faculty move up or down the hierarchy at equal rates, regardless of where they originate, whereas ρ = 1 indicates a perfect social hierarchy.
Both the inferred hierarchy π and its strength ρ are of interest. For large networks, there are typically many equally plausible rankings with the maximum ρ (15). To extract a consensus ranking, we sample optimal rankings by repeatedly choosing a random pair of vertices and swapping their ranks, if the resulting ρ is no smaller than for the current ranking. We then combine the sampled rankings with maximal ρ into a single prestige hierarchy by assigning each institution u a score equal to its average rank within the sampled set, and the order of these scores gives the consensus ranking (see the Supplementary Materials). The distribution of ranks within this set for some u provides a natural measure of rank uncertainty.
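The sketch below illustrates the core of this procedure on a small hypothetical weighted network: a zero-temperature Monte Carlo rule that proposes swapping the ranks of two random institutions and accepts whenever ρ does not decrease. The full pipeline described above additionally samples many optimal rankings and averages each institution’s rank into a consensus hierarchy; those steps, and the function and variable names used here, are simplifications rather than the authors’ exact implementation.

```python
import random
from collections import Counter

def frac_unviolated(rank, edges):
    """Fraction of hires (edge weight) pointing down the hierarchy, i.e. rank[u] <= rank[v]."""
    total = sum(edges.values())
    down = sum(w for (u, v), w in edges.items() if rank[u] <= rank[v])
    return down / total

def sample_ranking(edges, n_steps=50000, seed=1):
    """Zero-temperature Monte Carlo: swap two random institutions' ranks and accept
    the swap whenever the fraction of unviolated edges (rho) does not decrease."""
    rng = random.Random(seed)
    nodes = sorted({x for edge in edges for x in edge})
    rank = {u: i + 1 for i, u in enumerate(nodes)}  # arbitrary starting ranking; 1 = top
    rho = frac_unviolated(rank, edges)
    for _ in range(n_steps):
        u, v = rng.sample(nodes, 2)
        rank[u], rank[v] = rank[v], rank[u]
        new_rho = frac_unviolated(rank, edges)
        if new_rho >= rho:
            rho = new_rho                          # accept the swap
        else:
            rank[u], rank[v] = rank[v], rank[u]    # reject: restore previous ranking
    return rank, rho

# Hypothetical weighted hiring network: (doctoral institution, hiring institution) -> hires
edges = Counter({("A", "B"): 6, ("A", "C"): 4, ("A", "D"): 3, ("B", "C"): 3,
                 ("B", "D"): 2, ("C", "D"): 2, ("C", "B"): 1, ("D", "A"): 1})
ranking, rho = sample_ranking(edges)
print(sorted(ranking, key=ranking.get), f"rho = {rho:.2f}")
```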
Across disciplines, we find steep prestige hierarchies, in which only 9 to 14% of faculty are placed at institutions more prestigious than their doctorate (ρ = 0.86 to 0.91). Furthermore, the extracted hierarchies are 19 to 33% stronger than expected from the observed inequality in faculty production rates alone (Monte Carlo, P < 10⁻⁵; see Supplementary Materials), indicating a specific and significant preference for hiring faculty with prestigious doctorates.
Examined in detail, these hierarchies generally, but not always, assign higher ranks to elite institutions (see Supplementary Materials and fig. S10, which visualizes the hierarchies for the 60 top-ranked institutions in each discipline), and more highly ranked institutions have lower rank uncertainty (fig. S3). These network-based rankings are also at least as accurate in estimating institutional prestige as the authoritative rankings: prestige correlates with the U.S. News & World Report rankings (r² = 0.51 to 0.79, P < 10⁻¹⁷) and the NRC rankings (r² = 0.33 to 0.80, P < 10⁻¹¹; see Supplementary Materials) as strongly as those two rankings correlate with each other (r² = 0.39 to 0.83, P < 10⁻¹³). Unlike the authoritative rankings, however, prestige hierarchies provide additional insights into the pattern of faculty hiring across disciplines.
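For illustration, the snippet below computes the squared correlation between two hypothetical institutional rankings. It assumes a Pearson correlation between rank scores; the exact correlation measure used for the values reported above is described in the Supplementary Materials, so treat this as a sketch rather than the authors’ computation.

```python
from scipy.stats import pearsonr

# Hypothetical prestige ranks and authoritative ranks for the same ten institutions
prestige_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
usnews_rank = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

# Squared Pearson correlation between the two rankings
r, p = pearsonr(prestige_rank, usnews_rank)
print(f"r^2 = {r**2:.2f}, P = {p:.3g}")
```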
The placement experience of individual faculty is captured by the distribution of changes-in-rank relative to the individual’s doctoral institution. Across disciplines, we find that faculty place an average of 27 to 47 ranks below their doctorate (Fig. 3). The median change of 21 to 35 is smaller, indicating a sizable right skew in each of these distributions. When combined with the observed inequality in faculty production across institutions, the average rank change implies that a typical professor can expect to supervise two to four times fewer new within-discipline faculty than did their own doctoral advisor. This falloff in faculty production is sufficiently steep that only the top 18 to 36% of institutions are net producers of within-discipline faculty (table S2).
Fig. 3 Faculty placement distributions.
(A) Network visualizations for computer science, business, and history (top to bottom) showing central positions for institutions in the top 15% of prestige ranks (highlighted; vertex size proportional to out-degree k_out). (B and C) Estimated probability density functions for relative change in prestige (doctoral to faculty institution) for (B) the top 15% and (C) the remaining institutions, showing a common, right-skewed structure.
The observed rank changes are also unequally distributed by doctoral prestige and by gender. For instance, a greater fraction of faculty trained at higher-ranked institutions make smaller moves down the hierarchy than those trained at lower-ranked institutions (Fig. 3, B and C; see Supplementary Materials), indicating that the steepness of the hierarchy increases as prestige falls. Furthermore, male and female faculty experience similar but not equivalent rank change distributions (KS test, P < 10⁻³; figs. S5 and S6): the median change for men is 21 to 35 ranks, whereas that for women is 23 to 38. Differences by gender are greatest for graduates of the most prestigious institutions in computer science and business, where median placement for women graduating from the top 15% of units is 12 to 18% worse than for men from the same institutions. That is, the hierarchy is slightly steeper for elite women than for elite men in these disciplines. In contrast, we find no gender difference in median placement for history.
The strength of the extracted hierarchies suggests that individual faculty placement may be predictable from doctoral prestige alone, without directly modeling the characteristics or preferences of individuals or institutions. We test this hypothesis by quantifying and comparing the placement accuracy of doctoral prestige to that of alternative measures, including both authoritative rankings and network-based measures. Each of these measures represents a ranking of institutions, from which we calculate a distribution of rank changes relative to doctoral rank for each faculty member in our sample. The predictive accuracy of each measure is then quantified by its area under the curve (AUC) score (16) for the placements of the assistant professors in our sample. The AUC represents the probability that a uniformly random true positive (correct placement) is ranked above a uniformly random false positive (incorrect placement). The closer the AUC is to 1.0, the better that measure predicts placement, whereas a value of AUC = 0.5 represents accuracy no better than chance.
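The sketch below makes the AUC definition concrete on a small hypothetical example: candidate placements scored by some ranking-derived measure, with a binary label marking the true placement. Both the scores and labels are invented for illustration; the pairwise computation at the end matches the probabilistic definition given above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical candidate placements scored by a ranking-derived measure;
# label 1 marks the true (observed) placement, 0 an incorrect one.
scores = np.array([0.90, 0.40, 0.70, 0.20, 0.80, 0.10, 0.30, 0.60])
labels = np.array([1, 0, 0, 0, 1, 0, 0, 1])

# Library computation of the AUC
print(f"AUC = {roc_auc_score(labels, scores):.2f}")

# Equivalent pairwise form: probability that a uniformly random true positive is
# scored above a uniformly random false positive (ties count as one half).
pos, neg = scores[labels == 1], scores[labels == 0]
pairwise = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
print(f"AUC (pairwise) = {pairwise.mean():.2f}")
```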
Across disciplines, prestige hierarchies make the most accurate predictions of faculty placement, with AUCs ranging from 0.58 to 0.67 (see Supplementary Materials; fig. S9). All other single measures, including the authoritative rankings from the U.S. News & World Report and the NRC, have lower accuracies, sometimes substantially so. Furthermore, the relative ordering of alternative measures by their accuracies is not consistent across disciplines, indicating poor generality. In contrast, prestige is always the best predictor. The modest overall accuracy (AUC < 0.7) indicates that other factors may play substantial roles in particular placements, for example, the contingency of a particular department hiring in a particular field in a particular year. Identifying and quantifying the importance of such factors would shed new light on the efficiency of faculty hiring markets.
Together, these results are broadly consistent with an academic system organized in a classic core-periphery pattern (17), in which increased prestige correlates with occupying a more central, better connected, and more influential network position (18) (Fig. 4). Supporting this conclusion, we find that standard measures of network centrality correlate strongly with prestige rank (see Supplementary Materials; fig. S8). For instance, the harmonic centrality—an inverse measure of the mean shortest-path distance from u to all other vertices (19)—increases smoothly with prestige, meaning that high-prestige institutions are separated from all other institutions by many fewer intermediaries than are low-prestige institutions. As a result, faculty at central institutions literally perceive a “small world” (20) as compared to faculty located in the periphery.
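To make the harmonic centrality concrete, the sketch below computes it on a small hypothetical directed hiring network using networkx. The graph is reversed so that the measure reflects distances from each institution to the rest of the network via placements, mirroring the description above; this is an illustrative choice, not necessarily the authors’ exact computation.

```python
import networkx as nx

# Hypothetical directed hiring network: edge (u, v) = a faculty member at v with a
# doctorate from u.
G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"),
                ("C", "D"), ("D", "E"), ("E", "C")])

# networkx's harmonic_centrality sums reciprocal shortest-path distances *to* each
# vertex, so the reversed graph measures how close u is to the institutions reachable
# from it. Unreachable pairs contribute zero rather than breaking the sum.
outgoing = nx.harmonic_centrality(G.reverse())
for u, c in sorted(outgoing.items(), key=lambda kv: -kv[1]):
    print(f"{u}: {c:.2f}")
```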
Fig. 4 Core-periphery patterns.
(A to C) For several institutions within each disciplinary hiring network, we highlight the tree of shortest paths rooted at each u within this network (black) for (A) computer science, (B) business, and (C) history (vertex size is proportional to out-degree, and lighter colors indicate higher prestige). As prestige increases (left), the paths in these trees contract, reflecting a more central network position, increased faculty production, and better faculty placement.
A strong core-periphery pattern has profound implications for the free exchange of ideas. Research interests, collaboration networks, and academic norms are often cemented during doctoral training (2). Thus, the centralized and highly connected positions of higher-prestige institutions enable substantial influence, via doctoral placement, over the research agendas, research communities, and departmental norms throughout a discipline (6, 21). The close proximity of the core to the entire network implies that ideas originating in the high-prestige core, regardless of their merit, spread more easily throughout the discipline, whereas ideas originating from low-prestige institutions must filter through many more intermediaries. Reinforcing the association of centrality and insularity with higher prestige, we observe that 68 to 88% of faculty at the top 15% of units received their doctorate from within this group, and only 4 to 7% received their doctorate from below the top 25% of units.

DISCUSSION

These results demonstrate the enormous role of institutional prestige in shaping faculty hiring across academe, both for institutions and for individuals seeking faculty positions. Prestige hierarchies are also likely to influence outcomes in other scholarly activities, including research priorities, resource allocation, and educational outcomes, either directly through prestige-sensitive decision making or indirectly through faculty placement. Despite the confounded nature of merit and social status within measurable prestige, the observed hierarchies are sufficiently steep that attributing their structure to differences in merit alone seems implausible.
Supporting this conclusion are the observed statistical differences in placement quality by gender within computer science and business. Similar patterns of gender inequality are observed in other aspects of scholastic evaluation, particularly in the sciences (22–24), which indicates a systematic role for nonmeritocratic factors. In contrast, faculty placement in history exhibits no such gender inequality. Whether this difference is related to the smaller proportion of male faculty in history (64%) as compared to computer science (85%) or business (78%) is unknown. Identifying the mechanisms that underlie these differences may shed additional light on the origins of gender inequality and the role of other nonmeritocratic factors in faculty hiring.
It is remarkable that despite the broad differences in scholastic practices and evaluation standards between computer science, business, and history, these disciplines exhibit qualitatively and quantitatively similar patterns. This common structure suggests that strong prestige hierarchies may be fundamental, a claim that is supported in part by qualitatively similar results, using different methods of evaluation, from single-discipline studies of faculty hiring networks in mathematics, economics, law, sociology, political science, and organizational science (8, 25–31). However, the specific mechanisms that produce and maintain these hierarchies remain unclear (see Supplementary Materials). A better understanding of their nature would facilitate the disentanglement of genuine merit from mere social status within prestige hierarchies, and shed new light on the operation of current faculty markets.
In our analysis, institutional prestige depends on both overall faculty production and placement quality. Some institutions achieve relatively high prestige by successfully placing a smaller number of faculty at highly ranked institutions. For example, in computer science, Caltech ranks above 98.5% of other institutions but places fewer computer science faculty than 27 lower-ranked institutions.
Both these unusually successful institutions, and the 9 to 14% of individual faculty who place above their doctoral rank, present a puzzle, and it remains unknown what characteristics distinguish them from the more typical experience. Identifying the factors, if any, that distinguish these exceptional faculty, and the degree to which such factors compensate for a doctorate from a low-prestige institution, would have significant implications for the mechanisms used in faculty hiring across academia. A proper study of this phenomenon would require detailed data on the characteristics of the research, mentoring, institutional resources, and other factors that are not part of the present study.
A complete study of the placements of all faculty in all disciplines would be a substantial undertaking but would provide a broad basis by which to understand what makes these unusual institutions and individuals so successful. Such a broad study would also facilitate a broader understanding of the processes that shape the flow of faculty across disciplines, the formation of new fields, and the emerging practice of interdisciplinary research.
At the institutional level, assessments based on faculty hiring networks provide a principled and data-driven alternative to the widely criticized methods of the U.S. News & World Report and the NRC (10, 11), among others. Rather than choose an arbitrary weighting of arbitrary factors, a prestige hierarchy extracted from a faculty hiring network uses the collective assessments of research and education outcomes by many semi-independent groups of experts—the faculty themselves. Additionally, faculty hiring is a costly and highly decentralized process, the results of which are generally readily available to the public. These factors suggest that the data on which a prestige hierarchy depends are likely to be more robust to corruption by self-serving institutional manipulation, a known problem for U.S. News & World Report.
More broadly, the strong social inequality found in faculty placement across disciplines raises several questions. How many meritorious research careers are derailed by the faculty job market’s preference for prestigious doctorates? Would academia be better off, in terms of collective scholarship, with a narrower gap in placement rates? In addition, if collective scholarship would improve with less inequality, what changes would do more good than harm in practice? These are complicated questions about the structure and efficacy of the academic system, and further study is required to answer them. We note, however, that economics and the study of income and wealth inequality may offer some insights about the practical consequences of strong inequality (13).
In closing, there is nothing specific to faculty hiring in our network analysis, and the same methods for extracting prestige hierarchies from interaction data could be applied to study other forms of academic activities, for example, scientific citation patterns among institutions (32). These methods could also be used to characterize the movements of employees among firms within or across commercial sectors, which may shed light on mechanisms for economic and social mobility (33). Finally, because graduate programs admit as students the graduates of other institutions, a similar approach could be used to assess the educational outcomes of undergraduate programs.

Acknowledgments

We thank M. Young, J. Horey, K. Maxwell, C. Moore, J. Silk, M. A. Porter, J. van Cleve, M. Jackson, S. Bowles, P. Mucha, and A. Jacobs for helpful conversations. Funding: This work was supported in part by the Ewing Marion Kauffman Foundation grant # 20120085. Author contributions: A.C. conceived the research and managed the data collection. A.C. and S.A. designed the analyses. A.C. and D.B.L. conducted the analyses. All authors wrote the manuscript. Competing interests: The authors declare that they have no competing financial interests. Data and materials availability: Computer code implementing many of the analysis methods described in this paper and other information can be found online at http://santafe.edu/~aaronc/facultyhiring/.

Supplementary Material

Summary

Table S1. Data summary for collected tenure-track faculty from each discipline.
Table S2. Statistical measures of inequality by discipline.
Fig. S1. An example graph A, the two minimum violation rankings (MVRs) on these vertices, both with S[π(A)] = 3, and a “consensus” hierarchy, in which the position of each u is the average of all positions that u takes in the MVRs.
Fig. S2. Bootstrap distributions (smoothed) for the fraction of unviolated edges ρ in the empirical data (filled) and in a null model (dashed), in which the in- and out-degree sequences are preserved but the connections between them are otherwise randomized.
Fig. S3. Prestige uncertainty versus prestige, shown as the SD of the estimated distribution versus the distribution mean, for (A) computer science, (B) business, and (C) history.
Fig. S4. Changes in rank from doctoral institution u to faculty institution v, for each edge (u, v) in (A) computer science, (B) business, and (C) history.
Fig. S5. Changes in rank from doctoral institution u to faculty institution v, for each edge (u, v) in (A) computer science, (B) business, and (C) history, divided by male versus female faculty for u in the top 15% of institutions (top panels) or in the remaining institutions (bottom panels).
Fig. S6. Ratio of the median change-in-rank, from doctoral institution u to faculty institution v, for men versus women, for faculty receiving their doctorate from the most prestigious institutions, showing that elite women tend to place below their male counterparts in computer science and business (ratio < 1).
Fig. S7. Changes in rank from doctoral institution u to faculty institution v, for each edge (u, v) in (A) computer science, (B) business, and (C) history, divided by faculty who have held one or more postdoctoral positions versus those that held none, for u in the top 15% of institutions (top panels) or in the remaining institutions (bottom panels).
Fig. S8. Centrality measures versus prestige rank.
Fig. S9. Placement accuracy for assistant professors.
Fig. S10. Prestige scores for the top 60 institutions for (A) computer science, (B) business, and (C) history.
Fig. S11. Centrality versus prestige rank for (A) computer science, (B) business, and (C) history departments, where centrality is defined as the mean geodesic distance (also known as closeness) divided by the maximum geodesic distance (diameter).
Fig. S12. Relative change in rank from doctoral to current institution for all Full, Associate, and Assistant Professors in (A) computer science, (B) business, and (C) history.
Fig. S13. Geographic structure of faculty hiring.
Dataset 1: Business Faculty-Hiring Network Edges
Dataset 2: Business Faculty-Hiring Network Vertex Attributes
Dataset 3: Computer Science Faculty-Hiring Network Edges
Dataset 4: Computer Science Faculty-Hiring Network Vertex Attributes
Dataset 5: History Faculty-Hiring Network Edges
Dataset 6: History Faculty-Hiring Network Vertex Attributes
References (34–49)

REFERENCES AND NOTES

1. R. L. Geiger, Knowledge and Money (Stanford University Press, Palo Alto, CA, 2004).
2. E. B. Petersen, Negotiating academicity: Postgraduate research supervision as category boundary work. Stud. High. Educ. 32, 475–487 (2007).
3. D. Cyranoski, N. Gilbert, H. Ledford, A. Nayar, M. Yahia, The PhD factory. Nature 472, 276–279 (2011).
4. The Royal Society, The Scientific Century: Securing our Future Prosperity (The Royal Society, London, 2010).
5. J. Surowiecki, The Wisdom of Crowds (Doubleday Books, New York, 2004).
6. J. Henrich, F. J. Gil-White, The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evol. Hum. Behav. 22, 165–196 (2001).
7. S.-K. Han, Tribal regimes in academia: A comparative analysis of market structure across disciplines. Soc. Networks 25, 251–280 (2003).
8. V. Burris, The academic caste system: Prestige hierarchies in PhD exchange networks. Am. Sociol. Rev. 69, 239–264 (2004).
9. National Research Council, Data-Based Assessment of Research-Doctorate Programs in the United States (National Academies Press, Washington, DC, 2010).
10. M. N. Bastedo, N. A. Bowman, U.S. News & World Report college rankings: Modeling institutional effects on organizational reputation. Am. J. Educ. 116, 163–183 (2010).
11. J. R. Cole, Too big to fail, The Chronicle of Higher Education, 24 April 2011; http://chronicle.com/article/Too-Big-to-Fail/127212/.
12. The World Factbook 2013–14 (Central Intelligence Agency, Washington, DC, 2013).
13. D. Kahneman, P. P. Wakker, R. Sarin, Back to Bentham? Explorations of experienced utility. Q. J. Econ. 112, 375–406 (1997).
14. H. de Vries, Finding a dominance order most consistent with a linear hierarchy: A new procedure and review. Anim. Behav. 55, 827–843 (1998).
15. J. Park, Diagrammatic perturbation methods in networks and sports ranking combinatorics. J. Stat. Mech. P04006 (2010).
16. J. A. Hanley, B. J. McNeil, The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 29–36 (1982).
17. P. Csermely, A. London, L.-Y. Wu, B. Uzzi, Structure and dynamics of core-periphery networks. J. Complex Netw. 1, 93–123 (2013).
18. P. Bonacich, Power and centrality: A family of measures. Am. J. Sociol. 92, 1170–1182 (1987).
19. M. Newman, Networks: An Introduction (Oxford Univ. Press, Oxford, 2010).
20. D. J. Watts, S. H. Strogatz, Collective dynamics of ‘small-world’ networks. Nature 393, 440–442 (1998).
21. R. Axelrod, An evolutionary approach to norms. Am. Polit. Sci. Rev. 80, 1095–1111 (1986).
22. S. J. Ceci, W. M. Williams, Understanding current causes of women’s underrepresentation in science. Proc. Natl. Acad. Sci. U.S.A. 108, 3157–3162 (2011).
23. C. A. Moss-Racusin, J. F. Dovidio, V. L. Brescoll, M. J. Graham, J. Handelsman, Science faculty’s subtle gender biases favor male students. Proc. Natl. Acad. Sci. U.S.A. 109, 16474–16479 (2012).
24. S. Knobloch-Westerwick, C. J. Glynn, M. Huge, The Matilda effect in science communication. Sci. Commun. 35, 603–625 (2013).
25. S. A. Myers, P. J. Mucha, M. A. Porter, Mathematical genealogy and department prestige. Chaos 21, 041104 (2011).
26. R. Amir, M. Knauff, Ranking economics departments worldwide on the basis of PhD placement. Rev. Econ. Stat. 90, 185–190 (2008).
27. D. M. Katz, J. R. Gubler, J. Zelner, M. J. Bommarito II, E. A. Provins, E. M. Ingall, Reproduction of hierarchy? A social network analysis of the American law professoriate. J. Legal Educ. 61, 1–28 (2011).
28. R. A. Hanneman, The prestige of Ph.D. granting departments of sociology: A simple network approach. Connections 24, 68–77 (2001).
29. B. M. Schmidt, M. M. Chingos, Ranking doctoral programs by placement: A new method. PS Polit. Sci. Polit. 40, 523–529 (2007).
30. J. H. Fowler, B. Grofman, N. Masuoka, Social networks in political science: Hiring and placement of Ph.D.s, 1960–2002. PS Polit. Sci. Polit. 40, 729–739 (2007).
31. C. C. Miller, W. H. Glick, L. B. Cardinal, The allocation of prestigious positions in organizational science: Accumulative advantage, sponsored mobility, and contest mobility. J. Organ. Behav. 26, 489–516 (2005).
32. P. Deville, D. Wang, R. Sinatra, C. Song, V. D. Blondel, A.-L. Barabási, Career on the move: Geography, stratification, and scientific impact. Sci. Rep. 4, 4470 (2014).
33. S. Bowles, S. N. Durlauf, K. Hoff, Eds., Poverty Traps (Princeton Univ. Press, Princeton, NJ, 2006).
34. M. E. J. Newman, G. T. Barkema, Monte Carlo Methods in Statistical Physics (Clarendon Press, Oxford, 1999).
35. B. Efron, R. J. Tibshirani, An Introduction to the Bootstrap (Chapman and Hall/CRC, Boca Raton, FL, 1994).
36. S. P. Borgatti, Centrality and network flow. Soc. Networks 27, 55–71 (2005).
37. M. Molloy, B. A. Reed, A critical point for random graphs with a given degree sequence. Random Struct. Algor. 6, 161–180 (1995).
38. P. Boldi, S. Vigna, Axioms for centrality. Internet Math. 10, 222–262 (2014).
39. M. Clarke, A. P. Sanoff, M. Savino, A. Usher, College and University Ranking Systems: Global Perspectives and American Challenges (Institute for Higher Education Policy, Washington, DC, 2007), pp. 9–21.
40. National Research Council, Research Doctorate Programs in the United States: Continuity and Change (National Academies Press, Washington, DC, 1995).
41. T. J. Webster, A principal component analysis of the U.S. News & World Report tier rankings of colleges and universities. Econ. Educ. Rev. 20, 235–244 (2001).
42. G. Leef, M. Lowrey, Do college rankings mean anything? Inquiry 17, 1–9 (2004).
43. M. Meredith, Why do universities compete in the ratings game? An empirical analysis of the effects of the U.S. News & World Report college rankings. Res. High. Educ. 45, 443–461 (2004).
44. K. O’Meara, Higher Education: Handbook of Theory and Research, J. C. Smart, Ed. (Springer, New York, 2007), vol. XXII, pp. 121–179.
45. N. A. Bowman, M. N. Bastedo, Getting on the front page: Organizational reputation, status signals, and the impact of U.S. News & World Report on student decisions. Res. High. Educ. 50, 415–436 (2009).
46. D. J. Tancredi, K. D. Bertakis, A. Jerant, Short-term stability and spread of the U.S. News & World Report primary care medical school rankings. Acad. Med. 88, 1107–1115 (2013).
47. E. Grimson, Dangers of rankings with inaccurate data. Comput. Res. News 22, 1–4 (2010).
48. L. Wasserman, All of Nonparametric Statistics (Springer, New York, 2007).
49. M. Girvan, M. E. J. Newman, Community structure in social and biological networks. Proc. Natl. Acad. Sci. U.S.A. 99, 7821–7826 (2002).

Published In

Science Advances
Volume 1 | Issue 1
February 2015

Submission history

Received: 22 September 2014
Accepted: 21 December 2014

Keywords

  1. faculty placement
  2. hiring networks
  3. prestige
  4. inequality
  5. hierarchy

Authors

Affiliations

Aaron Clauset* [email protected]
Department of Computer Science, University of Colorado, Boulder, CO 80309, USA.
BioFrontiers Institute, University of Colorado, Boulder, CO 80303, USA.
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA.
Samuel Arbesman
Ewing Marion Kauffman Foundation, Kansas City, MO 64110, USA.
Daniel B. Larremore
Department of Epidemiology, Harvard School of Public Health, Boston, MA 02115, USA.
Center for Communicable Disease Dynamics, Harvard School of Public Health, Boston, MA 02115, USA.

Notes

*Corresponding author. E-mail: [email protected]
