Quantifying the Influence of Prestige
A new study by an interdisciplinary team of researchers focuses on “who hires whose graduates as faculty” in order to “present and analyze comprehensive placement data on nearly 19,000 regular faculty in three disparate disciplines. Across disciplines, we find that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality.” The disciplines the authors looked at were computer science, business, and history, which they took to be sufficiently different from one another to provide an adequate basis for some generalizations about academia overall.
Some of the findings:
- Across the sampled disciplines, we find that faculty production (number of faculty placed) is highly skewed, with only 25% of institutions producing 71 to 86% of all tenure-track faculty.
- Across disciplines, we find steep prestige hierarchies, in which only 9 to 14% of faculty are placed at institutions more prestigious than their doctorate (ρ = 0.86 to 0.91). Furthermore, the extracted hierarchies are 19 to 33% stronger than expected from the observed inequality in faculty production rates alone… indicating a specific and significant preference for hiring faculty with prestigious doctorates.
- When combined with the observed inequality in faculty production across institutions, the average rank change implies that a typical professor can expect to supervise two to four times fewer new within-discipline faculty than did their own doctoral advisor. This falloff in faculty production is sufficiently steep that only the top 18 to 36% of institutions are net producers of within-discipline faculty.
- A greater fraction of faculty trained at higher-ranked institutions make smaller moves down the hierarchy than those trained at lower-ranked institutions.
- Male and female faculty experience similar but not equivalent rank change distributions… with the median change for men being 21 to 35, while that for women is 23 to 38. Differences by gender are greatest for graduates of the most prestigious institutions in computer science and business… That is, the hierarchy is slightly steeper for elite women than for elite men in these disciplines. In contrast, we find no gender difference in median placement for history.
- Across disciplines, prestige hierarchies make the most accurate predictions of faculty placement.
- High-prestige institutions are separated from all other institutions by many fewer intermediaries than are low-prestige institutions. As a result, faculty at central institutions literally perceive a “small world” as compared to faculty located in the periphery.
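The production skew in the first bullet is a Lorenz-style computation: sort institutions by the number of faculty they place and ask what share of all placements the top quartile accounts for. A toy illustration with made-up counts (not the study’s data):

```python
def top_quartile_share(production_counts):
    """Given faculty placed per institution, return the share of all
    placements accounted for by the top 25% of institutions."""
    counts = sorted(production_counts, reverse=True)
    k = max(1, len(counts) // 4)  # top quartile (at least one institution)
    return sum(counts[:k]) / sum(counts)

# Hypothetical counts of faculty placed by each of eight institutions
production = [40, 25, 10, 8, 6, 5, 4, 2]
share = top_quartile_share(production)
print(f"Top 25% of institutions place {share:.0%} of faculty")  # here, 65%
```

In the study’s data the corresponding figure for the top quartile is 71 to 86%, depending on discipline.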
The authors make available a very well-designed interactive graphic with which you can explore the data.
In their discussion of their findings, they write:
These results demonstrate the enormous role of institutional prestige in shaping faculty hiring across academe, both for institutions and for individuals seeking faculty positions. Prestige hierarchies are also likely to influence outcomes in other scholarly activities, including research priorities, resource allocation, and educational outcomes, either directly through prestige-sensitive decision making or indirectly through faculty placement. Despite the confounded nature of merit and social status within measurable prestige, the observed hierarchies are sufficiently steep that attributing their structure to differences in merit alone seems implausible.
They also raise some questions for further study:
More broadly, the strong social inequality found in faculty placement across disciplines raises several questions. How many meritorious research careers are derailed by the faculty job market’s preference for prestigious doctorates? Would academia be better off, in terms of collective scholarship, with a narrower gap in placement rates? In addition, if collective scholarship would improve with less inequality, what changes would do more good than harm in practice? These are complicated questions about the structure and efficacy of the academic system, and further study is required to answer them.
The study is here. Its authors are Aaron Clauset (Colorado, Computer Science), Samuel Arbesman (Ewing Marion Kauffman Foundation), and Daniel B. Larremore (Harvard, Epidemiology).
Inside Higher Ed has an article about it here, and there are discussions at The Philosophers’ Cocoon and Digressions & Impressions.
UPDATE: Helen De Cruz posts some related numbers from philosophy and other fields here.
(image: screen captures from the study’s interactive graphic)
It’s not just, or even primarily, departmental prestige. The more one goes up the food chain, the more it becomes a matter of the power of specific senior people.
We (the APA along with the CPA and other organizations) should ask the authors of this study to conduct similar research on us. The study should be international in scope, including academic philosophers in several English-speaking countries, since they participate in the same philosophical network.
We shouldn’t even have to ask them to do it. They made the code available, and putting together the raw data for philosophy is something anyone could do. As far as I can see, anyone who knows even a little about Matlab (NB: I don’t) could do a similar job for philosophy.
See http://tuvalu.santafe.edu/~aaronc/facultyhiring/ for the Matlab inputs, and the links on http://danlarremore.com/faculty/tutorial.html for how to draw the pretty pictures.
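For anyone tempted to try: the core input is just a weighted edge list of (doctoral institution, hiring institution) pairs. A minimal sketch of assembling one, assuming a hypothetical two-column format rather than the paper’s actual Matlab inputs:

```python
import csv
import io
from collections import Counter

def hiring_edges(csv_text):
    """Parse rows of (doctoral institution, hiring institution) into a
    weighted edge list: how many PhDs from u were hired by v."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter((row["phd"], row["employer"]) for row in reader)

# Hypothetical records for a philosophy placement dataset
raw = """phd,employer
NYU,Rutgers
NYU,NYU
Rutgers,Arizona
NYU,Arizona
"""
edges = hiring_edges(raw)
# edges[("NYU", "Rutgers")] == 1, and so on
```

The real work, of course, is gathering the placement records themselves, not this bookkeeping.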
OK, I’ll kick-start a discussion, here. I’m fairly sure that this data tells against a meritocratic picture of academia, but what I’m wondering is how exactly it does so. Here is the only thing that the authors say about this:
“For such differences to reflect purely meritocratic outcomes… the observed placement rates would imply that faculty with doctorates from the top 10 units are inherently two to six times more productive than faculty with doctorates from the third 10 units. The magnitude of these differences makes a pure meritocracy seem implausible, suggesting the influence of nonmeritocratic factors like social status.”
We might think that this claim is easy to verify: look at publication rates amongst various PhDs and see if top-10 PhDs are in fact much more likely to publish. A quick (unscientific) Google search on a few names reveals that it may not be unreasonable to expect an NYU grad to publish 2-3 times as often as a grad from the 25th-ranked school. But there’s a confounding factor: tenure at a top-10 school pretty much requires a publication every year. Tenure at a 30-40 ranked institution requires no such thing. This means that when those top-10 schools hire those top-10 PhDs, it is in fact very likely that their publication (or “production”) rates will dramatically outstrip those of PhDs from less prestigious institutions. Paradoxically, the hiring skew itself makes it hard to disconfirm the meritocratic interpretation.
Anyway, as I say, I feel as though this has to be evidence of a serious problem, but I’m still trying to work out exactly how to respond to the person who will claim that this is mostly just talent going where talent is supposed to go.
Joe – I’m very interested in this, but the problem is that the quoted passage you mention (which seems central to the interpretation of these results) seems unsupported. Why do “the observed placement rates imply that faculty with doctorates from the top 10 units are inherently two to six times more productive than faculty with doctorates from the third 10 units”? I’d like to understand a bit better why we should expect a particular rate of dropoff in productivity to be needed to meritocratically explain a particular rate of dropoff in hireability.
(This is of course not to say that hiring is in fact purely meritocratic. But I’d like to know what they even might mean by meritocracy here, so that we can get some sense of whether they have any evidence of deviation from it, and if so what the magnitude of deviation is.)
I’m similarly puzzled by the question of how the data support the conclusion.
I don’t really know what ‘inherently’ means in ‘inherently two to six times more productive’. Maybe it isn’t supposed to mean anything, I’m not sure. And what does ‘productive’ mean in this context — does it really mean how many articles a professor publishes? Surely that is not the default measure of merit. (I doubt Harvard’s faculty was constantly wringing its collective hands over being stuck with that stiff Rawls when they could have snagged Rescher.)
Like the other commenters, I am definitely not trying to defend the idea that academia is a meritocracy — I think the sorting of candidates into jobs is some mix of ‘merit’ factors, unfair bias factors, and a big helping of luck. I’m just skeptical that the study shows anything like what it claims to show.
Typical: philosophers thinking they can walk in and criticise the methods of causal inference of another discipline by looking at one paragraph in an article.
I am a little startled by the claim that while a top-10 school requires a publication a year for tenure, a school ranked 30-40 would clearly have much less demanding requirements. Going one tier lower, I list the final tier in the top 50: Carnegie-Mellon, Johns Hopkins, UC-Davis, UC-Santa Barbara, University of Illinois-Chicago, Florida State, University of Rochester, Rice, St. Louis University, University of Minnesota-Minneapolis, and University of Missouri-Columbia. Does anyone seriously think that the requirements for tenure at these fine schools are significantly less than one publication a year? This is just a reminder that the ranking system we know and love doesn’t make such fine-grained distinctions as all that.
Part of the problem with convincing people to fight the pernicious pedigree bias in academia is that the defenders of the system will point to the same data to bolster their own conclusion that some universities are just that much better than others. They’ll look at the same data and say, “well, see, Harvard/Yale/Princeton/whoever is that much better than other schools. I mean, look at all the jobs their graduates snap up!” It’s a silly argument, and an extremely harmful one, but it’s out there, nevertheless.
If hiring were strictly meritocratic, then hiring patterns would follow patterns of production more closely than they do. It’s not, as some are supposing, that some external measure of productivity is imposed on the data; rather, the data is used to extract a measure of relative productivity. One then asks, how well does this measure predict tenured faculty hires? It turns out not very well: within the data pedigree and prestige are better predictors of hiring than scholarly productivity.
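For what it’s worth, the paper extracts its prestige hierarchy from the hiring network itself, roughly by finding an ordering of institutions under which as few hires as possible move “up” (a minimum violation ranking). A brute-force sketch on toy data, not the authors’ actual sampling algorithm:

```python
import itertools

def violations(order, edges):
    """Count hires that move up the hierarchy: PhD institution ranked
    below (worse than) the hiring institution."""
    rank = {u: i for i, u in enumerate(order)}  # 0 = most prestigious
    return sum(w for (phd, emp), w in edges.items() if rank[phd] > rank[emp])

def min_violation_ranking(nodes, edges):
    """Brute-force the ordering with fewest violations (fine for toy data;
    the paper samples near-optimal rankings for real networks)."""
    return min(itertools.permutations(nodes), key=lambda o: violations(o, edges))

# Hypothetical hiring counts: (phd institution, employer) -> number of hires
edges = {("A", "B"): 5, ("A", "C"): 4, ("B", "C"): 3, ("C", "A"): 1}
best = min_violation_ranking(["A", "B", "C"], edges)
# best is ("A", "B", "C"): only the single C -> A hire moves "up"
```

The steepness of the extracted hierarchy is then measured by how rare such upward moves are, which is where the 9 to 14% figure comes from.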
If hiring were strictly meritocratic, then hiring patterns would follow patterns of production more closely than they do.
I think this is exactly what’s in question. Can you explain why you think so? It makes sense if publication quantity is itself a measure of merit, but I think we all agree that’s too simplistic. We also all agree that hiring is not strictly meritocratic; the question is whether this new paper manages to do anything to quantify non-meritocratic elements, as it claims to do.
Matt Drabek, I would like to see where the silly argument is made. (I agree it is silly – the data do not support either interpretation over the other.) Can you give a pointer?
Having just written a dissertation on meritocracy, I have more than a passing interest in these matters.
The authors of the study don’t need to provide a definition of “merit”. An efficient system puts productive resources like jobs into the hands of those who can use them well. If academia is a meritocracy then, given the data, faculty coming out of the top 10 departments are 2-3 times more productive than those coming out of departments 11-20 and 2-6 times more productive than those coming out of departments 21-30. Since that is hugely implausible, some non-meritocratic factor, like prestige, must be at work.
I’m not sure academia is any worse than the rest of the economy these days (the most purely meritocratic environment I’ve experienced is the U.S. military). This is unfortunate since meritocracy is the only just economic arrangement (or so I argue). There are several causes: a culture in which happiness turns on being popular rather than excellent; leftists who have devalued merit or even denied its existence; and conservative economic policies under which wealth, income, and other social goods redound to those born into the right families rather than the talented and industrious.
Ironic. Sociologists have been studying prestige-based hierarchies in academic hiring practices for 25 or more years (often using network data), but because sociology is a low-prestige discipline, the issue only gets attention when it’s rediscovered by an epidemiologist, a computer scientist, and someone from Kauffman (probably an economist)…
Right, the authors do not have to define ‘merit’. Nobody in this comment thread said they have to define ‘merit’. Kenny asked why “the observed placement rates imply that faculty with doctorates from the top 10 units are inherently two to six times more productive than faculty with doctorates from the third 10 units.” I thought this was a good question. You say,
If academia is a meritocracy then, given the data, faculty coming out of the top 10 departments are 2-3 times more productive than those coming out of departments 11-20 and 2-6 times more productive than those coming out of departments 21-30.
But that doesn’t answer Kenny’s question at all. How do the data imply the alleged conclusion?
It would be interesting if Kieran Healy gave his interpretation of the study. He seems able to explain things better than epidemiologists, computer scientists, and someone from Kauffman. (Even if he is much lower prestige than the authors.)
I gather that everyone here is happy with straight bean counting as the measure of “merit.”
The lead author elaborates on his notion of a meritocracy in the IHE article: “Clauset said that when someone is working within a meritocracy, he or she has about an even (50 percent) chance of being placed in a higher-ranked or lower-ranked program than his or her Ph.D. program. But across disciplines, the study reveals steep “prestige hierarchies,” he said, in which only about 9 to 14 percent of Ph.D.s get a job in a more highly ranked program than their own. Placement figures for women in elite programs are slightly worse.”
This is a notion of relative merit. If one wanted to defend an alternative notion of meritocracy, one would need, for example, to maintain an absolute standard of advancement consistent with the considerably lower rate of advancement. The actual placement rates indicate that the gatekeepers of institutional access “…to the resources of scholarship and to the networks of scholars that circulate their work around the world” must be doing an extraordinary job.
The comments to the IHE article are also informative, in particular those of Robert Oprisko concerning the placement rates of the top graduates of all programs, compared with the placement rate of graduates of the elite programs.
Menand, Louis. The Marketplace of Ideas. New York: W.W. Norton, 2010. Print.
Earlier I said,

And what does ‘productive’ mean in this context — does it really mean how many articles a professor publishes? Surely that is not the default measure of merit.
Then later I said,
It makes sense if publication quantity is itself a measure of merit, but I think we all agree that’s too simplistic.
Then Anon says,
I gather that everyone here is happy with straight bean counting as the measure of “merit.”
“when someone is working within a meritocracy, he or she has about an even (50 percent) chance of being placed in a higher-ranked or lower-ranked program than his or her Ph.D. program.”
In the first place, this is mathematically impossible in a field like philosophy, since most candidates are placed in unranked programs (indeed, most are placed in departments with no graduate program at all). Maybe it’s just me, but that seems like a defect in a criterion for meritocracy.
Just as important, suppose we only look at candidates who are placed in ranked programs (so it is no longer mathematically impossible for the criterion to be satisfied). Then we expect a .5 chance of being placed higher or lower than the candidate’s own program if, but only if, merit is distributed randomly, without regard to where candidates are studying.
That may be true, but it is what needs to be shown, not merely assumed or stipulated.
You must first show, by citing the paper, that the programs are not “ranked” within the network ranking adopted by the paper. Bracketing this difficulty, your notion that for a meritocracy to hold, merit must be uniformly distributed across all programs is too strong and counterintuitive. According to it, one takes the entire cohort of applicants over all programs and declares that a meritocracy exists if 50% of candidates studying at institutions below the median rank are placed at institutions above the median rank, and 50% of candidates studying at institutions above the median rank are placed at institutions below it. Instead, as Clauset suggests, the movement above and below the rank of one’s program is a conditional probability, conditioned on the program one is studying in. Please explain why the conditioning must be removed.
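One way to see what the 50% baseline does and does not assume is a toy simulation: if graduates’ origins and placements were both uniform over n ranked programs, the chance of moving strictly up is (1 - 1/n)/2, just under one half because of ties (and because graduates of the top program can only move down). All names, sizes, and the uniformity assumption here are hypothetical, chosen only to illustrate the baseline:

```python
import random

def upward_fraction(n_programs, n_grads, seed=0):
    """Simulate graduates from uniformly random programs placed at
    uniformly random programs; return the fraction placed at a program
    ranked strictly above their own."""
    rng = random.Random(seed)
    up = 0
    for _ in range(n_grads):
        origin = rng.randrange(n_programs)     # 0 = top rank
        placement = rng.randrange(n_programs)
        if placement < origin:                 # placed at a better rank
            up += 1
    return up / n_grads

frac = upward_fraction(n_programs=50, n_grads=100_000)
# frac comes out a bit under 0.5, far above the observed 9 to 14%
```

Under this null model the conditioning question dissolves, since every program produces and absorbs graduates equally; the interesting disagreement above is about what replaces uniformity in a realistic meritocratic baseline.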
Huh. If Helen’s post is to be trusted (and I have no reason not to trust it), people (like me) who are studying at a school not found in the top 50 have some serious unrealistic optimism motivating them. 🙂
Reposted from Leiter where there is a similar discussion:
Does pedigree (sometimes) trump ability? Yes it most certainly does, if ability is construed as the capacity to publish in a peer-reviewed journal, which is surely a *relatively* objective measure of philosophical talent. Candidates with pedigree but no papers are often preferred to candidates with papers but poor pedigree. Candidates with good pedigree *and* publications are often preferred to candidates with an equivalent publication record but worse pedigree. This is evident from the statistics that have been published from time to time, and it has been publicly admitted on a number of threads both on this blog and on others. So the sociological issue is not whether this happens. The sociological issue is the magnitude of the effect.

In principle this could be tested for. Suppose that from now on search committees for tenure-track hires simply *ignored* pedigree (and minimized associated factors such as superstar letters of recommendation), using a weighted publication count to make the first cut, and writing samples plus teaching evaluations to make the second. Then we could compare the success as-of-then of the graduates of top programs with their success as-of-now. The difference between the success rates of the Leiterrific under the current and under the imagined conditions would give a rough measure of the magnitude of the halo effect as it exists at present. If the graduates of high-ranking programs still did significantly better than the graduates of less prestigious programs, this could reasonably be put down to the superior quality of their recruits and/or the superiority of the education that they receive (the latter probably being partly due to the former). My suspicion is that the products of the top programs would still do better on average than the graduates of the downlist schools, but that the effect would be a lot less pronounced than is currently the case.
However such is the ingrained snobbery of the profession that this simple experiment will probably never be run.
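For concreteness, the “weighted publication count” first cut proposed above might look like this (the journal tiers, weights, and cutoff are all invented for illustration):

```python
def publication_score(pubs, weights):
    """Weighted publication count: sum tier weights over a candidate's
    publications. Tiers and weights are hypothetical choices."""
    return sum(weights[tier] for tier in pubs)

# Hypothetical journal tiers and weights
weights = {"top": 3.0, "good": 2.0, "other": 1.0}

# Hypothetical candidates and their publication records
candidates = {
    "A": ["top", "good"],             # score 5.0
    "B": ["good", "other", "other"],  # score 4.0
    "C": ["top"],                     # score 3.0
}

# First cut: keep the two highest-scoring candidates, pedigree unseen
first_cut = sorted(candidates,
                   key=lambda c: publication_score(candidates[c], weights),
                   reverse=True)[:2]
```

The hard part of the experiment is not this scoring but the counterfactual comparison the commenter describes, which would require committees actually to run the blinded procedure.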