The Philosophical Gourmet Report (PGR) is an online survey that asks faculty members in philosophy departments to score other philosophy departments on a scale from 0 (inadequate for PhD students) to 5 (outstanding). For years now, the PGR’s results and ranking of departments have been a go-to reference for students interested in graduate school and careers in academic philosophy.
However, the validity of the survey and its utility for helping young philosophers make wise career choices have also been called into question. For example, it is not clear if (or how much) getting a degree from a top-ranked PGR program actually increases one's chances of finding a permanent home in academia. More recently, the Academic Placement Data and Analysis (APDA) project launched its own survey—asking the graduates of philosophy PhD programs about their careers—to try to directly assess the prospects for young philosophy doctorates across programs.
The graph below synthesizes some of the data from these two projects—plotting mean PGR score from the 2017-2018 survey vs. the APDA's percentage of 2012-2016 program graduates with a permanent academic position. Programs that are not included in the PGR but that do have placement data in the APDA are graphed on the left, each assigned a dummy score of 1.1.
Each node in the figure corresponds to an English-speaking philosophy graduate program (79 in total). Node color corresponds to the geographic region of the program. Node size corresponds to the number of graduates in the APDA database for each school (which is a rough estimate of the size of the program). Mouse over a node to see the underlying data, and click to visit the department's webpage.
What does this show?
For the programs included in the PGR, there is an association between score and placement. Roughly speaking, for every 1-point increase in a program's mean PGR score, there is a 10-percentage-point increase in its placement rate for recent graduates. However, that trend is really only a small part of the story here. In slightly more technical terms, only about 22% of the variance in placement rates is explained by that association. (And, of course, this analysis excludes the programs graphed on the left that do not have an actual PGR score.)
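For readers curious how those two numbers are obtained, here is a minimal sketch of the calculation: an ordinary least-squares fit gives the slope (placement points per PGR point), and the squared correlation coefficient gives the share of variance explained. The score and placement values below are invented placeholders for illustration—they are not the real PGR/APDA figures behind the graph.

```python
import numpy as np

# Hypothetical (PGR score, placement %) pairs -- illustrative only,
# NOT the actual PGR/APDA data plotted above.
pgr = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 3.2, 2.8, 4.2, 3.8])
placement = np.array([35, 55, 40, 60, 50, 70, 30, 48, 62, 45])  # percent

# Ordinary least-squares fit: placement = slope * pgr + intercept
slope, intercept = np.polyfit(pgr, placement, 1)

# R^2: the share of variance in placement explained by PGR score
r = np.corrcoef(pgr, placement)[0, 1]
r_squared = r ** 2

print(f"slope: {slope:.1f} placement points per PGR point")
print(f"R^2: {r_squared:.2f}")
```

With the real dataset, the analogous fit yields the slope of roughly 10 percentage points per PGR point and the R² of about 0.22 cited above.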
For example, the 40-60% placement range is populated by programs from across the PGR-score spectrum. This shows that getting into a top-scoring program is by no means a slam dunk for a future job in academia. It also shows that many lower-scoring programs do just as well as higher-scoring programs at placing their graduates—and some even better. UC Riverside, Irvine, and University of Virginia really stand out as "overperforming" based on their PGR scores. Notably, NYU, which has been the top-ranked PGR program for several years, is very middle-of-the-pack in terms of permanent placement (although this analysis does not account for graduates who may end up in good postdoctoral positions for several years).
It is also interesting to see how the distribution of placement for programs not included in the PGR largely mirrors the pattern seen for the PGR-scored programs, with many programs clustered in the 40-60% range and the vast majority falling between 20% and 60%. Since we don't have the "predictor" variable for these programs, we can only speculate about how they might affect the association if the PGR were to score them.
More generally, I think the programs falling into the upper left and lower right quadrants of this graph raise some of the most interesting questions. What are some of these lower-scoring programs doing (or what areas do they specialize in) that helps them to place their graduates so well? And conversely: What aren’t some of these top-scoring programs doing? Obviously, getting your graduates jobs in academia isn’t the only measure of a program, but the PGR survey is ostensibly supposed to be tracking the ability of the program to train successful academic philosophers. So it seems to me that some of the “underperformers” here should raise an eyebrow—and students applying to graduate school would do well to probe the APDA data more closely (and ask their advisors lots of questions) before placing too much stock in a program’s PGR rank.
Notes and Updates
May 26, 2019: Added data points (and some discussion) for departments that are not included in the PGR but that do have placement data in the APDA.
March 10, 2019: Changed the overall formatting of the page. Also changed the sizing of the nodes, which now represent (roughly) differences in the size of the graduate population at each program.
Nov 6, 2018: A few department chairs reached out to me, questioning some of the data points on their programs. I have now updated the graph to reflect these corrections. I have also added information about the data source, naming the individual department official as the guarantor (you can see in the “tooltip” when you mouse over a department’s node). If you are a department official and spot other errors in the data, feel free to let me know (firstname.lastname@example.org) and I will update the graph.