COVER STORY

Methodology Explained: What's The Difference Between R And S, Anyway?

The National Research Council's recently released assessment of graduate programs is more complex than ever. The methodology NRC used to determine its ranking ranges is grounded in statistics. The rankings committee developed two separate ranking ranges—instead of a single number—that measure overall program quality. The ranges are called the S, or survey-based, ranking range, and the R, or regression-based, ranking range.
In addition to those overall ranges, the committee also assembled so-called dimensional ranking ranges, which assess programs on three separate dimensions of doctoral education: research activity, student support and outcomes, and diversity.
To produce the ranking ranges, NRC obtained data about 20 characteristics of graduate programs, such as median time to degree and percent of faculty with grants, on the basis of responses to a survey it sent to universities during the 2005–06 academic year. Faculty played an important role in the assessment because the ranking ranges are based on how much "weight" faculty attached to individual program characteristics. The committee chose this method because faculty in the sciences and humanities tend to prioritize different program characteristics.
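As a toy illustration of that weighting (not NRC's actual computation; the numbers, and the reduction to three characteristics, are invented), a program's score can be modeled as a weighted sum of its standardized characteristics:

```python
import numpy as np

# Invented example: a program's score as a weighted sum of standardized
# characteristics. Only three of the 20 characteristics are shown.
characteristics = np.array([
    0.8,   # publications per faculty member (standardized)
    -0.2,  # median time to degree (standardized, sign-adjusted so higher is better)
    1.1,   # percent of faculty with grants (standardized)
])

weights = np.array([0.5, 0.2, 0.3])  # how much faculty value each characteristic

score = float(np.dot(weights, characteristics))  # higher score -> better rank
print(f"program score: {score:.2f}")             # 0.69
```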
NRC developed two sets of weights, which became the basis for the S- and R-ranking ranges. In a Sept. 27 conference call with reporters, Jeremiah Ostriker, who chaired the NRC report committee, explained the difference between the weights with an analogy. "If you want to know what people eat, you can ask them what they eat or you can watch them," he said.
In the same vein, to learn what faculty value in a graduate program, one can simply ask them, or one can infer what they value from how they themselves rate programs. For the S-ranking ranges, NRC did the former: It asked faculty to explicitly state which of the 20 graduate program characteristics they valued most. On the basis of the responses, NRC developed weights for each characteristic, with the most valued ones receiving the highest weights.
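A minimal sketch of that survey-based approach, with invented 1–5 importance ratings for the same three toy characteristics: average the stated importances, then normalize them into weights.

```python
import numpy as np

# Invented importance ratings (1-5 scale); each row is one faculty respondent,
# each column one of the three toy characteristics.
stated_importance = np.array([
    [5, 2, 4],
    [4, 3, 5],
    [5, 2, 3],
])

avg = stated_importance.mean(axis=0)   # mean stated importance per characteristic
s_weights = avg / avg.sum()            # normalize so the weights sum to 1
print(s_weights.round(2))              # [0.42 0.21 0.36]
```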
For the R-ranking ranges, NRC did the latter: It asked faculty to rate doctoral programs, but the committee was not interested in those ratings per se. Instead, it used regression analysis to see which program characteristics were associated with high ratings and derived weights from those associations.
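Loosely, the regression-based approach can be sketched on invented data: fit the ratings faculty gave against the programs' measured characteristics, and read the fitted coefficients as weights revealing what raters implicitly reward.

```python
import numpy as np

# Invented data: 50 programs, 3 toy characteristics, and noisy ratings
# driven by weights the raters never state explicitly.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # program characteristics
hidden_w = np.array([0.5, 0.1, 0.4])          # what raters implicitly reward
ratings = X @ hidden_w + rng.normal(scale=0.1, size=50)

# Ordinary least squares recovers the implicit weights from behavior alone.
r_weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(r_weights.round(2))                     # roughly [0.5 0.1 0.4]
```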
The results are given as S- and R-ranking ranges instead of a strict numerical order to reflect inherent uncertainty in the data. To capture at least part of that uncertainty, the committee calculated an R rank and an S rank for each program 500 times, with a statistical technique called "random halves." For each program, that yielded 500 different numerical rankings for S and 500 for R. NRC numerically ordered each set of 500 rankings and took the 25th value and the 475th value as endpoints, discarding the most extreme results. Those endpoints are what NRC calls the fifth- and 95th-percentile rankings, and they define each program's S- and R-ranking ranges.

For example, Harvard's fifth-percentile R ranking is two and its 95th is eight. So in terms of R ranking, Harvard lies somewhere between the second- and the eighth-best chemistry doctoral program in the nation. The dimensional ratings, such as those for diversity, were calculated in similar ways.
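A minimal sketch of the random-halves procedure on invented data: the scoring model below is hypothetical, but the sort-and-trim step mirrors the fifth- and 95th-percentile endpoints described above.

```python
import numpy as np

# Invented data: each program has a true quality level plus
# respondent-to-respondent noise in how it is scored.
rng = np.random.default_rng(1)
n_programs, n_respondents = 30, 200
quality = rng.normal(size=(n_programs, 1))
scores = quality + rng.normal(size=(n_programs, n_respondents))

# Recompute program 0's rank 500 times, each time from a random half
# of the respondents.
ranks = []
for _ in range(500):
    half = rng.choice(n_respondents, size=n_respondents // 2, replace=False)
    mean_scores = scores[:, half].mean(axis=1)
    order = (-mean_scores).argsort()                    # rank 1 = highest score
    ranks.append(int(np.where(order == 0)[0][0]) + 1)   # program 0's rank

# Sort the 500 rankings; the 25th and 475th values are the fifth- and
# 95th-percentile endpoints of the ranking range.
ranks.sort()
low, high = ranks[24], ranks[474]
print(f"program 0's ranking range: {low}-{high}")
```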