After what seemed like interminable delays, the National Research Council (NRC) has published its first assessment of graduate programs since 1995. The report includes vast amounts of information and multiple rankings of doctoral programs, including those in chemistry. Groups in academia and beyond will use these data to see how programs stack up and how they can be improved.
"This report and its large collection of quantitative data will become in our view an important and transparent instrument for strengthening doctoral education in the United States," said the presidents of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, in a statement. NRC operates under the auspices of the National Academy of Sciences.
"I'm up to my eyeballs in data" with this report, says Charles A. Wight, a physical chemist and dean of the graduate school at the University of Utah who was not part of the report committee. The rankings cover doctoral programs in disciplines ranging from aerospace engineering to theater. Those for chemistry alone evaluate over 150 departments on each of 20 criteria, which fall under the broader categories of research activity, student support and outcomes, and diversity. "These metrics are becoming the gold standard for measuring the quality of graduate programs," he says. "Most of the other rankings are more or less beauty contests."
The rankings will reverberate far beyond the ivory tower, Wight says. "There is a lot of attention being paid by the government and the press about the value people get for their education dollar," and these comprehensive rankings will be studied to tease that out, he says.
The flood of information has been a long time coming. The assessment is based on data the council collected during the 2005-06 academic year, and its release has been postponed numerous times. Some of the delays stem from repeated refinements of the methodology used to produce the rankings, which has been drastically overhauled since the 1995 assessment. The new methods largely abandon reliance on reputation surveys in favor of a more data-intensive approach. The methodology for the report was released last year (C&EN, Jul. 20, 2009, p. 18).
As a result of the methodology makeover, faculty, students, and administrators can no longer glance at a simple list of rankings. Instead, using two separate surveys of university faculty, the council developed two ranges of rankings to measure overall program quality. Those ranges were determined by assigning weights to each of 20 graduate program characteristics that NRC measured, such as the percentage of faculty with grants and the percentage of first-year students with full financial support.
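To make the weighting idea concrete, here is a minimal sketch of how weighted characteristics can yield a ranking range rather than a single rank: score each program under many plausible weight settings and record the best and worst position it receives. The program names, characteristic values, and random-weight scheme below are hypothetical stand-ins, not the NRC's actual data or resampling procedure.

```python
import random

# Hypothetical data: each program scored on three (of the NRC's 20)
# standardized characteristics, e.g. grants, publications, student support.
programs = {
    "Program A": [0.9, 0.7, 0.6],
    "Program B": [0.6, 0.9, 0.8],
    "Program C": [0.5, 0.6, 0.9],
}

def rank_once(weights):
    """Rank programs by weighted score under one weight vector (1 = best)."""
    scores = {name: sum(w * x for w, x in zip(weights, values))
              for name, values in programs.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: position + 1 for position, name in enumerate(ordered)}

# Try many plausible weight vectors (here simply random, normalized to
# sum to 1) and track the best and worst rank each program receives --
# a crude stand-in for the NRC's survey-derived weight distributions.
random.seed(1)
best = {name: len(programs) for name in programs}
worst = {name: 1 for name in programs}
for _ in range(1000):
    raw = [random.random() for _ in range(3)]
    weights = [r / sum(raw) for r in raw]
    for name, position in rank_once(weights).items():
        best[name] = min(best[name], position)
        worst[name] = max(worst[name], position)

for name in programs:
    print(f"{name}: ranked between {best[name]} and {worst[name]}")
```

The NRC derives its weights from faculty surveys rather than at random, but the output has the same character: an interval of plausible ranks for each program instead of a single number.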
Beyond these overall ranking ranges, the assessment provides further ranges based on specific criteria such as departmental diversity or student outcomes. "The NRC has decided to deemphasize the rankings and emphasize the fact that creating rankings is an uncertain business. For many people that's not a familiar concept," Wight says.
The two overall ranking ranges are called the S (or survey-based) rankings and the R (or regression-based) rankings. "The ranges are both quantitative in the sense that they weight data that was gathered, but the weights are set differently in each rating," explains chemical engineer William B. Russel, dean of the graduate school at Princeton University, who was also not part of the study committee.
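As a rough illustration of how the two schemes can differ, here is a sketch under assumed data, not the NRC's code: S-style weights are averaged from importance ratings that faculty state directly, while R-style weights are inferred by regressing faculty's overall reputational ratings of programs onto the measured characteristics. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical: rows are programs, columns are three measured,
# standardized characteristics.
X = np.array([
    [0.9, 0.7, 0.6],
    [0.6, 0.9, 0.8],
    [0.5, 0.6, 0.9],
    [0.8, 0.5, 0.7],
])

# S-style: faculty state how much each characteristic matters;
# average the stated importances and normalize them to sum to 1.
stated = np.array([
    [5.0, 3.0, 2.0],   # one respondent's importance ratings
    [4.0, 4.0, 3.0],
    [5.0, 2.0, 4.0],
])
s_weights = stated.mean(axis=0)
s_weights /= s_weights.sum()

# R-style: faculty instead rate each program's overall reputation;
# least-squares regression recovers the weights implied by those ratings.
reputation = np.array([4.5, 4.2, 3.6, 3.9])
r_weights, *_ = np.linalg.lstsq(X, reputation, rcond=None)
r_weights /= r_weights.sum()

print("S-style weights:", np.round(s_weights, 2))
print("R-style weights:", np.round(r_weights, 2))
print("S scores:", np.round(X @ s_weights, 2))
print("R scores:", np.round(X @ r_weights, 2))
```

When the stated and regression-implied weights diverge, the same underlying data produce different scores under the two schemes, which is why a program's S and R ranges need not coincide.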
"I'm interpreting it as the R being the perception- and S being more substantive, more objective," says Timothy Barbari, a chemical engineer and dean of the graduate school at Georgetown University who was not part of the study committee.
"What you'll see in chemistry and chemical engineering is for the most part, the ranges overlap. Programs are perceived to be excellent, and indeed they are," Barbari says. But sometimes the ranges don't overlap, and that could indicate a program whose reputation hasn't caught up with its stellar performance, or a program that is losing its luster, he adds.
"I think it's a valuable exercise to do these rankings," says Daniel Neumark, chair of the department of chemistry at the University of California, Berkeley, who was not part of the study committee. "But this does seem to be a very complex way of going about it. It's questionable whether this is the best way to do it," he adds.
The delays in releasing the report have led to concerns that the data will be too stale to be useful, because in the years since the data were collected faculty may have retired or moved among departments, student support structures may have changed, and departments may have made further inroads on the diversity front. In a telephone press conference held Sept. 27, the committee overseeing the project argued that the assessment was still useful.
"The great strength of this study is the vast database," rather than rankings themselves, says Richard Wheeler, a NRC report committee member who is also provost at the University of Illinois, Urbana-Champaign. Departments can customize the database and use it with current data to look at how they've changed, Wheeler adds. And now that the methodology is in place, the committee says it hopes to raise enough money to update portions of the rankings in two or three years.
"These rankings happen to be a snapshot in time, of what these programs looked like in 2006," Barbari says. But by some measures, 2006 may not be so far off as to discount all the data, he says. "The graduate students that started then are just now finishing," he says.
Now that the report is finally out, "I think there are going to be all sorts of reactions," Barbari says. "There will probably be some people that are not very happy, that feel slighted," he says. "And the more objective ranking criteria may lead to some surprises."
"But it's just like any ranking. You need to do your homework and not rely exclusively on the numbers on the table."
The report can be downloaded free of charge at http://www.nap.edu/rdp, along with a revised guide to the study methodology.