COVER STORY
View From The Trenches: Students Don't View New Assessment As Critical
When the National Research Council published its first assessment of graduate schools since 1995, it hoped to give prospective Ph.D. students a valuable tool for picking a program. But some of the measurements in NRC's new report were obtained in ways that might not be useful to prospective students. And students who spoke with C&EN say that they do not see the report as essential to their choices.
Many of NRC's scholarly quality measures are on a per-faculty basis, but they were determined in a way that could include faculty who no longer accept graduate student advisees—a situation that might accurately reflect a department's scholarly output but might not be useful to prospective graduate students. The committee developed a formula to allocate faculty members' time to different departments, based on how many advisees they had in that department, how many committees they served on, and whether they were a core faculty member or an associate.
"The committee wanted to define people who are deeply involved in graduate education as faculty," says Charlotte Kuh, who directed the NRC study. But it also wanted to prevent universities from assigning especially productive faculty to multiple programs to beef up their stats, something that was possible in the 1995 assessment. The committee says it sent faculty allocation data to schools so they could verify that the calculations were accurate. But that process was not error-free: mistakes slipped through at the University of Washington, Seattle, for example.
Students who spoke with C&EN thought that NRC's ranking ranges weren't critical. "They're a good place to start, but they're not ultimately what I'd base my grad school decision on," says Sharon R. Neufeldt, a graduate student in Melanie S. Sanford's group at the University of Michigan, Ann Arbor. Neufeldt came to Michigan from a state university without a chemistry Ph.D. program, where she didn't get a great deal of guidance about what to look for in a graduate school. She hadn't heard of NRC's 1995 report, so she used U.S. News & World Report's graduate program rankings to aid her search. Undergraduates at Michigan usually have a different experience from hers, because graduate students will give them advice about what programs are good, Neufeldt says.
Kelly Chuh, a senior biochemistry major at the University of California, Santa Barbara, got that sort of advice from members of Kevin Plaxco's lab, where she's done research. She had already decided where she was applying to graduate school when the NRC assessment was released, and she doesn't plan on rethinking her choices. "When I was younger, rankings like these would've had a stronger influence on my decisions. I think now that I'm older, I don't really care," she says.
Jessica Sherman, a chemistry graduate student in Thuc-Quyen Nguyen's group at UC Santa Barbara, chose her school for its intense focus on organic materials research. "I don't care about rankings," she says. "I do care about having a supportive adviser who publishes a lot of good work."
Sherman, who founded the popular chemistry blog "Carbon-Based Curiosities," adds that rankings can't pick up on the intangibles she loves about her department, such as its collaborative nature and its sunny locale, just steps from the beach. "If I had cared about rankings, I might have gone elsewhere, and I don't think I'd be as happy as I am here."