Much has been said about the importance of education—especially science, technology, engineering, and mathematics (STEM) education—to maintaining U.S. global competitiveness and ensuring the quality of life and well-being of future generations. Much also is being done to improve how teachers teach and how students learn, from innovations in curricula to adoption of multiple ways to deliver instruction and feedback to students. Previous back-to-school features have highlighted projects aimed at encouraging interest in science among students and increasing their success in learning science. This year, C&EN takes a break from such coverage and instead asks the question: How do we know which teaching innovations work?
It turns out that funding agencies such as the National Science Foundation and the Howard Hughes Medical Institute recently have been asking the same question of the science education projects that they fund, as Senior Editor Susan J. Ainsworth and Associate Editor Linda Wang discovered in reporting the story “Measuring Success” on page 48. Together, NSF and HHMI spent close to $1 billion on STEM education research in 2009, according to Ainsworth and Wang. Yet until now, the agencies have had “little data to determine whether their investment dollars were paying off,” they write.
The bad news is that the idea of evaluating science education projects—to determine which ones work, which ones are scalable, and which ones have lasting effects on learning—has not always been high among the priorities of principal investigators. Part of the problem was lack of time; another was lack of expertise in measuring educational outcomes among investigators who are trained in science but not necessarily in educational research methodologies. The good news is that many resources are available to assist principal investigators in measuring the outcomes of their science education projects, Ainsworth and Wang report. “And as funding agencies gather more evaluation data, they are also pushing investigators to disseminate their best practices,” they write.
Critical data must come from long-term assessments that follow the effects of education innovation on students as they progress in their schooling. Getting these data, however, is complicated not only by the logistics of tracking students as they move through the educational system, but also by the short duration of educational research grants. If funding agencies are serious about knowing what really works, they should consider allowing enough time for investigators to collect meaningful long-term data.
Meanwhile, the emerging trend of high-profile U.S.-based chemistry professors establishing research groups and labs in far-flung places also has caught C&EN’s attention. On page 53, in “An Office Across The Ocean,” Senior Editor Bethany Halford details the challenges and rewards of maintaining productive research activity in various parts of the globe.
The trend affirms that the U.S. method of educating graduate students in chemistry and related sciences works and continues to be one of the most sought-after in the world. That U.S.-based professors are increasingly immersing themselves in the cultures of other countries is a most welcome development, one that can only enhance international understanding and cooperation. Perhaps this trend marks the turning of a new page in science education: the export of U.S. talent enabling access to effective U.S.-style education in many parts of the world.