
Education

Measuring Success

Funding agencies are pushing for more data to prove that investing in chemical education innovation is paying off

by Susan J. Ainsworth
September 7, 2010 | A version of this story appeared in Volume 88, Issue 37

FOLLOW-UP
Credit: James Kegley
By tracking students over the course of their education, Jordan hopes to demonstrate the long-term success of HHMI’s Science Education Alliance.

The mere mention of doing assessment and evaluation on her chemical education project used to make Catherine L. Drennan nervous. “I thought of it as a burden, frankly, because there are only so many hours in the day,” says the professor of chemistry and biology at Massachusetts Institute of Technology.

Then in 2006, Drennan was named a Howard Hughes Medical Institute (HHMI) professor, and she received a $1 million grant to develop an innovative freshman chemistry course that would integrate real-life examples from biological research. One of the grant requirements was that Drennan provide a detailed plan for how she would evaluate the success of the course. “You have to prioritize,” she says. “And the things the funding agencies are telling you to do are likely to move higher up in your life, honestly.”

Over the past four years, Drennan has not only learned how to do assessment and evaluation, but she’s also come to embrace it. “You may be kicking and screaming down the path to doing some of this, but it really is fun,” she says. “The joy you get from looking at the success in what you’re doing in the classroom, and the impact on the lives of these people who are going to go on and do these amazing things, it’s really one of the most rewarding things that anyone will ever experience.”

Like Drennan, many chemical educators are intimidated by the thought of doing assessment and evaluation. After all, few chemists are formally trained in educational research methodologies. But with increased pressure from funding agencies to show success in chemical education, investigators have no choice but to get on board.

FORWARD THINKER
Credit: Robert E. Klein/AP/© HHMI
Drennan used to be nervous about assessments and evaluations, but not anymore.

One of the best ways to learn is by example, so C&EN sought the advice of several chemical educators who shared their experiences in evaluating chemical education projects. By learning from each other, C&EN believes, more chemical educators will be able to answer the question: “How am I doing in teaching chemistry?”

The need for better assessment and evaluation is undeniable. Funding organizations such as HHMI and the National Science Foundation have poured billions of research dollars into science, technology, engineering, and math (STEM) education. NSF estimates that in 2009 alone it spent about $905 million directly on learning across the whole agency; that amount is up from $849 million in 2008. Likewise, HHMI spent $79 million in 2009, up from $68 million in 2008.

Until recently, these funding agencies have had little data to determine whether their investment dollars were paying off. “We’re investing in science education to prepare students who will become scientists, science educators, and citizens who have a better understanding of science,” says David J. Asai, director of the precollege and undergraduate science education program in HHMI’s division of grants and special programs. “Assessment helps us understand the effectiveness of our investments.”

Funding agencies are now requiring investigators to include in their grant proposals detailed plans for how they will evaluate the success of their projects and how they will disseminate their results. NSF’s focus on evaluation is consistent with the push for more assessment across the entire federal government, says Susan H. Hixson, program director in NSF’s Division of Undergraduate Education. “Congress asks NSF for evaluation data all the time,” she notes. “Often legislation will specifically state that funding that is given to us is contingent on our providing evaluation data.”

By adopting more rigorous evaluation standards, NSF hopes to “find out what is actually helping students learn, come up with best practices, and then publicize those,” Hixson notes. In particular, “we are trying to find strategies that enable a broader array of students to really understand and excel in science.”

But doing evaluation “has to be more than just counting the beans,” Asai notes. “You don’t know if your program has been successful just because you’ve involved 20 students and five faculty members in three courses. That doesn’t tell me anything about what you think worked or didn’t work. The same standards that we would apply toward trying to measure good research, we should also apply toward good science education.”

Peter J. Bruns, vice president of grants and special programs at HHMI, concurs. “The science community has totally separated teaching from research,” he says. “We’re now trying to bring them back together.”

Many resources are available to help principal investigators measure the success of their education projects. HHMI currently allows up to 10% of a grant’s funding to be used for assessment and evaluation. Although NSF does not have this specification, it encourages its investigators to seek the help of an external evaluator. An outside perspective can help eliminate bias, says Melanie M. Cooper, a professor of chemistry at Clemson University who has received numerous NSF grants since 1992, when she began a laboratory curriculum project that required students to work in groups to solve problems, design experiments, and report their findings. “It can be very difficult for people leading curriculum development projects to remain objective about their work,” she notes. “After you have put so much of your heart and soul into a project, you may not be able to step back and accept that the absolutely fabulous idea you had just may not work or is not transferable to other classrooms.”

INNOVATOR
Credit: Sonia Underwood
Cooper practices interactive teaching methods that are part of the new curriculum “Chemistry, Life, The Universe & Everything,” which she codeveloped with Michael W. Klymkowsky of the University of Colorado, Boulder.

In addition to helping investigators collect and analyze both qualitative and quantitative data, an external evaluator can also help “keep you on track and make sure that you don’t forget where you are going,” Cooper says.

Investigators can find an outside evaluator right on campus. “Most universities have offices of institutional research and assessment,” says Tuajuanda C. Jordan, the director of HHMI’s Science Education Alliance (SEA), who is currently evaluating her program’s impact on undergraduate learning. “Go and talk to those people; they can point you in the right direction. Sometimes they will be able to identify somebody at that institution who can help you. Some of them will even do the preliminary analysis for you.”

Plenty of alternatives also exist if resources are tight. “Some institutions have wonderful statistics departments, and if you venture across campus, you might be able to find a faculty member who would be willing to work with you,” Jordan says. “And if you don’t have a statistics department, every university has a math department, so somebody could help you analyze the data, at the very least.”

A good collaboration is invaluable, Drennan says. After she received her HHMI grant, she sought the help of faculty at MIT’s Teaching & Learning Laboratory who are experts in assessment and evaluation. The partnership has helped Drennan carry out the assessment and evaluation part of her project while at the same time exposing her colleagues at the Teaching & Learning Laboratory to chemical education. “We’re learning from each other,” says Drennan. “And it’s been really fun.”

Even chemical educators who are not intimidated by assessment and evaluation can benefit from good partnerships. Over the past eight years, Richard S. Moog, professor of chemistry at Franklin & Marshall College, has worked with chemical education researchers who conduct NSF-funded research on Process-Oriented Guided-Inquiry Learning. Moog continues to lead the POGIL Project (www.pogil.org), a national professional-development project that involves disseminating POGIL and related student-centered pedagogies and helping educators learn to implement them effectively.

Moog, who is a physical chemist, says he is comfortable with the statistical data coming out of the research, but he continues to rely on scientists trained in chemical education methodologies to properly design and implement the research and analyze and interpret results.

Drennan suggests that scientific societies such as ACS could play a role in matching investigators with external evaluators. “If there’s a way that the American Chemical Society or other societies could help form these collaborations, or some way to network to bring these groups of people together, I think that could be really valuable,” she says.

Once an assessment and evaluation plan is in place, it’s important to establish a comparison group and start collecting data immediately. “A lot of our negative controls in the past were not so good because we were changing too much at once, and we didn’t have those years to collect data before we really started implementing the changes,” Drennan says. “Now, going forward, we’re going to collect data on things before we implement the changes so we can have a more direct comparison.”

The data can influence the direction of a chemical education project. “There definitely have been some surprises,” says Drennan of the data she’s collected so far. “I became a scientist because I want to ask questions, and I want to know how things are working. It’s wonderful for me to have this information so that I can go back and rethink what we’re doing.”

Moog echoes this point. “We are continually asking ourselves what we are doing well, what we need to improve, and what we have learned,” he says. Those efforts seem to be paying off. In a recent survey, well over 1,000 educators, including high school teachers and faculty members from a full range of postsecondary institutions, reported implementing POGIL since the project’s inception in January 2003, Moog says.

Ultimately, improvements in teaching should translate into improvements in learning. “I’m convinced that a lot of the changes that we’re making” are working, Drennan says. “I see the difference in the scores, and I see the difference in students’ enjoyment; I see the biology majors taking more chemistry than they need to take and enjoying it and feeling like it’s more valuable to them.”

TEAMWORK
Credit: Beth Cardwell
Students interact with an instructor who serves as a facilitator of learning, rather than as a source of information, in a classroom structured around Process-Oriented Guided-Inquiry Learning (POGIL).

The real test of success, however, will be whether a project’s impact is long-term and sustainable. “A lot of what I really want to do is not just teach them chemistry, but to make chemistry a part of their lives,” Drennan says. “Some of what I want to know is years down the road, and I’m not sure how I’m going to get that data yet. I think that is a really huge challenge for everybody who is doing this.”

HHMI’s Jordan might point the way through an HHMI experiment to determine the feasibility of collecting long-term data. Jordan has introduced a research-based genomics course that provides freshmen with a research experience at a number of colleges and universities around the country. “Rather than doing a lot of cookbook-type labs, there will be more research-based laboratory courses,” she says. Jordan plans to add other research-based courses to the program, which is supported by a four-year, $4 million commitment from HHMI.

Working with an outside evaluator, Jordan has generated preliminary data that suggests a positive impact on student learning. “The gains that our students are reporting are very similar to what the juniors and seniors are reporting when they do summer undergraduate research experiences,” she notes. The control groups of students, she says, “are not showing the changes in attitudes and behaviors at the same level as our students.”

But more important, Jordan is trying to determine the long-term impact of this course. “Are the positive gains that we’ve seen sustainable?” she asks. “How are the students tracking with respect to extracurricular activities, engaging in research, and doing science-related projects? What are their career plans, and what are they doing to try to get there?”

She says the key to collecting this type of data is to develop personal relationships with the students. “We visit every school in the program, and in those site visits, we talk to the students and we try to convince them of how important it is to participate in the follow-up surveys that they get after they leave the SEA course,” Jordan says. “If you establish a rapport with those students, then it’s not as difficult as it appears to be to follow up with them.”

At the beginning of each course, Jordan explains, students are asked to provide their primary e-mail address, a secondary e-mail address, and the name and mailing address of their favorite relative. “If I can’t find them, at least that relative should be able to find them,” she says.

Jordan hopes to track the students through higher education. “If you’re trying to enhance science literacy and increase the number of students going off to graduate and professional school, then I think that 10 years of tracking should be sufficient to make some solid conclusions.”

In addition to measuring and tracking student scores and satisfaction, Clemson’s Cooper believes that evaluation of chemical education projects must include “robust research data to show increased learning in a course.” Since beginning her laboratory curriculum project in 1992, she says, she has devoted time and resources to developing and applying sophisticated instruments and methods for measuring problem-solving abilities and metacognition—knowledge about when and how to use particular strategies for learning or for problem solving. For example, she collaborated with Ron Stevens at the University of California, Los Angeles, to develop ways to use Interactive Multi-Media Exercises, a suite of software applications that challenge a student’s problem-solving skills and provide teachers with real-time assessment tools.

With NSF support and collaborations with other faculty members, Cooper has been able to use these tools to assess the impact of projects including the 18-year-old laboratory curriculum project and her newest curriculum project, “Chemistry, Life, The Universe & Everything,” which is helping “students produce and use structural representations such as Lewis structures in a more meaningful way than [is done] in traditional general chemistry courses,” she says.

To support investigators’ efforts to collect long-term data, Cooper would like NSF to change its policy and allow renewal of educational research and curriculum projects. “NSF funds relatively small projects that are two to three years in duration,” she says. That period “is often too short to allow a researcher to get meaningful assessment data and really make an impact.” To continue his or her research, an NSF investigator must now extend the proposal to a new area and apply for a subsequent grant, she says.

One of the best ways to ensure that a project continues to receive funding is to provide funding agencies with thorough assessment and evaluation data. But more important, the data that investigators collect can offer valuable feedback on how they’re doing and insights on how to improve their teaching.

“The ones that do a good job with the evaluation are always really happy they did because they get information back that helps them,” says Debra Felix, senior program officer of the precollege and undergraduate science education program in HHMI’s division of grants and special programs. “Even if they find out that the program isn’t working, that’s great information to have.”

To help investigators learn how to assess and evaluate their education projects, NSF offers publications and websites that deliver “enormous amounts of evaluation advice and practical tools such as questionnaires developed by former researchers that newcomers can just take over and use,” NSF’s Hixson says.

Program directors from HHMI routinely conduct site visits to help train investigators on assessment and evaluation. In addition, HHMI incorporates assessment and evaluation into its program directors’ meetings. “We’ve devoted entire program directors’ meetings to this, much to the discomfort of our program directors,” Bruns says. “A lot of people come into these workshops with very negative feelings, but I think people are turning around. Most of our grantees now pretty much accept it.”

As funding agencies gather more evaluation data, they are also pushing investigators to disseminate their best practices. Dissemination of results is essential so other chemical educators will know whether a new pedagogical approach has a positive, negative, or no impact on student learning, NSF’s Hixson says.

NSF encourages its investigators to publish their findings and speak at meetings or regional workshops. “We place heavy expectations on our principal investigators to report what they have learned in the course of their education research,” Hixson adds.

Drennan encourages investigators to publish their data in scientific journals. She recently submitted one of her papers to ACS Chemical Biology. “It does not regularly cover education,” she says. “But we just thought that was the sort of audience that we wanted. If more people submit education pieces to research journals, we could reach a broader audience and some of the best practices can get out there.”

Jordan agrees: “As more and more examples of good assessment survey instruments get published, more and more people will be comfortable doing assessment and evaluation.”
