Policy

Improving Federal Science

Plans are under way to create a more formal science of science policy

by David J. Hanson
May 4, 2009 | A version of this story appeared in Volume 87, Issue 18

Credit: Shutterstock

IT STARTED WITH an appeal from President George W. Bush's science adviser, John H. Marburger III, in the spring of 2005 for the development of a better way for the federal government to make research funding decisions. Marburger said there was a need for a "science of science policy"—that is, a way to use data collection, theory testing, and other scientific methods to analyze science policy decisions, including where federal agencies invest their research and development dollars.

Today, that effort to bring more quantitative measures to science policy is moving forward along two paths: One effort is looking to academic scientists to develop testable theories of science policy, and another is trying to harmonize policy efforts across science funding agencies. And although there is agreement that more needs to be done to improve science decisions in the government, not everyone in the science policy community agrees with the present thrust.

"The basic idea is to think more analytically about the approaches we are taking," says Julia I. Lane, who manages the Science of Science & Innovation Policy program at the National Science Foundation. "The way federal science programs are set up is for program management, not for management of information. Proposals come in and proposals go out, but there is no analytical data infrastructure. We have no full understanding of what the outcomes are for our investments."

Lane's program is part of the effort to bring more academic scientists into the realm of science policy and task them with developing new tools to help measure those outcomes. NSF started the program in 2006 in response to Marburger's call for improved science policy methods. The agency is supporting research that develops models, analytical tools, data, and metrics that can be applied to the science policy decision-making process. It also funds research that examines the processes of innovation and discovery in organizations. NSF made its first awards in the program in 2007 and has recently put out its third solicitation for proposals.

Although there has not been time for much in the way of results yet, Lane is optimistic that the program will bring improvements to policy decisions. She says that the program is getting proposals from investigators in a wide range of disciplines, including economists, psychologists, and political scientists, as well as computer and information scientists. "The community is growing, and we are working to get the best possible people to submit proposals," Lane says. Still, the program has had to work hard to get scientists interested, she points out.

The second part of the effort to get science policy on a more evidence-based footing is the responsibility of an Interagency Working Group established by the White House National Science & Technology Council. This group is composed of participants from 17 government departments and offices that fund scientific research, and it produced a road map late last year for the development of a federal science of science policy.

Cochaired by NSF's Lane and by Bill Valdez, the director of the Office of Planning & Analysis in the Office of Science at the Department of Energy, the working group laid out a process in the road map to coordinate the federal approach to science policy and to establish interagency research priorities.

At a December meeting, science policy experts inside and outside the government debated the road map and tried to answer the questions it raised for improving science policy. The message that emerged is that the government needs a better way to make effective science and technology funding decisions and that it does not have the data or methods to do that.

"The road map really has two goals," Valdez tells C&EN. "One is to help the federal agencies understand the benefits of investing in science and technology, and the other is to improve the effectiveness of those investments. It's not that we are saying that the federal government makes bad investments or uninformed investments, we're saying that we could do a better job with analysis of investment decisions than we currently are."

TO REACH these goals, the agencies are working to develop common analytical methods and data sets that might be used to better inform science policy. Besides citation and patent data, the sorts of data that might be useful include how well funding leads to the creation of new companies and jobs. "Some of the areas we are working on include different approaches to modeling technology outputs and ways to develop a common data infrastructure," Valdez says.

"All the agencies that worked on the road map are pretty vested in the process," Valdez continues. "This is something people have thought a lot about, and we hope it will have some legs and fill an unmet need." The agencies are currently planning technical workshops and preparing white papers on how to meet their needs, he says.

Valdez points out that the government has made attempts to set up databases for science and technology data before, but "they have all sort of collapsed under their own weight." The current effort is trying to build a better database that collects useful data in a way that is not too burdensome to the science agencies but can still be used by science policy researchers.

"There is an emerging movement in the statistics and surveying world to develop data enclaves," Valdez says. "These are protected environments where data sets can be prepared in a way that researchers can use." He cites the huge National Opinion Research Center (NORC) at the University of Chicago as an example of a data enclave. NORC allows the sharing of social science data sets among a closed community of researchers.

THE EMPHASIS on collecting more and better data is seen as vital by science policy experts outside the government. "The absence of a common data set to which academic work could refer presents a big problem in terms of getting to a truly scientific method," comments Caroline S. Wagner, a senior science and technology policy analyst for SRI International. "If everybody is handcrafting their own data, then basically it's a cottage industry and not a science."

Wagner is concerned, however, that the road map mixes up the needs of the agencies for making informed decisions with the needs of academics for studying the performance of science and technology. "These are two very different things. I think the road map lacks some depth on this partly because it lacks a kind of historic overview," she says.

Christopher T. Hill, professor of public policy and technology at George Mason University, sees the same problem. "This is not a new idea. Doing studies and analysis, collecting data, and building econometric models on R&D funding were done by NSF back in the 1970s and '80s," Hill says. "This team seems to be proceeding without much consideration of the prior work."

Hill does see value in the work to improve data collection on science outcomes and believes it can provide important insights into the relationships among fields of science, institutions, and performers that are not currently available. "That work in analysis, large data sets, data mining, network analysis, and so forth does have the promise for telling us a lot about the dynamics of the scientific community," Hill says. "However, there is a long, long way to go before we can get policy-relevant, useful measures from this kind of thing."

Irwin Feller, the director of the Institute for Policy Research & Innovation at Pennsylvania State University, agrees that better data are key to improved science policy. "Generating better data would be a real public service, but it is laborious and expensive," Feller says. And although the questions of how we foster innovation and where we spend our funds are not new ones, Feller believes that concern is coalescing in the government today around getting some answers.

"I think there is the potential to get a handle on some of the churning and ferment that is going on right now," Feller says. "People have been working on bits and pieces of this for years, but we have had an absence of systemic thought."

Some of that ferment is caused by the fact that the economic downturn is creating more demand for accountability in government spending, including spending on science and technology research. The American Recovery & Reinvestment Act, passed in February, includes more than $17 billion in new science funding, but it also demands that agencies report the numbers of new jobs the stimulus money creates and how many jobs are retained (C&EN, Feb. 16, page 7). NSF's Lane contends that this kind of pressure will force agencies to find ways to calculate science benefits.

"Both the recovery act and the America Competes Act make it clear that investments in science need to be tied in with competitiveness and creation of jobs," Lane says. The America Competes Act was passed in 2007 to keep the U.S. economically competitive with the rest of the world by strengthening math and science research and education. "A systematic way across the federal agencies of describing short-term and long-term impacts of research investments would be a big step forward," Lane says.

Feller sees the move toward accountability as a continuation of the scrutiny of science that began with the Government Performance & Results Act of 1993.

"THIS IS A REPACKAGING of some of the same things we have been doing for at least a decade," Feller says. "We have to be concerned that we don't try to force-fit these investments into short-term goals, like having to make so many discoveries or creating so many jobs. That runs counter to what you're trying to do for national competitiveness."

Hill is concerned, too, that the belief that science funding decisions can be made more quantitative and efficient by using models and data sets appears to devalue the role of peer review for research proposals. "Scientific inquiry is fundamentally an exploration of the unknown, and what we know from prior experience sheds remarkably little light on what we should do in the future," Hill says. "We can't write down a set of equations or a rule that will allow us to make these judgments."

SRI's Wagner says the increased call for accountability makes the science of science policy efforts even more germane. "I don't see reasons why you don't have to be accountable to the public for money that you're given. I think it would help scientists to understand better how their information is used," she says.

The change in presidential Administrations in January has slowed the work by the science agencies on improving science policy, but it continues to move forward. "The good news is that the new Administration is very open and engaged," Lane points out. "Every signal I have is that the science of science policy is important to this Administration."

The bottom line, Lane contends, is that they are trying to make the system better for both researchers and the government. "If we can help improve the allocation of resources by even 0.01%, when you're talking about a $150 billion-per-year enterprise," she says, "that's a significant improvement in investment."
