The Executive Officer’s Column
Indicators for a New “Social Science of Science Policy”
How do you answer the question, “Are current U.S. expenditures on Research and Development (R&D) adequate?” And how do you even define “adequate”? During his keynote address to 300 scientists and policymakers at the 2005 annual Science and Technology Policy Forum of the American Association for the Advancement of Science (AAAS), the President’s Science Advisor, John Marburger, made a plea for the creation of a new “social science of science policy” that would examine the effectiveness of federal science and technology (S&T) expenditures in ways not possible in the recent past (see May/June 2005 Footnotes, p. 3). While outlining trends in the nation’s federal R&D investment and describing the President’s proposed Fiscal Year 2006 S&T budget, Marburger lamented the “primitive” intellectual framework that prevents objective assessment of such government investment policies. He expressed hope that the social science community could establish a creative new econometric approach to assessing S&T policy, a wish he has reiterated in a number of recent public venues, including the annual meeting of the Consortium of Social Science Associations (COSSA), of which ASA is a founding member.
Since Marburger’s entreaty, the social science community has reacted with a healthy level of caution. Some suspect that the administration is trying to re-normalize spending measures, making it impossible to assess the Bush record relative to past spending. Others are concerned that the result will be just another “econometric” model. At the same time, many sociologists and other social scientists remain curious and generally supportive of Marburger’s potentially energizing proposal. After all, the successful five-year doubling of the National Institutes of Health budget was achieved using “primitive” metrics, yet those same metrics yielded only incremental percentage increases for other federal R&D agencies such as the National Science Foundation (NSF). Politics always fills the gaps where objective knowledge and modeling are lacking. NSF is tasked with taking a lead role through its Directorate for Social, Behavioral and Economic Sciences (SBE). Specifically, SBE’s Division of Science Resources Statistics (producer of the biennial Science and Engineering Indicators) is actively engaged in obtaining advice from the community and pursuing concrete ways to devise useful new quantitative models (e.g., updating the data taxonomy) and a new interagency decision framework, which may include qualitative components.
Within this cautionary framework, many in the science policy community see Marburger’s effort as laudable. Certainly, the regular data sources we have now are outdated and provide insufficient analysis to enrich the political debates. We applaud the tapping of the social sciences to help define the appropriate data elements, create the needed models and statistical tools, and determine what types of studies (including qualitative and comparative) are needed. We must also take stock of the existing research in this domain. NSF’s persistent advocacy in the 1990s for more funding to “fill” the trickle in the science and engineering workforce pipeline eventually fell silent with the realization that calling for more scientists and engineers was not necessarily the best policy approach. The social science community’s caution stems not only from such historical missteps but also from fear that agencies integral to defining the econometric assessment could become ensnared in what ultimately is a political process: the setting of national S&T funding priorities. NSF’s historically strong leadership is likely to forestall such an occurrence, and to a large degree, NSF’s science focus and stellar administrative reputation help insulate it from Washington’s worst politics.
Credible indices that help frame national S&T policies are essential. At the same time, there is a general sense among longtime science policymakers that what is lacking in Washington is not just a new framework for thinking about R&D issues, but a solid policymaking infrastructure. The now-defunct congressional Office of Technology Assessment (OTA) was just such a tool, but Congress itself deliberately and shortsightedly eliminated it in the 1990s. Ever since this act of “self-handicapping,” Congress has foundered with no credible, systematic way to guide science policy formation, forcing an unhealthy, almost total dependence on a haphazard “witch’s brew” of lobbyists and other unsystematic sources of information. Other stakeholders in the national policy apparatus have been similarly handicapped. Marburger rightly maintains that globalization and the availability of new data have rendered the metrics long used to defend incremental annual budget increases (e.g., the annual number of U.S. patents and scientific articles, R&D expenditures as a proportion of Gross Domestic Product) inadequate to plan or support meaningful S&T budget increases and distributions, given the competing demands on the nation’s treasury.
It is reasonable for sociologists to work with the broader social science community to support Marburger’s call for a re-evaluation of the framework used to examine S&T policies and to assess its strengths and applicability. In his initial appeal to the 300 attendees at the AAAS forum, Marburger stated his belief that a new effort could be organized with minimal federal funding. He encouraged scientists to use the methods and literature of the social science disciplines to explore S&T trends and measurement. No matter who is President or which party controls which branches of government, an S&T policy research infrastructure is important to sound policy development. The nation’s policymakers have been needlessly handicapped for too long.
Sally T. Hillsman, Executive Officer