Designing Public Sector Performance Report

Designing Public Sector Performance

June 8, American University

Organizer: Donald P. Moynihan, La Follette School of Public Affairs, University of Wisconsin-Madison

Sponsors: the European Commission Erasmus + Jean Monnet Projects & La Follette School of Public Affairs, University of Wisconsin-Madison

 

This workshop examined the state of research on public sector performance, focusing in particular on design issues, that is, deliberate efforts to alter some aspect of the public setting to improve outcomes. These include governmental reforms, hiring policies, restructured incentives, and training. Historically, such practices have been implemented and then studied in an ex-post fashion using observational designs. But the availability of administrative data offers more opportunities for quasi-experimental designs, and researchers are beginning to address some of these issues with field experiments.

 

The workshop pulled together various trends in public management and public policy scholarship: an ongoing interest in performance, a shift toward evidence-based policy, and the embrace of experiments. Much of the research on performance has focused on cognitive processes, but less on organizational design. The evidence-based policy units rising within government have embraced field experiments, but have rarely applied them to management questions. Governments continue to introduce major reforms in a fashion that resists good causal design, but the existence of administrative data offers new possibilities to apply quasi-experimental designs. These converging trends suggest that this is an opportune time to discuss this topic, and perhaps to establish a research community.

 

The research presented combined new empirical studies from Europe, the United States, Asia, and Africa. Taking advantage of the Washington DC location, the workshop incorporated not just academics, but also researchers from think tanks, evaluation organizations, the US supreme audit institution (the Government Accountability Office), and the US Office of Management and Budget.

 

Elizabeth Linos (@ElizabethLinos) of the University of California-Berkeley presented research showing how different types of advertising campaigns increased minority recruitment in police departments. (More on this research project can be found in the Economist.) The field experiment increased the diversity of a police department in Charlotte and is now being replicated across the United States. One lesson from this work was the importance of a single success in persuading governments to participate in experiments. Once governments see that something worked in one venue, they are much more ready to adopt it. Rob Seidner of the US OMB echoed this point: it’s a lot easier to convince managers and policymakers with evidence of success rather than just a theory. Linos noted a challenge: the people in government who are most interested in partnering with researchers are often ambitious and looking for their next job opportunity – they tend to turn over quickly.

 

Lotte Bogh Andersen of Aarhus University (@LBoghAndersen) discussed the motivation for her research. She had heard time and again from politicians that leadership matters, and decided to try to scientifically assess their claim. She designed a field experiment in which she studied the effect of leadership training on 672 leaders in the public and private sectors. Preliminary results suggest that leadership training has significantly positive organizational effects for women leaders but not for men. More on this project can be found here.

 

Nina Van Loon (@NMvanLoon) of Leiden University also presented on work she was undertaking in Denmark, examining whether reorganizations of incentive structures in public hospitals altered employee behavior and outcomes. She also pointed to the risks of a “Hawthorne effect” in drawing inferences about outcomes in field experiments – it is difficult to fully disentangle the effects of the treatment from the presence of researchers in implementing or simply studying designs, which may cause subjects to respond differently than they otherwise would. One challenge she pointed to is distinguishing between reforms and implementation: organizational reforms are “treatments” applied in multiple settings, but the implementation of the reform varied a good deal across sites. Carolyn Hill of MDRC pointed out that the evaluation community has long studied how issues of design “fidelity” affect outcomes. Van Loon made the case that field experiments should actively study implementation as a distinct variable, even if it is impossible to predict in an ex-ante fashion how implementation will occur. Such efforts are more feasible in settings such as Denmark, where researchers can draw on administrative data to investigate unanticipated findings. Hill also made the case for researchers to uncover and present exploratory findings from field experiments – many of the most interesting findings may not have been predicted or pre-registered, but if they are important they should still be reported.

 

The second panel included research from around the world, with a strong focus on India and African countries such as Liberia, South Africa, and Ghana. This work pointed to similar themes in practice, but different themes for research. Don Moynihan (@donmoyn) pointed out that across the panels the evidence suggested that performance regimes had a net neutral or even negative effect, while management systems that invested in bureaucratic autonomy seemed to generate better outcomes. Researchers studying India and Africa were not applying field experiments, but instead creating new datasets based on a mixture of administrative data and freedom of information act requests to assess how different reforms or features of administrative systems were affecting performance. This often required spending time in the field, building relationships, or waiting out bureaucrats who would otherwise not share the information. Performance in these settings may be more crudely and creatively measured, with measures such as project completion standing in as indicators of performance.

 

In evaluating this panel, Carolyn Heinrich (@CJ_Heinrich) of Vanderbilt University noted that the research offered insights precisely because it connected theories about micro-behavioral processes with larger administrative questions about state capacity. Dan Rogger also drew larger lessons for the use of randomized controlled trials and for public management research generally. Rogger runs the Bureaucracy Lab at the World Bank, which has about ten field experiments underway at any one time. Such experiments are the stars in the sky that can shed light on new design efforts, but to understand them you need to understand the entire constellation. This requires thick qualitative description of the bureaucracies that form the context in which these experiments occur.

 

In the final session, both Hill and Rogger argued for the need to break down silos between universities, evaluation organizations, non-profits, and government. In some cases this is an incentive problem, Rogger argued, while Hill pointed to cultural beliefs within universities as a no less salient barrier. Chris Mihm of the US Government Accountability Office argued that efforts such as the 2030 Agenda for Sustainable Development and the related global Sustainable Development Goals are venues where public management research and practice can be matched. Such efforts have tangible performance metrics in various stages of development, are a high priority for many countries, and include significant design challenges, but still lack much engagement among scholars, researchers, and practitioners.

 

 

AGENDA

 

9.15-10.30am Session 1: Recruitment, Training and Incentives

Chair: John Kamensky, IBM Business of Government

Nina Van Loon, Leiden University: Changing Incentive Structures in Performance Regimes

Elizabeth Linos, Harvard University & Behavioral Insights Team: Improving Recruitment

Lotte Bogh Andersen, Aarhus University: Can we Train Better Leaders?

Respondent: Rob Seidner, US Office of Management and Budget

 

Break 10.30am-10.45am

 

10.45am-12.00 Session 2: Studying Performance Outcomes in Developing Countries

Chair: Obed Pasha, University of Massachusetts-Amherst

Martin Williams, Oxford University: Management of Bureaucrats and Public Service Delivery: A Scientific Replication in Nigeria and Ghana

Rikhil Bhavnani, University of Wisconsin-Madison: Research on Local Government Performance in India

Dan Honig, Johns Hopkins University: When Reporting Undermines Performance: The Costs of Politically Constrained Organizational Autonomy for Aid Agencies

Respondent: Carolyn Heinrich, Vanderbilt University

 

Break 12.00-12.10pm

 

12.10-1pm Session 3: What are Next Steps to Facilitate Progress?

Chair: Donald Moynihan

Carolyn Hill, MDRC

Christopher Mihm, Government Accountability Office

Dan Rogger, World Bank

Nina Van Loon