Making the Case: “Strengthening Our Capacity to Prevent Child Abuse and Neglect” by Lisbeth Schorr

Author’s note: Over the last decade, we have learned—through both research and experience—the significant long-term economic and social impact of reducing the incidence of child abuse and neglect. We also have learned a great deal about “what works” in prevention. We are now in a position to sustain “what works” and to build on what we’ve learned to achieve significantly improved outcomes. We can now design and assess policies, strategies, and programs that will be increasingly effective, despite current economic constraints.

What follows are four lessons learned that will help legislators and other public officials, funders, service providers, community coalitions and advocates take advantage of today’s unprecedented opportunities to prevent child abuse and neglect.

The best place to start is to agree on results. Agreement among stakeholders on desired outcomes for children, families, and communities smooths the way to:


  • Identify the strategies and program designs likely to achieve the agreed-upon outcomes. To satisfy demands for accountability and the need to understand the effectiveness of our work, we must be explicit in identifying the assumed, though not always proven, connections between the strategies we select and the outcomes we seek to achieve.
  • Identify the policies that must be in place to support these strategies and program designs. A hostile regulatory, funding, and accountability climate can seriously undermine “what works” at the front lines. Unless we’re prepared to rely forever on wizards who can beat the bureaucracies, the dysfunctional regulations, and the funding practices because they are some combination of Mother Teresa, Machiavelli, and a CPA, we have to pay more attention to the context. By identifying the elements of the policy and systems context that make policies and systems more hospitable to “what works,” we can ensure that many more talented people are mobilized and that communities can act on what we know to change outcomes for large populations of children and families.
  • Develop the theories of change that connect the policies, strategies, and programs with the agreed-upon outcomes. By drawing out the underlying assumptions about how the selected actions lead to the desired change in outcomes, theories of change bring clarity to the change process. They also provide a way of measuring progress in the daily work of prevention before the long-term outcomes are in, and a way of illuminating the effects of interventions as they reach individuals, families, and neighborhoods.


The selection of indicators to measure progress must be seen as a major undertaking, and done with great care.

Done well, the indicators will establish baselines and trend lines, provide public and philanthropic funders with information on which to base investment decisions, allow managers to continually improve effectiveness, help put the issue on the advocacy and policy agenda, maintain accountability, and make it possible to compare effectiveness among preventive interventions. The objective is to ensure that what gets measured is the most authentic possible representation of what citizens and policymakers value as they consider the results of their investments. This is extremely hard, and takes a lot of work, because:

  • Few indicators neatly and precisely match the desired outcomes.
  • Most agencies and organizations face intense pressure to document quick, visible results from their own efforts.
  • Different stakeholders use data for different purposes and have different data needs.
  • Managers want to be able to respond to funders who are interested in impact beyond individual families and programs, while practitioners want to respect particular and non-generalizable goals of individual families.


To achieve better outcomes on a large scale for the children and families most at risk, it is not enough to rely on spreading what has been shown to work in the past.

Rather we must analyze past successes—and failures—to generate new hypotheses, and new solutions. We must build on “what works” by seeing proven programs and best practices as a starting point, not a destination. We must improve the design and implementation of successful interventions as they are scaled up to increase the magnitude of their effects for entire populations.

Evaluations must be purpose-driven. To provide useful information on prevention efforts, the methods to assess “what works” and what is cost-effective must fit the purpose of the evaluations and the nature of the interventions we seek to learn about.

We need a range of measures and analyses, all of which must be rigorous and reliable, so that we can match how and what we measure with what we need to know. The push for evidence and accountability is immensely useful, unless evidence is defined so narrowly that only numbers that come out of randomized experiments are considered credible. Other methods can encompass the knowledge and practice that can be harvested from experience, and can be more relevant in obtaining usable information about preventive interventions, which tend to be complex, interactive, and evolving, and which must be adapted to unique local circumstances. These methods must be based on strong theory, drawing on research and practice to connect interventions and results. They must also reflect a robust, quantifiable set of findings from empirical outcome data that establish, beyond a reasonable doubt, that the observed change has a high probability of being the result of the practices, strategies, and programs under consideration.