Project Findings: Low-Cost, Low-Key Changes Can Improve Outcomes

Work in behavioral science suggests that small changes can move people to act on personal goals. To test this approach in the realm of human services, MDRC, along with sponsoring federal agencies, launched the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project with the goal of improving both the efficiency and the outcomes of programming. Small changes, or “nudges,” that ease the experience for clients, for example, simplifying an application process, personalizing correspondence, or prominently highlighting a deadline, can influence the decisions of current or potential program participants. These adjustments are not major design changes; rather, they are low-cost, easily implemented ways to reduce the complexities many lower-income families face.

Randomized trials at participating state and local human service agencies introduced specific behavioral interventions based on a period of review and identification of “bottlenecks.” Results indicate that these small changes had a statistically significant impact on outcomes in childcare and work support (including increased attendance at meetings or appointments) and child support (including increased rate of payment).

If small changes make a difference, why are larger-scale programmatic changes (that could result in increased benefits) so difficult to negotiate and implement? Perhaps examining program design through the lens of behavioral economics, where both staff and participants benefit from improved outcomes, is the path toward innovation in the provision of human services. The full report on the BIAS project and additional information on MDRC’s work with behavioral interventions are available on its website.

 

Report citation: Richburg-Hayes, Lashawn, Caitlin Anzelone, and Nadine Dechausay with Patrick Landers (2017). Nudging Change in Human Services: Final Report of the Behavioral Interventions to Advance Self-Sufficiency (BIAS) Project. OPRE Report 2017-23. Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.

Making the Most of Each Budget Dollar

One of the few things both sides of the political aisle can agree on is getting better performance out of programs, as seen in the move toward evidence-based policymaking under prior administrations. The federal government could lead by example, suggests Brookings Fellow Andrew Feldman, by creating a bipartisan team focused on improving the performance of federally funded programs, giving states the freedom to implement innovative programming and shoulder more accountability for results, and reducing hurdles to program evaluation while encouraging the incorporation of data analytics into regular reporting. In a time of new federal spending priorities, budget shortfalls, increased need, and much uncertainty, states must get serious about investing in programs that work by actively incorporating outcomes research into their policymaking.

A report from the Pew-MacArthur Results First Initiative, How States Engage in Evidence-Based Policymaking: A National Assessment, assessed states’ levels of commitment and action in using research to guide decision-making related to behavioral health, criminal justice, juvenile justice, and child welfare policy. The study scored each state and the District of Columbia on the extent to which they incorporated research findings in policy, including defining categories of evidence, conducting program cost-benefit analyses, and identifying specific funding for evidence-based programming. According to the brief, 50 states have taken some sort of action through the allocation of funding for programs supported by research findings, while 42 states report outcomes in the budget annually. Just 17 states compare program outcomes and costs.

While Washington, Utah, Minnesota, Connecticut, and Oregon lead the nation in evidence-based policymaking, Pennsylvania is one of 11 “established” states, with 13 evidence-based policymaking actions (three advanced and ten minimum) across the four policy areas studied. According to the assessment scorecard, Pennsylvania uses advanced research-driven policy actions most often in the juvenile justice sector.

Read more about the levels of evidence-based policymaking and individual state scorecards in the report, available on the Pew Charitable Trusts website. Case studies are also available on how to design contracts and grants to require outcomes reporting tied to program performance.

Plotting a Course for 2017

2016 was a year of flipping the script and changing up the status quo. Come tomorrow, it is time to push through our anxiety about what may lie ahead and plot a course to best navigate the unknown terrain of 2017.

But where to start? Some thoughts…

Where will the road take you in 2017?

In December, I always look forward to Lucy Bernholz’s data and philanthropy forecast for the upcoming year, and the insights in Blueprint 2017 are as thought-provoking as those of its predecessors. It is available for download at the Foundation Center’s GrantCraft website.

Diversification of revenue is more important than ever, among individual donors as well as across funding sources.

Show the impact of the work you do – the very change your program facilitates at both the client and community levels. It seems to be in fashion to downplay all measurement because quantifying impact can be challenging, what with small samples and scattered cohorts and bias (oh my!). Yes, it is. But demonstrating how a program meets expected and desired goals – the outcomes – is not a clinical trial; it is just good practice. As is using those data to inform and improve services.

In Pennsylvania, as this fiscal year’s budget shortfall grows, all signs point to a doozy of a 2017-18 negotiation process. Structural changes to the current human services system are also on the table, which may signal new opportunities for nonprofits. How can you best advocate for the sector and your organization?

Moving purposefully into the unknown may be less intimidating for a nonprofit when there is a verbal AND a financial commitment to cultivate leadership within the ranks.

On the topic of developing leaders, this is a perfect time to engage in some formative assessment of a more personal nature. As an established or up-and-coming nonprofit leader, how will you look back on 2016 and plan for 2017?

  • Set aside some time to conduct your own career-centered end of year review.
  • Use/create a rubric to determine where you are now and what you should focus on, add, or set aside in 2017. Rubrics consist of a descriptive set of items or elements and a related performance scale. List your goals or expectations for 2016, then rate each one on a numerical scale where each point is defined along a continuum of progress, for example, 0 = “No progress made” while 4 = “Achieved 100%.” Add as much or as little detail to each rating point as needed to accurately capture the situation. (A minimal sketch follows this list.)
  • Last January, I worked with Emily Marco on a year-in-review that included a look back at professional and personal events and milestones of 2015 and planning for 2016. She also helped me clarify my goals and identify “action steps” to begin working toward them immediately. Emily is a visual problem solver who excels at helping people organize their thoughts and build a plan of action to achieve their goals. If you are interested in exploring a new way to digest the old and plan for the new, you can learn more about her new online learning experience, Relaunch 2017, or contact her for a goal-setting session at Emilymarco.com.
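
If a few lines of code or a spreadsheet is more your style than paper, here is a minimal sketch of what such a rubric might look like; the goals, ratings, and scale labels below are purely illustrative placeholders, not a prescribed tool.

```python
# Illustrative year-in-review rubric: each 2016 goal is rated on a 0-4 scale,
# where every point is defined along a continuum of progress.

RATING_SCALE = {
    0: "No progress made",
    1: "Initial steps taken",
    2: "About halfway there",
    3: "Mostly complete",
    4: "Achieved 100%",
}

# Hypothetical 2016 goals and self-assigned ratings.
goals_2016 = {
    "Complete a program evaluation course": 4,
    "Build an outcomes dashboard for one program": 2,
    "Present at a regional nonprofit conference": 0,
}

# Print each goal with its rating and the descriptive anchor for that rating.
for goal, rating in goals_2016.items():
    print(f"{goal}: {rating} ({RATING_SCALE[rating]})")

# Goals rated 2 or below are candidates to carry forward, revise, or set aside in 2017.
carry_forward = [goal for goal, rating in goals_2016.items() if rating <= 2]
print("Revisit in 2017:", carry_forward)
```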

 

 

Note:  This post is not sponsored.  I do not receive any compensation or services for mentions or links included in the post.

Process Metrics Are Not Outcomes

This is an unexpected follow-up to my last post.  I just heard about a nonprofit losing a hefty grant at renewal time due primarily to a lack of reported outcomes.  There was measurement – lots of data on process and organizational performance metrics – but not much to demonstrate the difference the program made in the lives of participants.  This kind of news is disheartening.

My first thought is – how did it get to that point?  Were the grant terms a surprise sprung on an unsuspecting organization at the last moment?  Was any mention of measuring program outcomes waved off by executives who preferred to discuss ways to scale up at the next funding cycle?

Probably not.

That said, I am pretty certain that…

  • the funder/s made their reporting criteria and protocol clear;
  • the program administration and staff were dedicated to their mission and conducted outreach and activities according to their model;
  • people who experienced the program gained something from it;
  • the nonprofit thought that they were collecting data that showed the impact they made on participants and in the community.

So what went wrong in that story I heard?  I’ll never know.  No one accepts a grant award with the expectation of a giant hole in their final report, but if there are questions about program application, geographic distance between sites, and/or irregular communication, measurement can and will get lost in the shuffle. Here are some steps you can take to prevent a similar situation from happening to your organization.

  1. Update your data collection plan. The outcomes listed in a column on a chart in your program materials will not measure themselves.  What are you currently collecting that may also fit as an indicator of your expected results?  Can you create a measure to better capture a specific change expected in the program participants?
  2. Make expectations clear and follow up regularly. Keep staff up to date on data collection with a matrix that lays out indicators, data sources, person(s) responsible, and timeline (see the sketch after this list). Have a monthly check-in call to report on progress and address questions and other issues around data collection.
  3. Have patience. It will take a while to get used to a shift from collecting process metrics (still important – don’t stop doing that) to outcomes data. But if you have a plan ready to go, you can work out any knots early in the funding period rather than panic at report time.
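
For teams that prefer to keep the plan in a shared script or spreadsheet rather than a document, here is a minimal sketch of what such a data collection matrix might look like; the indicators, data sources, and roles are hypothetical examples, not drawn from the grant story above.

```python
# Illustrative data collection matrix: indicators, data sources,
# person(s) responsible, and collection timeline.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # the expected change being measured
    data_source: str   # where the data come from
    responsible: str   # person or role responsible for collection
    frequency: str     # how often the data are collected

# Hypothetical indicators for an employment program.
matrix = [
    Indicator("Participants reporting increased job-search confidence",
              "Post-workshop survey", "Program coordinator", "Quarterly"),
    Indicator("Participants placed in employment within 90 days",
              "Case management records", "Case managers", "Monthly"),
]

# A simple agenda for the monthly check-in call: each indicator and who reports on it.
for item in matrix:
    print(f"{item.name} | source: {item.data_source} | "
          f"owner: {item.responsible} | collected: {item.frequency}")
```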