Making the Most of Each Budget Dollar

One of the few things both sides of the political aisle can agree on is getting better performance out of programs, as seen in the move toward evidence-based policymaking under prior administrations. Brookings Fellow Andrew Feldman suggests the federal government could lead by example: create a bipartisan team focused on improving the performance of federally funded programs, give states the freedom to implement innovative programming while shouldering more accountability for results, and reduce hurdles to program evaluation while encouraging the incorporation of data analytics into regular reporting. In a time of new federal spending priorities, budget shortfalls, increased need, and much uncertainty, states must get serious about investing in programs that work by actively incorporating outcomes research into their policymaking.

A report from the Pew-MacArthur Results First Initiative, How States Engage in Evidence-Based Policymaking: A National Assessment, assessed the levels of commitment and action of states using research to guide decision-making related to behavioral health, criminal justice, juvenile justice, and child welfare policy. This study scored each state and the District of Columbia on the extent to which they incorporated research findings in policy, including defining categories of evidence, conducting program cost-benefit analyses, and identifying specific funding for evidence-based programming. According to the brief, 50 states have taken some sort of action through the allocation of funding for programs supported by research findings, while 42 states report outcomes in the budget annually. Just 17 states compare program outcomes and costs.

While Washington, Utah, Minnesota, Connecticut, and Oregon lead the nation in evidence-based policymaking, Pennsylvania is one of 11 “established” states, with 13 evidence-based policymaking actions (three advanced and ten minimum) across the four policy areas studied. According to the assessment scorecard, Pennsylvania uses advanced research-driven policy actions most often in the juvenile justice sector.

Read more about the levels of evidence-based policymaking and individual state scorecards in the report, available at the Pew Charitable Trusts website. Case studies are also available on how to design contracts and grants to require outcomes reporting tied to program performance.

Plotting a Course for 2017

2016 was a year of flipping the script and changing up the status quo. Come tomorrow, it is time to push through our anxiety about what may lie ahead and plot a course to best navigate the unknown terrain of 2017.

But where to start? Some thoughts…

Where will the road take you in 2017?

In December, I always look forward to Lucy Bernholz’s data and philanthropy forecast for the upcoming year, and the insights in Blueprint 2017 are as thought-provoking as those of its predecessors. It is available for download at the Foundation Center’s GrantCraft website.

Diversification of revenue is more important than ever, both within your donor base and across funding sources.

Show the impact of the work you do – the very change your program facilitates at both the client and community levels. It seems to be in fashion to downplay all measurement because quantifying impact can be challenging, what with small samples and scattered cohorts and bias (oh my!). Yes, it is. But demonstrating how a program meets expected and desired goals – the outcomes – is not a clinical trial; it is just good practice. As is using those data to inform and improve services.

In Pennsylvania, as this fiscal year’s budget shortfall grows, all signs point to a doozy of a 2017-18 negotiation process. Structural changes to the current human services system are also on the table, which may signal new opportunities for nonprofits. How can you best advocate for the sector and your organization?

Moving purposefully into the unknown may be less intimidating for a nonprofit when there is a verbal AND a financial commitment to cultivate leadership within the ranks.

On the topic of developing leaders, this is a perfect time to engage in some formative assessment of a more personal nature. As an established or up-and-coming nonprofit leader, how will you look back on 2016 and plan for 2017?

  • Set aside some time to conduct your own career-centered end-of-year review.
  • Use/create a rubric to determine where you are now and what you should focus on, add, or set aside in 2017. Rubrics consist of a descriptive set of items or elements and a related performance scale. List your goals or expectations for 2016, then rate each one on a numerical scale where each point is defined along a continuum of progress; for example, 0 = “No progress made” while 4 = “Achieved 100%.” Add as much or as little detail to each rating point as needed to accurately capture the situation (a simple sketch follows this list).
  • Last January, I worked with Emily Marco on a year-in-review that included a look back at professional and personal events and milestones of 2015 and planning for 2016. She also helped me clarify my goals and identify “action steps” to begin working toward them immediately. Emily is a visual problem solver who excels at helping people organize their thoughts and build a plan of action to achieve their goals. If you are interested in exploring a new way to digest the old and plan for the new, you can learn more about her new online learning experience Relaunch 2017 or contact her for a goal-setting session at Emilymarco.com.
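
To make the rubric concrete, here is a minimal sketch in Python; the scale labels, goals, and ratings are hypothetical placeholders, not a prescribed template – a notebook or spreadsheet works just as well.

```python
# A minimal sketch of a year-in-review rubric. The scale labels, goals,
# and self-ratings below are hypothetical placeholders -- swap in your own.

SCALE = {
    0: "No progress made",
    1: "Early steps taken",
    2: "Partial progress",
    3: "Mostly achieved",
    4: "Achieved 100%",
}

# Each entry: (a goal or expectation for 2016, self-rating on the 0-4 scale)
goals_2016 = [
    ("Complete a grant-writing course", 4),
    ("Present at one state conference", 2),
    ("Build a peer-mentoring relationship", 0),
]

# Print the review; low-scoring items become candidates to refocus on,
# rework, or set aside in 2017.
for goal, rating in goals_2016:
    print(f"{goal}: {rating} ({SCALE[rating]})")
```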

Note:  This post is not sponsored.  I do not receive any compensation or services for mentions or links included in the post.

Process Metrics Are Not Outcomes

This is an unexpected follow-up to my last post.  I just heard about a nonprofit losing a hefty grant at renewal time due primarily to a lack of reported outcomes.  There was measurement – lots of data on process and organizational performance metrics – but not much to demonstrate the difference the program made in the lives of participants.  This kind of news is disheartening.

My first thought is – how did it get to that point?  Were the grant terms a surprise sprung on an unsuspecting organization at the last moment?  Was any mention of measuring program outcomes waved off by executives who preferred to discuss ways to scale up at the next funding cycle?

Probably not.

That said, I am pretty certain that…

  • the funder(s) made their reporting criteria and protocol clear;
  • the program administration and staff were dedicated to their mission and conducted outreach and activities according to their model;
  • people who experienced the program gained something from it;
  • the nonprofit thought that they were collecting data that showed the impact they made on participants and in the community.

So what went wrong in that story I heard?  I’ll never know.  No one accepts a grant award expecting a giant hole in their final report, but when there are questions about program application, geographic distance between sites, and/or irregular communication, measurement can and will get lost in the shuffle. Here are some steps you can take to prevent a similar situation from happening to your organization.

  1. Update your data collection plan. The outcomes listed in a column on a chart in your program materials will not measure themselves.  What are you currently collecting that may also fit as an indicator of your expected results?  Can you create a measure to better capture a specific change expected in the program participants?
  2. Make expectations clear and follow up regularly. Keep staff up-to-date on data collection with a matrix that lays out indicators, data sources, person(s) responsible, and timeline (a simple sketch follows this list).  Hold a monthly check-in call to report on progress and address questions and other issues around data collection.
  3. Have patience. It will take a while to get used to a shift from collecting process metrics (still important – don’t stop doing that) to outcomes data.  But if you have a plan ready to go, you can work out any knots early on in the funding period rather than panic at report time.
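
To make the matrix in step 2 concrete, here is a minimal sketch in Python; the indicators, sources, owners, and timelines are hypothetical examples, and a shared spreadsheet serves the same purpose.

```python
# A minimal sketch of the data collection matrix from step 2: each indicator
# is paired with its data source, the person responsible, and a collection
# timeline. All entries below are hypothetical examples.

collection_plan = [
    {
        "indicator": "Percent of participants completing all six workshops",
        "data_source": "Attendance logs",
        "owner": "Program coordinator",
        "timeline": "Monthly",
    },
    {
        "indicator": "Change in self-reported confidence (pre/post survey)",
        "data_source": "Participant surveys",
        "owner": "Intake staff",
        "timeline": "Quarterly",
    },
]

# Print an agenda for the monthly check-in call.
for row in collection_plan:
    print(row["indicator"])
    print(f"  Source: {row['data_source']} | Owner: {row['owner']}"
          f" | Collected: {row['timeline']}")
```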

Reboot Your Approach to Outcomes Measurement

Outcomes measurement in the nonprofit sector needs a reboot.  First off, the word “impact” should not cause stomachs, or any other body parts, to clench, shoulders to sag, or blood pressure to rise.  Second, measurement should not be viewed as a zero-sum game – as in, program directors terrified that the sum of their outcomes will result in zero funding.  This kind of anxiety just creates extra obstacles, especially for the small-to-mid-size organizations that are trying to build capacity for program measurement and reporting.  Let’s shake off those fears and shake up how you approach outcomes.

You, yes YOU, get to drive the outcomes bus.

I say we should do what the pigeon from the 2003 children’s story never got to do: LET nonprofits drive this bus. You are the experts when it comes to your work. As experts, you define program success and require a regular flow of relevant information to ensure programs are operating in a way that enables them to achieve that success.  Outcomes are not just about looking backward; they help you plot a more informed course forward. I recommend David Grant’s The Social Profit Handbook as an excellent resource for mission-driven organizations struggling to assess their impact in accordance with more traditional sector standards.  He brings a new perspective to assessment that includes nonprofits taking back the power to define what success looks like. Talk about shaking up the status quo!

Bottom line, if you do not measure your program, you are letting someone else do that for you.  Don’t let the perceived value or understanding of your work be left solely up to other people’s business models, compliance checks, and anecdotes.

Your model matters.   

Unless you are just in the start-up phase, you likely have a program model or logic model of some kind. I hope it isn’t sitting undisturbed exactly where it was placed upon completion.  See, this document is the essence of your nonprofit. It should live, breathe, and change just as your nonprofit does.  These models display the elements of your programs and services, the reality that your organization operates in, and how the programs are expected to address the problem(s) driving your work.  At the most basic level, the model answers the questions: What’s the problem here?  What are we going to do? With what? And how?  What do we expect will happen?  If any of the answers to those questions change over time, the model should be updated and reviewed for internal consistency.

“Oh please,” you think, “how can we shake up the sector when you are using phrases like ‘internal consistency’?”  Well, here is where it gets a little bit radical. Not only do you define your success; you take the reins to design a measurement plan that will best fit your operations and resources.  Take that model off the shelf and transform it* into a strategic data collection plan, where activities (what you do) link to outcomes (the intended results) and are documented by indicators (measures that show whether an outcome is being achieved).  Add a timeline for the collection and reporting schedule, and BOOM – get measuring.

*Ok, this part gets technical, and it may be best to seek out training or technical assistance to take your group through this process. I’ve worked with clients who bring our team back in to monitor data collection and assist staff with reporting.  Still, it often comes as a surprise to sector professionals that they already have exactly what is needed to develop a measurement plan that provides useful information for service and program planning, not just for funding reports. No need to start at square one.
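
To illustrate that transformation, here is a minimal sketch in Python of a model turned measurement plan; the activities, outcomes, indicators, and schedules are hypothetical and stand in for whatever your own logic model contains.

```python
# A minimal sketch of turning a logic model into a strategic data collection
# plan: each activity links to an intended outcome, the indicator that
# documents it, and a collection/reporting schedule. All entries are
# hypothetical examples.

measurement_plan = [
    {
        "activity": "Weekly job-readiness workshops",
        "outcome": "Participants gain interview skills",
        "indicator": "Pre/post skills assessment scores",
        "schedule": "Collect at intake and exit; report quarterly",
    },
    {
        "activity": "One-on-one financial coaching",
        "outcome": "Participants build a household budget",
        "indicator": "Percent of participants with a written budget at 90 days",
        "schedule": "Collect at 90-day follow-up; report semiannually",
    },
]

# Print the plan so each activity reads straight through to its evidence.
for item in measurement_plan:
    print(f"{item['activity']} -> {item['outcome']}")
    print(f"  Indicator: {item['indicator']}")
    print(f"  Schedule: {item['schedule']}")
```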

You say social impact; I say long-term outcome.

I have to admit, one of my favorite topics of discussion with colleagues of late is how to navigate the veritable scrum of terms associated with assessing impact. The concept of measuring program outcomes has become conflated with the idea of demonstrating sweeping social impact. While long-term outcomes do refer to a change in a condition or status brought about by program activities, for the typical nonprofit that is not synonymous with causing a wave of change across an entire region.

Andrew Harding of the London School of Economics and Political Science wrote a blog post on the difference between these two results in human welfare research to challenge this interchangeable use of terms.  He describes an outcome as “a finite and often measurable change,” with a pre-defined reach and limited scope.  Impact, on the other hand, “refers to a much broader effect – perhaps the effect information and advice had on ability to make an informed choice, empowerment or wider life experiences. Impact can be conceptualized as the longer term effect of an outcome.”

I think much anxiety around outcomes can be attributed to this misconception: that long-term outcomes must be far-reaching and epic, when in reality the change is only expected within the population that experienced your program. That said, leaders should absolutely challenge their organizations by setting ambitious long-term program goals. With a robust model, the condition-changing outcomes you aim to achieve will be in proportion to your program scope and the intensity of your resources.  You cannot control every factor, but you must have the fortitude to regularly examine the results of what you are doing.

Reboot your expectations around measurement.  Define success. Take ownership of your outcomes.