Reboot Your Approach to Outcomes Measurement

Outcomes measurement in the nonprofit sector needs a reboot. First off, the word “impact” should not cause stomachs, or any other body parts, to clench, shoulders to sag, or blood pressure to rise. Second, measurement should not be viewed as a zero-sum game – as in, program directors terrified that the sum of their outcomes will result in zero funding. This kind of anxiety just creates extra obstacles, especially for the small-to-mid-size organizations trying to build capacity for program measurement and reporting. Let’s shake off those fears and shake up how you approach outcomes.

You, yes YOU, get to drive the outcomes bus.

Unlike the pigeon from the 2003 children’s story, I say we should LET nonprofits drive this bus. You are the experts when it comes to your work. As experts, you define program success, and you need a regular flow of relevant information to keep programs operating in a way that achieves it. Outcomes are not just about looking backward; they help you plot a more informed course forward. I recommend David Grant’s The Social Profit Handbook as an excellent resource for mission-driven organizations struggling to assess their impact against more traditional sector standards. He brings a new perspective to assessment, one that includes nonprofits taking back the power to define what success looks like. Talk about shaking up the status quo!

Bottom line: if you do not measure your program, you are letting someone else do it for you. Don’t leave the perceived value or understanding of your work solely up to other people’s business models, compliance checks, and anecdotes.

Your model matters.   

Unless you are just in the start-up phase, you likely have a program model or logic model of some kind. I hope it isn’t sitting undisturbed exactly where it was placed upon completion. See, this document is the essence of your nonprofit. It should live, breathe, and change just as your nonprofit does. These models display the elements of your programs and services, the reality your organization operates in, and how the programs are expected to address the problem(s) driving your work. At the most basic level, the model answers the questions: What’s the problem here? What are we going to do? With what, and how? What do we expect will happen? If any of the answers change over time, the model should be updated and reviewed for internal consistency.
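To make that concrete, here is a minimal sketch of a logic model captured as structured data rather than as a static document, so it can actually be reviewed and updated. This is only an illustration in Python; the field names, the sample tutoring program, and the review prompts are hypothetical, not a prescribed template.

```python
# A minimal, illustrative sketch: a logic model as data that can be revisited.
# The fields mirror the questions above; the sample content is invented.
from dataclasses import dataclass


@dataclass
class LogicModel:
    problem: str                   # What's the problem here?
    inputs: list[str]              # With what? (staff, funding, partners)
    activities: list[str]          # What are we going to do, and how?
    expected_outcomes: list[str]   # What do we expect will happen?
    last_reviewed: str = "never"   # flags a model left sitting on the shelf

    def consistency_questions(self) -> list[str]:
        """Prompts for the internal-consistency review described above."""
        return [
            f"Does every activity plausibly address: {self.problem}?",
            "Does every expected outcome trace back to at least one activity?",
            f"Have the inputs changed since this model was last reviewed ({self.last_reviewed})?",
        ]


# Hypothetical example for a tutoring program
model = LogicModel(
    problem="Students in the service area read below grade level",
    inputs=["2 reading specialists", "volunteer tutors", "school partnership"],
    activities=["weekly one-on-one tutoring", "family literacy nights"],
    expected_outcomes=["participants improve reading scores within one year"],
    last_reviewed="2015-09",
)

for question in model.consistency_questions():
    print(question)
```

The point is not the code itself but the habit it represents: a model kept in an editable, reviewable form is much harder to leave gathering dust than one printed and shelved.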

“Oh, please,” you think, “how can we shake up the sector when you are using phrases like ‘internal consistency’?” Well, here is where it gets a little bit radical. Not only do you define your success; you take the reins to design a measurement plan that best fits your operations and resources. Take that model off the shelf and transform it* into a strategic data collection plan, where activities (what you do) link to outcomes (the intended results) and are documented by indicators (the measures that determine whether an outcome is being achieved). Add a timeline for the collection and reporting schedule, and BOOM – get measuring.

*Ok, this part gets technical, and it may be best to seek out training or technical assistance to take your group through this process. I’ve worked with clients who bring our team back in to monitor data collection and assist staff with reporting.  Still, it often comes as a surprise to sector professionals that they already have exactly what is needed to develop a measurement plan that provides useful information for service and program planning, not just for funding reports. No need to start at square one.
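For readers who want to see what “transform it into a strategic data collection plan” might look like in practice, here is one hedged sketch: each activity links to an intended outcome, each outcome is documented by indicators, and each indicator carries a collection and reporting schedule. The activity, indicators, data sources, and frequencies below are invented examples, not a recommended plan.

```python
# An illustrative data collection plan: activities -> outcomes -> indicators,
# with a collection and reporting schedule attached to each indicator.
from dataclasses import dataclass


@dataclass
class Indicator:
    measure: str        # how we determine the outcome is being achieved
    source: str         # where the data comes from
    collect_every: str  # collection schedule
    report_every: str   # reporting schedule


@dataclass
class PlanRow:
    activity: str               # what you do
    outcome: str                # the intended result
    indicators: list[Indicator]


# Hypothetical plan row for the tutoring example above
measurement_plan = [
    PlanRow(
        activity="weekly one-on-one tutoring",
        outcome="participants improve reading scores within one year",
        indicators=[
            Indicator(
                measure="% of participants gaining one reading level",
                source="school assessment records",
                collect_every="each semester",
                report_every="annually",
            ),
            Indicator(
                measure="tutoring sessions attended per participant",
                source="attendance log",
                collect_every="weekly",
                report_every="quarterly",
            ),
        ],
    ),
]

# Print the plan as a simple collection-and-reporting checklist
for row in measurement_plan:
    print(f"Activity: {row.activity} -> Outcome: {row.outcome}")
    for ind in row.indicators:
        print(f"  Indicator: {ind.measure} (collect {ind.collect_every}, report {ind.report_every})")
```

However you record it (spreadsheet, database, or a sketch like this), the useful part is the explicit link from each activity to an outcome and an indicator, plus a schedule someone is accountable for.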

You say social impact; I say long-term outcome.

I have to admit, one of my favorite topics of discussion with colleagues lately is how to navigate the veritable scrum of terms associated with assessing impact. The concept of measuring program outcomes has become conflated with the idea of demonstrating sweeping social impact. While long-term outcomes do refer to a change in a condition or status brought about by program activities, for the typical nonprofit that is not synonymous with causing a wave of change across its region.

Andrew Harding of the London School of Economics and Political Science wrote a blog post on the difference between these two results in human welfare research to challenge this interchangeable use of terms. He describes an outcome as “a finite and often measurable change,” with a pre-defined reach and limited scope. Impact, on the other hand, “refers to a much broader effect – perhaps the effect information and advice had on ability to make an informed choice, empowerment or wider life experiences. Impact can be conceptualized as the longer term effect of an outcome.”

I think much of the anxiety around outcomes can be attributed to this misconception: that long-term outcomes must be far-reaching and epic, when in reality the change is only expected within the population that experienced your program. That said, leaders should absolutely challenge their organizations by setting ambitious long-term program goals. With a robust model, the condition-changing outcomes you aim to achieve will be in proportion to your program’s scope and the intensity of your resources. You cannot control every factor, but you must have the fortitude to regularly examine the results of what you are doing.

Reboot your expectations around measurement.  Define success. Take ownership of your outcomes. 

Implementing, Modeling and Managing a Measurement Culture

Regardless of the outcome of the upcoming election, the nonprofit social services sector – from mental health clinics to food banks – will still be challenged to meet increased need with fewer resources and limited funding. Savvy nonprofits have already moved toward an evaluation culture, embracing logic models and short- and long-term impact data to illustrate why (and how) their programs work. Organizational innovation and unique program accomplishments are practically prerequisites for making a successful connection with alternative funding sources, including corporate partnerships, yet nonprofits are still struggling to identify and quantify their impact on clients, the community, and the overall condition they work to modify. Performance measurement, logic models, and outcomes are not new or faddish terms, so why the hesitation?

The report Tough Times, Creative Measures: What Will it Take to Help the Social Sector Embrace an Outcomes Culture?, from the Urban Institute, came out of a fall 2011 event that brought together leaders from the government, nonprofit, philanthropy, and business sectors to discuss data-driven management in social and human services and the challenges of successfully using a performance management system. Some of the challenges identified:

The difficulty of turning away from the organization’s immediate needs to plan and implement a measurement system. No matter how small the agency, the demands on the executive director’s time and talent are immense. Writing up an organization-wide evaluation strategy and implementation plan, including models, indicators, instruments, and data collection plans, is an enormous amount of work – and that’s before the pilot testing, analysis, and reporting. The director’s role should be to communicate progress and needs to the board as it guides the agency through this kind of culture change, not to create every step of the process.

The reality that sometimes the best outcomes may not be rewarded. Conspiracy theories and snarky excuses aside, well-crafted stories, high-profile connections, and nonprofits with missions or target audiences more interesting or appealing than your own may have an easier time selling their effectiveness. That said, incomplete or inaccurate information on program impact won’t help remedy the situation.

Some nonprofits may be waiting for the trends to flip and the tides to turn. Why move heaven and earth within your organization to embrace a culture that may seem like a phase (especially to long-time employees who have seen edicts from funders come and go)? Buy-in for outcomes tracking and reporting may be based on acceptance of hoop-jumping norms, not on the real value of performance measurement to the overall health of the organization. It is time for boards and directors to be brave and commit to an organizational culture change – but be prepared to illustrate how it will benefit staff and (more importantly) clients.

In response to these and other impediments – I mean realities – the symposium attendees identified the strategic areas that would do the most to encourage and implement a data-centered culture: human and financial capital (the tenacity and the tab), creative advocacy (sector giants to back this shift), and ready-to-use systems and tools so directors don’t have to start from square one. How can nonprofit leaders better model and manage a measurement culture? Why are some nonprofits hesitant to embrace this shift?

Visual Models and Outcomes

The post Paul Duigan on DoView and Visually Representing Outcomes, at the AEA365 Tip-a-Day by and for Evaluators blog, is meaty fare for nonprofit professionals looking for a new approach to coordinating and presenting organizational activities. Dr. Duigan’s method uses a series of visual depictions of the interrelated workings of organizational processes – evaluation, strategic planning, and ongoing monitoring – in one large (and involved) model. Using visual models, rather than the traditional narrative form of most planning and evaluation tools, may make the process more efficient and more appealing to a wider audience.

Additional information on Paul Duigan’s work is available on his website, OutcomesCentral.org, a treasure trove of resources and multimedia presentations on outcomes, performance management, strategic planning, and modeling.

Not All Measurement is Equal

Outcomes. Performance. Effectiveness. Fidelity. For nonprofits, this is the language of accountability – accountability around what organizations are doing and how they are doing it. Program and organizational information is used to demonstrate progress to funders and stakeholders, as well as to improve operations, so it is imperative that the purpose and method of all measurement activities are clearly defined.

Have you been selected to design a performance measurement plan for your organization? Been asked to be the point person for an upcoming program evaluation? Then it may be time for a quick refresher in the vernacular, and the first lesson is that not all measurement-related terms (or tasks) are interchangeable.

The ChildTrends research brief Performance Management and Evaluation: What’s the Difference?, by Karen Walker and Kristin Anderson Moore, provides a thorough look at the many differences, and (not surprisingly) a few similarities, between these two common measurement activities. While both rely on the collection and analysis of data, they vary in purpose, method of data collection, intended audience, and how the concept of “progress” is defined and measured. The complete brief is available here.

Report citation: Walker, K., & Anderson Moore, K. (2011). Performance Management and Evaluation: What’s the Difference? ChildTrends Publication #2011-02.