Reboot Your Approach to Outcomes Measurement

Outcomes measurement in the nonprofit sector needs a reboot. First off, the word “impact” should not cause stomachs, or any other body parts, to clench, shoulders to sag, or blood pressure to rise. Second, measurement should not be viewed as a zero-sum game – as in, program directors terrified that the sum of their outcomes will result in zero funding. This kind of anxiety just creates extra obstacles, especially for small-to-mid-size organizations trying to build capacity for program measurement and reporting. Let’s shake off those fears and shake up how you approach outcomes.

You, yes YOU, get to drive the outcomes bus.

Unlike the pigeon in the 2003 children’s story, nonprofits should be allowed to drive this bus – I say we LET them. You are the experts when it comes to your work. As experts, you define program success and require a regular flow of relevant information to ensure programs are operating in a way that enables them to achieve that success. Outcomes are not just about looking backward; they help you plot a more informed course forward. I recommend David Grant’s The Social Profit Handbook as an excellent resource for mission-driven organizations struggling to assess their impact against more traditional sector standards. He brings a new perspective to assessment, one that includes nonprofits taking back the power to define what success looks like. Talk about shaking up the status quo!

Bottom line, if you do not measure your program, you are letting someone else do that for you.  Don’t let the perceived value or understanding of your work be left solely up to other people’s business models, compliance checks, and anecdotes.

Your model matters.   

Unless you are just in the start-up phase, you likely have a program model or logic model of some kind. I hope it isn’t sitting undisturbed exactly where it was placed upon completion. See, this document is the essence of your nonprofit. It should live, breathe, and change just as your nonprofit does. These models display the elements of your programs and services, the reality your organization operates in, and how the programs are expected to address the problem(s) driving your work. At the most basic level, the model answers the questions: What’s the problem here? What are we going to do? With what? And how? What do we expect will happen? If any of the answers to those questions change over time, the model should be updated and reviewed for internal consistency.

“Oh, please,” you think. “How can we shake up the sector when you are using phrases like ‘internal consistency’?” Well, here is where it gets a little bit radical. Not only do you define your success; you take the reins to design a measurement plan that best fits your operations and resources. Take that model off the shelf and transform it* into a strategic data collection plan, where activities (what you do) link to outcomes (the intended results) and are documented by indicators (the measures that show whether an outcome is being achieved). Add a timeline for the collection and reporting schedule, and BOOM – get measuring.

*Ok, this part gets technical, and it may be best to seek out training or technical assistance to take your group through this process. I’ve worked with clients who bring our team back in to monitor data collection and assist staff with reporting.  Still, it often comes as a surprise to sector professionals that they already have exactly what is needed to develop a measurement plan that provides useful information for service and program planning, not just for funding reports. No need to start at square one.
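
For readers who like to see the bones of such a plan, here is a minimal sketch in Python of what the activity-to-outcome-to-indicator linkage might look like as structured data. Every program detail below is hypothetical; the point is the structure and the schedule, not the specifics.

```python
# A sketch of a strategic data collection plan: each entry links an
# activity (what you do) to an outcome (the intended result) and to
# indicators (the measures that show whether the outcome is being
# achieved), plus a collection and reporting schedule.
# All program details below are hypothetical.
measurement_plan = [
    {
        "activity": "Weekly job-readiness workshops",
        "outcome": "Participants gain interview and resume skills",
        "indicators": [
            "Pre/post skills assessment scores",
            "Number of completed resumes on file",
        ],
        "collect": "At intake and at program exit",
        "report": "Quarterly",
    },
    {
        "activity": "One-on-one employment coaching",
        "outcome": "Participants secure employment within six months",
        "indicators": ["Employment status at 3- and 6-month follow-up"],
        "collect": "Follow-up calls at 3 and 6 months",
        "report": "Semiannually",
    },
]

# A quick internal-consistency check: every activity should be linked
# to at least one indicator before any collection begins.
for row in measurement_plan:
    assert row["indicators"], f"No indicator defined for: {row['activity']}"
```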

You say social impact; I say long-term outcome.

I have to admit, one of my favorite topics of discussion with colleagues of late is how to navigate the veritable scrum of terms associated with assessing impact. The concept of measuring program outcomes has become conflated with the idea of demonstrating sweeping social impact. While long-term outcomes do refer to a change in a condition or status brought about by program activities, for the typical nonprofit that is not synonymous with causing a wave of change across an entire region.

Andrew Harding of the London School of Economics and Political Science wrote a blog post on the difference between these two results in human welfare research to challenge this interchangeable use of terms. He describes an outcome as “a finite and often measurable change,” with a pre-defined reach and limited scope. Impact, on the other hand, “refers to a much broader effect – perhaps the effect information and advice had on ability to make an informed choice, empowerment or wider life experiences. Impact can be conceptualized as the longer term effect of an outcome.”

I think much of the anxiety around outcomes can be attributed to this misconception: that long-term outcomes must be far-reaching and epic, when in reality the change is expected only within the population that experienced your program. That said, leaders should absolutely challenge their organizations by setting ambitious long-term program goals. With a robust model, the condition-changing outcomes you aim to achieve will be in proportion to your program’s scope and the intensity of your resources. You cannot control every factor, but you must have the fortitude to regularly examine the results of what you are doing.

Reboot your expectations around measurement.  Define success. Take ownership of your outcomes. 

New Data: Prevalence of Mental Health Diagnoses, Prescriptions Among Foster Care Youth

The 2013 report Diagnoses and Health Care Utilization of Children Who Are in Foster Care and Covered by Medicaid, from the Substance Abuse and Mental Health Services Administration (SAMHSA), is loaded with useful data, including figures showing a stark contrast in the prevalence of mental health diagnoses between Medicaid-covered youth in foster care and their peers outside the child welfare system. While recent research indicates that increases in psychiatric diagnoses and office visit rates for U.S. youth outpace those of adults (based on a comparison of data from the latter half of the 1990s and 2007-2010), mental illness and psychiatric disabilities appear to be more prevalent among children in foster care than in the general population (a trend also found in countries outside the United States).

The SAMHSA report divides findings into age groups (and is available on the agency’s website in PDF form), but some of the overall trends include:

  • Rates of mental health diagnoses among Medicaid-covered youth in foster care (49 percent) were far higher in 2010 than among their counterparts not in foster care (11 percent)
  • Children in foster care had more outpatient visits and longer average inpatient stays than those not in foster care

Among adolescents (ages 12-17):

  • Attention-deficit, conduct, and disruptive disorders were the most common diagnoses in 2010, occurring in 38 percent of foster care youth compared with 11 percent of their peers outside of foster care
  • 40 percent of 12- to 17-year-olds in foster care used prescription medication related to a mental health diagnosis

In light of these trends, it is worth noting that a December 2012 report from the Government Accountability Office raised concerns to the Department of Health and Human Services over the appropriate treatment of mental illness and the use of prescribed psychiatric medication for children covered by Medicaid or in the foster care system (the majority of whom are also covered by Medicaid). The report found that youth covered by Medicaid were twice as likely to take antipsychotic medications as privately insured youth, yet may not have received counseling or mental health treatment beyond the medication.


Report Citation: Center for Mental Health Services and Center for Substance Abuse Treatment. Diagnoses and Health Care Utilization of Children Who Are in Foster Care and Covered by Medicaid. HHS Publication No. (SMA) 13-4804. Rockville, MD: Center for Mental Health Services and Center for Substance Abuse Treatment, Substance Abuse and Mental Health Services Administration, 2013.


Talking Data – Collection, Reporting and…the Closet


If you have worked in the realm of nonprofit program evaluation, you may be used to the slightly disconcerting experience of being warmly welcomed by about a quarter of those assembled in the meeting room while receiving frozen half-smiles and the side (or even evil) eye from the other 75 percent. I don’t blame them. According to conventional wisdom, you are likely there to tell them how to do their jobs, add more paperwork to their jobs, or cost them their jobs. Luckily, an increased focus on the value of data – how it can help secure funding, motivate donors, and highlight program accomplishments – has led to a much better understanding of its importance. In turn, many nonprofit organizations are already collecting information to assess performance and report outcomes.

Maria Townsend is a colleague and friend whom I met nearly a decade ago and have been lucky enough to partner with again on some recent projects. Over her years of research experience, both in an academic setting and as an independent evaluator and consultant, she has guided many nonprofit programs through the data collection and reporting process. Since I have been after her for some time to write a guest post here, I thought a conversation about using data “for good” (especially as it is the topic of the month for the Nonprofit Blog Carnival) could provide some insights for small to mid-sized nonprofits.

Me: Reporting outcomes is a standard requirement for most funders of late, but many small nonprofits struggle to get the “evidence” that their funders, donors, or board want to “prove” program effectiveness. Personally, I think this is when it is best to have some professional guidance – the DIY approach may be too daunting and pull too much time and energy away from the daily operations of an organization with a staff of 10 or fewer. That said, hiring a research firm to handle all aspects of an evaluation may be a pipe dream for a small nonprofit, and even research consultants may be too pricey to serve as a long-term solution. What is your advice to the small or start-up nonprofit?

Maria: There are low- or no-cost resources on data collection, survey research, and program evaluation for nonprofits online or through national or regional associations (the American Evaluation Association, the Canadian Evaluation Society, the University of Kansas Community Tool Box, the Outreach Evaluation Resource Center). The more educated nonprofit leaders are about what they need and what their office is capable of, the better prepared they are to choose someone to assist in designing and implementing an evaluation plan that meets them where they are. Another option is looking at small grants from funding organizations and foundations that subsidize building evaluation capacity within an agency.

Me: Even with an evaluator on board – in-house, consultant, or pro bono through a capacity-building grant – a gap may exist between what a nonprofit collects to measure its program(s) and what data, or even what collection instruments, a funder requires for reporting purposes. How can the two competing interests be addressed efficiently?

Maria:  In these situations, it is important to do a data inventory. Think of it as if you are cleaning out your closets…

Me: As if data collection didn’t already have a reputation for being tedious and overwhelming.

Maria: It isn’t glamorous – but stay with me. Look at what is already hanging in the closet in terms of currently collected data. How can we coordinate what we currently have to meet the new reporting requirements? Do we have a mainstay survey that can be the foundation, the little black dress (or for the fellas, the grey business suit) that you can dress up or pare down based on the occasion? Craft and insert questions to gather additional outcome data, or remove items you don’t need. If you want to take the dress (or suit) to the next level, you add something substantial…

Me:  Ah, the statement piece… adds pop, shows confidence, represents your style.

Maria: Right – take the data you have already collected to the next level by adding a focus group or site observation that provides qualitative data to add context to the quantitative data. Maybe in this closet assessment you find that your wardrobe is out of season or too small – in the same way, you may discover that your existing data collection is no longer a good fit for current reporting expectations and that adding a few “extras” will not cut it. You need to do some serious shopping: in other words, major revisions or additions to the data set. This is the time to get rid of what you will not need anymore, such as surveys that are relics of past funder requirements or from programs that have since changed in scope. This is where you revise which variables you are collecting, and from where (intake forms, assessments, front-line staff notes, supervision reports), to streamline collection processes and data entry. It is also a good time to note what needs to be upgraded in the data collection plan itself: moving surveys from paper to web-based platforms, collapsing data collection timelines to be more efficient, and determining whether staff are getting appropriate training on the process.
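
(An aside from me: if it helps to make that inventory concrete, here is a minimal sketch in Python of the closet check Maria describes, comparing what an organization already collects against a funder’s new requirements. Every field name below is a hypothetical placeholder.)

```python
# A minimal data-inventory sketch: compare currently collected fields
# against new reporting requirements. All field names are hypothetical.
currently_collected = {
    "participant_id", "intake_date", "age", "services_received",
    "pre_survey_score", "post_survey_score", "exit_date",
}
newly_required = {
    "participant_id", "age", "services_received",
    "post_survey_score", "employment_status_6mo", "housing_status",
}

keep = currently_collected & newly_required    # the little black dress
add = newly_required - currently_collected     # the serious shopping
retire = currently_collected - newly_required  # relics to consider donating

print("Already in the closet:", sorted(keep))
print("Need to acquire:", sorted(add))
print("Candidates for donation:", sorted(retire))
```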

Me: How would you advise a client agency wondering how much data is enough data? There always seems to be too little data collected at first, which is often why we are there, but when the wish lists of what they want tracked come out — to use your closet analogy — it’s like going from a sock drawer to a walk-in.

Maria: An evaluation plan is a great help here, listing a program’s goals and expected outcomes, paired with indicator statements that offer further clarification by naming the variables to be collected. Some evaluation plans also include the person or persons responsible for collecting particular pieces of data and the preferred schedule for collection and reporting. The plan can be introduced in phases to lessen the “data shock” of collection, entry, and storage for a staff new to the process. It also acts as a roadmap for fully transitioning the collection and reporting process to the organization – from executive leadership overseeing evaluation and research to the line staff collecting data on a daily basis.
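
(To illustrate, here is one way such a plan might be laid out, again sketched in Python with made-up indicators, owners, and schedules; the phase field is what lets an organization introduce collection gradually.)

```python
# Sketch of an evaluation plan that pairs each indicator with an owner,
# a collection schedule, and a rollout phase. All details hypothetical.
evaluation_plan = [
    {"indicator": "Attendance logs", "owner": "Line staff",
     "schedule": "Daily", "phase": 1},
    {"indicator": "Pre/post participant surveys", "owner": "Program coordinator",
     "schedule": "Intake and exit", "phase": 1},
    {"indicator": "Six-month follow-up interviews", "owner": "Evaluator",
     "schedule": "Semiannual", "phase": 2},
]

def active_indicators(plan, current_phase):
    """Return what should be collected once a given phase has begun."""
    return [row for row in plan if row["phase"] <= current_phase]

# During phase 1, only the first two indicators are live.
for row in active_indicators(evaluation_plan, current_phase=1):
    print(f"{row['indicator']} -- {row['owner']}, {row['schedule']}")
```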

Me: What about the nonprofit that has a solid data collection plan in place, but no one really knows what to do with the data because of, say, staff turnover or a change in reporting requirements?

Maria: First off, you need to clean it – make sure the data you have is complete, fill in missing identifiers, fix incorrect ones (names, dates, service codes), and remove duplicate database entries. If you have filed hardcopies of the completed forms, you can pull them to double-check any issues with data entry. It is good to have someone on staff who is very detail-oriented to review the data and prepare it for analysis. Cleaning the data prior to analysis is the first, and an essential, step toward good results.
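
(For the technically inclined, a minimal sketch of that cleaning pass using Python’s pandas library, on a hypothetical table of service records; every column name and value here is a placeholder.)

```python
import pandas as pd

# Hypothetical service records with the usual problems: a duplicate
# entry, inconsistent casing, stray whitespace, and a missing ID.
records = pd.DataFrame({
    "participant_id": ["A101", "A101", "a102", None, "A103"],
    "service_date": ["2014-01-06", "2014-01-06", "2014-01-08",
                     "2014-01-09", "2014-01-10"],
    "service_code": ["CM", "CM", "cm ", "GRP", "GRP"],
})

# Remove exact duplicate entries.
records = records.drop_duplicates()

# Normalize identifiers and service codes (case, stray whitespace).
records["participant_id"] = records["participant_id"].str.upper()
records["service_code"] = records["service_code"].str.strip().str.upper()

# Flag rows with missing IDs so staff can pull the hardcopy forms,
# rather than silently dropping the records.
needs_review = records[records["participant_id"].isna()]
print(f"{len(needs_review)} row(s) need hardcopy verification")
```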

Next, revisit your evaluation questions and what you said in your proposal or contract. It is easy to be the dog that sees the squirrel and goes running off in a different direction on a well-intentioned whim.

Me: I have one of those (dogs that is) – but with him it’s bunnies.

Maria:  Keep it simple – what questions did you want to answer (what was the program’s impact, who did we reach, what program components were most effective) and what do the funders want to know? Those should be your priority for data analysis. Answer those questions first and then you can look for other interesting relationships or connections (those squirrels!) that may be helpful for program planning, such as differences in participation rates or outcomes based on sub-groups.
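
(One last sketch from me, with entirely made-up data, to show the discipline Maria describes: answer the question in the proposal first, then go looking for squirrels.)

```python
import pandas as pd

# Hypothetical outcome data for illustration only.
outcomes = pd.DataFrame({
    "completed_program": [True, True, False, True, False, True],
    "employed_at_followup": [True, False, False, True, False, True],
    "age_group": ["18-24", "25-34", "18-24", "25-34", "18-24", "25-34"],
})

# Priority question from the proposal: what share of program
# completers were employed at follow-up?
completers = outcomes[outcomes["completed_program"]]
print(f"Employed at follow-up: {completers['employed_at_followup'].mean():.0%}")

# Only afterward, the squirrels: exploratory subgroup comparisons
# that may inform program planning.
print(outcomes.groupby("age_group")["employed_at_followup"].mean())
```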

Me: What about the nonprofit that fears the dark side of data? They know they do good work, but their reports show that their overall impact is small, or their program benefits are deemed not “important” enough in this time of growing need and declining resources. It is a realistic fear.

Maria: That is why the development and communications/marketing team should be at the table for the data collection and reporting process. That is their niche, right?

Me: Right – storytelling is more compelling, not to mention authentic, when there is performance data to back it up. Output and outcome data shouldn’t be trotted out only once a year in a few pie charts in the annual report; they are an integral part of building a marketing and community engagement strategy. Many variables could look ridiculous reported out of context but are valuable as part of a larger set measuring a condition, such as overall health, mobility, academic or vocational achievement, or quality of life. Communications professionals know how to use performance data to enrich the story of their nonprofit’s impact. Let them.


Leveraging the #Hashtag

Still find yourself explaining to coworkers that Twitter is so much more than reading about what people eat for lunch?

If you have been looking for ways to better use Twitter to research or crowdsource a topic or event, the post How to Get Useful Business Information Out of Twitter: Hashtags for Social Media Research, at John McElhenny’s social media strategy site Uber.la, is an excellent, must-bookmark resource. An aside for non-tweeting readers: here is an explanation of hashtags (the # symbol) from the folks at Twitter.

With over a dozen links to websites and applications that will help you track, monitor, and even plot (check out TwitterVenn) what Tweeters are talking about, as well as some maintenance tips for accounts, the post will fill your week with all kinds of interesting ways to leverage hashtags and gain insight and information about your corner of the community/market/world.

How do you use Twitter hashtags?