Posts Tagged by outcomes
|July 31, 2016||Posted by M. P. under Evaluation, Management|
Outcomes measurement in the nonprofit sector needs a reboot. First off, the word “impact” should not cause stomachs, or any other body parts, to clench, shoulders to sag, or blood pressure to rise. Second, measurement should not be viewed as a zero-sum game – as in, program directors terrified that the sum of their outcomes will result in zero funding. This kind of anxiety just creates extra obstacles, especially for the small-to-mid-size organizations that are trying to build capacity for program measurement and reporting. Let’s shake off those fears and shake up how you approach outcomes.
You, yes YOU, get to drive the outcomes bus.
Unlike the pigeon from the 2003 children’s story, I say we should LET nonprofits drive this bus. You are the experts when it comes to your work. As experts, you define program success and require a regular flow of relevant information to ensure programs are operating in a way that enables them to achieve that success. Outcomes are not just about looking backward; they help you plot a more informed course forward. I recommend David Grant’s The Social Profit Handbook as an excellent resource for mission-driven organizations struggling to assess their impact in accordance with more traditional sector standards. He brings a new perspective to assessment that includes nonprofits taking back the power to define what success looks like. Talk about shaking up the status quo!
Bottom line, if you do not measure your program, you are letting someone else do that for you. Don’t let the perceived value or understanding of your work be left solely up to other people’s business models, compliance checks, and anecdotes.
Your model matters.
Unless you are just in the start-up phase, you likely have a program model or logic model of some kind. I hope it isn’t sitting undisturbed exactly where it was placed upon completion. See, this document is the essence of your nonprofit. It should live, breathe, and change just as your nonprofit does. These models display the elements of your programs and services, the reality that your organization operates in, and how the programs are expected to address the problem(s) driving your work. At the most basic level, the model answers the questions: What’s the problem here? What are we going to do? With what? And how? What do we expect will happen? If any of the answers to those questions change over time, the model should be updated and reviewed for internal consistency.
“Oh please,” you think, “how can we shake up the sector when you are using phrases like ‘internal consistency’?” Well, here is where it gets a little bit radical. Not only do you define your success; you take the reins to design a measurement plan that will best fit your operations and resources. Take that model off the shelf and transform it* into a strategic data collection plan, where activities (what you do) link to outcomes (the intended results) and are documented by indicators (the measures that determine whether each outcome is being achieved). Add a timeline for the collection and reporting schedule, and BOOM – get measuring.
*Ok, this part gets technical, and it may be best to seek out training or technical assistance to take your group through this process. I’ve worked with clients who bring our team back in to monitor data collection and assist staff with reporting. Still, it often comes as a surprise to sector professionals that they already have exactly what is needed to develop a measurement plan that provides useful information for service and program planning, not just for funding reports. No need to start at square one.
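For readers who like to see the structure laid out, the activity-to-outcome-to-indicator linkage described above can be sketched as a simple data structure. This is only an illustration; the program names, indicators, and schedules below are invented for the example, not drawn from any real measurement plan.

```python
# A minimal sketch of a strategic data collection plan: each activity links to
# an intended outcome, the indicator that documents it, and a collection and
# reporting schedule. All entries below are hypothetical examples.
measurement_plan = [
    {
        "activity": "Weekly after-school tutoring",        # what you do
        "outcome": "Participants improve reading scores",   # the intended result
        "indicator": "Pre/post reading assessment scores",  # how it's measured
        "collection": "September and May",                  # when data is gathered
        "reporting": "Annual report to board and funders",  # when it's shared
    },
    {
        "activity": "Monthly family workshops",
        "outcome": "Caregivers report greater engagement",
        "indicator": "Post-workshop survey, engagement scale",
        "collection": "After each workshop",
        "reporting": "Quarterly program review",
    },
]

# Each entry can be checked for completeness before collection begins, which is
# a quick way to spot an activity that has no outcome or indicator attached.
required = {"activity", "outcome", "indicator", "collection", "reporting"}
for entry in measurement_plan:
    assert required <= entry.keys(), f"incomplete entry: {entry}"
print(f"{len(measurement_plan)} activities mapped to outcomes and indicators")
```

Whether this lives in a spreadsheet, a database, or a one-page table matters far less than the linkage itself: every activity should trace to an outcome, and every outcome to an indicator with a schedule attached.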
You say social impact; I say long-term outcome.
I have to admit, one of my favorite topics of discussion with colleagues of late is how to navigate the veritable scrum of terms associated with assessing impact. The concept of measuring program outcomes has become mistaken for the idea of demonstrating sweeping social impact. While long-term outcomes do refer to a change in a condition or status brought about by program activities, for the typical nonprofit that is not synonymous with causing a wave of change across its region.
Andrew Harding of the London School of Economics and Political Science wrote a blog post on the difference between these two results in human welfare research to challenge this interchangeable use of terms. He describes an outcome as “a finite and often measurable change,” with a pre-defined reach and limited scope. Impact, on the other hand, “refers to a much broader effect – perhaps the effect information and advice had on ability to make an informed choice, empowerment or wider life experiences. Impact can be conceptualized as the longer term effect of an outcome.”
I think much anxiety around outcomes can be attributed to this misconception: that long-term outcomes must be far-reaching and epic, when in reality the change is only expected within the population that experienced your program. That said, leaders should absolutely challenge their organizations by setting ambitious long-term program goals. With a robust model, the condition-changing outcomes you aim to achieve will be in proportion to your program scope and the intensity of your resources. You cannot control every factor, but you must have the fortitude to regularly examine the results of what you are doing.
Reboot your expectations around measurement. Define success. Take ownership of your outcomes.
|August 20, 2013||Posted by M. P. under Budget, Education, Research|
As we approach the start of another school year, students in Pennsylvania may find themselves returning to fewer elective classes (even in math, science, and English), increased class sizes, old textbooks, suspended field trips, and fewer teachers and staff due to furloughs and hiring freezes. These planned changes, reported in a survey conducted by the Pennsylvania Association of School Business Officials and the Pennsylvania Association of School Administrators, also include 22 percent of districts cutting tutoring programs for students (just under a third – 32 percent – did the same for the 2012-13 school year) and 13 percent of districts ending summer school programs for 2013-14, as 21 percent did last year.
While the enormous impact of the recession prompted serious budgetary reviews, from the dinner table to the halls of the State Capitol, the reduction in education funding has hit urban schools first, and worst. While fingers point at various “causes of the problem” and some argue the problem doesn’t exist but for mismanagement, the financial shortfall, at least in urban Pennsylvania schools, appears to be a mixture of shrinking tax bases, shrinking enrollment, ever-increasing per-pupil spending, and bureaucratic administrations, coupled with reductions in funding from the Commonwealth. Still, cutting programs (like tutoring) that are designed to help struggling students seems to only contribute to the achievement gap that already exists between schools in poorer areas and their more affluent counterparts.
The report Poverty and Education: Finding the Way Forward, by Richard J. Coley of Educational Testing Service (ETS) and Rutgers University professor Bruce Baker, examines the connection between poverty and life outcomes, including success in education and future employment. The researchers note that the academic achievement gap is larger between poor and non-poor students than between racial groups, with those living in extreme poverty lagging furthest behind their peers in cognitive performance. Poverty is also associated with less schooling, lower income, and a higher likelihood of involvement in the criminal justice system. The impact of poverty on educational quality is illustrated in the brief The Impact of Teacher Experience: Examining the Evidence and Policy Implications by Jennifer King Rice, through a discussion of data indicating that high-poverty schools have the least experienced teachers and, according to some studies, less effective ones. A National Center on Educational Evaluation brief reports that, overall, poorer students had unequal access to the highest-quality teachers (although the study, which covered just 10 districts, is not generalizable).
Lest one think such relationships have little bearing on their local schools, the issue of poverty and education is no longer just a concern for city residents: the 2000s saw a shift in the distribution of families living below the poverty line. Suburbs are the fastest-growing pockets of poverty in the country, according to the book Confronting Suburban Poverty in America by Elizabeth Kneebone and Alan Berube. Over the last decade, the poor population of the suburbs grew by 64 percent, a brisker pace than in many of their regional cities. According to Kneebone and Berube, there are now more poor people living in the suburbs than anywhere else in America.
This past year, school districts – urban and suburban – have dealt with budget issues by challenging mandates that limit student-to-teacher ratios in the classroom, removing access to extracurricular activities or increasing their participation fees, and reducing the number of available courses. A cursory read of the trends in income, funding streams, and predicted economic growth suggests that even the more affluent districts won’t be able to escape severe budget cuts and the need for increased tax revenues for too much longer.
|June 25, 2013||Posted by M. P. under Evaluation, Management|
If you have worked in the realm of nonprofit program evaluation, you may be used to the slightly disconcerting experience of being warmly welcomed by about a quarter of those assembled in the meeting room while receiving a combination of frozen half-smiles and the side (or even evil) eye from the other 75 percent. I don’t blame them. According to conventional wisdom, it is likely that you are there to tell them how to do their jobs, add more paperwork to their jobs, or cost them their jobs. Luckily, an increased focus on the value of data – how it can help secure funding, motivate donors, and highlight program accomplishments – has led to a much better understanding of its importance, and, in turn, many nonprofit organizations are already collecting information to assess performance and report outcomes.
Maria Townsend is a colleague and friend whom I met nearly a decade ago and have been lucky enough to partner with again on some recent projects. Over years of research experience, both in academic settings and as an independent evaluator and consultant, she has guided many nonprofit programs through the data collection and reporting process. I have been after her for some time to write a guest post here, so I thought a conversation about using data “for good” (especially as it is the topic of the month for the Nonprofit Blog Carnival) could provide some insights for small to mid-sized nonprofits.
Me: Reporting outcomes is a standard requirement for most funders of late, but many small nonprofits struggle to get the “evidence” that their funders, donors, or board want to “prove” program effectiveness. Personally, I think this is when it is best to have some professional guidance – the DIY approach may be too daunting and pull too much time and energy away from the daily operations of an organization with a staff of 10 or fewer. That said, hiring a research firm to handle all aspects of an evaluation may be a pipe dream for a small nonprofit, and even research consultants may be too pricey to serve as a long-term solution for a small organization. What is your advice to the small or start-up nonprofit?
Maria: There are low- or no-cost resources on data collection, survey research, and program evaluation for nonprofits available online or through national and regional associations (the American Evaluation Association, the Canadian Evaluation Society, the University of Kansas Community Tool Box, the Outreach Evaluation Resource Center). The more educated nonprofit leaders are on what they need and what their office is capable of, the better prepared they are to choose someone to assist in designing and implementing an evaluation plan that meets them where they are. Another option is looking at small grants from funding organizations and foundations that subsidize the building of evaluation capacity within an agency.
Me: Even with an evaluator on board – in-house, consultant, or pro bono through a capacity-building grant – a gap may exist between what a nonprofit collects to measure its program(s) and what data, or even what collection instruments, a funder requires for reporting purposes. How can the two competing interests be addressed efficiently?
Maria: In these situations, it is important to do a data inventory. Think of it as if you are cleaning out your closets…
Me: As if data collection didn’t already have a reputation for being tedious and overwhelming.
Maria: It isn’t glamorous – but stay with me. Look at what is already hanging in the closet in terms of currently collected data. How can we coordinate what we currently have to meet the new reporting requirements? Do we have a mainstay survey that can be the foundation – the little black dress (or for the fellas, the grey business suit) that you can dress up or pare down based on the occasion? Craft and insert questions to gather additional outcome data, or remove items that you don’t need. If you want to take the dress (or suit) to the next level, you add something substantial …
Me: Ah, the statement piece… adds pop, shows confidence, represents your style.
Maria: Right – take the data you have already collected to the next level by adding a focus group or site observation that provides qualitative data to add context to the quantitative data. Maybe in this closet assessment you find that your wardrobe is out of season or too small – in the same way, you may discover that your existing data collection is no longer a good fit for the current reporting expectations and just adding a few “extras” will not cut it. You need to do some serious shopping – in other words, major revisions or additions to the data set. This is the time to get rid of what you will not need anymore, such as surveys that are relics of past funder requirements or of programs that have since changed in scope. This is where you revise what variables you are collecting and from where (intake forms, assessments, front-line staff notes, supervision reports) to streamline collection processes and data entry. This is also a good time to make note of what needs upgrading in the data collection plan – moving surveys from paper to web-based platforms, collapsing data collection timelines to be more efficient, and determining whether staff are getting appropriate training on the process.
Me: How would you advise a client agency wondering how much data is enough data? There always seems to be too little data collected at first, which is often why we are there, but when the wish lists of what they want tracked come out — to use your closet analogy — it’s like going from a sock drawer to a walk-in.
Maria: An evaluation plan is a great help in listing a program’s goals and expected outcomes, paired with indicator statements that offer further clarification by listing the variables to be collected. Some evaluation plans also include the name of the person or persons responsible for collecting particular pieces of data and the preferred schedule of collection and reporting. This plan can be introduced in phases to lessen the “data shock” associated with collection, entry, and storage for a staff new to the process. It also acts as a roadmap for the full transition of the collection and reporting process to the organization – from the executive leadership overseeing evaluation and research to the line staff collecting data on a daily basis.
Me: What about the nonprofit that has a solid data collection plan in place but no one really knows what to do with the data because of, say…staff turnover or a change in reporting requirements?
Maria: First off, you need to clean it – make sure that the data you have is complete, fill in missing information, fix incorrect identifiers such as names, dates, and service codes, and remove duplicate database entries. If you have filed hard copies of the completed forms, you can pull them to double-check any issues with data entry. It is good to have someone on staff who is very detail-oriented to review the data and prepare it for analysis. Cleaning the data prior to analysis is the first and most important step in getting good results.
Next, revisit your evaluation questions and what you said in your proposal or contract. It is easy to be the dog that sees the squirrel and goes running in a different direction on a well-intentioned whim.
Me: I have one of those (dogs that is) – but with him it’s bunnies.
Maria: Keep it simple – what questions did you want to answer (what was the program’s impact, who did we reach, what program components were most effective) and what do the funders want to know? Those should be your priority for data analysis. Answer those questions first and then you can look for other interesting relationships or connections (those squirrels!) that may be helpful for program planning, such as differences in participation rates or outcomes based on sub-groups.
Me: What about the nonprofit that fears the dark side of data? They know they do good work but their reports show that their overall impact is small, or their program benefits are deemed not “important” enough in this time of growing need and declining resources. It is a realistic fear.
Maria: That is why the development and communications/marketing team should be at the table for the data collection and reporting process. That is their niche, right?
Me: Right – storytelling is more compelling, not to mention authentic, when there is performance data to back it up. Output and outcome data shouldn’t be trotted out only once a year in a few pie charts in the annual report; they are an integral part of building a marketing and community engagement strategy. Many variables could look ridiculous when reported out of context, but are valuable as part of a larger set measuring a condition such as overall health, mobility, academic or vocational achievement, or quality of life. Communications professionals know how to use performance data to enrich the story of their nonprofit’s impact. Let them.
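For the technically inclined, Maria’s cleaning steps – standardize inconsistent codes, flag missing identifiers, remove duplicate entries – can be sketched in a few lines of Python. This is a hypothetical illustration (the record IDs and service codes are invented); a real workflow would check missing identifiers against the filed hard copies rather than simply setting them aside.

```python
# A minimal sketch of pre-analysis data cleaning: standardize service codes,
# set aside records with missing IDs for manual review against hard copies,
# and remove duplicate database entries. All records below are hypothetical.
raw_records = [
    {"id": "A101", "service": "tutoring",  "date": "2013-03-01"},
    {"id": "A101", "service": "tutoring",  "date": "2013-03-01"},  # duplicate entry
    {"id": "A102", "service": "TUTORING",  "date": "2013-03-08"},  # inconsistent code
    {"id": "",     "service": "mentoring", "date": "2013-03-08"},  # missing identifier
]

cleaned, needs_review, seen = [], [], set()
for rec in raw_records:
    rec = dict(rec, service=rec["service"].lower())  # standardize the service code
    if not rec["id"]:
        needs_review.append(rec)  # check against filed hard copies before analysis
        continue
    key = (rec["id"], rec["service"], rec["date"])
    if key in seen:
        continue  # drop the duplicate entry
    seen.add(key)
    cleaned.append(rec)

print(f"{len(raw_records)} raw records: {len(cleaned)} clean, "
      f"{len(needs_review)} held for review")
```

The same logic applies whether the “database” is a spreadsheet or a case management system: decide the cleaning rules first, apply them consistently, and only then move on to answering the evaluation questions.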
|April 9, 2013||Posted by M. P. under Children and Family, Drug and Alcohol, Policy, Program Model, Youth Development|
The impact of parental substance abuse on children’s stability and well-being is a concern that crosses systems. Data suggest that parental drug and alcohol use is related to abuse and neglect and increases the likelihood of a parent’s involvement in the justice system – including the possibility of incarceration. The National Center on Substance Abuse and Child Welfare (NCSACW) provides In-Depth Technical Assistance (IDTA) to a handful of sites across the country in the areas of substance abuse, child welfare, and the courts, with the goal of better outcomes for families involved in these systems. For approximately 18 months, the IDTA team works with local, state, or tribal entities to coordinate strategy and services across systems through the use of evidence-based programs and on-site technical assistance, in order to grow capacity for improved child and family outcomes.
The report In-Depth Technical Assistance (IDTA) Final Report 2007-2012 provides an overview of the IDTA program model, related site accomplishments, and the lessons of system change at various levels. Some findings include:
- 50 percent of the sites implemented (or enhanced) a recovery specialist model in their programs;
- 68 percent developed and/or implemented cross-system training plans;
- 60 percent developed and/or implemented screening protocols that resulted in lower costs, reduced redundancy, and a more efficient referral process;
- 27 percent used cross-system data collection and tracking processes, such as case reviews and drop-off analysis, to inform policy and program decisions. (Note: according to the SAMHSA website, a Drop-Off Analysis is “a method used to assess linkages among child welfare, treatment agencies and courts. The method helps to identify connections that families need to make between systems to obtain services and achieve their child welfare case goals.”)
In addition to program findings, the brief discusses numerous lessons learned around systems change, particularly: the difficulty of achieving long-term policy and practice changes while avoiding the fracture of collaborative relationships post-project; leadership focused on engaging and sustaining partners; the use of data to identify areas of, and opportunities for, change; and realistic timelines for implementing system change and shared accountability.