September 12, 2016 | Posted by M. P. under News, Research, Uncategorized
Last month the National Endowment for the Arts (NEA) published research on Americans' participation in the arts at both the national and state level. In 2015, approximately two-thirds of American adults reported attending at least one film, visual art exhibit, or live performance within the previous year. Films were the most popular choice (among both urban and rural residents): 55 percent of adults reported taking in a movie, while 32 percent attended a live dance, music, or drama performance and 19 percent visited an art exhibit. Residents of urban areas attended live arts events (33 percent versus 21 percent) and movies (60 percent versus 46 percent) at higher rates than their rural counterparts.
The proportion of American adults reading literature (plays, poetry, novels – not work or school materials) declined from 47 percent in 2012 to 43 percent in 2015. Women (49.8 percent) reported reading literature at a higher rate than men (35.9 percent). Generally, better-educated respondents reported higher literature consumption than those with less education.
Pennsylvania had a slightly lower rate of adults attending a live arts performance or movie than the national average (65.2 percent versus 66.2 percent). Overall, Pennsylvania residents’ rates of arts participation via literature, art class enrollment, personal creation, or use of electronic media to experience the arts were not significantly greater or less than the U.S. average. All state profiles and additional briefs on arts engagement are available at the NEA webpage.
August 26, 2016 | Posted by M. P. under Evaluation, Management, Program Model
This is an unexpected follow-up to my last post. I just heard about a nonprofit losing a hefty grant at renewal time due primarily to a lack of reported outcomes. There was measurement – lots of data on process and organizational performance metrics – but not much to demonstrate the difference the program made in the lives of participants. This kind of news is disheartening.
My first thought is – how did it get to that point? Were the grant terms a surprise sprung on an unsuspecting organization at the last moment? Was any mention of measuring program outcomes waved off by executives who preferred to discuss ways to scale up at the next funding cycle?
That said, I am pretty certain that…
- the funder(s) made their reporting criteria and protocols clear;
- the program administration and staff were dedicated to their mission and conducted outreach and activities according to their model;
- people who experienced the program gained something from it;
- the nonprofit thought that they were collecting data that showed the impact they made on participants and in the community.
So what went wrong in the story I heard? I’ll never know. No one accepts a grant award expecting a giant hole in their final report, but when there are open questions about how the program is being applied, sites are separated by geographic distance, and/or communication is irregular, measurement can and will get lost in the shuffle. Here are some steps you can take to prevent a similar situation from happening to your organization.
- Update your data collection plan. The outcomes listed in a column on a chart in your program materials will not measure themselves. What are you currently collecting that may also fit as an indicator of your expected results? Can you create a measure to better capture a specific change expected in the program participants?
- Make expectations clear and follow up regularly. Keep staff up to date on data collection with a matrix that lays out indicators, data sources, person(s) responsible, and timeline. Hold a monthly check-in call to report on progress and address questions and other issues around collection.
- Have patience. It will take a while to get used to the shift from collecting process metrics (still important – don’t stop doing that) to outcomes data. But if you have a plan ready to go, you can work out any knots early in the funding period rather than panic at report time.
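The matrix described in the steps above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed tool; every program name, indicator, and deadline below is hypothetical.

```python
from datetime import date

# A minimal sketch of a data collection matrix: each row pairs an
# outcome with its indicator, data source, responsible person, and
# collection deadline. All entries are hypothetical examples.
matrix = [
    {
        "outcome": "Participants improve financial literacy",
        "indicator": "Pre/post quiz score gain of 20%",
        "data_source": "Participant quiz records",
        "responsible": "Program Coordinator",
        "due": date(2016, 9, 30),
    },
    {
        "outcome": "Participants secure stable housing",
        "indicator": "Housing status at 6-month follow-up",
        "data_source": "Case manager follow-up calls",
        "responsible": "Case Manager",
        "due": date(2016, 12, 15),
    },
]

def overdue_items(matrix, as_of):
    """Return the indicators whose collection deadline has passed."""
    return [row["indicator"] for row in matrix if row["due"] < as_of]

# Monthly check-in: flag anything past its deadline.
print(overdue_items(matrix, as_of=date(2016, 10, 1)))
```

Even a spreadsheet version of this structure gives the monthly check-in call a concrete agenda: walk the rows, flag what is overdue, and resolve it before report time.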
July 31, 2016 | Posted by M. P. under Evaluation, Management
Outcomes measurement in the nonprofit sector needs a reboot. First off, the word “impact” should not cause stomachs, or any other body parts, to clench, shoulders to sag, or blood pressure to rise. Second, measurement should not be viewed as a zero-sum game – as in program directors terrified that the sum of their outcomes will result in zero funding. This kind of anxiety just creates extra obstacles, especially for the small-to-mid-size organizations that are trying to build capacity for program measurement and reporting. Let’s shake off those fears and shake up how you approach outcomes.
You, yes YOU, get to drive the outcomes bus.
Unlike the pigeon from the 2003 children’s story, I say we should LET nonprofits drive this bus. You are the experts when it comes to your work. As experts, you define program success and require a regular flow of relevant information to ensure programs are operating in a way that enables them to achieve that success. Outcomes are not just about looking backward; they help you plot a more informed course forward. I recommend David Grant’s The Social Profit Handbook as an excellent resource for mission-driven organizations struggling to assess their impact in accordance with more traditional sector standards. He brings a new perspective to assessment that includes nonprofits taking back the power to define what success looks like. Talk about shaking up the status quo!
Bottom line, if you do not measure your program, you are letting someone else do that for you. Don’t let the perceived value or understanding of your work be left solely up to other people’s business models, compliance checks, and anecdotes.
Your model matters.
Unless you are just in the start-up phase, you likely have a program model or logic model of some kind. I hope it isn’t sitting undisturbed exactly where it was placed upon completion. See, this document is the essence of your nonprofit. It should live, breathe, and change just as your nonprofit does. These models display the elements of your programs and services, the reality that your organization operates in, and how the programs are expected to address the problem(s) driving your work. At the most basic level, the model answers the questions: What’s the problem here? What are we going to do? With what? And how? What do we expect will happen? If any of the answers to those questions change over time, the model should be updated and reviewed for internal consistency.
“Oh, please,” you think, “how can we shake up the sector when you are using phrases like ‘internal consistency’?” Well, here is where it gets a little bit radical. Not only do you define your success; you take the reins to design a measurement plan that best fits your operations and resources. Take that model off the shelf and transform it* into a strategic data collection plan, where activities (what you do) link to outcomes (the intended results) and are documented by indicators (measures that determine whether an outcome is being achieved). Add a timeline for the collection and reporting schedule, and BOOM – get measuring.
*Ok, this part gets technical, and it may be best to seek out training or technical assistance to take your group through this process. I’ve worked with clients who bring our team back in to monitor data collection and assist staff with reporting. Still, it often comes as a surprise to sector professionals that they already have exactly what is needed to develop a measurement plan that provides useful information for service and program planning, not just for funding reports. No need to start at square one.
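The activity-to-outcome-to-indicator linkage described above can be made concrete with a small sketch. This is one hypothetical way to structure it, with invented program content, not a standard sector format.

```python
# A hypothetical sketch of turning a logic model into a strategic
# data collection plan: each activity links to an intended outcome,
# and each outcome is documented by one or more indicators.
logic_model = {
    "activities": ["Weekly tutoring sessions"],
    "outcomes": {
        "Weekly tutoring sessions": "Improved reading proficiency",
    },
    "indicators": {
        "Improved reading proficiency": [
            "Reading assessment scores, collected each semester",
            "Teacher-reported grade-level progress",
        ],
    },
}

def collection_plan(model):
    """Flatten the model into (activity, outcome, indicator) rows."""
    rows = []
    for activity in model["activities"]:
        outcome = model["outcomes"][activity]
        for indicator in model["indicators"][outcome]:
            rows.append((activity, outcome, indicator))
    return rows

for row in collection_plan(logic_model):
    print(row)
```

The point of the exercise is the linkage itself: once every activity traces to an outcome and every outcome to at least one indicator, gaps in the measurement plan become visible immediately.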
You say social impact; I say long-term outcome.
I have to admit, one of my favorite topics of discussion with colleagues of late is how to navigate the veritable scrum of terms associated with assessing impact. The concept of measuring program outcomes has become conflated with the idea of demonstrating sweeping social impact. While a long-term outcome does refer to a change in a condition or status brought about by program activities, for the typical nonprofit that is not synonymous with causing a region-wide wave of change.
Andrew Harding of the London School of Economics and Political Science wrote a blog post on the difference between these two results in human welfare research to challenge this interchangeable use of terms. He describes an outcome as “a finite and often measurable change,” with a pre-defined reach and limited scope. Impact, on the other hand, “refers to a much broader effect – perhaps the effect information and advice had on ability to make an informed choice, empowerment or wider life experiences. Impact can be conceptualized as the longer term effect of an outcome.”
I think much anxiety around outcomes can be attributed to this misconception: that long-term outcomes must be far-reaching and epic, when in reality the change is only expected within the population that experienced your program. That said, leaders should absolutely challenge their organizations by setting ambitious long-term program goals. With a robust model, the condition-changing outcomes you aim to achieve will be in proportion to your program scope and the intensity of your resources. You cannot control every factor, but you must have the fortitude to regularly examine the results of what you are doing.
Reboot your expectations around measurement. Define success. Take ownership of your outcomes.
June 6, 2016 | Posted by M. P. under Education
In the midst of commencement season, there is good news from America’s Promise Alliance regarding high school graduation rates in the United States. In 2014, the rate hit a record high of 82.3 percent, with a reduction in the number of schools with low graduation rates. According to the Bureau of Labor Statistics, approximately 69 percent of new high school graduates continue their education at a college or university. Unfortunately, the cost of attending public two- and four-year colleges continues to increase, while state funding for these institutions remains below pre-recession levels.
A May 2016 brief from the Center on Budget and Policy Priorities explores how the cuts to higher education funding have resulted in these ever-increasing costs being passed on to students and families. Twenty states have cut funding by 20 percent or more since the recession. In Pennsylvania, per-student funding is 33 percent less than it was during 2007-08, even as tuition has increased.
Nationally, tuition increased nearly 30 percent from 2007-08 to 2014-15, while median income decreased by 6.5 percent over the same period. Tuition alone for an incoming student at my (private, four-year) alma mater has increased 348 percent since my own freshman year (total inflation between January of that year and January 2016 calculated to be 86 percent). Not surprisingly, the average debt that students carry out of public four-year colleges has increased by 18 percent since 2007-08. To put this in perspective, the authors point out that in the six years prior to the recession, average student debt increased just one percent.
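To see how much of that 348 percent tuition increase outpaced the 86 percent of total inflation, divide the growth factors rather than subtracting the percentages. A quick sketch of the arithmetic, using the two figures above:

```python
def real_increase(nominal_pct, inflation_pct):
    """Inflation-adjusted increase: divide the nominal growth factor
    by the inflation factor, then convert back to a percentage."""
    nominal_factor = 1 + nominal_pct / 100
    inflation_factor = 1 + inflation_pct / 100
    return (nominal_factor / inflation_factor - 1) * 100

# Tuition up 348% nominally, against 86% total inflation.
print(round(real_increase(348, 86), 1))  # roughly 140.9, i.e. ~141% in real terms
```

In other words, even after stripping out inflation, tuition at that one school more than doubled in real terms.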
A concern raised in this brief, Funding Down, Tuition Up: State Cuts to Higher Education Threaten Quality and Affordability at Public Colleges, is that while tuition and fees have increased, faculty positions have been reduced or replaced by part-time instructors, classes have been cut, services for the student body have been scaled back, and some campuses have closed altogether. Students and families are taking on more debt to meet the increased costs but appear to be getting less.
Analyses and opinions vary on what has driven the jump in the cost of higher education and on possible remedies. Perhaps it is time to adjust a system that has been in place too long – before another bubble forms (similar to mortgages). Perhaps student success outcomes should be tied to funding? There are no easy answers. Yet, although the evidence is anecdotal, there seems to be a swath of the population that makes too much for their academically successful teenager to receive aid, but too little to avoid high five-figure loans to offset the expense of a bachelor’s degree. Is this the new normal?