Making the Most of Each Budget Dollar

One of the few things both sides of the political aisle can agree on is getting better performance out of programs, as seen in the move toward evidence-based policymaking under prior administrations. The federal government could lead by example, suggests Brookings Fellow Andrew Feldman, by creating a bipartisan team focused on improving the performance of federally funded programs, giving states the freedom to implement innovative programming and shoulder more accountability for results, and reducing hurdles to program evaluation while encouraging the incorporation of data analytics into regular reporting. In a time of new federal spending priorities, budget shortfalls, increased need, and much uncertainty, states must get serious about investing in programs that work by actively incorporating outcomes research into their policymaking.

A report from the Pew-MacArthur Results First Initiative, How States Engage in Evidence-Based Policymaking: A National Assessment, assessed each state's level of commitment to, and action on, using research to guide decision-making related to behavioral health, criminal justice, juvenile justice, and child welfare policy. The study scored each state and the District of Columbia on the extent to which they incorporated research findings into policy, including defining categories of evidence, conducting program cost-benefit analyses, and identifying specific funding for evidence-based programming. According to the brief, 50 states have taken some sort of action through the allocation of funding for programs supported by research findings, while 42 states report outcomes in the budget annually. Just 17 states compare program outcomes and costs.

While Washington, Utah, Minnesota, Connecticut, and Oregon lead the nation in evidence-based policymaking, Pennsylvania is one of 11 “established” states, with 13 evidence-based policymaking actions (three advanced and ten minimum) across the four policy areas studied. According to the assessment scorecard, Pennsylvania uses advanced research-driven policy actions most often in the juvenile justice sector.

Read more about the levels of evidence-based policymaking and individual state scorecards in the report, available on the Pew Charitable Trusts website. Case studies are also available on how to design contracts and grants to require outcomes reporting tied to program performance.

Talking Data – Collection, Reporting and…the Closet


If you have worked in the realm of nonprofit program evaluation, you may be used to the slightly disconcerting experience of being warmly welcomed by about a quarter of those assembled in the meeting room while receiving a combination of frozen half-smiles and the side (or even evil) eye from the other 75 percent. I don’t blame them. According to conventional wisdom, it is likely that you are there to tell them how to do their jobs, add more paperwork to their jobs, or cost them their jobs. Luckily, an increased focus on the value of data and how it can help secure funding, motivate donors, and highlight program accomplishments has led to a much better understanding of its importance, and, in turn, many nonprofit organizations are already collecting information to assess performance and report outcomes.

Maria Townsend is a colleague and friend whom I met nearly a decade ago and have been lucky enough to partner with again on some recent projects. Over her years of research experience, both in an academic setting and as an independent evaluator and consultant, she has guided many nonprofit programs through the data collection and reporting process. Since I have been after her for some time to write a guest post here, I thought a conversation about using data “for good” (especially as it is the topic of the month for the Nonprofit Blog Carnival) could provide some insights for small to mid-sized nonprofits.

Me: Reporting outcomes is a standard requirement for most funders of late, but many small nonprofits struggle to get the “evidence” that their funders, donors, or board want to “prove” program effectiveness. Personally, I think this is when it is best to have some professional guidance – the DIY approach may be too daunting and pull too much time and energy away from the daily operations of an organization with a staff of 10 or fewer. That said, hiring a research firm to handle all aspects of an evaluation may be a pipe dream for a small nonprofit, and even research consultants may be too pricey to serve as a long-term solution for a small organization. What is your advice to the small or start-up nonprofit?

Maria: There are low- or no-cost resources on data collection, survey research, and program evaluation for nonprofits online or through national or regional nonprofit associations (the American Evaluation Association, the Canadian Evaluation Society, the Community Toolbox at the University of Kansas, the Outreach Evaluation Resource Center). The more educated a nonprofit leader is on what they need and what their office is capable of, the better prepared they are to choose someone to assist in designing and implementing an evaluation plan that will meet them where they are. Another option is looking at small grants from funding organizations and foundations that subsidize the building of evaluation capacity within an agency.

Me: Even with an evaluator on board – in-house, consultant, or pro bono through a capacity-building grant – a gap may exist between what a nonprofit collects to measure its program(s) and what data, or even what collection instruments, a funder requires for reporting purposes. How can the two competing interests be addressed efficiently?

Maria:  In these situations, it is important to do a data inventory. Think of it as if you are cleaning out your closets…

Me: As if data collection didn’t already have a reputation for being tedious and overwhelming.

Maria: It isn’t glamorous – but stay with me. Look at what is already hanging in the closet in terms of currently collected data. How can we coordinate what we currently have to meet the new reporting requirements? Do we have a mainstay survey that can be the foundation, the little black dress (or, for the fellas, the grey business suit) that you can dress up or pare down based on the occasion? Craft and insert questions to gather additional outcome data, or remove items that you don’t need. If you want to take the dress (or suit) to the next level, you add something substantial…

Me:  Ah, the statement piece… adds pop, shows confidence, represents your style.

Maria: Right – take the data you have already collected to the next level by adding a focus group or site observation that would provide qualitative data to add context to the quantitative data. Maybe in this closet assessment you find that your wardrobe is out of season or too small – in the same way, you may discover that your existing data collection is no longer a good fit for the current reporting expectations and just adding a few “extras” will not cut it. You need to do some serious shopping or, in other words, major revisions or additions to the data set. This is the time to get rid of what you will not need anymore, such as surveys that are relics of past funder requirements or from programs that have since changed in scope. This is where you revise what variables you are collecting and from where (intake forms, assessments, front-line staff notes, supervision reports) to streamline collection processes and data entry. This is also a good time to make note of what needs to be upgraded in the data collection plan – moving surveys from paper to web-based platforms, collapsing data collection timelines to be more efficient, and determining whether staff are getting appropriate training on the process.
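
To make the closet metaphor a little more concrete, here is a minimal sketch of what such a data inventory might look like in Python; the surveys, variables, and funder requirements are hypothetical placeholders, and a simple spreadsheet would work just as well:

```python
# A rough data inventory: what is collected, where it comes from, and whether
# the current reporting requirements still call for it. All names are made up.
inventory = [
    {"source": "intake form", "variable": "housing_status",  "still_required": True},
    {"source": "exit survey", "variable": "satisfaction",    "still_required": True},
    {"source": "exit survey", "variable": "legacy_q7",       "still_required": False},  # relic of a past grant
    {"source": "case notes",  "variable": "employment_gain", "still_required": True},
]

# What can be retired, and what the (hypothetical) new funder report needs
# that is not being collected yet.
required_by_funder = {"housing_status", "satisfaction", "employment_gain", "school_enrollment"}
collected = {row["variable"] for row in inventory}

print("Retire:", [row["variable"] for row in inventory if not row["still_required"]])
print("Add:", sorted(required_by_funder - collected))
```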

Me: How would you advise a client agency wondering how much data is enough data? There always seems to be too little data collected at first, which is often why we are there, but when the wish lists of what they want tracked come out — to use your closet analogy — it’s like going from a sock drawer to a walk-in.

Maria: An evaluation plan is a great help in listing a program’s goals and expected outcomes, paired with indicator statements that offer further clarification by listing the variables to be collected. Some evaluation plans also include the name of the person or persons responsible for collecting particular pieces of data and the preferred schedule for collection and reporting. This plan can be introduced in phases to lessen the “data shock” of collection, entry, and storage for a staff new to the process. It also acts as a roadmap for the full transition of the collection and reporting process to the organization – from the executive leadership overseeing evaluation and research to the line staff collecting data on a daily basis.
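
As a rough illustration of what such a plan might contain, here is a minimal sketch in Python; the goals, indicators, owners, and schedules are hypothetical, and many organizations would keep this in a simple table instead:

```python
# Minimal sketch of a phased evaluation plan (all entries are hypothetical).
evaluation_plan = [
    {"phase": 1, "goal": "Increase job readiness",
     "outcome": "Participants complete the training series",
     "indicator": "% of enrollees completing all workshop sessions",
     "collected_by": "program coordinator", "schedule": "monthly"},
    {"phase": 2, "goal": "Increase job readiness",
     "outcome": "Participants gain employment",
     "indicator": "% of completers employed 90 days after exit",
     "collected_by": "case managers", "schedule": "quarterly"},
]

# Introduce one phase at a time to lessen the "data shock" on new staff.
for item in evaluation_plan:
    if item["phase"] == 1:
        print(item["indicator"], "|", item["collected_by"], "|", item["schedule"])
```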

Me: What about the nonprofit that has a solid data collection plan in place but no one really knows what to do with the data because of, say…staff turnover or a change in reporting requirements?

Maria: First off, you need to clean it: make sure that the data you have is complete, fill in missing values, fix incorrect identifiers such as names, dates, and service codes, and remove duplicate entries from your databases. If you have filed hardcopies of the completed forms, you can pull them to double-check any issues with data entry. It is good to have someone on staff who is very detail-oriented to review the data and prepare it for analysis. Cleaning the data prior to analysis is the first, and most important, step in getting good results.
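
For an organization whose records end up in a spreadsheet or CSV export, the cleaning steps Maria describes might look roughly like this sketch using Python and the pandas library; the file and column names are hypothetical:

```python
import pandas as pd

# Load a hypothetical export of service records.
records = pd.read_csv("service_records.csv")

# Standardize identifying fields before hunting for problems.
records["name"] = records["name"].str.strip().str.title()
records["service_code"] = records["service_code"].str.upper()
records["service_date"] = pd.to_datetime(records["service_date"], errors="coerce")

# Flag incomplete rows so staff can pull the hardcopy forms and fix them.
incomplete = records[records[["name", "service_date", "service_code"]].isna().any(axis=1)]
incomplete.to_csv("needs_review.csv", index=False)

# Remove duplicate entries created by double data entry.
records = records.drop_duplicates(subset=["client_id", "service_date", "service_code"])
records.to_csv("service_records_clean.csv", index=False)
```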

Next, revisit your evaluation questions and what you said in your proposal or contract. It is easy to be the dog that sees the squirrel and takes off running in a different direction on a well-intentioned whim.

Me: I have one of those (dogs, that is) – but with him it’s bunnies.

Maria: Keep it simple – what questions did you want to answer (what was the program’s impact, who did we reach, which program components were most effective) and what do the funders want to know? Those should be your priorities for data analysis. Answer those questions first, and then you can look for other interesting relationships or connections (those squirrels!) that may be helpful for program planning, such as differences in participation rates or outcomes across sub-groups.
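
As a rough sketch of that order of operations in Python with pandas (the column names are hypothetical): answer the contracted questions first, then go looking for the squirrels.

```python
import pandas as pd

outcomes = pd.read_csv("service_records_clean.csv")

# Priority questions first: who did we reach, and what was the impact?
print("Participants served:", outcomes["client_id"].nunique())
print("Completion rate:", outcomes["completed_program"].mean())

# Then the interesting extras: do participation and outcomes differ by sub-group?
print(outcomes.groupby("age_group")["completed_program"].agg(["count", "mean"]))
```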

Me: What about the nonprofit that fears the dark side of data? They know they do good work, but their reports show that their overall impact is small, or their program benefits are deemed not “important” enough in this time of growing need and declining resources. It is a realistic fear.

Maria: That is why the development and communications/marketing team should be at the table for the data collection and reporting process. That is their niche, right?

Me: Right – storytelling is more compelling, not to mention more authentic, when there is performance data to back it up. Output and outcome data shouldn’t be trotted out only once a year in a few pie charts in the annual report; it is an integral part of building a marketing and community engagement strategy. Many variables could look ridiculous when reported out of context but are valuable as part of a larger set measuring a condition, such as overall health, mobility, academic or vocational achievement, or quality of life. Communications professionals know how to use performance data to enrich the story of their nonprofit’s impact. Let them.


Implementing, Modeling and Managing a Measurement Culture

Regardless of the outcome of the upcoming election, the nonprofit social services sector – from mental health clinics to food banks – will still be challenged to meet increased need with fewer resources and limited funding. Savvy nonprofits have already moved toward an evaluation culture, embracing logic models and short- and long-term impact data to illustrate why (and how) their programs work. Organizational innovation and unique program accomplishments are practically prerequisites for making a successful connection with alternate funding sources, including corporate partnerships, yet nonprofits are still struggling to identify and quantify their impact on clients, the community, and the overall condition they work to change. “Performance measurement,” “logic model,” and “outcomes” are not new or faddish terms, so why the hesitation?

The report Tough Times, Creative Measures: What Will it Take to Help the Social Sector Embrace an Outcomes Culture?, from the Urban Institute, came out of a Fall 2011 event that brought together leaders from the government, nonprofit, philanthropy, and business sectors to discuss data-driven management in social and human services and the challenges of successfully using a performance management system. Some of the challenges identified:

The difficulty of turning away from the organization’s immediate needs to plan and implement a measurement system. No matter how small the agency, the demands on the executive director’s time and talent are immense. Writing up an organization-wide evaluation strategy and implementation plan, including models, indicators, instruments, and data collection plans, is an enormous amount of work – and that is before the pilot testing, analysis, and reporting. The director’s role should be to communicate progress and needs to the board as they guide the agency through this kind of culture change, not to create every step of the process.

The reality that sometimes the best outcomes may not be rewarded. Conspiracy theories and snarky excuses aside, well-crafted stories, high-profile connections, and nonprofits with missions or target audiences that are more interesting or appealing than your own may have an easier time selling their effectiveness. That said, incomplete or inaccurate information on program impact won’t help remedy the situation.

Some nonprofits may be waiting for the trends to flip and the tides to turn. Why move heaven and earth within your organization to embrace a culture that may seem like a phase (especially to long-time employees who have seen edicts from funders come and go)? Buy-in for outcomes tracking and reporting may be based on acceptance of the hoop-jumping norms, not the real value of performance measurement to the overall health of the organization. It is time for boards and directors to be brave and commit to an organizational culture change – but be prepared to illustrate how it will be beneficial for staff and (more importantly) clients.

In response to these and other impediments, I mean realities, the symposium attendees identified the strategic areas that would have the most impact in encouraging and implementing a data-centered culture: human and financial capital – the tenacity and the tab; creative advocacy – sector giants to back this shift; and ready-to-use systems and tools so directors don’t have to start from square one. How can nonprofit leaders better model and manage a measurement culture? Why are some nonprofits hesitant to embrace this shift?


Series Provides Issue Analysis, Resources for Grantmakers

Grantmakers for Effective Organizations (GEO) and the Association of Small Foundations (ASF) have partnered to create and distribute a series of tear sheets on timely grantmaking issues such as organizational learning and new ways to use evaluation.

Currently, two tear sheets are available: Using Evaluation to Become an Effective Learning Organization and Engaging Stakeholders for More Effective Grantmaking. These briefs provide useful information on critical issues, terminology, core questions, and action items, and suggest additional resources for further research.

The tear sheet series is available to members on the ASF website and in PDF format on the GEO site.