|June 25, 2013||Posted by M. P. under Evaluation, Management|
If you have worked in the realm of nonprofit program evaluation, you may be used to the slightly disconcerting experience of being warmly welcomed by about a quarter of those assembled in the meeting room while receiving a combination of frozen half-smiles and the side (or even evil) eye from the other 75 percent. I don’t blame them. According to conventional wisdom, it is likely that you are there to tell them how to do their jobs, add more paperwork to their jobs or cost them their jobs. Luckily, an increased focus on the value of data and how it can help secure funding, motivate donors and highlight program accomplishments has led to a much better understanding of its importance; in turn, many nonprofit organizations are already collecting information to assess performance and report outcomes.
Maria Townsend is a colleague and friend whom I met nearly a decade ago and have been lucky enough to partner with again on some recent projects. Over her years of research experience, both in academic settings and as an independent evaluator and consultant, she has guided many nonprofit programs through the data collection and reporting process. Since I have been after her for some time to write a guest post here, I thought a conversation about using data “for good” (especially as it is the topic of the month for the Nonprofit Blog Carnival) could provide some insights for small to mid-sized nonprofits.
Me: Reporting outcomes is a standard requirement for most funders of late, but many small nonprofits struggle to get the “evidence” that their funders, or donors, or board want to “prove” program effectiveness. Personally, I think that this is when it is best to have some professional guidance – the DIY approach may be too daunting and pull too much time and energy away from the daily operations of an organization with a staff of 10 or fewer. That said, hiring a research firm to handle all aspects of an evaluation may be a pipe dream for a small nonprofit, and even research consultants may be too pricey to serve as a long-term solution for a small organization. What is your advice to the small or start-up nonprofit?
Maria: There are low- or no-cost resources on data collection, survey research and program evaluation for nonprofits available online or through national and regional associations (the American Evaluation Association, the Canadian Evaluation Society, the University of Kansas’s Community Tool Box, the Outreach Evaluation Resource Center). The more educated nonprofit leaders are on what they need and what their office is capable of, the better prepared they are to choose someone to assist in designing and implementing an evaluation plan that will meet them where they are. Another option is looking at small grants from funding organizations and foundations that subsidize the building of evaluation capacity within an agency.
Me: Even with an evaluator on board – in house, consultant or pro bono through a capacity building grant – a gap may exist between what a nonprofit collects to measure its program(s) and what data or even what collection instruments a funder requires for reporting purposes. How can the two competing interests be addressed efficiently?
Maria: In these situations, it is important to do a data inventory. Think of it as if you are cleaning out your closets…
Me: As if data collection didn’t already have a reputation for being tedious and overwhelming.
Maria: It isn’t glamorous – but stay with me. Look at what is already hanging in the closet in terms of currently collected data. How can we coordinate what we currently have to meet the new reporting requirements? Do we have a mainstay survey that can be the foundation – the little black dress (or, for the fellas, the grey business suit) that you can dress up or pare down based on the occasion? Craft and insert questions to gather additional outcome data, or remove items that you don’t need. If you want to take the dress (or suit) to the next level, you add something substantial …
Me: Ah, the statement piece… adds pop, shows confidence, represents your style.
Maria: Right – take the data you have already collected to the next level by adding a focus group or site observation that would provide qualitative data to give context to the quantitative data. Maybe in this closet assessment you find that your wardrobe is out of season or too small – in the same way, you may discover that your existing data collection is no longer a good fit for the current reporting expectations and just adding a few “extras” will not cut it. You need to do some serious shopping – in other words, major revisions or additions to the data set. This is the time to get rid of what you will not need anymore, such as surveys that are relics of past funder requirements or of programs that have since changed in scope. This is where you revise which variables you are collecting and from where (intake forms, assessments, front-line staff notes, supervision reports) to streamline collection processes and data entry. This is also a good time to note what needs to be upgraded in the data collection plan – moving surveys from paper to web-based platforms, collapsing data collection timelines to be more efficient and determining whether staff are getting appropriate training on the process.
Me: How would you advise a client agency wondering how much data is enough data? There always seems to be too little data collected at first, which is often why we are there, but when the wish lists of what they want tracked come out — to use your closet analogy — it’s like going from a sock drawer to a walk-in.
Maria: An evaluation plan is a great help in listing a program’s goals and expected outcomes, paired with indicator statements that offer further clarification by listing the variables to be collected. Some evaluation plans also include the name of the person or persons responsible for collecting particular pieces of data and the preferred schedule for collection and reporting. This plan can be introduced in phases to lessen the “data shock” of collection, entry and storage on a staff new to the process. It also acts as a roadmap for the full transition of the collection and reporting process to the organization – from the executive leadership overseeing evaluation and research to the line staff collecting data on a daily basis.
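A plan like the one Maria describes doesn’t have to live in a static document. As a purely illustrative sketch (every goal, indicator, name and schedule below is made up), it can be kept as structured data, with each row pairing a goal with its indicator, variables, responsible person and schedule – and a phase tag to support the gradual rollout she mentions:

```python
# Hypothetical evaluation plan as structured data; all values are examples.
evaluation_plan = [
    {
        "goal": "Improve participants' job readiness",
        "indicator": "75% of participants complete a resume workshop",
        "variables": ["workshop_attendance", "resume_completed"],
        "responsible": "Program coordinator",
        "schedule": "Quarterly",
        "phase": 1,  # introduced first, to lessen "data shock"
    },
    {
        "goal": "Increase job placements",
        "indicator": "40% of completers employed within 6 months",
        "variables": ["employment_status", "placement_date"],
        "responsible": "Case managers",
        "schedule": "Semi-annually",
        "phase": 2,  # added once staff are comfortable with phase 1
    },
]

# Roll out only the current phase to staff new to the process.
phase_one = [row for row in evaluation_plan if row["phase"] == 1]
print(len(phase_one))  # 1
```

Keeping the plan in a filterable form makes the phased introduction trivial: staff see only the goals and variables they are responsible for right now.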
Me: What about the nonprofit that has a solid data collection plan in place but no one really knows what to do with the data because of, say…staff turnover or a change in reporting requirements?
Maria: First off, you need to clean it – make sure that the data you have is complete, fill in missing information, fix incorrect identifiers such as names, dates and service codes, and remove duplicate database entries. If you have filed hardcopies of the completed forms, you can pull them to double-check any issues with data entry. It is good to have someone detail-oriented on staff review the data and prepare it for analysis. Cleaning the data prior to analysis is the first and most important step in getting good results.
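For the spreadsheet-inclined, the cleaning steps Maria describes – standardizing codes, removing duplicates and flagging missing identifiers for follow-up – can be sketched in a few lines of Python with the pandas library. The column names and records here are, of course, hypothetical:

```python
import pandas as pd

# Hypothetical intake data with the usual problems: duplicate entries,
# inconsistent service codes and a missing name.
records = pd.DataFrame({
    "client_id": [101, 102, 102, 103, 104],
    "name": ["Ana", "Ben", "Ben", None, "Dee"],
    "service_code": ["TUT", "tut ", "tut ", "MEN", "TUT"],
    "intake_date": ["2013-01-05", "2013-01-07", "2013-01-07",
                    "2013-02-10", "2013-02-11"],
})

# 1. Standardize messy codes (trim whitespace, uppercase).
records["service_code"] = records["service_code"].str.strip().str.upper()

# 2. Remove exact duplicate entries.
records = records.drop_duplicates()

# 3. Flag rows still missing identifiers so your detail-oriented staffer
#    can pull the hardcopy forms and fill them in.
needs_review = records[records["name"].isna()]

print(len(records))        # 4 rows remain after de-duplication
print(len(needs_review))   # 1 row flagged for follow-up
```

Note the order: standardizing the codes first is what lets the duplicate check catch entries that differ only in stray whitespace or capitalization.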
Next, revisit your evaluation questions and what you said in your proposal or contract. It is easy to be the dog that sees the squirrel and takes off running in a different direction on a well-intentioned whim.
Me: I have one of those (dogs that is) – but with him it’s bunnies.
Maria: Keep it simple – what questions did you want to answer (what was the program’s impact, who did we reach, what program components were most effective) and what do the funders want to know? Those should be your priority for data analysis. Answer those questions first and then you can look for other interesting relationships or connections (those squirrels!) that may be helpful for program planning, such as differences in participation rates or outcomes based on sub-groups.
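To make the “answer the funder’s questions first” order concrete, here is a minimal pandas sketch – made-up data, and the subgroup column is an assumption – that computes the headline number before chasing any squirrels:

```python
import pandas as pd

# Hypothetical participant outcomes; "age_group" stands in for whatever
# sub-groups your program actually tracks.
df = pd.DataFrame({
    "age_group": ["teen", "teen", "adult", "adult", "teen", "adult"],
    "completed_program": [1, 0, 1, 1, 1, 0],
})

# First, the funder's question: overall completion rate.
overall = df["completed_program"].mean()

# Then the squirrels: does completion differ by sub-group?
by_group = df.groupby("age_group")["completed_program"].mean()

print(round(overall, 2))  # 0.67
print(by_group)
```

The same two-step habit scales up: report the contracted outcomes first, then mine the sub-group breakdowns for program-planning insights.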
Me: What about the nonprofit that fears the dark side of data? They know they do good work but their reports show that their overall impact is small, or their program benefits are deemed not “important” enough in this time of growing need and declining resources. It is a realistic fear.
Maria: That is why the development and communications/marketing team should be at the table as far as the data collection and reporting process. That is their niche, right?
Me: Right – storytelling is more compelling, not to mention more authentic, when there is performance data to back it up. Outputs and outcomes data shouldn’t be trotted out only once a year in a few pie charts in the annual report; they are an integral part of building a marketing and community engagement strategy. Many variables can look ridiculous when reported out of context but are valuable as part of a larger set measuring a condition, such as overall health, mobility, academic or vocational achievement or quality of life. Communications professionals know how to use performance data to enrich the story of their nonprofit’s impact. Let them.
|June 19, 2013||Posted by M. P. under Federal Government, News, Policy, Research|
An April 2013 briefing to Congress on surveys and statistics focused on the problematic trend of declining response rates to federal surveys, including the American Community Survey and the National Survey of Children’s Health. The briefing, Policy Makers & Businesses Need Reliable Information and Data: The Impact of Falling Response Rates to Social Surveys and What Can Be Done, organized by the American Academy of Political and Social Science (AAPSS), outlined the risks to research and the impact on policy-making if response rates to surveys on health, employment and household characteristics continue to decline. The largest risk is that of biased results. Other issues:
- Nonresponse rates currently range from 30 to more than 60 percent. This is an all-time high.
- Over 60 percent of nonresponses were refusals, while approximately another quarter were due to the inability to contact the intended recipient.
- Young single-person households, minorities, renters and the poor were less likely to respond.
- One-time surveys have higher nonresponse rates than more complex longitudinal studies that follow the same group of respondents over a period of time.
While incentives (such as a gift card or a small amount of money) for completing and returning a survey would boost response rates, they would also increase costs – a risky proposition in an atmosphere of austerity. The authors of a related paper, Where Do We Go from Here? Nonresponse and Social Measurement, published in the January 2013 volume of AAPSS’s The Annals, suggest that a solution to this growing problem is a strategic outreach plan to inform both politicians and the public of the purpose of national surveys. A clear explanation of what the data are used for, as well as of the regulations and protocols in place to prevent them from being presented in anything other than aggregate form, could have a favorable impact on public perception. Unfortunately for these and other large-scale surveys, the recent news of metadata collected absent individualized suspicion may have even the most tech-savvy, survey-loving among us rethinking issues of privacy, transparency and information storage and retrieval.
Perhaps in the future these surveys that, by the way, inform funding decisions on infrastructure, education, and transportation to name a few, will be deemed too intrusive and/or obsolete and left behind. Funding and other governing decisions can then be made based on variables extracted from all that we have uploaded onto the digital data heap. So, will big data replace big surveys? Will traditional statistical methods be successful in tracking, analyzing and accurately reporting big data to inform policies at the federal, state and local level?
|June 10, 2013||Posted by M. P. under Policy, Program Model, Youth Development|
Summertime employment has traditionally been seen as a rite of passage, a builder of character and a source of funding for teenage frolic, but even a part-time job serving burgers or minding the retail racks isn’t easy to come by these days. In 2000 the average teen summer employment rate was nearly 52 percent; by 2012 it had dropped to 30 percent, and just a quarter of teens reported having a paying job.*
The report, The Dismal State of the Nation’s Teen Summer Job Market, 2008-2012, and the Outlook for the Summer of 2013, from the Center for Labor Market Studies at Northeastern University, details the decline in teenage employment rates, the weakness of the current job market and what both mean for young adults. A key take-away from the report is that household income was a better predictor of youth employment than race: low-income youth were least likely to be working. As family income rose, so did teen employment rates, with 21 percent of youth from households earning under $20,000 reporting summer employment compared to 38 percent from households earning between $100,000 and $150,000 a year.
The sad state of teen summer employment isn’t surprising considering the decline of the overall youth employment rate. A policy report from the Annie E. Casey Foundation on the growing number of teens and young adults both unemployed and not in school – referred to as disconnected youth in the brief – found that such youth were most likely to be from low income families. Specifically,
- 21 percent of low-income (under $20,000/household) 16-to-19-year-old youth were disconnected compared to 8 percent of their counterparts in families with incomes over $100,000; and
- among 20-to-24-year-olds from low-income families, 30 percent were not in school or employed, compared to 10 percent of those from families earning $100,000 or more.
The Pennsylvania employment rate for young adults (20 to 24 years old) is approximately 62 percent; for teens 16 to 19, it is 39 percent.
The authors of the report, Youth and Work: Restoring Teen and Young Adult Connections to Opportunity, challenge policy makers to find cooperative cross-system approaches to reconnecting and reengaging youth with education and employment opportunities. An approach that is flexible enough to use the strengths of the community where it operates, but grounded in proven outreach and engagement strategies that go beyond mere job-matching, might have a chance – if the funding survives.
If you are interested in learning more about models of youth employment initiatives, check out Best Practices for Youth Employment Programs: A Synthesis of Current Research from What Works, Wisconsin.
*Source: Andrew Sum, Ishwar Khatiwada, Walter McHugh, and Sheila Palma, The Dismal State of the Nation’s Teen Summer Job Market, 2008-2012, and the Outlook for the Summer of 2013, Center for Labor Market Studies, Northeastern University, May 2013.