Inmate Education Programs: Save Dollars, Decrease Risk of Re-Offending, Help Employment Odds

The latest analysis of educational and vocational programs in prisons indicates that they lead to employment and decrease the risk of recidivism.

American prisons have offered education programs of one kind or another since the end of the 18th century, and though funding decreased during the 1980s and 1990s, the majority of correctional institutions still offer some type of educational and vocational programming. While opinions vary on what prisons and time spent within them should “look like,” research in the correctional field indicates that vocational training and educational programs increase the likelihood that a participant will maintain a law-abiding existence upon their return to the community. The most recent addition to this research is the RAND report, Evaluating the Effectiveness of Correctional Education: A Meta-Analysis of Programs That Provide Education to Incarcerated Adults, by Lois M. Davis, Robert Bozick, Jennifer L. Steele, Jessica Saunders and Jeremy N. V. Miles, which discusses the impact of educating persons housed in correctional institutions and how the most effective programs could be administered across different settings.

Studies have shown that employment is a major predictor of recidivism. RAND’s meta-analysis (which used studies with treatment and comparison groups) spans 1980 through 2011 and includes programs funded under the Second Chance Act of 2007, which aimed to improve outcomes (such as employment) through education for inmates planning to return to their communities upon completion of their term. Some highlights (the full report is available at the RAND website):

  • Participation in correctional education resulted in a decreased risk of recidivism (the likelihood of re-offending) after release.
  • Although any participation in educational/vocational programming increased the likelihood of post-release employment, the likelihood was higher for those in vocational training than for those in academic programs.
  • Initial cost comparisons found that prison education saves money: the lower risk of recidivism among program participants more than offsets program costs when compared with the cost of re-incarcerating re-offending inmates.

Not only does such research inform (or confirm) policy decisions impacting state and federal prisons, it has opened the door for start-ups, and even nonprofits, to provide more convenient and potentially more secure ways to educate inmates.  Prison education, wireless technology and social enterprise – keep an eye on that intersection in the upcoming year.

NOTE: If you are interested in reading more about correctional education, employment training and returning to the community post-incarceration, check out the documents from the 2008 Reentry Roundtable sponsored by the Prisoner Reentry Institute at the John Jay College of Criminal Justice and the Urban Institute.

Photo by User:Nyttend (Own work) [Public domain], via Wikimedia Commons

Talking Data – Collection, Reporting and…the Closet

If you have worked in the realm of nonprofit program evaluation, you may be used to the slightly disconcerting experience of being warmly welcomed by about a quarter of those assembled in the meeting room while receiving a combination of frozen half-smiles and the side (or even evil) eye from the other 75 percent. I don’t blame them. According to conventional wisdom, it is likely that you are there to tell them how to do their jobs, add more paperwork to their jobs or cost them their jobs. Luckily, an increased focus on the value of data and how it can help secure funding, motivate donors and highlight program accomplishments has led to a much better understanding of its importance; in turn, many nonprofit organizations are already collecting information to assess performance and report outcomes.

Maria Townsend is a colleague and friend whom I met nearly a decade ago and have been lucky enough to partner with again on some recent projects. Over her years of research experience, both in an academic setting and as an independent evaluator and consultant, she has guided many nonprofit programs through the data collection and reporting process. I have been after her for some time to write a guest post here, so I thought a conversation about using data “for good” (especially as it is the topic of the month for the Nonprofit Blog Carnival) could provide some insights for small to mid-sized nonprofits.

Me: Reporting outcomes is a standard requirement for most funders of late, but many small nonprofits struggle to get the “evidence” that their funders, donors or board want to “prove” program effectiveness. Personally, I think this is when it is best to have some professional guidance; the DIY approach may be too daunting and pull too much time and energy away from the daily operations of an organization with a staff of 10 or fewer. That said, hiring a research firm to handle all aspects of an evaluation may be a pipe dream for a small nonprofit, and even research consultants may be too pricey to serve as a long-term solution for a small organization. What is your advice to the small or start-up nonprofit?

Maria: There are low- or no-cost resources on data collection, survey research and program evaluation for nonprofits available online or through national and regional associations (the American Evaluation Association, the Canadian Evaluation Society, the Community Tool Box at the University of Kansas, the Outreach Evaluation Resource Center). The more educated nonprofit leaders are about what they need and what their office is capable of, the better prepared they are to choose someone to assist in designing and implementing an evaluation plan that will meet them where they are. Another option is looking at small grants from funding organizations and foundations that subsidize the building of evaluation capacity within an agency.

Me: Even with an evaluator on board (in-house, consultant or pro bono through a capacity-building grant), a gap may exist between what a nonprofit collects to measure its program(s) and what data, or even what collection instruments, a funder requires for reporting purposes. How can the two competing interests be addressed efficiently?

Maria:  In these situations, it is important to do a data inventory. Think of it as if you are cleaning out your closets…

Me: As if data collection didn’t already have a reputation for being tedious and overwhelming.

Maria: It isn’t glamorous, but stay with me. Look at what is already hanging in the closet in terms of currently collected data. How can we coordinate what we currently have to meet the new reporting requirements? Do we have a mainstay survey that can be the foundation, the little black dress (or for the fellas, the grey business suit) that you can dress up or pare down based on the occasion? Craft and insert questions to gather additional outcome data, or remove items that you don’t need. If you want to take the dress (or suit) to the next level, you add something substantial …

Me:  Ah, the statement piece… adds pop, shows confidence, represents your style.

Maria: Right. Take the data you have already collected to the next level by adding a focus group or site observation that would provide qualitative data to give context to the quantitative data. Maybe in this closet assessment you find that your wardrobe is out of season or too small; in the same way, you may discover that your existing data collection is no longer a good fit for the current reporting expectations and just adding a few “extras” will not cut it. You need to do some serious shopping, in other words, major revisions or additions to the data set. This is the time to get rid of what you will not need anymore, such as surveys that are relics of past funder requirements or from programs that have since changed in scope. This is where you revise what variables you are collecting and from where (intake forms, assessments, front-line staff notes, supervision reports) to streamline collection processes and data entry. This is also a good time to make note of what needs to be upgraded in the data collection plan: moving surveys from paper to web-based platforms, collapsing data collection timelines to be more efficient and determining whether staff are getting appropriate training on the process.

Me: How would you advise a client agency wondering how much data is enough data? There always seems to be too little data collected at first, which is often why we are there, but when the wish lists of what they want tracked come out — to use your closet analogy — it’s like going from a sock drawer to a walk-in.

Maria: An evaluation plan is a great help in listing a program’s goals and expected outcomes, paired with indicator statements that offer further clarification by listing the variables to be collected. Some evaluation plans also include the name of the person or persons responsible for collecting particular pieces of data and the preferred schedule for collection and reporting. This plan can be introduced in phases to lessen the “data shock” of collection, entry and storage on a staff new to the process. It also acts as a roadmap for the full transition of the collection and reporting process to the organization, from executive leadership overseeing evaluation and research to the line staff collecting data on a daily basis.
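For the spreadsheet-inclined, a plan like the one Maria describes can be kept as a simple structured list. This is only a sketch of the shape: the goals, indicators, owners, schedules and phase numbers below are all hypothetical.

```python
# A hypothetical evaluation plan: each entry pairs a program goal with an
# indicator statement (the variables to collect), the person responsible,
# the collection schedule, and the rollout phase it belongs to.
evaluation_plan = [
    {"goal": "Increase job readiness",
     "indicator": "Participants completing vocational training",
     "owner": "Program coordinator",
     "schedule": "Quarterly",
     "phase": 1},
    {"goal": "Improve client well-being",
     "indicator": "Pre/post scores on a quality-of-life survey",
     "owner": "Case managers",
     "schedule": "At intake and exit",
     "phase": 2},
]

def active_items(plan, current_phase):
    """Return the items introduced in the given rollout phase or earlier,
    so collection can be phased in to lessen the 'data shock'."""
    return [row for row in plan if row["phase"] <= current_phase]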

Me: What about the nonprofit that has a solid data collection plan in place but no one really knows what to do with the data because of, say…staff turnover or a change in reporting requirements?

Maria: First off, you need to clean it: make sure that the data you have is complete, fill in missing values, fix incorrect identifiers such as names, dates and service codes, and remove duplicate entries in your databases. If you have filed hard copies of the completed forms, you can pull them to double-check any issues with data entry. It is good to have someone on staff who is very detail-oriented to review the data and prepare it for analysis. Cleaning the data prior to analysis is the first and most important step in getting good results.

Next, revisit your evaluation questions and what you said in your proposal or contract. It is easy to be the dog that sees the squirrel and goes running in a different direction on a well-intentioned whim.

Me: I have one of those (dogs that is) – but with him it’s bunnies.

Maria:  Keep it simple – what questions did you want to answer (what was the program’s impact, who did we reach, what program components were most effective) and what do the funders want to know? Those should be your priority for data analysis. Answer those questions first and then you can look for other interesting relationships or connections (those squirrels!) that may be helpful for program planning, such as differences in participation rates or outcomes based on sub-groups.
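The cleaning pass Maria describes (completing records, fixing identifiers, removing duplicates) can be sketched in a few lines of Python. The field names and records here are hypothetical, just to show the idea:

```python
# Minimal data-cleaning pass over intake records: drop duplicate entries
# and set aside rows with missing required fields for staff review.
REQUIRED = ("name", "intake_date", "service_code")

def clean(records):
    seen = set()
    cleaned, needs_review = [], []
    for rec in records:
        key = tuple(rec.get(f) for f in REQUIRED)
        if key in seen:                 # duplicate entry: skip it
            continue
        seen.add(key)
        if any(not rec.get(f) for f in REQUIRED):
            needs_review.append(rec)    # incomplete: pull the hard copy
        else:
            cleaned.append(rec)
    return cleaned, needs_review

records = [
    {"name": "A. Smith", "intake_date": "2013-01-05", "service_code": "ED1"},
    {"name": "A. Smith", "intake_date": "2013-01-05", "service_code": "ED1"},  # duplicate
    {"name": "B. Jones", "intake_date": "", "service_code": "ED2"},            # missing date
]
ok, review = clean(records)  # ok has 1 complete record; review has 1 to fix
```

A detail-oriented staffer would then work through the review pile against the filed hard copies before any analysis begins.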

Me: What about the nonprofit that fears the dark side of data? They know they do good work, but their reports show that their overall impact is small, or their program benefits are deemed not “important” enough in this time of growing need and declining resources. It is a realistic fear.

Maria: That is why the development and communications/marketing team should be at the table for the data collection and reporting process. That is their niche, right?

Me: Right. Storytelling is more compelling, not to mention authentic, when there is performance data to back it up. Outputs and outcomes data shouldn’t be trotted out only once a year in a few pie charts in the annual report; they are an integral part of building a marketing and community engagement strategy. Many variables could look ridiculous when reported out of context but are valuable as part of a larger set measuring a condition, such as overall health, mobility, academic or vocational achievement or quality of life. Communications professionals know how to use performance data to enrich the story of their nonprofit’s impact. Let them.


Implementing, Modeling and Managing a Measurement Culture

Regardless of the outcome of the upcoming election, the nonprofit social services sector, from mental health clinics to food banks, will still be challenged to meet increased need with fewer resources and limited funding. Savvy nonprofits have already moved toward an evaluation culture, embracing logic models and short- and long-term impact data to illustrate why (and how) their programs work. Organizational innovation and unique program accomplishments are practically prerequisites for making a successful connection with alternate funding sources, including corporate partnerships, yet nonprofits are still struggling to identify and quantify their impact on clients, the community and the overall condition they work to modify. Performance measurement, logic models and outcomes are not new or faddish terms, so why the hesitation?

The report Tough Times, Creative Measures: What Will it Take to Help the Social Sector Embrace an Outcomes Culture?, from the Urban Institute, came out of a fall 2011 event that brought together leaders from the government, nonprofit, philanthropy and business sectors to discuss data-driven management in social and human services and the challenges of successfully utilizing a performance management system. Some of the challenges identified:

The difficulty of turning away from the organization’s immediate needs to plan and implement a measurement system. No matter how small the agency, the demands on the executive director’s time and talent are immense. Writing up an organization-wide evaluation strategy and implementation plan, including models, indicators, instruments and data collection plans, is an enormous amount of work, and that is before the pilot testing, analysis and reporting. The director’s role should be to communicate progress and needs to the board as they guide the agency through this kind of culture change, not to create every step of the process.

The reality that sometimes the best outcomes may not be rewarded. Conspiracy theories and snarky excuses aside, well-crafted stories, high-profile connections and nonprofits with missions or target audiences more interesting or appealing than your own may have an easier time selling their effectiveness. That said, incomplete or inaccurate information on program impact won’t help remedy the situation.

Some nonprofits may be waiting for the trends to flip and the tides to turn. Why move heaven and earth within your organization to embrace a culture that may seem like a phase (especially to long-time employees who have seen edicts from funders come and go)? Buy-in for outcomes tracking and reporting may be based on acceptance of hoop-jumping norms, not the real value of performance measurement to the overall health of the organization. It is time for boards and directors to be brave and commit to an organizational culture change, but be prepared to illustrate how it will benefit staff and (more importantly) clients.

In response to these and other impediments (I mean realities), the symposium attendees identified the strategic areas that would have the most impact in encouraging and implementing a data-centered culture: human and financial capital (the tenacity and the tab), creative advocacy (sector giants to back this shift), and ready-to-use systems and tools so directors don’t have to start from square one. How can nonprofit leaders better model and manage a measurement culture? Why are some nonprofits hesitant to embrace this shift?

Getting Acquainted with Evaluation (Careful, it Smells Fear)

This spring I’ve been lucky enough to be working with a colleague on a multi-program evaluation project after an extended absence from the world of outcome measurement. It is a bit like riding a bicycle, in that you never forget HOW to do it, but it seems I did forget the pleasure found in working with agency staff as they help inform the evaluation plan and models, assist in identifying key indicators and witness the first round of data come in for review. Each project allows me to get up close and personal with a new nonprofit organization and to meet exemplary, dedicated nonprofit professionals at all phases of their careers, but there is something about evaluation that really gets to the essence of a nonprofit. I am, indeed, glad to be back in the measurement mix.

My colleague shared this link with me, and because there is so much I love about this succinct, on-point article, Six Pieces of Advice to Demystify Evaluation by Johanna Morariu, Director of the Innovation Network, I wanted to post on it rather than just send the link off into the tweetosphere.

No matter where your organization is in the evaluation (or, for that matter, strategic) planning process, start making data collection your friend. Immediately. It’s not going away (ever), there are more tools than ever before to help with it, and even if you hire an outside firm to conduct your evaluation, eventually their contract ends and it falls to your organization to sustain the work. Don’t spend a dime on a contract or software until you know your organization will be able to keep it going. Not to worry, though: a thorough consultant involves you and your staff in each step of the process and will provide the necessary technical assistance during the transition to ensure you are able to take over the reins.

So, feel free to make eye contact with and extend a hand to that evaluation. Soon, when you are knee deep in useful data for your board, clients, funders and community supporters you won’t be able to remember life without it.