Monthly Archives: August 2011
|August 26, 2011||Posted by M. P. under Evaluation, Research||
Determining the overall success of a campaign for social change or other advocacy efforts, whether in conjunction with outside funders or for your own internal performance measures, is a far different undertaking than measuring the impact of program services. One reason for this gap is that all advocacy is inherently political. Advocacy is designed specifically to change something, be it regulations or behaviors, and such activities cannot help but be tinged with politics.
Unlike service delivery, advocacy can leave nonprofits (or even the foundations themselves) uncertain about what to measure, or for how long, in order to document attitudinal and practical change empirically. While the end result may be clear, the benchmarks and other markers along the path may be extraordinarily tricky to measure. In a process fraught with uncertainty and shifting strategies, especially in this age of digital media with its capacity for instantaneous response and quick mobilization, how can you map out an expected, although untested, process? How can you plot, for the purpose of measurement, where you are versus where you should be when the process itself is utterly unpredictable?
If you have ever struggled with these measurement questions, then you already realize that you cannot fit advocacy into a traditional evaluation framework, so you should not even attempt to force it into that mold. This is the starting point for Steven Teles and Mark Schmitt in their article "The Elusive Craft of Evaluating Advocacy," published in the Summer 2011 issue of the Stanford Social Innovation Review. Some of their observations:
- Capacity is key to the success of advocacy efforts – the grantee's ability to be nimble and adjust quickly to changing tactics, perceptions, and opinions.
- To better ascertain whether an investment has been successful, grantors may want to examine the totality of the return on their investment in shaping social change rather than only one or two elements of it.
- Look at the longevity of the policy change – will it stand up to opposition, including legal challenges? Will additional funding be needed to maintain it?
- Both grant makers and nonprofits involved in advocacy need to think about evaluation with a fresh set of eyes and fewer assumptions.
|August 18, 2011||Posted by M. P. under Education, Evaluation, Research||
A study of New York City public schools by the RAND Corporation shed some light on the complexity of motivational factors and education outcomes, finding that financial incentives did not improve school performance.
The three-year study (2007-2010) was designed to evaluate a program that used bonuses to reward school and student achievement. The findings indicated that money alone was not a strong motivator: no overall improvement was noted in any grade, at any school. The data suggest that educators may not have bought into, understood, or agreed with the logic behind the fiscal incentives, as many reported not changing their teaching style in an attempt to earn a share of the bonus money.
The report, A Big Apple for Educators: New York City's Experiment with Schoolwide Performance Bonuses: Final Evaluation Report, by Julie A. Marsh, Matthew G. Springer, Daniel F. McCaffrey, Kun Yuan, Scott Epstein, Julia Koppich, Nidhi Kalra, Catherine DiMartino and Art (Xiao) Peng, is available for download at the RAND website.