Does a lack of bias equal automatic credibility as an evaluator? The always enlightening blog Genuine Evaluation, by Patricia J. Rogers and E. Jane Davidson, posits that an independent, outside evaluator can actually work against a credible evaluation. Rather, connectedness to the provider, the community receiving services, the funders, and other stakeholders is what makes (or breaks) an evaluator’s credibility, and allows for a richer, perhaps more honest evaluation process.
Drawing on insights from Kaupapa Maori culture and their own evaluation experiences, they suggest that trust is built through competent work conducted in a respectful manner, and that those relationships enhance one’s credibility more than a forced, detached impartiality. A must-read entry (with a streaming video piece), and it is here.
Just wanted to point out some of what’s new at NRM…
New links to logic model, outcomes, and other program evaluation resources.
New links to resources on advocacy and lobbying for nonprofits.
Addition of the blogroll.
Resource links and the blog list will be updated as needed. Have a suggestion or a topic area you’d like to see featured as a post? Contact NRM here.
“Performance measurement,” “outcomes,” and “innovation” are all popular buzzwords used by funders looking to support programs that are innovative but also tested, analyzed, and otherwise proven. Earlier this year, the Johns Hopkins University Center for Civil Society Studies conducted a survey to examine the use of innovative programs and strategies, program evaluation, and the challenges nonprofits face in balancing both. The project sampled nonprofits nationwide from the fields of child and family services, community development, economic development, the arts, and elderly services.
The brief, entitled Nonprofits, Innovation and Performance Measurement: Separating Fact from Fiction, discusses the findings, including:
- Over 80% of respondents implemented a minimum of one innovation in the past 5 years.
- Over 66% reported at least one innovation in the past 2 years that they had wanted to implement but could not.
- The majority (85%) measured the effectiveness of at least some of their services annually, while 66% did so for over half of their programs.
Challenges to being both innovative and evidence-based included complex, time-consuming, and often unclear measurement tools, as well as little to no funding for effectiveness measurement and evaluation.