Evaluating innovative ideas

Evaluating Organizations

CHMI's Reported Results Initiative

Background

The CHMI database currently profiles more than 1,000 programs from 105 countries. While much rich descriptive information has been captured, an important information gap remains: which programs are actually “working”, achieving the kinds of health and financial protection results that matter to national and global health policymakers, donors, investors, and other program implementers looking to emulate proven models.

The ultimate goal of the system is to enable greater transparency and standardization in how program performance is tracked and shared with the global health community.

Methodology

Reported Results are measures of program performance across a number of key categories; the full set of results categories and definitions is available on the CHMI site.

CHMI collects Reported Results through a standardized template. All results statements are self-reported; where available, Reported Results generated by third-party evaluations are included. Statements may be edited for consistency and to meet layout restrictions.
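To make the idea of a standardized template concrete, here is a minimal sketch of what a single Reported Results record could look like. The field names are illustrative assumptions for this sketch, not CHMI's actual template.

```python
# Hypothetical sketch of a standardized Reported Results record.
# Field names are illustrative assumptions, not CHMI's actual template.
from dataclasses import dataclass

@dataclass
class ReportedResult:
    program_name: str
    country: str
    category: str             # a results category, e.g. "health output"
    indicator: str            # what was measured
    value: str                # the reported figure, kept verbatim
    reporting_period: str     # e.g. "2011"
    self_reported: bool = True             # CHMI notes all statements are self-reported
    third_party_evaluation: bool = False   # True when an external evaluation exists
    source: str = ""          # citation for third-party results, if any

# Placeholder example, not a real program's data.
example = ReportedResult(
    program_name="Example Program",
    country="Kenya",
    category="health output",
    indicator="outpatient visits per year",
    value="12,000",
    reporting_period="2011",
)
print(example.indicator, "->", example.value)
```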

 

 

 

 

GiveWell.org: Charity Review and Ranking Process

Background

The mission of GiveWell is to review charities and publish detailed analyses to help donors identify charities worth supporting.

 

Methodology

 

General Approach (quoted)

 

Measuring Cost-effectiveness

http://www.givewell.org/international/technical/criteria/cost-effectiveness#Howcosteffectiveiscosteffective
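As a rough illustration of the arithmetic behind this kind of estimate, the sketch below computes a cost per DALY (disability-adjusted life year) averted. It is a deliberately simplified toy calculation with made-up numbers, not GiveWell's actual model, which weighs many more factors.

```python
# Simplified cost-effectiveness arithmetic: cost per DALY averted.
# All figures are made-up placeholders, not GiveWell estimates.

def cost_per_daly_averted(total_cost: float,
                          people_reached: int,
                          dalys_averted_per_person: float) -> float:
    """Cost of averting one DALY under the stated assumptions."""
    total_dalys_averted = people_reached * dalys_averted_per_person
    return total_cost / total_dalys_averted

# A hypothetical $500,000 program reaching 100,000 people and
# averting 0.05 DALYs per person on average:
print(cost_per_daly_averted(500_000, 100_000, 0.05))  # 100.0 dollars per DALY
```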

 

Considerations:

 

Limitations of Cost-effectiveness Methods

The estimates we use do not capture all considerations for cost-effectiveness. In particular:

 

Survey Questions:

 

Example organization: Partners in Health

Questions:

 

 

 

Selected Resources


 

Evaluation Approaches

 

 

Evaluation for Models and Adaptive Initiatives

Heather Britt and Julia Coffman

Center for Evaluation Innovation

November 2012

PDF: Britt and Coffman.pdf

 

Heather Britt and Julia Coffman outline a framework for selecting evaluation approaches for two main types of grant-making programs: models and adaptive initiatives.

Key Points:

  • Although there has been a growing emphasis on the use of experimental designs in evaluation, there is also increasing agreement that evaluation designs should be situation specific. The nature of the program is one of the key factors to consider in evaluation design.
  • Two types of programs:
    • Models provide replicable or semi-standardized solutions. Evaluating a model requires understanding the stage of development of the model program, with summative evaluation done only when the model is fully developed.
    • Adaptive initiatives are flexible programming strategies used to address problems that require unique, context-based solutions. They require different evaluation designs, with both the timing and the scale of the initiative considered in choosing the appropriate design.

Evaluating Social Innovation

Hallie Preskill and Tanya Beer

Center for Evaluation Innovation

August 2012

PDF: EvaluatingSocialInnovation.pdf


Hallie Preskill and Tanya Beer explore how grantmakers must re-envision evaluation so that social innovations have a better chance of success.

Key Points:

  • Traditional evaluation approaches (formative and summative evaluations) fail to meet the fast-paced information needs of "philanthropic decision makers" and innovators in the midst of complex social change efforts; they restrict implementers to pre-set plans that lose their relevance as the initiative unfolds. These approaches are rooted in strategic philanthropy (e.g., articulated goals, a theory of change, well-aligned partners and grantees, performance metrics, and evaluation that measures progress against desired outcomes), but those principles can work against social innovation because innovators must conform to plans and metrics that do not evolve in response to a dynamic context.
  • Social innovation is a fundamentally different approach to change than implementing program models with a known set of elements or “ingredients.” While the long-term goals of a social innovation might be well defined, the path to achieving them is less clear: little is known about what will work, where, under what conditions, how, and with whom. Instead, decision makers need to explore which activities will trigger change, and an activity that successfully triggers a desired change may never work again.
  • Developmental evaluation: The DE evaluator works collaboratively with social innovators to conceptualize, design, and test new approaches in a long-term, ongoing process of adaptation, intentional change, and development. DE’s focus is on social innovations where there is no accepted model (and might never be one) for solving the problem.
  • Types of questions answered by DE:
    • What is developing or emerging as the innovation takes shape?
    • What variations in effects are we seeing?
    • What do the initial results reveal about expected progress?
    • What seems to be working and not working?
    • What elements merit more attention or changes?
    • How is the larger system or environment responding to the innovation?
    • How should the innovation be adapted in response to changing circumstances?
    • How can the project adapt to the context in ways that are within the project’s control?

 

 

  • Choosing evaluation approach:

 

 

 
 

 

 

Business Model Evaluation

 

Business Model Evaluation Scorecard

Mark Sniukas - Sniukas.com

 

  • Provides a scorecard template for evaluating a business model (a minimal tallying sketch follows this list)
  • Additional resources:
    • The Innovation Map: A framework for defining innovation outcomes
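As a sketch of how a weighted scorecard of this kind might be tallied: the criteria and weights below are invented for illustration, and the actual template defines its own dimensions.

```python
# Illustrative weighted scorecard for a business model. The criteria
# and weights here are invented for this sketch; the real scorecard
# template defines its own dimensions.

CRITERIA_WEIGHTS = {
    "value_proposition": 0.30,
    "revenue_model": 0.25,
    "cost_structure": 0.20,
    "scalability": 0.25,
}

def scorecard_total(scores):
    """Combine 1-5 ratings into a weighted total on the same 1-5 scale."""
    assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

example_scores = {
    "value_proposition": 4,
    "revenue_model": 3,
    "cost_structure": 2,
    "scalability": 5,
}
print(round(scorecard_total(example_scores), 2))  # -> 3.6
```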

 

Evaluating Your Business Model

FastTrac, Kauffman Foundation

 

 

Additional Web Resources

 

 

 

Impact Evaluation Frameworks

 

Nirali M Shah, Wenjuan Wang, and David M Bishai (2011). Comparing private sector family planning services to government and NGO services in Ethiopia and Pakistan: how do social franchises compare across quality, equity and cost? Health Policy and Planning 26:i63–i71. doi:10.1093/heapol/czr027. PDF

 

ABSTRACT: Policy makers in developing countries need to assess how public health programmes function across both public and private sectors. We propose an evaluation framework to assist in simultaneously tracking performance on efficiency, quality and access by the poor in family planning services. We apply this framework to field data from family planning programmes in Ethiopia and Pakistan, comparing (1) independent private sector providers; (2) social franchises of private providers; (3) non-government organization (NGO) providers; and (4) government providers on these three factors. Franchised private clinics have higher quality than non-franchised private clinics in both countries. In Pakistan, the costs per client and the proportion of poorest clients showed no differences between franchised and non-franchised private clinics, whereas in Ethiopia, franchised clinics had higher costs and fewer clients from the poorest quintile. Our results highlight that there are trade-offs between access, cost and quality of care that must be balanced as competing priorities. The relative programme performance of various service arrangements on each metric will be context specific.
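To illustrate the shape of the framework's three-way comparison, the sketch below lays out provider types against the paper's three dimensions: cost per client (efficiency), a quality index, and the share of clients from the poorest wealth quintile (access). All numbers are placeholders, not the study's data.

```python
# Sketch of the framework's three-metric comparison across provider
# types. The figures below are placeholders, not the study's data.

providers = {
    "franchised private":     {"cost_per_client": 9.0, "quality": 0.78, "poorest_share": 0.12},
    "non-franchised private": {"cost_per_client": 7.5, "quality": 0.64, "poorest_share": 0.15},
    "NGO":                    {"cost_per_client": 8.2, "quality": 0.71, "poorest_share": 0.22},
    "government":             {"cost_per_client": 6.9, "quality": 0.60, "poorest_share": 0.25},
}

# Print one row per provider type so trade-offs across the three
# competing priorities are visible side by side.
for name, m in providers.items():
    print(f"{name:24s} cost/client=${m['cost_per_client']:.2f}  "
          f"quality={m['quality']:.2f}  poorest-quintile={m['poorest_share']:.0%}")
```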