NPL 473  Nonprofit Leadership


Evaluating the Effectiveness of Nonprofit Organizations (Vic Murray)

 

The focus is on “Organizational Effectiveness Evaluation” (OEE). Rather than the outcomes of specific programs, OEE assesses the overall state of the organization, asking:

1. How well is it achieving its stated mission (effectiveness or responsiveness)?

2. How well is it using its resources to achieve that mission (efficiency)?

 

Why Evaluate?

1. The “accountability movement”: organizations should “return an account” to those they serve and those who fund them.

2. Distinction between legal and moral accountability: there may be no legal requirement to report to clients, but the expectation may still be reasonable.

 

Ideal Evaluation Process and Its Problems

  1. Ideally, rational and objective
  2. Politics is inevitable, since reasonable people can reasonably desire different things
    1. Design:  Determining the purpose and then how to measure it (inputs, activities/processes, outputs, outcomes?)
    2. Implementation:  How will the information be gathered?
    3. Interpretation:  What is “success”?  “Failure”?  If (when) problems are found, what is their cause?  (There is usually insufficient information to answer this definitively, but it is the important question.)
    4. Application:  So what?  Deciding how to act on the information will involve resolving reasonable (and unreasonable) differences
  3. Technical problems are also inevitable: careful planning and pre-testing help, but as the assessment progresses, new information will shed new light on previous assessment decisions and choices.
    1. Goals are not clearly and unambiguously stated
    2. No “logic model” to frame assessment of inputs, outputs, and outcomes
    3. Links between individuals, programs, and functions are not specified
    4. Outcome measures may fail to capture the goals they are intended to measure
  4. Human foibles are also inevitable, at least until robots are doing everything (and even then, remember HAL from 2001: A Space Odyssey?)
    1. LGAB (“look good, avoid blame”)
    2. SIR (“subjective interpretation of reality”): in field research there are always too many variables and too little control over them to permit solid conclusions about causal connections.
    3. Trust factor: the lower the level of trust, the more likely political game-playing becomes.

 

Tools for Improving OEE

  1. Program Outcomes:  United Way approach
    1. Build commitment to outcomes, clarify expectations
    2. Build capacity to measure outcomes
    3. Identify outcomes, indicators, and data collection methods
    4. Collect and analyze outcome data (a baseline must be established before setting targets; see the first sketch after this list)
    5. Improve the outcome measurement system (for the first few years, the data say more about what is wrong with the evaluation system than about what is taking place in the program)
    6. Use & communicate outcome information
  2. The Balanced Scorecard:  The goal is to measure achievement of the mission statement through a “balanced scorecard of performance attributes” grouped into four perspectives:
    1. Funder/potential funder perspective (satisfying externally set goals)
    2. Client/program user perspective (satisfaction)
    3. Internal business perspective (internal efficiency & quality)
    4. Innovation/learning perspective (adaptability to changing environment)
  3. CCAF/FCVI Framework
    1. Management direction
    2. Relevance
    3. Appropriateness
    4. Achievement of intended results
    5. Acceptance
    6. Secondary impact
    7. Costs and productivity (costs/inputs/outputs)
    8. Responsiveness
    9. Financial results (revenues & expenditures/assets & liabilities)
    10. Working environment
    11. Protection of assets
    12. Monitoring & reporting
  4. Best-Practice Benchmarking: compare the organization’s practices with those that are “best in class”
    1. Difficult to identify best performers, and even more difficult to obtain information about their practices
    2. “Measurement churn”: the tendency to keep changing the indicators that are reported
    3. Performance practices may not be the cause of different outcomes: the context may differ, or the difference may be due to other practices not identified
  5. Charity Rating Services
    1. BBB Wise Giving Alliance
    2. AIP “Charity Rating Guide”
    3. MN Charities Review Council
    4. Ratings are based almost entirely on process standards (availability of audit reports, basic financial ratios, conduct of fundraising, board policies such as conflict of interest); see the second sketch after this list for the financial ratios
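
A minimal sketch of the baseline-before-targets point from the United Way steps above (the first sketch referenced in step 4), written in Python. The indicator, the numbers, and the 10% improvement rule are all hypothetical illustrations:

```python
# Hypothetical illustration of United Way step 4: establish a baseline
# from the first measurement period before setting any target.

def baseline(values):
    """Baseline = mean of the first period's observations."""
    return sum(values) / len(values)

# Invented first-year results for a made-up indicator, e.g. the share
# of participants reporting improved job readiness each quarter.
year_one = [0.52, 0.48, 0.55, 0.50]

base = baseline(year_one)

# Only once the baseline exists does a target make sense; here we
# arbitrarily aim for a 10% improvement over the baseline.
target = base * 1.10

print(f"baseline: {base:.2f}  target: {target:.2f}")
```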
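
The “basic financial ratios” behind these ratings can be made concrete. A second minimal sketch, with invented figures; the 65% program-spending benchmark reflects the BBB Wise Giving Alliance’s published standard, and everything else is illustrative:

```python
# Two ratios commonly used by charity raters, computed from invented
# figures (real raters draw them from the IRS Form 990 or audited
# financial statements).

program_expenses = 800_000      # spent directly on program activities
fundraising_expenses = 100_000  # spent raising contributions
admin_expenses = 100_000        # management and general expenses
contributions = 900_000         # total contributions raised

total_expenses = program_expenses + fundraising_expenses + admin_expenses

# Program expense ratio: share of total spending that goes to programs
# (the BBB Wise Giving Alliance standard asks for at least 65%).
program_ratio = program_expenses / total_expenses

# Fundraising efficiency: cost to raise one dollar of contributions.
cost_to_raise_dollar = fundraising_expenses / contributions

print(f"program expense ratio: {program_ratio:.0%}")
print(f"cost to raise $1: ${cost_to_raise_dollar:.2f}")
```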

 

Final Notes

  1. Trust Building: Involve the participants!  If a prior relationship does not exist before evaluation begins, trust must consciously be built as the process is developed.  All parties must deal with the following:
    1. What is the purpose of the evaluation?
    2. What should be measured?
    3. What evaluation methods should be used?
    4. What standards/criteria should be applied to the analysis of the information obtained?
    5. How should the data be interpreted?
    6. How will the evaluation be used?
  2. Logic Model Building
    1. The generic form is:
      1. Inputs (and other, external influences)
      2. Outputs/Activities (and other, external influences)
      3. Outcomes (which might have side effects on others in the external environment)
      4. Goals
    2. Should be developed in the design phase (not once the program has been implemented and a decision is made to do an evaluation); a minimal sketch follows at the end of this section.
  3. Relationship Problems
    1. The Board has a due-diligence duty to evaluate outcomes, but it may not feel it has the technical capacity.  Ideally, a task force of the Board should work with staff representatives and an external evaluator.
    2. Independent evaluators may not have time to build trust and develop involvement, and there is an inherent tension between their duty to the funder and their duty to the organization being evaluated.  This may lead to gathering information that is never used by the recipient (commonly, the funder).
    3. Appreciative Inquiry (AI): the focus is on
      1. Appreciating the best of “what is”
      2. Envisioning “what might be”
      3. Dialogue on “what should be”
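
As noted under Logic Model Building, here is a minimal sketch of the generic logic model as a simple data structure, written in Python. The program and every entry in it are hypothetical illustrations, not examples from Murray:

```python
# Hypothetical sketch: the generic logic model above as a data structure.
# Inputs feed activities/outputs, which produce outcomes, which serve goals.

from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: list[str]      # resources, plus other external influences
    activities: list[str]  # outputs/activities
    outcomes: list[str]    # changes produced (side effects possible)
    goals: list[str]       # mission-level ends the outcomes serve

# Built in the design phase, before implementation, so the later
# evaluation can be framed against it.
job_training = LogicModel(
    inputs=["grant funding", "volunteer tutors", "donated classroom space"],
    activities=["weekly job-readiness workshops", "resume coaching"],
    outcomes=["participants gain interview skills", "participants find jobs"],
    goals=["reduce unemployment among participants"],
)

for stage in ("inputs", "activities", "outcomes", "goals"):
    print(stage, "->", getattr(job_training, stage))
```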


MSU

© 2003 A.J.Filipovitch
Revised 1 April 2008