473 Nonprofit Leadership
Evaluating the Effectiveness of
Nonprofit Organizations (Vic Murray)
Focus is on “Organizational Effectiveness Evaluation” (OEE): not the outcomes of specific programs, but an assessment of the overall state of the organization.
- How well is it achieving its stated mission (effectiveness)?
- How well is it using its resources to achieve that mission (efficiency)?
The “accountability movement”: organizations should “return an account” to those they serve and those who fund them. Distinguish between legal and moral accountability; there may be no legal requirement to report to clients, but the expectation may still be reasonable.
Ideal Evaluation Process and Its Problems
The ideal evaluation process is rational and objective. In practice, disagreement is inevitable, since reasonable people can reasonably desire different things.
- Design: Determining the purpose of the evaluation and then deciding how to measure it (inputs, activities/processes, outputs, outcomes?)
- Implementation: How will the information be collected?
- Interpretation: What does the information mean? If (when) problems are found, what is their cause? (Usually there is insufficient information to answer this definitively, but it is the important question.)
- Application: So what? Deciding how to act on the information will involve resolving reasonable (and unreasonable) disagreements.
Problems are also inevitable. Careful planning and pre-testing help, but as the assessment progresses, new information will shed new light on previous assessment decisions and choices. Common problems:
- Goals are not clearly and unambiguously stated (a “logic model” can help frame the assessment of inputs, outputs, and outcomes)
- Relationships between individuals, programs, and functions are not specified
- Measures may fail to capture the goals they are intended to measure
Human foibles are also inevitable, at least until robots are doing everything (and even then, remember HAL from 2001: A Space Odyssey?):
- The desire to “look good, avoid blame”
- Competing “interpretations of reality”: in field research, there are always too many variables and too little control over them to permit solid conclusions about causal connections.
- Trust is a key factor: the lower the level of trust, the more likely political behavior becomes.
Tools for Improving OEE
- Measuring Outcomes: United Way approach
  - Build commitment to outcomes; clarify expectations
  - Build capacity to measure outcomes
  - Identify outcomes, indicators, and data collection methods
  - Collect and analyze outcome data (need to establish a baseline first)
  - Improve the outcome measuring system (for the first few years, the data say more about what is wrong with the evaluation system than about what is taking place in the program)
  - Use & communicate outcome information
- Balanced Scorecard: The goal is to measure achievement of the mission statement through a “balanced scorecard of performance attributes” grouped into four perspectives:
  - funder perspective (satisfying externally set goals)
  - user perspective (satisfaction)
  - internal business perspective (internal efficiency & quality)
  - innovation & learning perspective (adaptability to changing environment)
  Measures include: achievement of intended results; efficiency and productivity (costs/inputs/outputs); financial results (revenues & expenditures / assets & liabilities).
- Benchmarking: Compare the organization’s practices with those that are “best in class.”
  - It is difficult to identify the best performers, and even more difficult to obtain information about their practices.
  - “Churn”: the tendency to keep changing the indicators that are reported.
  - Different practices may not be the cause of different outcomes; the context may be different, or the difference may be due to other practices not identified.
- Rating Systems (“Wise Giving Alliance,” “Charity Rating Guide,” MN Charities): These rely almost entirely on process standards (availability of audit reports, basic financial ratios, conduct of fundraising, board policies such as conflict of interest).
- Trust Building: Involve the participants! If a prior relationship does not exist before the evaluation begins, trust must consciously be worked on as the process is developed. All parties must deal with these questions:
  - What is the purpose of the evaluation?
  - What should be measured?
  - What evaluation methods should be used?
  - What standards/criteria should be applied to the analysis of the information?
  - How should the data be interpreted?
  - How will the evaluation be used?
- Logic Model Building
  Inputs (and other, external influences) → Activities (and other, external influences) → Outputs → Outcomes (which might have side effects on others in the external environment)
  The logic model should be developed in the design phase (not once the program has been implemented and a decision is made to do an evaluation).
- Role of the Board: The Board has a due diligence duty to evaluate outcomes, but it may not feel it has the technical capacity. Ideally, a task force of the Board should work with staff representatives and an external evaluator.
- External Evaluators: They may not have time to build trust and develop involvement, and there is an inherent tension between their duty to the funder and their duty to the organization being evaluated. This may lead to gathering information that is not used by the recipient (commonly, the funder).
- Appreciative Inquiry (AI): Focus is on discovering the best of “what is,” envisioning “what might be,” and dialoguing on “what should be.”
© 2003 A.J.Filipovitch
Revised 1 April 2008