NPL 473 Nonprofit Leadership
Evaluating the Effectiveness of Nonprofit Organizations (Vic Murray)
Focus is on “Organizational Effectiveness Evaluation” (OEE)—not the outcomes of specific programs, but assessing the overall state of the organization:
1. How well is it achieving its stated mission (effectiveness or responsiveness)?
2. How well is it using its resources to achieve that mission (efficiency)?
Why Evaluate?
1. “Accountability movement”—organizations should “return an account” to those they serve and those who fund them.
2. Distinction between legal and moral accountability—there may be no legal requirement to report to clients, but the expectation may still be reasonable.
Ideal Evaluation Process and Its Problems
- Ideally, rational and objective
- Politics is inevitable, since reasonable people can reasonably desire different things
- Design: Determining the purpose and then how to measure it (inputs, activities/processes, outputs, outcomes?)
- Implementation: How will the information be gathered?
- Interpretation: What is “success”? “Failure”? If (when) problems are found, what is their cause? (There is usually insufficient information to answer this definitively, but it is the important question.)
- Application: So what? Deciding how to act on the information will involve resolving reasonable (and unreasonable) differences
- Technical problems are also inevitable—careful planning and pre-testing help, but as the assessment progresses, new information will shed new light on previous assessment decisions and choices.
  - Goals are not clearly and unambiguously stated
  - No “logic model” to frame assessment of inputs, outputs, and outcomes
  - Links between individuals, programs, and functions are not specified
  - Outcome measures may fail to capture the goals they are intended to measure
- Human foibles are also inevitable, at least until robots are doing everything (and even then, remember HAL from 2001: A Space Odyssey?)
  - LGAB—“Look good, avoid blame”
  - SIR—“Subjective interpretation of reality”—in field research, there are always too many variables and too little control over them to permit solid conclusions about causal connections.
  - Trust factor—the lower the level of trust, the more likely political game-playing becomes
Tools for Improving OEE
- Program Outcomes: the United Way approach
  - Build commitment to outcomes; clarify expectations
  - Build capacity to measure outcomes
  - Identify outcomes, indicators, and data-collection methods
  - Collect and analyze outcome data (a baseline must be established before setting targets)
  - Improve the outcome-measurement system (for the first few years, the data say more about what is wrong with the evaluation system than about what is taking place in the program)
  - Use and communicate outcome information
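The baseline-before-targets step above can be sketched numerically. This is an illustrative calculation only—the outcome figures and the 10% improvement rate are assumptions, not part of the United Way material:

```python
# Sketch: establish a baseline from early outcome data, then set a target.
# Hypothetical numbers; the United Way approach prescribes no particular formula.

def baseline(values):
    """Baseline = mean of the pre-target observation period."""
    return sum(values) / len(values)

def target(base, improvement=0.10):
    """A simple stretch target: baseline plus a chosen improvement rate."""
    return base * (1 + improvement)

# First year of quarterly outcome data (e.g., % of clients reporting improvement).
first_year = [52, 48, 55, 50]
b = baseline(first_year)   # 51.25
t = target(b)              # 56.375
```

The point of the sketch is the ordering: the target is a function of the measured baseline, so it cannot meaningfully be set before the baseline data exist.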
- The Balanced Scorecard: the goal is to measure achievement of the mission statement through a “balanced scorecard of performance attributes” grouped into four perspectives:
  - Funder/potential-funder perspective (satisfying externally set goals)
  - Client/program-user perspective (satisfaction)
  - Internal business perspective (internal efficiency & quality)
  - Innovation/learning perspective (adaptability to a changing environment)
- CCAF/FCVI Framework
  - Management direction
  - Relevance
  - Appropriateness
  - Achievement of intended results
  - Acceptance
  - Secondary impact
  - Costs and productivity (costs/inputs/outputs)
  - Responsiveness
  - Financial results (revenues & expenditures/assets & liabilities)
  - Working environment
  - Protection of assets
  - Monitoring & reporting
- Best-Practice Benchmarking—compare the organization’s practices with those that are “best in class”
  - It is difficult to identify the best performers, and even more difficult to obtain information about their practices
  - “Measurement churn”—the tendency to keep changing the indicators that are reported
  - Performance practices may not be the cause of different outcomes—the context may be different, or the difference may be due to other practices not identified
- Charity Rating Services
  - BBB Wise Giving Alliance
  - AIP “Charity Rating Guide”
  - Minnesota Charities Review Council
  - Based almost entirely on process standards (availability of audit reports, basic financial ratios, conduct of fundraising, board policies such as conflict of interest)
Final Notes
- Trust Building—involve the participants! If a prior relationship does not exist before the evaluation begins, trust must consciously be worked on as the process is developed. All parties must deal with the following:
  - What is the purpose of the evaluation?
  - What should be measured?
  - What evaluation methods should be used?
  - What standards/criteria should be applied to the analysis of the information obtained?
  - How should the data be interpreted?
  - How will the evaluation be used?
- Logic Model Building
  - Generic form is:
    i. Inputs (and other, external influences)
    ii. Outputs/Activities (and other, external influences)
    iii. Outcomes (which might have side effects on others in the external environment)
    iv. Goals
  - Should be developed in the design phase (not once the program has been implemented and a decision is made to do an evaluation)
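The generic i–iv chain above can be represented as a simple data structure. The tutoring program shown is a hypothetical illustration, not an example from the notes:

```python
# Sketch of the generic logic-model chain:
# inputs -> outputs/activities -> outcomes -> goals.
# The tutoring-program content is hypothetical.

logic_model = {
    "inputs":   ["volunteer tutors", "grant funding", "donated space"],
    "outputs":  ["weekly tutoring sessions", "students served"],
    "outcomes": ["improved reading scores"],
    "goals":    ["childhood literacy in the community"],
}

# The order of stages is the point of a logic model: each stage
# should plausibly lead to the next.
CHAIN = ["inputs", "outputs", "outcomes", "goals"]

def describe(model):
    """Render the model as the ordered chain used in the notes."""
    return " -> ".join(f"{stage}: {', '.join(model[stage])}" for stage in CHAIN)
```

Writing the model down this explicitly—at design time—is what makes the later evaluation questions (which outputs were delivered? which outcomes followed?) answerable.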
- Relationship Problems
  - The Board has a due-diligence duty to evaluate outcomes, but it may not feel it has the technical capacity. Ideally, a task force of the Board should work with staff representatives and an external evaluator.
  - Independent evaluators may not have time to build trust and develop involvement, and there is an inherent tension between the duty to the funder and the duty to the organization being evaluated. This may lead to gathering information that is not used by the recipient (commonly, the funder).
- Appreciative Inquiry (AI)—focus is on:
  i. Appreciating the best of “what is”
  ii. Envisioning “what might be”
  iii. Dialogue on “what should be”
© 2003 A.J.Filipovitch
Revised 1 April 2008