Outcome Assessment and Program Evaluation
I. Planning the process
   a. Chief executive should support the initiative
   b. Specific individual assigned primary responsibility
   c. Involve staff and clients
   d. Link to organization’s information technology
II. Defining program goals
   a. Types of goals
      i. Outcome goals—ultimate desired program impact
      ii. Activity goals—internal mechanics of a program
      iii. Bridging goals—connect activities to outcomes
   b. Rules for formulating goals
      i. Each goal should contain only one idea
      ii. Each goal should be distinct from every other goal
      iii. Use action verbs
   c. Politics of goal definition
      -- Without clearly defined goals, “policy drift” results
   d. Impact or ‘logic’ model (a minimal sketch follows this section)
      i. Specifies how the various goals are expected to link to produce desired outcomes
         1. Abstraction
         2. Simplification
         3. Specifies significant relationships among its elements
         4. Formulates hypotheses
      ii. Benefits
         1. Clarifies how the program is expected to work
         2. Specifies questions about how the program operates
         3. Shows how to use assessment data once they become available
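To make the logic-model idea concrete, here is a minimal sketch of one rendered as a data structure, assuming a hypothetical job-training program. Every goal name and hypothesis below is an invented illustration, not part of the source outline.

    # A minimal sketch of a logic model as a data structure, using a
    # hypothetical job-training program. All goal names and hypotheses
    # are illustrative assumptions.
    logic_model = {
        "activity_goals": ["enroll 100 clients", "deliver 40 hours of training"],
        "bridging_goals": ["clients gain job skills"],
        "outcome_goals": ["clients obtain employment within 6 months"],
        # Each hypothesis links one level of the model to the next.
        "hypotheses": [
            ("deliver 40 hours of training", "clients gain job skills"),
            ("clients gain job skills", "clients obtain employment within 6 months"),
        ],
    }

    # Walking the hypotheses makes explicit which links the assessment
    # data must test -- the questions the logic model is meant to surface.
    for cause, effect in logic_model["hypotheses"]:
        print(f"Hypothesis: IF {cause} THEN {effect}")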
III. Measuring goals
   a. Concepts of measurement
      i. Measurement validity—does it measure what it is supposed to measure?
      ii. Measurement reliability—is measurement consistent from one application to another? (a reliability check is sketched below)
      iii. Multiple measures—minimize the risk that any single measure is unreliable or fallible
      iv. Face validity—does the measure appear valid to stakeholders?
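As an illustration of the reliability concept above, here is a minimal sketch of a test-retest check: administer the same measure twice to the same clients and correlate the scores. The numbers are invented, and the pearson_r helper is written out here rather than taken from any particular library.

    # A minimal sketch of one common reliability check: test-retest
    # correlation. The scores below are invented for illustration.
    import statistics

    def pearson_r(x, y):
        """Pearson correlation between two equal-length score lists."""
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    first_administration  = [12, 15, 11, 18, 14, 16]   # same clients, time 1
    second_administration = [13, 14, 12, 17, 15, 16]   # same clients, time 2

    r = pearson_r(first_administration, second_administration)
    print(f"test-retest reliability r = {r:.2f}")  # near 1.0 => consistent measure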
   b. Types of measures
      i. Program records & statistics
         1. Staff are placed in an untenable position if asked to provide the principal measures of their own effectiveness
         2. Staff who will record measures should be involved in defining them
      ii. Client questionnaire surveys
         1. Phone surveys
         2. Mail surveys
         3. E-mail surveys
      iii. Formal testing instruments
      iv. Trained observer ratings
      v. Qualitative measures
IV. Data collection, analysis, and reporting
   a. First, pilot-test the measures and data-collection procedures
   b. Establish a schedule for regular reporting and review of data
   c. Difference between outcome assessment & program evaluation
      i. Outcome assessment data answer questions about progress on key objectives, but not about the agency’s role in producing those changes
      ii. Program evaluation adds comparison & control to answer that second question (the agency’s role)
V. Two approaches to program evaluation
   a. Objective-scientist approach
      i. Critical distance
      ii. Quantitative data
      iii. Little interest in the functioning of internal mechanics
   b. Utilization-focused evaluation (Michael Patton)
      i. Balance rather than objectivity
      ii. Combines qualitative and quantitative data
      iii. Process as well as outcome assessment
VI. Who does the evaluation?
   a. Internal evaluation performed by staff
   b. External evaluation performed by outside consultants
   c. Externally directed evaluation with extensive internal staff assistance
VII. Defining the purpose
   a. Summative
   b. Formative
   c. Implementation assessment
   d. Possibility of covert purposes
VIII. Outcome evaluation design
   a. Causality (the three conditions are sketched below)
      i. Must satisfy three conditions:
         1. Covariation
         2. Time order
         3. Nonspuriousness
      ii. Must also satisfy two forms of validity:
         1. Internal validity (the design accurately describes what the program achieved)
         2. External validity (the findings generalize to contexts beyond the program being evaluated)
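The following minimal sketch, with invented numbers, shows why only the first condition can be verified in the data themselves: covariation is computable, while time order and nonspuriousness are properties of the evaluation design.

    # A minimal sketch of the three causality conditions, with invented data.
    # Covariation can be checked in the data; time order and nonspuriousness
    # are properties of the *design*, which is why they appear as flags here.
    hours_of_service = [2, 5, 1, 8, 4, 7]       # hypothetical program "dose"
    client_outcome   = [10, 14, 9, 20, 13, 18]  # hypothetical outcome score

    def covaries(x, y):
        """Crude covariation check: do x and y move in the same direction?"""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
        return cov > 0

    conditions = {
        "covariation": covaries(hours_of_service, client_outcome),
        "time_order": True,        # by design: service delivered before outcome measured
        "nonspuriousness": False,  # unknown until rival explanations are ruled out
    }
    print("causal claim supported:", all(conditions.values()))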
   b. Threats to internal validity
      i. “Pre-experimental” designs
         1. One-shot case study
         2. Posttest-only with comparison group
         3. One-group pretest-posttest
      ii. Threats to nonspuriousness:
         1. Maturation
         2. Regression
         3. History
   c. Experiments (random assignment is sketched below)
      i. Randomization
      ii. Control
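Here is a minimal sketch of random assignment, the mechanism that makes a true experiment; the client IDs are hypothetical.

    # A minimal sketch of random assignment into treatment and control groups.
    import random

    clients = [f"client_{i}" for i in range(1, 21)]  # hypothetical roster
    random.shuffle(clients)                      # randomization removes selection bias
    treatment = clients[: len(clients) // 2]     # receives the program
    control   = clients[len(clients) // 2 :]     # does not; serves as the comparison

    print(len(treatment), "treated;", len(control), "controls")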
   d. Quasi-experiments (a time-series contrast is sketched below)
      i. Non-equivalent control group (depends on the quality of the match)
      ii. Interrupted time series (history and the need for multiple observations pose problems)
      iii. Multiple interrupted time series
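A minimal sketch of an interrupted time-series contrast, using invented monthly counts; a real analysis would model trend and seasonality rather than compare simple means, and the history threat remains, since anything else that changed at the same time could explain the shift.

    # A minimal sketch of an interrupted time series: monthly counts
    # (invented) before and after a program starts.
    monthly_complaints = [30, 32, 29, 31, 33, 30,   # 6 months pre-program
                          24, 22, 25, 23, 21, 22]   # 6 months post-program
    cut = 6  # month the program began

    pre, post = monthly_complaints[:cut], monthly_complaints[cut:]
    shift = sum(post) / len(post) - sum(pre) / len(pre)
    print(f"average monthly change after intervention: {shift:+.1f}")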
   e. Other designs and controls (statistical controls; sketched below)
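A minimal sketch of a statistical control, with invented records: the treated-untreated comparison is made within levels of a background variable (here, a hypothetical prior-education variable) rather than across the whole sample.

    # A minimal sketch of a statistical control: compare outcomes within
    # strata of a background variable. All records are invented.
    records = [
        # (participated, prior_education, outcome_score)
        (True,  "high", 20), (True,  "high", 22), (False, "high", 19),
        (True,  "low",  14), (False, "low",  12), (False, "low",  11),
    ]

    def mean_outcome(rows):
        return sum(r[2] for r in rows) / len(rows)

    for level in ("high", "low"):
        stratum = [r for r in records if r[1] == level]
        treated = [r for r in stratum if r[0]]
        untreated = [r for r in stratum if not r[0]]
        gap = mean_outcome(treated) - mean_outcome(untreated)
        print(f"education={level}: treated-untreated gap = {gap:+.1f}")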
IX. Process evaluation
X. Data development, report writing, and follow-up
   a. Request the opportunity to review & comment, while leaving final authority on substance in the hands of the external evaluator
   b. Decide what final written products to request
      i. Comprehensive report
      ii. Executive summary
      iii. Client-focused reports
   c. Staff determine changes in programs in light of the evaluation
© 2004 A. J. Filipovitch
Revised 20 December 2005