The alignment table for sound project evaluation reports
can be viewed either as a whole, displaying all components,
or as six separate tables corresponding to report
components: (1) Executive Summary, (2) Project Description,
(3) Evaluation Overview, (4) Design, (5) Analysis Process,
and (6) Results & Recommendations. See the
alignment table overview for a
general description of what appears in the alignment
tables.

Each entry below names the report component and gives a description of
it, the quality criteria it should meet, and the related Program
Evaluation Standards.

Executive Summary

Description:
Summarizes the purpose of the evaluation, the project goals,
implementation, impacts, conclusions, and recommendations.

Quality Criteria:
The executive summary should provide essential information about the
evaluation report that is easily understood by stakeholders. It should
clearly summarize the purpose of the evaluation, the project goals,
project implementation and impacts, and the conclusions and
recommendations drawn from the results of the evaluation.

Related Program Evaluation Standards:
U5 Report Clarity: Evaluation reports should clearly describe the
program being evaluated, including its context, and the purposes,
procedures, and findings of the evaluation, so that essential
information is provided and easily understood.

Project Description

Description:
Describes the evaluated project so that the reader of the report will
understand the scope of the evaluation and be able to see the
association between the project's components and its outcomes (e.g.,
impacts and payoff).

Description:
Describes the project's features (e.g., philosophy, rationale, goals,
objectives, strategies, activities, procedures, location, duration,
resources).

Quality Criteria:
The following features of the evaluated project should be clearly
described:
- project goals (both explicit and implicit) and objectives
- principal project activities designed to achieve the goals
- project location and implementation sites
- project duration
- resources used to implement the project
- expected short-term and long-term outcomes
If the project is implemented at more than one site, the evaluation
should describe each site and the variation expected across sites.

Related Program Evaluation Standards:
A1 Program Documentation: The program being evaluated should be
described and documented clearly and accurately, so that the program
is clearly identified.

Description:
Identifies individuals or groups participating in, affected by, or
otherwise invested in the project.

Quality Criteria:
The different stakeholder groups should be identified, their
relationships to the project described, and their different
perspectives about the project's significance articulated.

Related Program Evaluation Standards:
U1 Stakeholder Identification: Persons involved in or affected by the
evaluation should be identified, so that their needs can be addressed.

Description:
Identifies external influences on the project (e.g., the timing of the
project relative to other factors or events; organizational/
institutional, historical, economic, political, and social conditions;
demographic characteristics of project participants).

Quality Criteria:
An understanding of contextual factors is necessary if an evaluation
is to be realistic and responsive to the conditions within which the
project operates. Contextual information is also needed to help
audiences interpret the evaluation. The context should be described in
enough detail to enable stakeholders to understand its impact on
project implementation and outcomes.

Related Program Evaluation Standards:
A2 Context Analysis: The context in which the project exists should be
examined in enough detail, so that its likely influences on the
project can be identified.

Evaluation Overview

Description:
Describes the purposes and questions driving the evaluation, as well
as the credentials of the evaluator and the involvement of
stakeholders in the evaluation.

Description:
Describes the goals and objectives of the evaluation. These should
focus on identifying the project's strengths and weaknesses as well as
its accomplishments and challenges, in terms of how well its
implementation was carried out (formative evaluation), how successful
it was in achieving intended outcomes (summative evaluation), or both.
This section of the report may also describe additional "goal-free"
purposes that involve gathering and inductively analyzing data in
order to understand dimensions of the project that were not
anticipated when its goals were set.

Quality Criteria:
The purposes of the evaluation should be:
- stated in terms of goals and intended uses of results by
  stakeholders
- described in enough detail to help stakeholders extrapolate critical
  meanings from the results
The evaluation should focus on whether promised project components are
delivered and compare project outcomes against the assessed needs of
the targeted participants or other beneficiaries. It should also be
directed at finding unanticipated outcomes, both positive and
negative.

Related Program Evaluation Standards:
A3 Described Purposes and Procedures: The purposes and procedures of
the evaluation should be monitored and described in enough detail, so
that they can be identified and assessed.

Description:
States the questions that will be answered through data collection,
analysis, and interpretation. Evaluation questions are developed from
the evaluation goals and objectives and state specific information
needs. They focus on aspects and outcomes of the project that are
important to the stakeholders.

Quality Criteria:
Evaluation questions that address context, implementation, and outcome
variables provide the perspective not only for interpreting results
but also for understanding the conditions under which the results were
obtained. The questions should be justified against the following
criteria:
- To which stakeholders will answers to the questions be useful, and
  how?
- How will answers to the questions provide new information?
The report can also delineate questions that could not be addressed
because of constraints (e.g., limited time or resources, or the
insufficiency of available data-gathering techniques).

Description:
Specifies the evaluator's credentials.

Quality Criteria:
The professional qualifications of the evaluator should be specified
in order to build trust in the results.

Related Program Evaluation Standards:
U2 Evaluator Credibility: Persons conducting the evaluation should be
both trustworthy and competent to perform the evaluation, so that the
evaluation findings achieve maximum credibility and acceptance.

Description:
Describes the interests the various stakeholders have had in the
evaluation and the roles they played in it.

Quality Criteria:
The report should describe how the positions and perspectives of the
stakeholders have been considered in an ongoing manner, from the
planning of the evaluation through the data collection, analysis, and
interpretation.
Stakeholder involvement in the evaluation can be beneficial because
stakeholders can help the evaluator better understand project goals
and objectives, shape evaluation questions, recommend data sources,
and review findings. As a consequence of being involved, stakeholders
are more likely to find the results credible, useful, and relevant,
and less likely to curtail evaluation operations or hinder accurate
and appropriate uses of the results.

Related Program Evaluation Standards:
F2 Political Viability: The evaluation should be planned and conducted
with anticipation of the different positions of various interest
groups, so that their cooperation may be obtained, and so that
possible attempts by any of these groups to curtail evaluation
operations or to bias or misapply the results can be averted or
counteracted.

Design

Description:
Describes strategies and procedures for gathering and analyzing data,
as well as procedures employed for the evaluation's periodic review.

Description:
Specifies:
- formative or summative approaches that were taken
- types of data that were needed (e.g., quantitative, qualitative,
  pre-post, longitudinal)
- sources of the data (e.g., participants, documents)

Quality Criteria:
The report should describe the selected methodological approaches and
how, within the constraints of time and cost, they yielded data that
help answer the evaluation questions. The data gathered need to be
aligned with the goals that the project is intended to achieve. The
data can vary, however, in how directly they indicate the attainment
of project goals. Most projects are more likely to show effects on
proximal outcomes than on distal outcomes that are logically or
temporally remote. (For example, suppose a project has been designed
to improve high school students' motivation to learn science. A
proximal measure of the project's success would be student
self-reports of interest in science content gathered immediately
before and after the project. A distal measure would be whether the
students decide to study science in college.)
Furthermore, the approaches should be grounded in respected
methodological frameworks and best-practice literature. This increases
the chance that the project features and contextual factors likely to
make a difference in project operations and outcomes will be
identified.
Methodological approaches that look narrowly at project inputs and
solely examine the results of quantitative outcome measures may not
capture all the noteworthy influences, impacts, and outcomes of a
complex project. Qualitative and mixed-method approaches present
alternative ways of detecting impacts, especially unanticipated ones.
To corroborate evaluation findings and to provide multiple
perspectives, it is highly desirable that evaluators measure multiple
outcomes and gather data from multiple sources (triangulation).
Important constraints on the evaluation design (e.g., lack of random
assignment of respondents to treatment and comparison groups, or lack
of data on long-term effects) should also be stated at this point in
the report.

Related Program Evaluation Standards:
U3 Information Scope and Selection: Information collected should be
broadly selected to address pertinent questions about the project and
be responsive to the needs and interests of clients and other
specified stakeholders.
F3 Cost Effectiveness: The evaluation should be efficient and produce
information of sufficient value, so that the resources expended can be
justified.

Description:
Describes the sources of information used in the evaluation, which may
include:
- records and archival documents that contain relevant information
- the entire population of participants in the project, if data were
  collected on all of them
- the sample or samples of participants or other informants that were
  observed or solicited for information, selected to maximize the
  generalizability of the findings to the population from which the
  sample or samples were drawn

Quality Criteria:
The sources of information used in a project evaluation should be
described in enough detail to show that the information is sufficient
to meet the evaluation's purposes.
The groups selected to provide information (e.g., administrators,
teachers, students, parents) should be described. If a sample was
used, the description should include:
- the sample selection criteria (e.g., the lowest achievers, the best
  instructors)
- the process by which the sample was selected (e.g., random,
  purposive)
- the sample size
- whether any comparison or control groups were included
- whether and how participants were assigned to treatment and
  comparison groups
The extent to which the sample is representative of the entire
population should be indicated, in enough depth that users of the
report can judge the sample's representativeness and appropriateness
given the scope, context, and resources of the evaluation.
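
Where a random sample and random assignment were used, the selection
process can be documented concretely. A minimal Python sketch,
assuming a hypothetical roster of 200 participants, a sample of 60,
and two equal groups (all names and sizes are invented):

    import random

    # Hypothetical roster standing in for the full population.
    roster = [f"participant_{i:03d}" for i in range(200)]

    random.seed(2024)                   # seed recorded so the selection is reproducible
    sample = random.sample(roster, 60)  # simple random sample, n = 60

    random.shuffle(sample)              # random assignment to conditions
    treatment, comparison = sample[:30], sample[30:]
    print(len(treatment), len(comparison))  # 30 30

Recording the seed alongside the procedure lets reviewers reproduce
the selection exactly.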

Related Program Evaluation Standards:
A3 Described Purposes and Procedures: The purposes and procedures of
the evaluation should be monitored and described in enough detail, so
that they can be identified and assessed.
A4 Defensible Information Sources: The sources of information used in
a program evaluation should be described in enough detail, so that the
adequacy of the information can be assessed.

Description:
Describes the design and content of the instruments used to collect
and analyze data (e.g., survey questionnaires, interview protocols,
observation forms, learning assessments).

Quality Criteria:
The report should describe the nature of the various instruments and
how they are used to gather the needed information. Instruments should
be used as intended in order for the data produced to be reliable and
valid.

Related Program Evaluation Standards:
A3 Described Purposes and Procedures: The purposes and procedures of
the evaluation should be monitored and described in enough detail, so
that they can be identified and assessed.

Description:
Describes how the data and other information have been gathered to
meet the criteria of validity and reliability. Also describes the
frequency, order, and duration of the various data collection
activities.

Quality Criteria:
The report should describe how and when data were obtained from the
various sources and how the sources provide corroboration and multiple
perspectives. A description of the data collection and its intent
provides a context for judging and interpreting evaluation findings
and recommendations, and it can inform the conduct of similar
evaluations in other settings.
Information about the timing of data collection is important because
the project's maturity needs to be considered when drawing conclusions
about the project's strengths and weaknesses. For example, a survey
questionnaire administered to participants halfway through the project
is likely to yield different results than a survey administered at the
completion of the project (see the sketch below).
Hence, this section should describe:
- how and when an appropriately broad range of data were collected
- what steps were taken to get essential data from the sample and
  other targeted sources (this might include a human subjects review)
- how the data have met the criteria of validity
- how reliability was achieved through the systematic training of data
  collectors and consistent data collection and scoring procedures
- how the data collection procedures limited the burden of time and
  effort placed on project participants
Different models of evaluation present different data collection
needs. For example, a formative evaluation requires that ongoing
project activities be assessed at points in time that enable project
developers to refine the project's components.
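
To illustrate the timing point above, a minimal sketch, with invented
rating data, comparing the same participants' midpoint and
end-of-project survey scores using a paired test:

    import numpy as np
    from scipy import stats

    # Hypothetical 1-4 survey ratings from the same eight participants.
    midpoint   = np.array([3.1, 2.8, 3.5, 3.0, 2.9, 3.4, 3.2, 2.7])
    completion = np.array([3.6, 3.0, 3.9, 3.4, 3.1, 3.8, 3.5, 3.0])

    # Paired-samples t-test: each participant serves as their own control.
    t, p = stats.ttest_rel(midpoint, completion)
    print(f"mean change = {(completion - midpoint).mean():.2f}, "
          f"t = {t:.2f}, p = {p:.3f}")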

Related Program Evaluation Standards:
F1 Practical Procedures: The evaluation procedures should be
practical, to keep disruption to a minimum while needed information is
obtained.
A3 Described Purposes and Procedures: The purposes and procedures of
the evaluation should be monitored and described in enough detail, so
that they can be identified and assessed.
A5 Valid Information: The information-gathering procedures should be
chosen or developed and then implemented so that they will assure that
the interpretation arrived at is valid for the intended use.
A6 Reliable Information: The information-gathering procedures should
be chosen or developed and then implemented so that they will assure
that the information obtained is sufficiently reliable for the
intended use.

Description:
Describes procedures that were undertaken to review the quality of the
evaluation while it was being conducted.

Quality Criteria:
Evaluation purposes and procedures should be reviewed periodically,
particularly during longitudinal evaluations, to determine whether the
evaluation design, instruments, and procedures are adequately
capturing the project's implementation, impacts, and outcomes.

Related Program Evaluation Standards:
A12 Meta-Evaluation: The evaluation itself should be formatively and
summatively evaluated against standards, so that its conduct is
appropriately guided and, on completion, stakeholders can closely
examine its strengths and weaknesses.

Analysis Process

Description:
Describes the type or types of analyses conducted (e.g., quantitative,
qualitative, mixed methods) and the procedures used for examining
results and ensuring their trustworthiness, such as:
- training conducted to ensure reliable coding and scoring of data
  (see the sketch below)
- checks of the data to remove errors
- procedures for reducing and summarizing the data
- descriptions of analyses that identify patterns in the results
This section also presents results non-interpretively (i.e., without
applying values, perspectives, or conceptual frameworks).
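
For the coding-reliability item above, one common check is agreement
between two trained coders who code the same segments independently.
A minimal sketch with hypothetical codes, computing percent agreement
and Cohen's kappa:

    from collections import Counter

    # Invented codes assigned by two coders to the same five segments.
    coder_a = ["engagement", "barrier", "engagement", "outcome", "barrier"]
    coder_b = ["engagement", "barrier", "outcome", "outcome", "barrier"]

    # Simple percent agreement.
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Cohen's kappa corrects for agreement expected by chance.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[code] * freq_b[code] for code in freq_a) / n**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"agreement = {p_observed:.0%}, kappa = {kappa:.2f}")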

Description:
Describes procedures taken to analyze numeric data:
- organizing the data
- verifying it
- summarizing it
- presenting purely descriptive information about the project (e.g.,
  percentages of different responses to a survey question; percentages
  of different scores on a test item) that could reveal patterns and
  trends
- examining relationships among variables (e.g., Pearson
  product-moment correlations, multiple regression, factor analyses)
- using inferential statistical techniques to test for significant
  differences between comparison groups (e.g., t-tests, analyses of
  variance, analyses of covariance)
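
A minimal sketch of a few of the analyses listed above, using invented
survey responses and scores (all variable names and values are
hypothetical):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Purely descriptive: percentage of each response to a survey item.
    responses = rng.choice(["agree", "neutral", "disagree"], size=120)
    values, counts = np.unique(responses, return_counts=True)
    for value, count in zip(values, counts):
        print(f"{value}: {100 * count / responses.size:.1f}%")

    # Relationship between variables: Pearson product-moment correlation.
    hours_attended = rng.normal(20, 5, size=120)
    post_test = 50 + 1.5 * hours_attended + rng.normal(0, 10, size=120)
    r, p_corr = stats.pearsonr(hours_attended, post_test)
    print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")

    # Inferential comparison: independent-samples t-test between two groups.
    treatment = post_test[:60]
    comparison = post_test[60:] - 5  # invented lower-scoring group
    t_stat, p_t = stats.ttest_ind(treatment, comparison)
    print(f"t = {t_stat:.2f} (p = {p_t:.3f})")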

Quality Criteria:
The quantitative analysis procedures should be appropriate to the
evaluation questions being addressed and the characteristics of the
information being analyzed. Practical significance (e.g., effect
sizes) and replicability, as well as statistical significance, should
be considered when drawing inferences and formulating conclusions from
quantitative analyses. Analyses of effects for identifiable subgroups
should be considered, as appropriate, because a program may have
differential effects for them.
In addition, the number of informants who actually provided data
should be reported. (Informants who fill out a survey are called
"respondents," and the percentage of those solicited who actually
respond is called the "response rate.") This will help reviewers
determine the extent to which the informants are representative of the
total population.
Potential weaknesses in the quantitative data analysis, along with
their possible influence on interpretations and conclusions, should be
described.
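
A minimal sketch of reporting practical significance and a response
rate alongside a statistical test, with invented numbers; Cohen's d is
one common effect-size measure:

    import numpy as np

    def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
        """Standardized mean difference using the pooled standard deviation."""
        n_a, n_b = len(a), len(b)
        pooled_var = ((n_a - 1) * a.var(ddof=1) +
                      (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    rng = np.random.default_rng(7)
    treatment = rng.normal(75, 10, size=40)   # invented test scores
    comparison = rng.normal(70, 10, size=40)
    print(f"Cohen's d = {cohens_d(treatment, comparison):.2f}")

    # Response rate: respondents as a percentage of those solicited.
    solicited, respondents = 250, 180         # invented counts
    print(f"Response rate = {100 * respondents / solicited:.0f}%")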

Related Program Evaluation Standards:
A8 Analysis of Quantitative Information: Quantitative information in
an evaluation should be appropriately and systematically analyzed so
that evaluation questions are effectively answered.
A7 Systematic Information: The information collected, processed, and
reported in an evaluation should be systematically reviewed, and any
errors found should be corrected.

Description:
Describes the qualitative analysis procedures used to compile,
analyze, and interpret the data in order to find themes, patterns, and
trends.
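
A minimal sketch of the compilation step, tallying hypothetical coded
interview segments into theme frequencies; such counts can support,
but not replace, interpretive analysis:

    from collections import Counter

    # Invented (informant, assigned theme) pairs from coded transcripts.
    coded_segments = [
        ("teacher_01", "increased confidence"),
        ("teacher_02", "scheduling barrier"),
        ("teacher_03", "increased confidence"),
        ("student_01", "scheduling barrier"),
        ("student_02", "increased confidence"),
    ]

    theme_counts = Counter(theme for _, theme in coded_segments)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: {count} segment(s)")

Breaking such tallies out by informant group (here, teachers versus
students) is one way to check the cross-source corroboration described
below.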

Quality Criteria:
The qualitative analysis procedures should be appropriate to the
evaluation questions being addressed and the characteristics of the
information being analyzed. As the evaluation progresses, the accuracy
of findings from qualitative data must be confirmed by gathering
evidence from more than one source and by subjecting inferences to
independent verification.
Potential weaknesses in the qualitative data analysis, along with
their possible influence on interpretations and conclusions, should be
described.

Related Program Evaluation Standards:
A9 Analysis of Qualitative Information: Qualitative information in an
evaluation should be appropriately and systematically analyzed so that
evaluation questions are effectively answered.
A7 Systematic Information: The information collected, processed, and
reported in an evaluation should be systematically reviewed, and any
errors found should be corrected.

Results & Recommendations

Description:
This is the culminating section of the report. It synthesizes and
interprets the data that were collected and analyzed in order to draw
conclusions about the project's strengths and weaknesses. It also
contains recommendations for the project and describes how
stakeholders have been involved in reviewing the results.

Description:
Describes the interpretations and conclusions that have been drawn
from the data.

Quality Criteria:
This section of the report should be thorough and fair in noting, in a
balanced and unbiased way, the project's anticipated and unanticipated
strengths (e.g., smooth implementation, positive outcomes) and
weaknesses (e.g., obstacles to implementation, evidence of negative
outcomes), so that the strengths can be built on and the problem areas
addressed. When relevant data are inaccessible because of time and
cost constraints, the resulting omissions should be noted and their
effect on the overall judgment of the project's impacts and
effectiveness should be estimated.
If the project has been implemented in multiple settings, and each
setting was a locus of data collection, the evaluation should compare
and contrast findings across the sites in order to identify results
that are generalizable to the project as a whole. Some lessons learned
about the project may also be generalizable to other projects and
should be identified in the report. When legitimate, generalizable
statements about program effectiveness can contribute to theory
development by providing positive examples for analysis and
replication.
The conclusions section should report the findings in broader
statements that relate back to the project's goals and the evaluation
questions. To view the significance of the project's impacts from a
sufficiently wide perspective, the impacts can be examined in light of
the alternatives (such as no project at all, or a different type of
project to meet the need).
In posing conclusions, the evaluators should be open and candid about
the values and perspectives they have brought to the task, so that
readers of the evaluation will be able to understand the context in
which their judgments are rendered.
The conclusions can also further professional excellence in the
evaluation community by relating the outcomes of the evaluation to
approaches and practices espoused by other evaluators.

Related Program Evaluation Standards:
A11 Impartial Reporting: Reporting procedures should guard against
distortion caused by personal feelings and biases of any party to the
evaluation, so that evaluation reports fairly reflect the evaluation
findings.
P5 Complete and Fair Assessment: The evaluation should be complete and
fair in its examination and recording of strengths and weaknesses of
the program being evaluated, so that strengths can be built upon and
problem areas addressed.
A10 Justified Conclusions: The conclusions reached in an evaluation
should be explicitly justified, so that stakeholders can assess them.
U4 Values Identification: The perspectives, procedures, and rationale
used to interpret the findings should be carefully described, so that
the bases for value judgments are clear.

Description:
Uses the conclusions to recommend follow-up actions: continuing the
project as is, improving it, or eliminating it.

Quality Criteria:
When appropriate, recommendations should be included, either for
current stakeholders or for others undertaking projects similar in
goals, focus, and scope that are designed to serve similar participant
groups in similar contexts. Care must be taken to base the
recommendations solely on robust findings and not on anecdotal
evidence, no matter how persuasive.

Related Program Evaluation Standards:
P5 Complete and Fair Assessment: The evaluation should be complete and
fair in its examination and recording of strengths and weaknesses of
the program being evaluated, so that strengths can be built upon and
problem areas addressed.

Description:
Describes steps taken to get stakeholder feedback on the report. Also
describes how the report will be used and disseminated.

Quality Criteria:
On sharing the report with stakeholders: a draft of the report should
be reviewed by key stakeholders so that the findings can be discussed,
lingering issues can be resolved, and the stage can be set for the
next steps to be taken, given the successes and failures that the
results have revealed.
After the draft of the evaluation report has been reviewed, all
stakeholders and others with legal rights to the results should
receive access to the final version of the report. The evaluator's
judgments and recommendations need to be perceived as clearly and
frankly presented, backed by descriptions of the information and
methods used to reach them. Such disclosures are essential if the
evaluation is to be defensible.
The report needs to be written in a responsive style and format.
Different reports may need to be provided for different audiences that
have different needs and perspectives (e.g., a longer, more technical
report for the funder and a shorter report for lay audiences such as
the parents of student participants).

Related Program Evaluation Standards:
A11 Impartial Reporting: Reporting procedures should guard against
distortion caused by personal feelings and biases of any party to the
evaluation, so that evaluation reports fairly reflect the evaluation
findings.
U7 Evaluation Impact: Evaluations should be planned, conducted, and
reported in ways that encourage follow-through by stakeholders, so
that the likelihood that the evaluation will be used is increased.
U6 Report Timeliness and Dissemination: Significant interim findings
and evaluation reports should be disseminated to intended users, so
that they can be used in a timely fashion.
P6 Disclosure of Findings: The formal parties to an evaluation should
ensure that the full set of evaluation findings, along with pertinent
limitations, are made accessible to the persons affected by the
evaluation and to any others with expressed legal rights to receive
the results.