Alignment Table for Report Components
Design
The alignment table for sound project evaluation reports
can be viewed either as a whole, displaying all components,
or as six separate tables corresponding to report
components: (1) Executive Summary, (2) Project Description,
(3) Evaluation Overview, (4) Design, (5) Analysis Process,
and (6) Results & Recommendations. See the
alignment table overview for a
general description of what appears in the alignment
tables.
The glossary and
quality criteria entries for
report components are also available on their own.
Component: Design

Glossary Entry:
Describes strategies and procedures for gathering and analyzing data, as
well as procedures employed for the evaluation's periodic review.

For each element of the Design component, the entries below give the
glossary entry, the quality criteria, and the related program evaluation
standards.
Glossary Entry:
Specifies:
- formative or summative approaches that were taken
- types of data that were needed (e.g., quantitative, qualitative,
  pre-post, longitudinal)
- sources of the data (e.g., participants, documents)
Quality Criteria:
The report should describe the selected methodological approaches and
how, within the constraints of time and cost, they yielded data that
help answer the evaluation questions. The data gathered need to be
aligned with the goals that the project is intended to achieve. The data
can vary, however, in how directly they indicate the attainment of
project goals. Most projects are more likely to show effects on proximal
outcomes than on distal outcomes that are either logically or temporally
remote. (For example, suppose a project is designed to improve high
school students' motivation to learn science. A proximal measure of the
project's success would be student self-reports of interest in science
content gathered immediately before and after the project. A distal
measure would be whether the students decide to study science in
college.) Furthermore, the approaches should be grounded in respected
methodological frameworks and best-practice literature, which increases
the chance of identifying the project features and contextual factors
that are likely to make a difference in project operations and outcomes.

Methodological approaches that look narrowly at project inputs and
solely examine the results of quantitative outcome measures may not
capture all the noteworthy influences, impacts, and outcomes of a
complex project. Qualitative and mixed-method approaches offer
alternative ways of detecting impacts, especially unanticipated ones. To
corroborate evaluation findings and to provide multiple perspectives, it
is highly desirable that evaluators measure multiple outcomes and gather
data from multiple sources (triangulation).

Important constraints on the evaluation design (e.g., lack of random
assignment of respondents to treatment and comparison groups, or lack of
data on long-term effects) should also be stated at this point in the
report.
Related Program Evaluation Standards:
U3 Information Scope and Selection: Information collected should be
broadly selected to address pertinent questions about the project and be
responsive to the needs and interests of clients and other specified
stakeholders.
F3 Cost Effectiveness: The evaluation should be efficient and produce
information of sufficient value, so that the resources expended can be
justified.
Glossary Entry:
Describes the sources of information used in the evaluation, which may
include:
- records and archival documents that contain relevant information
- the entire population of participants in the project, if data were
  collected on all of them
- the sample or samples of participants or other informants that were
  observed or solicited for information, in order to maximize the
  generalizability of the findings to the population from which the
  sample or samples were drawn
Quality Criteria:
The sources of information used in a project evaluation should be
described in enough detail to show that the information is sufficient to
meet the evaluation's purposes.

The groups selected to provide information (e.g., administrators,
teachers, students, parents) should be described. If a sample was used,
the description should include:
- the sample selection criteria (e.g., the lowest achievers, the best
  instructors)
- the process by which the sample was selected (e.g., random, purposive)
- the sample size
- whether or not any comparison or control groups were included
- whether and how participants were assigned to treatment and comparison
  groups

The extent to which the sample is representative of the entire
population should be indicated. Information about the sample should be
of sufficient depth to help reviewers and other users of the report
judge its representativeness and appropriateness given the scope,
context, and resources of the evaluation.
Related Program Evaluation Standards:
A3 Described Purposes and Procedures: The purposes and procedures of the
evaluation should be monitored and described in enough detail, so that
they can be identified and assessed.
A4 Defensible Information Sources: The sources of information used in a
program evaluation should be described in enough detail, so that the
adequacy of the information can be assessed.
Glossary Entry:
Describes the design and content of the instruments used to collect and
analyze data (e.g., survey questionnaires, interview protocols,
observation forms, learning assessments).
Quality Criteria:
The report should describe the nature of the various instruments and how
they were used to gather the needed information. Instruments should be
used as intended in order for the data produced to be reliable and
valid.
Related Program Evaluation Standards:
A3 Described Purposes and Procedures: The purposes and procedures of the
evaluation should be monitored and described in enough detail, so that
they can be identified and assessed.
Glossary Entry:
Describes how the data and other information have been gathered to meet
the criteria of validity and reliability. Also describes the frequency,
order, and duration of the various data collection activities.
Quality Criteria:
The report should describe how and when data were obtained from the
various sources and how the sources provide corroboration and multiple
perspectives. A description of the data collection and its intent
provides a context for judging and interpreting evaluation findings and
recommendations. The description of the data collection can also inform
the conduct of similar evaluations in other settings.

Information about the timing of data collection is important because the
project's maturity needs to be considered when drawing conclusions about
the project's strengths and weaknesses. For example, a survey
questionnaire administered to participants halfway through the project
is likely to yield different results than one administered at the
completion of the project.

Hence, this section should describe:
- how and when an appropriately broad range of data were collected
- what steps were taken to get essential data from the sample and other
  targeted sources (this might include a human subjects review)
- how the data have met the criteria of validity
- how reliability was achieved through the systematic training of data
  collectors and consistent data collection and scoring procedures
- how the data collection procedures limited the burden of time and
  effort placed on project participants

Different models of evaluation present different data collection needs.
For example, a formative evaluation requires that ongoing project
activities be assessed at points in time that enable project developers
to refine the project's components.
Related Program Evaluation Standards:
F1 Practical Procedures: The evaluation procedures should be practical,
to keep disruption to a minimum while needed information is obtained.
A3 Described Purposes and Procedures: The purposes and procedures of the
evaluation should be monitored and described in enough detail, so that
they can be identified and assessed.
A5 Valid Information: The information-gathering procedures should be
chosen or developed and then implemented so that they will assure that
the interpretation arrived at is valid for the intended use.
A6 Reliable Information: The information-gathering procedures should be
chosen or developed and then implemented so that they will assure that
the information obtained is sufficiently reliable for the intended use.
Glossary Entry:
Describes procedures that were undertaken to review the quality of the
evaluation being conducted.
Quality Criteria:
Evaluation purposes and procedures should be reviewed periodically,
particularly during longitudinal evaluations, to determine whether the
evaluation design, instruments, and procedures are adequately capturing
the project's implementation, impacts, and outcomes.
Related Program Evaluation Standards:
A12 Meta-Evaluation: The evaluation itself should be formatively and
summatively evaluated against standards, so that its conduct is
appropriately guided and, on completion, stakeholders can closely
examine its strengths and weaknesses.