Step 2: How will implementation of the treatment be monitored (P, R)?
(P) = plan example
(R) = report example
Data on how well the intervention is implemented can be used to address three research questions of central interest to many stakeholders:
- Was the intervention implemented as planned?
- Which aspects of the intervention are more difficult to implement than others?
- How does the effectiveness of the intervention vary with different levels of implementation and different conditions and practices?
Addressing the first of these questions (Was the intervention implemented as planned?) requires the evaluators to monitor implementation fidelity. It is the responsibility of the evaluator to document the extent to which the participants assigned to the intervention actually carried out the intervention as designed. Was the intervention implemented in a manner consistent with what the developers intended for maximizing the effects on participants? This is an important question to address when trying to interpret the effects of an intervention, particularly when the measured effects turn out to be weaker than expected. If the results of an evaluation are high stakes and may result in the costly purchase or cancellation of the program, consumers of the evaluation are likely to raise questions about whether an ineffective intervention was properly implemented. Moreover, there is no point investing resources to examine the effects of an intervention if it was never really implemented in any meaningful way.
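If fidelity is rated with a simple component checklist, for example, the ratings can be rolled up into a per-classroom fidelity index and into component-level averages. The sketch below is only illustrative; the component names, ratings, and scoring rule are hypothetical and would need to be defined with the intervention's developers.

```python
# Hypothetical example: summarizing implementation fidelity from an
# observation checklist. Component names and ratings are invented.

# Each observed classroom gets a 0/1 rating for whether each intended
# component of the intervention was delivered as designed.
intended_components = ["used_software_daily", "followed_lesson_sequence",
                       "reviewed_student_reports", "met_time_on_task_target"]

observations = {
    "classroom_01": {"used_software_daily": 1, "followed_lesson_sequence": 1,
                     "reviewed_student_reports": 0, "met_time_on_task_target": 1},
    "classroom_02": {"used_software_daily": 1, "followed_lesson_sequence": 0,
                     "reviewed_student_reports": 0, "met_time_on_task_target": 0},
}

def fidelity_score(ratings, components):
    """Proportion of intended components observed to be implemented."""
    return sum(ratings.get(c, 0) for c in components) / len(components)

# Classroom-level fidelity index (speaks to whether the intervention
# was implemented as planned).
for classroom, ratings in observations.items():
    print(f"{classroom}: fidelity = {fidelity_score(ratings, intended_components):.2f}")

# Component-level averages show which aspects are hardest to implement,
# which speaks to the second question discussed below.
for c in intended_components:
    avg = sum(obs[c] for obs in observations.values()) / len(observations)
    print(f"{c}: implemented in {avg:.0%} of observed classrooms")
```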
The second question (Which aspects of the intervention are more difficult to implement than others?) emphasizes the potential use of implementation data by the developers, users, and potential users of the intervention. For developers, formative data on which aspects of the intervention are proving difficult to implement may be used to improve the intervention's supporting documentation or training or to redesign certain aspects of the intervention. For institutions contemplating an investment in the purchase of a program, implementation data can provide useful information about possible difficulties that they will want to be aware of and the supports that instructors and students may need.
The third question (How does the effectiveness of the intervention vary with different levels of implementation?) highlights the fact that stakeholders may be interested not only in whether an intervention is effective but also in how robust its effects are at different levels of implementation. Is the intervention effective only under ideal conditions of implementation (which may be an unrealistic expectation for most settings), or is it effective even when implementation is not ideal? For example, in a study of a new instructional software tool, administrators and instructors who face constraints on when they can fit the tool into the school day may be interested in whether students who receive only half of the intended time-on-task with the tool can still experience meaningful gains in achievement. In addition, administrators may be interested to learn whether the software tool can be implemented effectively by teachers with varying degrees of subject matter expertise or classroom management skills.
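One way to explore this question is to relate the outcome measure to an implementation or dosage measure within the treatment group. The sketch below is a hypothetical illustration (the variable names, dosage coding, and cut points are invented), and because level of implementation is not randomly assigned, such an analysis is descriptive rather than causal.

```python
# Hypothetical example: relating outcomes to level of implementation.
# Column names, dosage values, and the model are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "posttest":  [72, 80, 85, 70, 78, 90, 65, 74, 68, 71],
    "pretest":   [70, 75, 78, 68, 74, 82, 66, 72, 67, 70],
    "treatment": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    # Fraction of intended time-on-task actually received (coded 0 for
    # comparison students here purely for illustration).
    "dosage":    [0.4, 0.6, 1.0, 0.5, 0.8, 1.0, 0.0, 0.0, 0.0, 0.0],
})

# Within the treatment group, compare average outcomes for students who
# received half or less of the intended dose versus more than half.
treated = df[df["treatment"] == 1].copy()
treated["dosage_level"] = pd.cut(treated["dosage"], bins=[0, 0.5, 1.0],
                                 labels=["half_or_less", "more_than_half"])
print(treated.groupby("dosage_level", observed=True)["posttest"].mean())

# A simple regression of the outcome on pretest and dosage within the
# treatment group; descriptive only, since dosage is not randomly assigned.
model = smf.ols("posttest ~ pretest + dosage", data=treated).fit()
print(model.params)
```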
Information on how well an intervention was implemented, as well as on the instructional practices in the comparison group, can be collected through a variety of data collection instruments, including surveys, observations, instructor logs, interviews, and case studies. In addition, for interventions that make use of computer technology,
objective measures of the duration and frequency of use may be available to
evaluators through records stored by the technology. The aspects of implementation
and instruction that are critical to measure can be identified through interviews with
developers and reviews of intervention documentation, as well as through a review of
training materials associated with the intervention.
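For technology-based interventions, such stored records can often be reduced to simple duration and frequency measures per student and compared against the intended dose. The sketch below assumes an invented log format and an assumed 90-minute weekly target; both are hypothetical.

```python
# Hypothetical example: deriving objective use measures from software logs.
# The log format, field names, and intended dose are invented.
import pandas as pd

logs = pd.DataFrame({
    "student_id": ["s01", "s01", "s02", "s02", "s02"],
    "session_start": pd.to_datetime(["2024-03-04 09:00", "2024-03-06 09:05",
                                     "2024-03-04 09:00", "2024-03-05 13:10",
                                     "2024-03-07 09:02"]),
    "session_end":   pd.to_datetime(["2024-03-04 09:35", "2024-03-06 09:40",
                                     "2024-03-04 09:20", "2024-03-05 13:55",
                                     "2024-03-07 09:30"]),
})

# Session length in minutes, then total duration and frequency per student.
logs["minutes"] = (logs["session_end"] - logs["session_start"]).dt.total_seconds() / 60
usage = logs.groupby("student_id").agg(
    total_minutes=("minutes", "sum"),
    sessions=("minutes", "size"),
)

# Compare observed use against the intended dose (90 minutes per week is an
# assumed target for this illustration).
usage["share_of_intended"] = usage["total_minutes"] / 90
print(usage)
```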