In investigating the intervention's impact on student learning, the instructors' implementations are too idiosyncratic for the effect to be measured in any systematic way. The evaluators will, however, be able to report for each instructor whether better learning outcomes are observed, relating that instructor's implementation data to the pre- and post-assessment data.

For the comparison-group study of the intervention's impact on student interest, there is potential to associate Learning by Doing with increased student learning and interest; without random assignment, however, such an association cannot be called an effect.

To strengthen the evidence of an association, the evaluators should examine the archived questionnaires from all three intervention years and from the year preceding the intervention. Different patterns of response by the intervention participants would carry different implications. For example, if an instructor's students in the intervention years were more interested than the students in that instructor's last pre-intervention year, this would be convincing evidence of an association. Increasing interest each year would suggest a maturing, sustained effect. The converse pattern of decreasing interest would suggest that the intervention is becoming less effective, perhaps because the instructor's initial enthusiasm for it has waned. A flattening of the interest level could be either good or bad, depending on whether the level maintained is higher than that of the non-intervention group. Inconsistent fluctuations of interest within and across intervention classes would suggest that other factors are at work, such as changes in student background characteristics.
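To make this pattern analysis concrete, the sketch below shows one hypothetical way to label an instructor's year-over-year mean interest scores with the trend categories described above. The function name, the tolerance threshold, and the sample scores are all illustrative assumptions, not part of the evaluation plan.

```python
# Hypothetical sketch: classifying an instructor's year-over-year mean
# interest scores into the trend patterns discussed above. All names,
# data, and thresholds are illustrative assumptions.

def classify_interest_trend(intervention_means, comparison_mean, tolerance=0.1):
    """Label the pattern of mean interest scores across intervention years.

    intervention_means -- mean score for each intervention year, in order
    comparison_mean    -- mean score for the non-intervention comparison group
    tolerance          -- a year-to-year change smaller than this counts as flat
    """
    if len(intervention_means) < 2:
        return "insufficient years to classify a trend"

    # Year-over-year changes between consecutive intervention years.
    deltas = [b - a for a, b in zip(intervention_means, intervention_means[1:])]

    if all(d > tolerance for d in deltas):
        return "increasing: suggests a maturing, sustained effect"
    if all(d < -tolerance for d in deltas):
        return "decreasing: the intervention may be losing effectiveness"
    if all(abs(d) <= tolerance for d in deltas):
        # A flat level is reassuring only if it stays above the
        # comparison group's level.
        if intervention_means[-1] > comparison_mean:
            return "flat above the comparison group: maintained gain"
        return "flat at or below the comparison group: no apparent gain"
    return "fluctuating: other factors (e.g., student background) may be at work"


# Illustrative use with made-up means on a 1-to-5 interest scale.
print(classify_interest_trend([3.6, 3.9, 4.2], comparison_mean=3.2))
```

A classification like this would of course only summarize the questionnaire data; the evaluators would still need to interpret each pattern against the implementation data for that instructor.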
