This case study provides an opportunity to apply the strategies you have just learned.
Read the following case study. Then answer the questions about it in the spaces provided. Click "View the Answers of Your Peers" if you want to compare your answers with those of other users of OERL. Click "View the Expert's Answers" if you want to compare your answers with those of experts.
Keep in mind that because evaluations are complex tasks, the expert's analyses are not the only plausible ones.
In contrast to the previously discussed scenario, which presents a fairly ideal project context for exploring an effect, the following case presents more challenges for the evaluation design.
Background
A large public university has eight science departments (biology, chemistry, etc.). Each department has an introductory course that all students must take first. Within a department, the sections of this course share a common scope, sequence, and learning objectives, though individual instructors are given latitude in instructional methods. All instructors are required to administer a year-end, university-developed satisfaction questionnaire to their students. It consists of scaled items designed to measure how interested the students were in the material and how satisfied they were with the instructor's teaching. The results are archived in a database for seven years.
All instructors are also required to administer a final exam, but they have freedom over the exam's format and content. They are also free to develop additional assessments during the course. As a result, there is considerable diversity in assessment procedures across courses. For example, some instructors grade solely on midterm and final exam scores; others grade on weekly or biweekly problem sets. Some instructors' tests are composed solely of multiple-choice questions; others require constructed responses.