
CCLI Annotated Report Excerpts


Design

The report excerpts below are accompanied by annotations identifying how each excerpt represents the Design Criteria; each annotation precedes the passage it describes.

Excerpt 1 [Teaching Introductory Combinatorics by Guided Group Discovery, Dartmouth College]

Information Sources & Sampling:
Describes the data collection sources

We compared learning in the two methods by documenting student accomplishment and experience at two institutions (one large, one small) where an instructor first taught combinatorics by the lecture method and then, in a subsequent term, by guided discovery. One instructor was female, the other male. Piloting the course at different kinds of institutions allowed us to gauge the effectiveness of guided discovery with students of different interests and backgrounds. Students in the Institution A courses were all undergraduates; two-thirds in each iteration were math majors or prospective math majors. In the lecture course at Institution B, 93% were math majors; two of those were graduate students. In the Institution B guided discovery course, two-thirds were graduate students, although slightly less than half were mathematics majors. While the heterogeneous student populations provided a good test of the method's flexibility, it must be remembered that the student outcomes reported here also reflect the math preparation students brought to the course. The only valid comparisons are within institutions, not between them.

Explains the rationale for selecting data sources

Because the population of combinatorics students is small, and no single measure will yield unequivocal results, we employed multiple measures, both quantitative and qualitative, to substantiate our conclusions. To judge the extent and depth of students' understanding of the material, students participated in friendly oral examinations about combinatorics with an outside combinatorialist. These conversations with a mathematician in the field allowed us to gauge student learning with an accuracy, and at a depth, rarely afforded to experimental curricula. We also asked students to assess their own learning using a self-assessment instrument. Because learning mathematics ought to include improving thinking and problem-solving skills and gaining a deeper understanding of mathematics as an activity, we used a pre-post survey to measure change in students' attitudes about mathematics.

Describes the use of qualitative measures

Student interviews and independent classroom observation by the evaluator provided data 'to identify those pedagogical strategies that promote learning.' In-depth interviews with students at Institution A gave us the students' perspective on the respective pedagogies and expanded our understanding of attitude changes (students at Institution B completed an abbreviated form of this interview by email). Finally, in-class observation of courses at Institution A, including the 'alpha' iteration of guided discovery not included in the comparative design, provided an independent record of instruction strategies and classroom activity to contextualize student data.

 

Excerpt 2 [Reinventing Introductory Geology Courses for Majors and Non-Majors Using Peer Instruction and Other Inquiry-Based Learning Strategies, University of Akron]

Methodological Approach:
Describes the use of a pre-post design and comparison groups

Two faculty (Professor X, Professor Y) have exclusively used inquiry-based and active learning strategies in their introductory geology courses. (These will henceforth be referred to as IBL classes.) Their hypothesis was that these methods would promote the development of higher-order thinking skills in their students. Reasoning skills were measured for 741 students using the Group Assessment of Logical Thinking instrument (GALT; Roadrangka et al., 1982, 1983) as a pre- and post-test in ten sections of general education introductory geoscience courses titled Earth Science or Environmental Geology with an audience of non-majors at a large Midwestern university. More than 90% of students gave consent for us to collect data on their performance. Students were assigned to collaborative, in-class groups for the semester based on their initial GALT scores. The five 160-student sections that defined the test population (n = 465 completed both pre- and post-test) were taught by two Earth Science instructors (Instructor 1 had one class of 82 students; Instructor 2 taught four classes of 98, 89, 108, and 88 students; all students completed pre- and post-tests). The five sections that defined the control population (n = 276) were taught by four instructors. One control section was from a 35-student Earth Science class (n = 26 took both pre- and post-test). Two sections were from 90-student Earth Science classes taught by two different instructors (n = 50, 51), and two sections (one Earth Science, one Environmental Geology) were from 160-student classes taught by the same instructor (n = 77, 72). All students in the control groups took both pre- and post-tests. The majority (~70%) of students in each class were freshmen.
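
To make the pre-post comparison concrete, the sketch below shows one common way to analyze data from a design like this: compute each student's gain (post-test minus pre-test) and compare mean gains between the test and control populations. This is an illustration only; the variable names, the placeholder scores, and the choice of Welch's t-test are assumptions, not the report's actual analysis.

import numpy as np
from scipy import stats

# Hypothetical GALT scores: one (pre, post) pair per student.
ibl_pre = np.array([5, 7, 4, 8, 6])        # placeholder values; n = 465 in the report
ibl_post = np.array([7, 9, 5, 10, 8])
control_pre = np.array([6, 5, 7, 4, 8])    # placeholder values; n = 276 in the report
control_post = np.array([6, 6, 7, 5, 8])

# Per-student gain scores, then a two-sample comparison of mean gains.
ibl_gain = ibl_post - ibl_pre
control_gain = control_post - control_pre
t, p = stats.ttest_ind(ibl_gain, control_gain, equal_var=False)  # Welch's t-test
print(f"mean gain (IBL) = {ibl_gain.mean():.2f}, mean gain (control) = {control_gain.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.3f}")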

 

Excerpt 3 [How students think: Implications for learning in introductory Geoscience Courses, University of Akron]

Instruments:
Describes the instrument's validity and reliability

The GALT is a valid and reliable instrument for measuring logical thinking in student populations from sixth grade through college and consistently yields higher scores with increasing grade level (Roadrangka et al., 1982; Bitner, 1991; Mattheis et al., 1992). The questions used in the GALT were taken from other tests with proven reliability and validity (Roadrangka et al., 1983). A strong correlation (0.80) between GALT results and the use of Piagetian student interview protocols to determine logical thinking ability supports the validity of the instrument (Roadrangka et al., 1983). Furthermore, higher GALT scores correlate with other measures of academic achievement such as course grades, SAT scores, and grade point average (Bunce and Hutchinson, 1993; Nicoll and Francisco, 2001). Students with GALT scores of 0-4 are considered to be concrete operational, scores of 5-7 are interpreted as indicative of transitional learners, and scores of 8-12 are characteristic of abstract operational learners for the tasks tested (Roadrangka et al., 1982). Success on the GALT requires competence in five logical operations: proportional reasoning, controlling variables, combinatorial reasoning, probabilistic reasoning, and correlational reasoning (Roadrangka et al., 1982). The abbreviated form of the GALT contains twelve illustrated questions, a pair for each of the five logical operations listed above and another two that evaluate conservation.
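
The score bands above amount to a simple classification rule, sketched below. The function name and category labels are illustrative, not part of the GALT materials.

def galt_category(score: int) -> str:
    """Map a 0-12 abbreviated-GALT score to the developmental
    category described by Roadrangka et al. (1982)."""
    if not 0 <= score <= 12:
        raise ValueError("abbreviated GALT scores range from 0 to 12")
    if score <= 4:
        return "concrete operational"
    if score <= 7:
        return "transitional"
    return "abstract operational"

# One example score from each band.
assert galt_category(3) == "concrete operational"
assert galt_category(6) == "transitional"
assert galt_category(10) == "abstract operational"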

 

Excerpt 4 [Collaborative Research: Developing and Implementing Just-in-Time-Teaching (JiTT) Techniques in the Principles of Economics Course, North Carolina A&T State University]

Methodological Approach:
Summarizes the methodological approach and how the intervention was implemented

The assessment analysis covers students who were enrolled in two sections of the Principles of Macroeconomics course during the fall 2002 semester. Students in each section were randomly divided into two groups (A and B) at the start of the semester, so that each group had approximately the same number of students. Prior to the first exam, students in Group A completed four JiTT assignments, while those in Group B completed alternative assignments (two-page Economic Issues articles that asked students to summarize and comment on a macroeconomics-related current-event issue). Following the first exam the groups switched assignments, with Group B completing three JiTTs and Group A completing Economic Issues articles prior to the second exam. Following the second exam the groups switched back to their original assignments, with Group A completing three additional JiTTs prior to the third exam. Overall, JiTT assignments accounted for 5% of students' course grades, and completion rates were quite high, around 80-90%.
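
The crossover schedule can be summarized compactly; the sketch below is an illustrative restatement of the design described above, not code from the study.

# Assignment each group completed in the run-up to each exam (from the text above).
schedule = {
    "exam 1": {"Group A": "4 JiTT assignments", "Group B": "Economic Issues articles"},
    "exam 2": {"Group A": "Economic Issues articles", "Group B": "3 JiTT assignments"},
    "exam 3": {"Group A": "3 JiTT assignments", "Group B": "Economic Issues articles"},
}
for exam, groups in schedule.items():
    print(f"{exam}: A -> {groups['Group A']}; B -> {groups['Group B']}")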

Statement of testable hypotheses

Analysis of Learning Outcomes

Our analysis of learning outcomes focuses on the relative exam scores of students from the JiTT and non-JiTT groups on each of the three midterm exams; each exam included one or two questions that were directly related to JiTT questions assigned since the previous exam (or the start of the course, in the case of exam #1). The research hypothesis is that students who completed the JiTT assignments during the period leading up to a particular exam will perform better on that exam, and in particular on the JiTT-related questions included on that exam, than students who were in the non-JiTT group for that period; the corresponding null hypothesis is that the two groups' scores do not differ. To identify and test the effects of JiTT on student learning, we also collected data on a variety of student characteristics (age, gender, credit hours, SAT scores, GPAs).
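
One standard way to test such a hypothesis while controlling for the listed student characteristics is a regression of exam score on a JiTT-group indicator plus the covariates. The sketch below shows that approach; the file name, column names, and the choice of ordinary least squares are assumptions for illustration, not the report's actual model.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, one file per exam.
df = pd.read_csv("exam1_scores.csv")

# `jitt` is 1 if the student was in the JiTT group for the period before this exam.
model = smf.ols("exam_score ~ jitt + age + gender + credit_hours + sat + gpa", data=df)
result = model.fit()
print(result.summary())  # the coefficient on `jitt` estimates the JiTT effect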