Reports: Curriculum Development

Report Excerpts

Excerpt 3
[Oregon State University]

Data Collection Procedures & Schedule

In 1992 a pre-institute survey was conducted to determine
the best approach for preparing the teachers to use
the material and calculators.
At the end of each workshop, the teachers filled
out lengthy questionnaires developed at Oregon State
University and the Math Learning Center.

Excerpt 4
[Oregon State University]

Methodological Approach: Describes how effectiveness was judged

In evaluating students' achievement, we have used
both common exams and standardized test measures (AP
exams). With respect to student attitudes, we have
used some traditional student course evaluation measures
as well as some open-response forms:

Instruments

- College test sites administer the MAA's Calculus
Readiness Exam at the beginning of the academic
year. This provides us with a baseline measure of
students' preparedness for calculus across the many
different institutions represented by the test
sites.
- Throughout the academic year, the test sites were
provided common questions to be used on quizzes
and exams. These questions were designed to examine
specifically students' use of numeric, graphical,
and symbolic representations of functions in their
approach to problems involving limits, derivatives,
and integrals.
- Near the end of the academic year, test sites
will administer a test version of the AP exam. This
exam will be administered with no technology available
to the student. This will provide some indication
of how students fare on traditional standardized
measures of calculus achievement under traditional
circumstances.

Excerpt 6
[Purdue University]

Information Sources & Sampling: Describes sample selection

This study explored the use of the telephone and
of electronic mail to initiate advisor contact with
students. Out of a freshman class of approximately
1,710, all students enrolled in an introductory engineering
lecture class as well as a beginning computer class
were selected. All students who were enrolled in some
kind of special orientation class, such as those offered
for women, minorities, honors and undecided students,
were excluded because of the intervention they were
already receiving. This resulted in a pool of 910
students.

Methodological Approach: Specifies how effectiveness was judged

Six experimental groups and a control group were set
up. The students were assigned an expected grade point
average based on high school background and test scores.
The expected grade point average was used to rank
the students, and they were assigned to groups in
such a way as to randomize this variable. Some students
did not have an expected grade point average due to
missing data. These students were randomly assigned
to one of the groups.
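
The report does not give the exact assignment procedure, but the description (rank students by expected grade point average, then distribute them so the variable is balanced across groups) corresponds to a simple ranked, or blocked, randomization. A minimal sketch, assuming hypothetical student records and group names, is shown below:

```python
import random

# Hypothetical records: (student_id, expected_gpa); None marks missing data.
# Illustrative sketch of ranked assignment, not the study's actual code.
students = [("s1", 3.4), ("s2", 2.9), ("s3", None), ("s4", 3.8), ("s5", 3.1),
            ("s6", 2.5), ("s7", None), ("s8", 3.6), ("s9", 2.7), ("s10", 3.0)]
groups = ["control", "exp1", "exp2", "exp3", "exp4", "exp5", "exp6"]

ranked = sorted((s for s in students if s[1] is not None),
                key=lambda s: s[1], reverse=True)
missing = [s for s in students if s[1] is None]

assignment = {}
# Walk down the ranking and deal each successive block of students to the
# groups in a shuffled order, so expected GPA is balanced across groups.
for i in range(0, len(ranked), len(groups)):
    block = ranked[i:i + len(groups)]
    order = random.sample(groups, len(block))
    for (student_id, _), group in zip(block, order):
        assignment[student_id] = group

# Students with no expected GPA are assigned purely at random.
for student_id, _ in missing:
    assignment[student_id] = random.choice(groups)

print(assignment)
```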

Data Collection Procedures & Schedule

The control group received no special contact from
advisors. Among the other six groups, half were assigned to be contacted by a professional staff person and the other half by a senior engineering student.
An attempt was made to contact all students twice
through the course of the first semester of their
college career. The first contact was made in weeks
four through eight and the second in weeks ten through
twelve.

Excerpt 7
[Inter-American University of Puerto Rico, San Juan]

Methodological Approach: Describes external evaluator; describes how effectiveness was judged

The evaluation of the Laboratory-Driven Instruction in Chemistry project for the three years of the program was conducted at the Metropolitan Campus of the Inter-American University of Puerto Rico (IAU). The project's effectiveness was assessed through a comprehensive and continuous process that included formative and summative evaluation. Qualitative and quantitative data were gathered from participating students using various instruments at different times during each semester. The emphasis of the evaluation was to assess the impact of implementing this approach on students' learning and attitudes towards chemistry.

Excerpt 8
[Duke University]

Instruments: Describes test development
Methodological Approach: Describes underlying hypothesis and how effectiveness was judged

A five-question test of problem solving was developed
and administered to all the Calculus II students (project
class and traditional class). The test items were
selected with contexts from various fieldsbiology,
chemistry, economics as well as mathematics. This
was done to address the goal of using mathematics
to investigate real world questions. As many fields
as possible were selected so that the test would not
be biased in favor of a group of students with any
one academic background. It was hypothesized that
Project Calc students would do better than traditional
students on this test. The Project Calc course was
designed to provide students with many more opportunities
to solve problems than they would have in the traditional
course. A significant difference in favor of the Project Calc students would be a necessary but not sufficient condition for considering the course successful.

Data Collection Procedures & Schedule: Describes scoring procedures

The students in both groups took the test at the
same time in the same location. The students were
given two hours. Each test was graded by two independent
graders who were blind to group membership. All exams
were mixed and randomly ordered prior to grading.
Interrater agreement was high (.94). Mean scores for
groups were computed by averaging the scores of the
two graders.
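
The excerpt does not state which agreement statistic the .94 represents. As a rough sketch with hypothetical scores, one common choice is the Pearson correlation between the two graders' scores, with each exam's final score taken as the average of the two grades:

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

# Hypothetical scores from the two independent, blinded graders (one entry per exam).
grader_a = [8, 12, 15, 9, 14, 11, 7, 13]
grader_b = [9, 12, 14, 9, 15, 10, 7, 13]

interrater = correlation(grader_a, grader_b)                    # agreement between graders
exam_scores = [mean(pair) for pair in zip(grader_a, grader_b)]  # average of the two grades
group_mean = mean(exam_scores)                                  # mean score for the group

print(f"Interrater agreement (Pearson r): {interrater:.2f}")
print(f"Group mean score: {group_mean:.2f}")
```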

Excerpt 9
[Five-College Consortium]

Methodological Approach

The evaluation during the first three years of the
Calculus in Context project was primarily process-oriented.
It was important to identify strengths and weaknesses
in the implementation of the project so that corrective
action might be undertaken. Given this focus, the
evaluation of the project was carried out by obtaining
feedback from the students through questionnaires
and interviews.

Describes use of multiple methods

The purpose of the questionnaires and interviews
was twofold: The first purpose was to obtain information regarding the students' attitudes towards mathematics and their attitudes towards the approach to teaching
calculus taken in the course. The second purpose was
to identify weaknesses and strengths in the implementation
of the project so that the strengths could be enhanced
and the weaknesses overcome.

Instruments: Describes instrument components

The questionnaires and interviews included demographic,
attitude, and evaluative questions. Demographic questions
dealt with the students' backgrounds, such as their
preparation in calculus, familiarity with computers,
their reasons for taking the course, and their prospective
majors. Attitude questions dealt with students' interest
in mathematics, their enjoyment of the subject, their
anxiety levels, and their confidence in their ability
to solve math problems. Evaluative questions dealt
with the adequacy of the training they received in
using computers, and the effectiveness of the various components of instruction, such as lectures, assignments, lab work, and the textbook. To gain more in-depth
information than could be obtained through questionnaires,
interviews were also conducted, with the interview
comprising the same types of questions as the questionnaire.
Questionnaires were administered at the beginning
and end of the semester, while interviews were conducted
toward the end of the semester.

Excerpt 10
[Oregon State University]

Data Collection Procedures & Schedule: Describes consistent application of data collection procedure

The intent of the interview process was to use open-ended questions in such a way that the teachers' responses were not based on an a priori set of categories, nor shaped by the biases of the interviewer. To ensure consistency among the teachers, the interviewer directly followed the interview protocol and did not delve more deeply into the teachers' responses. (It is possible that no matter how carefully the interview items were asked, potential biases may have been detected in the tone of the interviewer's voice, thus producing a response effect.)

Excerpt 11
[Inter-American University of Puerto Rico, San Juan]

Data Collection Procedures & Schedule: Describes multiple data collection methods

The data gathered through the reflective diaries,
the individual interviews, and the open-ended question
of the Final Evaluation of the Course Questionnaire
were qualitative. Quantitative data were gathered with
the Attitude Towards Chemistry Questionnaire (administered
at the beginning and the end of each semester) and
the Final Evaluation of the Course Questionnaire.

Excerpt 12
[Rensselaer Polytechnic Institute]

Methodological Approach

After dividing volunteers into two groups matched
for reasoning ability by a pre-test, we offered
the students in the experimental group an hour-long
course in the use of the software, without presenting
to them any content. The purpose of this training
was to ensure that the hour they would have later
for instruction would not be compromised by a
lack of familiarity with the mechanics of using
the interface, which was in some ways unlike the
other programs used in the course. We then gave each group simultaneous instruction: the experimental group viewed the interactive software, while the control group received instruction as normal from their professor.

Data Collection Procedures & Schedule: Describes limitations of small sample

Following the instruction, we gave each participant
a post-test consisting of a logic proof that required
understanding of the concept discussed in the
lessons (again, proof by contradiction). Three of six students in the experimental group received full credit; only one of seven in the control group did so, with another earning partial credit. Relatively low attendance rates complicated the statistical interpretation somewhat, but the difference was reported at a significance of .092, indicating that it was quite unlikely that in-person instruction was better than instruction by the artificially intelligent agent.
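
The excerpt does not say which test produced the .092 value. Purely as an illustration of how a comparison on counts this small might be tested, a one-sided Fisher's exact test (using SciPy) on the reported full-credit counts would look like the sketch below; it will not necessarily reproduce the reported figure, since the study's actual test is unspecified.

```python
from scipy.stats import fisher_exact

# Rows: experimental, control; columns: full credit, not full credit.
# Counts taken from the excerpt (3 of 6 vs. 1 of 7 earning full credit).
table = [[3, 3],
         [1, 6]]

# One-sided test of whether the experimental group's full-credit rate is higher.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"One-sided Fisher exact p-value: {p_value:.3f}")
```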

Excerpt 13
[Capital University]

Overviews different components of the evaluation ("types")

Evaluation Section for the Computational
Science Across the Curriculum NSF Grant
For all course and curricular materials, three
types of evaluation will occur: a) Formative,
to assess the development of the materials; b)
Procedural, to evaluate the implementation of
the materials; and c) Summative, to determine
the effect of the developed materials. A matrix
of evaluation activities appears below. This matrix
includes the evaluation questions, methods of
data collection, timing of evaluation activities,
and the type of evaluation.

Data Collection Procedures & Schedule; Information Sources & Sampling: Specifies data collection method, respondent types, schedule, and evaluation types in relation to the evaluation questions and subquestions

Summary Matrix
of Evaluation for Computational Science
Across the Curriculum
(This information
was adapted from NSF publications 93-152
- revised 2/96 and 97-153.)

Question I: Do the curricular materials fulfill the intended outcomes?

Subquestion | Data Collection Method | Respondents | Schedule | Eval. Type*
A. Will the course materials facilitate students' understanding of concepts and procedures? | 1. Review of course materials | NA | Prior to implementing each course | F
B. Do the courses reflect the interdisciplinary nature of computational science? | 1. Review of course material | NA | Prior to implementing each course | F
 | 2. Questionnaire | Students | At the end of each course | P
C. Do the teaching methods and materials stimulate critical thinking? | 1. Review of course materials | NA | Prior to implementing each course | F
 | 2. Observation | NA | Once during each course | P
 | 3. Portfolio of student work | NA | At the end of each course | S
D. Are students learning to work effectively and solve problems in small groups (peer learning)? | 1. Observation | NA | Once during each course | P
 | 2. Questionnaire | Students | At the end of each course | S

Question II: How does the proposed curriculum affect student learning?

Subquestion | Data Collection Method | Respondents | Schedule | Eval. Type*
A. Does the instructor evoke students' solutions rather than imposing her/his own? | 1. Observation | NA | Once during each course | P
 | 2. Questionnaire | Students | At the end of each course | S
B. What concepts have students learned from their experiences? | 1. Questionnaire | Students | At the end of each course | S
 | 2. Portfolio of student work | NA | At the end of each course | S
C. How do faculty assess their performance and the performance of their students? | 1. Questionnaire | Faculty | At the end of each course | S

Question III: What was the impact of the inquiry-based method and peer learning on student attitudes toward learning and their desire to continue post-graduate studies in computational science?

Subquestion | Data Collection Method | Respondents | Schedule | Eval. Type*
A. How have students' attitudes toward science, math, and computing changed? | 1. Questionnaire | Students | At the end of each course | S
B. What do students plan to do with their education in computational science? | 1. Questionnaire | Students | After completing the minor | S

* Evaluation Type: F = Formative: addresses the development of the project; P = Procedural: addresses how the project is being conducted; S = Summative: addresses the outcome of the project.


Excerpt 14
[Anonymous 9]

Design

University Y has approximately 8,800 students
with 1600 in engineering. There are six undergraduate
programs in engineering, and all except one require
students to take engineering mechanics. Most students
take this course the first semester of their sophomore
year. Generally, to accommodate the demand, three
sections of engineering mechanics per semester
are offered with up to 50 students each. A setup
and installation protocol was developed using
Install Shield® such that the network administrator
at University Y could load the software from a
central location for use in any of approximately
30 campus computer labs. Install Shield® was
also used to allow students to install the software
on their own computers.
. . . [B]efore the student could access the program
on the network or individual PC, the instructor
distributed a file to the students containing
the homework assignments. This file, called a
student data file (SDF), was created using the
instructor version of the treatment and was modified
(i.e., updated to include worked assignments)
each time a student used the software. At University
Y, this file was posted on the instructor's web
page and was downloaded by the students at the
start of the semester.
As students completed their work, the instructor
would periodically require the students to electronically
submit their SDF by placing the file in a secure
network location created by the network administrator
upon installation of the treatment. The instructor
would then use the instructor version to access
the file and monitor student progress and grades
either for an individual or an entire class.

Methodological Approach: Describes the use of control classes
Instruments

The approach for testing the treatment was to
have the same instructor teach one or more experimental
classes and one or more control classes. Both
the experimental and control classes had identical
formats (i.e., same text, coverage, assignments,
grading, lecture format, etc.) except that the
experimental classes fully utilized the treatment
for submission of homework (the "virtual classroom," "practice problems," and "practice test" functions were also available and encouraged),
and the control classes followed a traditional
homework approach (i.e., submission of problems
on paper). Students in both classes were given
similar tests (four per semester) including an
identical fully comprehensive final exam. At the
end of each semester, test scores and final grades
for the experimental and control populations were
compared. Modifications to the treatment were
implemented each semester based on the overall
findings, including student scores, student surveys,
and instructor surveys, and at the end of the
test program overall results were evaluated.
Development of the treatment began at University
Y in 1997; however, the NSF study of the software
began with a pre-experimental group of students
in the summer of 1999. A single class of fifteen
engineering mechanics students used the software
throughout the summer semester. The intent was
to test the network installation protocol, identify
errors in the programming or delivery for the
CD or network versions, and record and respond
to student reactions to the software before the
full experimental system was implemented.

Data Collection Procedures & Schedule

Following the initial trial, the test program
began in Fall 1999 and continued through Spring
2001 with a single instructor teaching sets of
control and experimental classes as shown in Table
1.
Table 1: Treatment Implementation at University Y

Semester | Description | No. of Students
Fall 1999 | Experimental 1 | 32
 | Experimental 2 | 33
 | Control 1 | 43
Spring 2000 | Experimental 3 | 20
Fall 2000 | Experimental 4 | 20
 | Control 2 | 29
Fall 2001 | Experimental 5 | 16
Total | | 193

Each semester, the software was loaded onto the
campus computer system using a Windows NT platform
and was available in all campus computer rooms.
CDs were also available to students who wanted
to use their own computers. Experimental and control
classes were taught as indicated in Table 1, and
every effort was made to produce random populations
(as a function of student ability) in all of the
classes. This effort was later proven successful
by the statistical comparison of entering grade
point averages and pretest scores for each classroom
population.
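
A check of this kind is typically a simple comparison of group means on the baseline measures. A minimal sketch, with hypothetical pretest scores and SciPy's independent-samples t test standing in for whatever comparison the study actually used:

```python
from scipy.stats import ttest_ind

# Hypothetical pretest scores for one experimental and one control class;
# the study's actual data are not given in the excerpt.
experimental = [62, 71, 55, 68, 74, 59, 66, 70]
control = [64, 69, 57, 65, 73, 60, 63, 72]

# Two-sided t test of the class means; a large p-value is consistent with
# the claim that the populations were comparable at entry.
t_stat, p_value = ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```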
On the first day of each semester, the experimental
classes were provided with a brief (15 minute)
explanation of the treatment and were instructed
to retrieve all remaining information about the
process from the treatment Web site (which contained
step-by-step instructions and an interactive demonstration).
On the second day of classes, a pretest was given in both the experimental and control classes, covering
a range of concepts in geometry, trigonometry,
vectors, and calculus in order to assess incoming
ability. By the third day, students were to have:
- downloaded the generic SDF (Student Data File, created by the instructor using the instructor version and containing all assigned homework) from the treatment Web site;
- worked the first set of homework; and
- placed the modified copy of their SDF (which
included grades for the first worked assignment)
in the "homework drop" location set
up by the network administrator. [This network
folder was accessible to each student for placement
of a file, but only to the instructor for copying
or deleting files.]
After the first full week of classes, the experimental group was generally comfortable with using the software for each of its four main
functions. . . . [Control classes submitted homework
on paper, and therefore, a similar initial learning
curve was unnecessary.] The complete homework
submission process for each semester was as follows:
- Enter the Homework section of the treatment.
. . .
- Select a homework problem to complete (only
assigned problems reside in the homework section;
all other text problems are in the practice
section). . . . Up to six variables and six
answers are possible.
- Print each assigned homework problem and work it neatly and carefully on the treatment printout.
. . . [Students could easily print out individual
homework problems or all homework for the semester,
including their randomized values.]
- Submit answers to each problem in the solution
boxes. If answers are incorrect, coaching is provided
and points are deducted. When answers are correct,
the student's score is recorded to the SDF.
- On the last day of class each week, electronically
place the modified SDF (containing grades for
all worked homework) in the "homework drop"
folder (on the campus network) and submit the
assigned problems, completed neatly on the treatment
printout.

Excerpt 15
[University of Minnesota-Twin Cities]

Describes the underlying constructs of a set of assessment items, and describes how the same instructors scored both intervention and comparison students

An item was classified as procedural if students
could answer the item primarily by demonstrating
a sequence of steps such as those used to solve
a linear equation in one variable. An item was
classified as conceptual if students were required
to provide an explanation, such as explaining
the difference between a square root and a cube
root. Final exams were graded by having each instructor
grade a small number of items for both the lecture
and computer-mediated courses. Instructors were
provided with a detailed answer key for the procedural
items showing suggested partial credit and a rubric
for the conceptual items.