Annotations and Report Excerpts

Excerpt 1
[Los Angeles Collaborative]
|
Methodological
Approach
|
Evaluation Activities
Methodology
For the Year-One evaluation of LACTE, ETI developed
interview guides, a written survey, and focus group
guides incorporating input from key LACTE staff.
|
Data
Collection Procedures & Schedule:
Identifies information sources
Identifies multiple methods of inquiry
Specifies number of survey respondents
|
During the course of Year One, ETI staff engaged
in a variety of evaluation activities.
- Field Observation. Site visits
were conducted at seven LACTE postsecondary institutions.
California State University Dominguez Hills, East
Los Angeles Community College, and Fullerton Community
College requested that site visits be scheduled
for Winter 1997.
- Focus Groups. Focus groups were
conducted with math, science, and education faculty
at seven of the collaborative campuses. Student
focus groups were conducted at campuses where student
recruitment had taken place in Year One. Using a
moderator's guide, a trained facilitator led each
group in a discussion of LACTE activities. The faculty
and student moderator guides are presented in Appendices
A and B, respectively.
- Interviews. ETI researchers also interviewed
key LACTE staff regarding their LACTE plans, objectives,
and activities. In addition, telephone interviews
were conducted with California State University
Los Angeles and Loyola Marymount University students
who had LACTE internships in Year One.
- Written Surveys. Written surveys
were administered to faculty who participated in
the spring faculty workshop. Surveys were designed
to determine the relationship between workshop training
and desired teaching improvement outcomes as well
as specific likes and dislikes about the content
of the faculty seminars. Twenty-five faculty members
completed the survey. The survey is presented in
Appendix C with the detailed survey
results included in Appendix D.
- Document Review. ETI amassed
all available documentation of LACTE activities and
analyzed it through content analysis. In addition,
ETI reviewed the results from LACTE written assessments
administered during faculty workshops.
|
|
|
Excerpt 2
[Philadelphia Collaborative]
|
Methodological
Approach:
Describes multiple methods and instrument
|
Process Evaluation: Methods and
Procedures
For the evaluation subcommittee, process evaluation
has involved monitoring the ongoing CETP programs
to ensure that the overall program and its components
adhere to the goals specified in the original CETP
proposal. The charts on the next few pages illustrate,
at a broad level, many of the tasks, linkages, and
timelines involved with the elements of the
CETP.
Following the initial semesters of revised courses
(Fall 1995 and Spring 1996), students who were enrolled
in the new CETP courses were asked to participate
in focus groups. The focus groups provided insights
which could not be obtained from survey data and allowed
students to elaborate on elements of the course which
might not be obvious to the evaluators.
|
Data
Collection Procedures & Schedule
|
Faculty and other staff were surveyed in Year 1
regarding their reactions to the implementation of
the CETP. In Year 2, the "Sugarloaf Survey"
again asked faculty about their reactions to the formative
stages of the CETP. In addition, the survey also asked
course directors to identify methods for evaluating
the success of their initiatives. The responses from
the Year 2 survey indicated that many CETP participants
wanted more discussion on how to evaluate specific
components of the project. In Year 3, the Principal
Investigators developed and administered a questionnaire
to measure participants' vision of the goals of the
grant and the degree to which the vision has been
met. |
Instruments
Data
Collection Procedures & Schedule:
Specifies how effectiveness will be judged
|
Quantitative Evaluation: Methods and Procedures
Surveys are administered at the end of each semester
to students in CETP courses and to students in matched
"control" courses. The survey questions
are derived, in part, from the stated objectives of
the CETP project and measure student attitudes about
the course and mathematics and science generally.
Several items measure the students' comfort level
with the material, interest in the subject, and whether
specific teaching techniques (e.g., use of "hands
on" examples, collaborative working groups, etc.)
have been demonstrated. Four open-ended questions
were included on the survey. One of the open-ended
questions asks if students had observed anything in
the course that they thought they could incorporate
into their own later teaching.
|
Specifies how effectiveness will be judged
|
A dose-response model, which measures the effects
of multiple CETP course enrollments, will provide one
technique for assessing the overall CETP project.
In Year 3, a pilot study of students' teaching ability
was begun. Student teachers who had not participated
in CETP courses were videotaped in classroom situations.
These videotapes will be scored and the results will
be compared to those obtained from CETP student
teachers.
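To make the dose-response idea concrete, the sketch below fits a simple linear trend of a scored teaching outcome against the number of CETP courses each student teacher completed. This is only a minimal illustration under assumed inputs, not the project's actual analysis; the file name, column names, and scoring are hypothetical.

```python
# Illustrative sketch only -- not the evaluators' actual analysis.
# Assumes a hypothetical data file with one row per student teacher:
# the number of CETP courses completed (the "dose") and a scored
# outcome such as a videotape teaching rating (the "response").
import csv
import numpy as np

doses, scores = [], []
with open("cetp_students.csv", newline="") as f:          # hypothetical file
    for row in csv.DictReader(f):
        doses.append(float(row["n_cetp_courses"]))        # hypothetical column
        scores.append(float(row["teaching_score"]))       # hypothetical column

doses = np.array(doses)
scores = np.array(scores)

# A minimal dose-response summary: mean outcome at each dose level ...
for d in sorted(set(doses)):
    print(f"{int(d)} CETP courses: mean score = {scores[doses == d].mean():.2f}")

# ... and the slope of a simple linear fit, i.e., the estimated change in
# outcome associated with each additional CETP course enrollment.
slope, intercept = np.polyfit(doses, scores, 1)
print(f"Estimated effect per additional CETP course: {slope:.2f}")
```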
|
|
|
Excerpt 3
[Arizona Collaborative]
|
Instruments
Data
Collection Procedures & Schedule
|
Teacher Surveys
Participating teachers were administered two surveys
during the 1996 summer workshop. The first survey,
which was given the first week of the workshop, assessed
participant experiences during the 1995-96 academic
year. The second survey, which was given at the end
of the workshop, assessed the workshop itself.
|
Cites sources of theoretical framework
|
Participants were also administered a revised
form of the Views About Sciences Survey (VASS).
Their answers and comments on this form, as well
as follow-up discussions with a number of them,
helped us develop the enclosed VASS Form P12, which
is currently being used for student assessment.
The development of VASS and interpretation of
its results are discussed in the articles of Halloun
(1996) and Halloun & Hestenes (1996).
Student Assessment
The impact of Modeling Instruction on students of
participating teachers was assessed in two respects.
The first was the impact on student conceptual understanding
of Newtonian mechanics as assessed by the Force Concept
Inventory (FCI). The second was the impact on student
views about knowing and learning physics as measured
by the Views About Sciences Survey (VASS).
|
Instruments:
Describes instrument validity
Specifies how effectiveness was judged
|
The FCI is a well-validated
instrument that measures qualitative understanding
of the basic concepts and principles of Newtonian
theory (Hestenes et al., 1992 & 1995). We used
it as a posttest in the spring of 1995 to assess the
level of understanding reached by students of participating
teachers following pre-workshop instruction. Following
the first modeling workshop conducted in the summer
of 1995, we administered the FCI as pretest and posttest
during the 1995-96 academic year to assess the impact
of post-workshop instruction. Table
2 summarizes results of the three FCI
administrations, and Table
3 compares FCI results of the three groups
of teachers as distinguished above according to the
way modeling instruction was implemented (consistently
to erratically). Figure
7 displays 1995-96 pretest/posttest results
for individual teachers in the three groups.
|
Describes instruments' conceptual framework
|
VASS probes student
views about science along six dimensions. Three cognitive
dimensions address views about learnability of science,
personal relevance and reflective thinking; and three
scientific dimensions address views about the methodology,
structure and validity of science (Halloun, 1996;
Halloun & Hestenes, 1996). In each VASS item,
students are asked to balance between two contrasting
alternatives, and their response is consequently classified
as expert, mixed, or folk. Figure
8 shows the distribution of expert views
expressed by all students in the six dimensions
on the pretest and posttest. No significant differences
were detected among students of the three groups of
teachers distinguished in the first part of this report.
|
|
|
Excerpt 4
[Arizona Collaborative]
|
|
Putting Evaluation Results Into Action
|
Methodological
Approach:
Describes refinement of evaluation design
|
Additional instruments are now being considered for
the internal project evaluation. These instruments
would address issues other than those addressed by
the current instruments, providing a more comprehensive
picture of both teacher and student progress and giving
the project more effective means of continuously
improving its impact on physics education.
|
|
|
Excerpt 5
[Rocky Mountain Collaborative]
|
|
(examples of data collection activities that correspond
to selected instruments)
|
Data
Collection Procedures & Schedule
|
3. Collect and analyze data from the Student
Course Checklist, which was filled out by all students
involved in new and established RMTEC classes each
semester. This checklist helped the evaluation
team to establish the formative implementation evaluation,
and was developed to obtain information to guide individual
instructors and staff members. Each question on the
checklist reflects the goals and objectives of the
strategic plan. Each question is answered on a five
point scale, ranging from "didn't happen,"
to "happened and extremely helpful."
|
Instruments:
Describes steps to ensure quality survey
development, including reliability
|
Administration of Course-Evaluation Survey.
A pilot survey was designed by the evaluation
team and modified according to feedback from project
staff, instructors, RMTEC scholarship recipients,
and students enrolled in RMTEC courses. Items
were originally developed based on the "Checklist
for RMTEC Curriculum Redesign" (MSCD). The
original version of the Student Course Checklist
was pilot tested in the spring of 1995, revised
for fall 1995, and revised slightly for spring
1996. The final version of the survey was administered
during the final weeks of spring semester, 1996
(see Appendix B). Twenty-eight items addressed
respondents' perceptions about course features
such as implementation of cooperative learning,
use of technology to support learning, problem
solving with complex rather than simple solutions,
and involvement of public school teachers (Cronbach's
alpha was .94 on the entire scale, indicating
substantial homogeneity in the item pool). In
addition, two open-ended items asked students
to comment on the strengths of the courses and
any recommendations they might have for improving
them.
|
Information
Sources & Sampling
|
Comparisons of samples from the demographic report
and the present course-evaluation report indicate
similar distributions by gender and ethnicity. The
sample in the present report was 44% male and 56%
female. By ethnicity, the sample was 0.6% African
American, 4.5% Hispanic, 6.5% Asian/Pacific Islander,
1.1% Native American, and 80.7% White/Non-Hispanic.
Of the respondents, 45.2% declared an intention to
teach; 54.8% were not preparing to teach.
|
Data
Collection Procedures & Schedule
|
4. Evaluate RMTEC course changes using a
faculty survey (developed by RMTEC evaluation team)
to determine how faculty have added value to the course
they have revised. Each faculty member has
received a survey which asks questions such as: "What
efforts have you made, if any, to determine if these
instructional methods have been successful?"
or "What evidence do you have that students are
learning course content in the RMTEC course(s) you've
taught as well or better than in traditionally taught
classes?"
|
Data
Collection Procedures & Schedule:
Describes pilot testing of instruments
Identifies number of respondents
|
The faculty
survey was pilot tested with selected RMTEC
faculty in Fall Semester 1995. Changes were made, and
the revised faculty survey (Appendix B), which has seven
questions, was administered in Spring Semester 1996 to all
RMTEC faculty who were currently teaching or who had
taught RMTEC courses. The faculty
survey was completed by 3 faculty members from MSCD,
2 from CSU, and 6 from UNC.
Table 1
shows the RMTEC courses
taught at the 3 institutions by these faculty
members.
|
|
|
Excerpt 6
[Oklahoma Collaborative]
|
|
1997 Summer Academy Evaluation
|
Methodological
Approach:
Describes evaluation design in relation to
project components and participants
|
The 1997 Oklahoma Teacher Education Collaborative
(O-TEC) Summer Academies (SAs) attracted high
school students, college students, and in-service
teachers to come together in multi-week educational
workshops at various sites in Oklahoma. The summer
academies combined didactic elements with opportunities
for participants to observe, experience, and lead
hands-on educational experiences with children.
The summer academies were designed with two primary
goals. First, it was hoped that the summer academies
would help to recruit students into teaching science
and math. Second, it was hoped that both students
and in-service teachers would develop greater skills
in using hands-on, inquiry-based, and cooperative
teaching techniques.
The evaluation program was designed to assess the
extent to which these goals were met. The O-TEC assessment
team designed surveys that were used at several sites
to examine how participants' attitudes towards
teaching and education changed as a result of their
summer academy experiences. In order to provide formative
feedback to each summer academy site, we also gauged
overall levels of satisfaction with the summer academies.
|
Describes project components
|
Langston University/Oklahoma State University: These
two universities conducted a joint summer academy.
The academy focused on teaching the principles of
science. This academy was titled SPLASH (water being
the primary teaching tool), an acronym for Students
and Potential teachers Learning About Science through
Hands-on inquiry. Each day the participants worked
in groups to run experiments, make hypotheses, and
provide a theory regarding the outcomes of the experiments.
For example, students observed how clay dissolved
in a pool of water, drew up hypotheses concerning
the observed changes, and theorized as to why the
changes occurred. There were between one and five such
activities each day. In
addition to the experiments, the students were required
to keep daily journals of their reactions or ideas
regarding the summer academy.
|
Describes project participants
|
This academy had forty-eight participants, comprising
high school students (N=41) and teachers (N=7). Of this
group, 15% were male and 85% were female. Additional
demographics of the participating group were 52% Caucasian,
38% African American, 4% Native American, 4% Hispanic,
and 2% unknown.
|
Data
Collection Procedures & Schedule
Methodological
Approach:
Specifies how effectiveness is to be judged
|
Summer academy participants were asked to voluntarily
complete an Attitudinal Survey at the beginning and
the end of their respective summer academies. This
pre-test/post-test design permitted examination of
change that occurred during the summer programs. In
addition to the Attitudinal Survey, the summer academy
assessment included a Final Evaluation Survey to track
satisfaction with each summer academy. The Final Evaluation
Survey was administered anonymously to participants
at the end of each summer academy.
Measures Used to Track Changes in Attitudes
|
Instruments
|
The Attitudinal Survey consisted of 145 questions
rated for agreement on a Likert-type scale from 1
(strongly disagree) to 5 (strongly agree). The question
set was a compilation of 12 subscales adapted from
published and unpublished literature to assess attitudes
on a variety of topics related to educational reform.
The subscales were as follows ("shorthand"
names for some scales are given in parentheses):
- Pupil Control Ideology
(Pupil Control)
Higher scores indicate more "authoritarian"
attitudes towards discipline.
- Attitudes Towards Teaching
(Teaching)
Higher scores reflect greater pride and excitement
about the teaching profession.
- Science Teaching Self-Efficacy
(Science SE)
Belief in one's ability to teach science
courses and material.
- Math Teaching Self-Efficacy
(Math SE)
Belief in one's ability to teach math courses
and material.
- Science Teaching Outcome Expectancy
(Science Outcome)
Expectation that good teaching will lead students
to understand science better.
- Math Teaching Outcome Expectancy
(Math Outcome)
Expectation that good teaching will lead students
to understand math better.
- Math Anxiety
A measure of personal anxiety about math problems
and courses.
- Science Anxiety
A measure of personal anxiety about science problems
and courses.
- Self-Efficacy for Inquiry-Based Learning
(Inquiry SE)
Belief in one's ability to teach inquiry-based
classes (e.g., Socratic method).
- Self-Efficacy for Hands-on Learning
(Hands-on SE)
Belief in one's ability to teach with hands-on
methods (e.g., laboratory projects).
- Attitude towards Inquiry-Based and Hands-on
Learning
(Inquiry Attitude)
The extent to which one endorses those as important
teaching activities.
- Learning Motivation
One's desire to learn and benefit from teacher
education experiences.
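As a rough illustration of how pretest/posttest data on the subscales listed above could be scored, the sketch below averages 1-5 Likert item responses into subscale scores and reports the mean pre-to-post change for each subscale. It is not the O-TEC assessment team's code; the file names, item identifiers, and subscale-to-item mapping are hypothetical placeholders.

```python
# Illustrative sketch only, assuming hypothetical input files with one row
# per participant and one column per Attitudinal Survey item (scored 1-5).
import pandas as pd

subscale_items = {
    "Pupil Control": ["q1", "q2", "q3"],        # hypothetical item ids
    "Teaching":      ["q4", "q5", "q6"],
    "Math Anxiety":  ["q7", "q8", "q9"],
    # ... remaining subscales would be mapped the same way ...
}

# Assumes the same participant ids appear in both files.
pre = pd.read_csv("attitudinal_pre.csv", index_col="participant_id")    # hypothetical
post = pd.read_csv("attitudinal_post.csv", index_col="participant_id")  # hypothetical

for name, items in subscale_items.items():
    pre_score = pre[items].mean(axis=1)    # each participant's subscale mean, pretest
    post_score = post[items].mean(axis=1)  # same participants, posttest
    change = (post_score - pre_score).mean()
    print(f"{name}: pre {pre_score.mean():.2f}, post {post_score.mean():.2f}, "
          f"mean change {change:+.2f}")
```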
|
Data
Collection Procedures & Schedule
|
The pretest/posttest Attitudinal Survey was completed
by participants before and after summer academy programs
at Northeastern Oklahoma State University, Southwestern
Oklahoma State University, Cameron University, Langston
University/Oklahoma State University Joint Program,
and Pawhuska. Tulsa Community College did not participate
in the evaluation process in the summer of 1997. The
University of Tulsa summer academy is still ongoing
and will be reported on at a later date.
|
Information
Sources & Sampling
|
A total of 131 participants were tested on both occasions
(10 additional participants were tested on only one
occasion and were dropped from analysis). Distribution
of respondents by site is shown in Table 2.2.
Table 2.2. Attitudinal Survey Respondents by Site

Site | N
Northeastern Oklahoma State University | 15
Southwestern Oklahoma State University | 26
Cameron University | 18
Langston/Oklahoma State Universities Joint Program | 41
Pawhuska | 31
|
|
|
|
Excerpt 7
[New York City Collaborative]
|
Instruments
Data
Collection Procedures & Schedule
|
1. NYCETP Year 2 Case Study Outline (April
9, 1997) and Case Study Follow-up (October
28, 1997). This outline was provided to faculty members
conducting the case study as well as those being studied
to guide their documentation of the Collaborative
course being "case studied." The Case
Study Follow-up interview provided the evaluators
with information necessary to modify the original
outline.
2. NYCETP Guidelines for Self-Study of Course
Documents/Curriculum (January 9, 1998) and Glossary
of Terms. These documents were developed for faculty
to evaluate the extent to which new and revised course
documents are aligned with NYCETP goals. Ratings are
supported with written comments to aid NYCETP Curriculum
Development Group Meetings and by evaluators.
3. BIO 183 Student Survey: Your views
about the course (Fall 1997). This survey was given
to 8 sections of Biology 183 students (N=133) at Lehman
College this past semester. Results of a pilot administration
(Summer 1997) were used to modify an original version.
The faculty member teaching this course is using the
students' responses to inform continued revision
of this course and the supervision of adjunct
instructors.
|
Describes pilot testing of instruments
|
4. Mathematics Attitude Survey (adaptation
of the Fennema-Sherman Mathematics Attitude Scale).
The Fennema-Sherman Mathematics Attitude Scale was
adapted for use in a revised mathematics class. Pilot
data was analyzed and a written report was submitted
to the course faculty.
5. Views About Sciences Survey (VASS, Arizona
State University). Several versions of the VASS have
been pilot tested. These include the Math and Chemistry
versions which had not yet been fully standardized
at the time we used the instruments.
|
|
|
Excerpt 8
[Maryland Collaborative]
|
Instruments:
Identifies multiple sources of data
|
We have designed a documentation system to address
these and other research questions. The documentation
system includes ongoing teacher candidate interviews,
classroom observations, and a regularly administered
valid and reliable 45-item attitudes and beliefs survey,
Attitudes and Beliefs about the Nature of and the
Teaching of Mathematics and Science.
|
|
|
Excerpt 9
[Louisiana Collaborative]
|
Methodological
Approach:
Describes multiple methods of inquiry
|
The evaluation strategy during the past year continued
to be three-fold. First, data was collected through
surveys of the participating campuses to gain a more
thorough understanding of the nature of curriculum
and course revisions. Second, LaCEPT encouraged campuses
to collect data regarding the effects on students
of campus reform efforts. They were encouraged, wherever
possible, to collect student learning outcome results
for students participating in reform classes or other
efforts and to compare them with those from comparable
groups of students who had not participated in reform efforts. Third,
LaCEPT evaluators conducted site visits to each campus
for the purpose of learning the status of the reform
effort on each campus, understanding more thoroughly
factors that influenced the degree of success experienced
on the campuses, and identifying useful ideas and
approaches that could be shared with other campuses.
During the site visits (with a few minor variations
to meet the needs of the individual campuses)
evaluators:
|
Identifies multiple sources of data
|
- conducted interviews with the CRP project directors,
- interviewed the Dean of Science and the Dean of
Education,
- interviewed involved faculty,
- observed one or two classes identified by the
project directors as exemplary, and
- engaged in discussions with groups of students
taking reform classes.
Based on these visits, summary information about
courses significantly revised either directly or indirectly
as a result of the LaCEPT program was developed, distributed
to the campus sites for review, and revised as
necessary.
|
Describes pilot testing of survey instrument
|
Fourth, LaCEPT surveyed NSF/LaCEPT Teaching Scholars
and interviewed groups of Scholars to determine the
nature of their Teaching Scholar experiences and the
effects of the Teaching Scholar program on their attitude
toward reform. Finally, LaCEPT pilot-tested, at five
universities, a survey of graduating pre-service teachers
regarding their attitudes toward, and plans for utilizing,
standards-based reform philosophies and approaches
in their own classrooms.
|
|
|
Excerpt 10
[Oregon Collaborative]
|
Data
Collection Procedures & Schedule:
Describes who did what
|
The Summer Institute Evaluation was completed in
September 1997 and is part of the OCEPT formative
evaluation. Several Team members who attended the
Institute were instrumental in helping to develop
the survey instrument used. Several Co-PIs and Mentor
Team members also contributed ideas and reviewed survey
drafts. The Project Coordinator summarized the statistical
results and the Evaluation Coordinator summarized
the qualitative responses.
2. Pre-Award and 1997 CETP Data Reporting
Process
|
Describes data collection challenges
|
One of the major challenges in completing both the
pre-award and the more recently due annual CETP data
set has been identifying key contact individuals for
gathering student data at each of the 34 institutions
in the Collaborative. These contacts were not adequately
in place during the collection of the pre-award data.
We now have identified the "right" individuals on
each campus who can help us collect the needed student
demographic information. Also, while many were appreciative
of the ability to enter the data electronically, many
others did not have access to a current version of
Netscape or Explorer. Still others experienced considerable
time delays in calling up their file and then in moving
from one reporting form to another. Some of this delay
may have to do with the particular way in which the
Web site is organized, though some of it is due to
local communication systems.
We also did not have adequate support systems in
place for Faculty Fellows to gather required data
on students in their courses. At this summer's Institute,
Fellows will each get a packet with clear guidelines
on the data needed and tools they can use in gathering
it, such as the brief Student Information Survey,
along with other resources for planning and assessment.
3. Entering Teacher Education Student Survey |
Instruments:
Describes a survey's development process
and the survey's purpose
|
With assistance from project staff and some Research
and Evaluation Team members, a Teacher Education Student
Survey was developed during Summer 1997 and administered
to students at 14 of the 16 OCEPT institutions with
teacher education programs. An effort was made to
identify and review instruments designed for a similar
purpose from other CETP Collaboratives. Information
was received from the Rocky Mountain Collaborative
which was adapted for use on our survey.
The primary purpose of the survey was to collect
information from students entering teacher education
programs about their experiences in undergraduate
math and science courses. The survey was also designed
to collect information about how many undergraduate
math and science courses these students had taken,
their attitudes about teaching math and science, and
how they assessed their science literacy
skills.
|
Information
Sources & Sampling:
Identifies how many were surveyed
|
We estimate that we have surveyed at least one-third
of the students who had recently entered (Summer or
Fall 1997) a teacher education program. We are still
in the process of entering the information from the
330+ surveys received and expect to have a report
available in May 1998. Before this, however, we expect
to have descriptive summary reports available for
each of the participating institutions.
|
Methodological
Approach:
Describes eventual uses of the data by
stakeholders
|
The information gained from the survey will serve
as baseline OCEPT information about the math
and science course experiences of students who enter
teacher education programs. We also hope the information
will serve to heighten and extend local and statewide
conversations about the math and science preparation
of our future teachers. What do they think about the
results? How do they interpret them? As "good"? "Troubling"?
What do the OCEPT Management Team and the teacher
education leaders in the State think of the
results?
|
Meta-Evaluation
|
Based on our experiences with the survey this first
year, some modifications in both the survey itself
and in the way it is administered are needed. More
individuals from teacher education programs at OCEPT
institutions need to be involved in helping to shape
these modifications. The intention is to administer
the survey every year.
|
Data
Collection Procedures & Schedule:
Describes data collection challenges
|
The survey administration process itself was very
time-consuming and difficult. Local contacts had to
be established at each institution. An important by-product
of this initial survey experience has been to establish
contact with the key teacher education faculty at
each of the institutions. We now have a network with
whom to communicate about the future survey and the
administration process. Also, individual faculty had
to be willing to take 25 minutes of class time to
administer the survey; and few had had any involvement
in its development.
Even though the Human Subjects Committee at Portland
State University had reviewed the survey and the administration
and analysis plan and approved it, several other campuses
had to send the proposal through their own local committee,
thus delaying survey administration. Also, we thought
we might need student social security numbers for
later follow-up purposes. Students in several classes
refused to participate in the study because such information
was called for. Although we indicated to local contacts
that having students participate was the priority
and to abandon asking for SSNs if necessary, some
students still would not participate. In the future,
SSNs will not be called for.
A future instrument needs to require less time to
administer. And students and faculty have to understand
better the reasons for the survey and what they may
gain from participation. In the rush to collect some
data from students early in the life of OCEPT, we
failed to develop sufficient understanding of the
survey and of OCEPT, and to build sufficient ownership
of the survey process.
|
Methodological
Approach:
Describes redesign plans based on evaluation
experience
|
Even given the problems encountered, we are still
optimistic that the data set will provide added impetus
for thoughtful conversations about the preparation
of our teacher educators in math and science. We expect
to develop an improved instrument for use next year,
with some carry-over of survey items.
|
Describes strategy for ensuring quality of
instrument
|
Finally, we are analyzing a small data set to gauge
instrument stability over a two-week period
between a first and second administration to the same
group (n=14). Despite the small number, the preliminary
results look positive.
4. Mid-Year Interviews
|
Data
Collection Procedures & Schedule
|
Interviews were conducted in February with all Co-Principal
Investigators and a sample of Faculty Fellows and
Mentor Team Members.
A total of 25 individuals were interviewed by phone
by the Evaluation Coordinator.
|
Instruments:
Describes interview protocol
|
These 15-minute mid-year interviews were designed
to learn more about how Mentor Teams were functioning,
what was working especially well and where improvements
were needed; about how Fellows felt about the quality
of support they were receiving for their local projects;
and about how Co-PIs perceived the collaborative itself
to be developing.
|
|
|
Excerpt 11
[Montana Collaborative]
|
Instruments
Data
Collection Procedures & Schedule:
Describes multiple sources of inquiry and data,
and types of instruments used
|
Although each evaluation strategy is developed to
suit the needs of the particular activity, some common
approaches will be used for the major types of project
activities. The following summaries indicate the methods
and data collection strategies that are used.
Course Revisions
- Documentation including course syllabi and materials
- Field notes from interactions, observations, interviews
- Student demographic data
- Questionnaire on course revision strategies
- Class observations
- Student interviews
- Student surveys
- Faculty interviews
- Faculty survey
Workshops and Institutes
- Field notes from meetings, observations, interviews
- Participant applications and registration lists
- Project correspondence with participants
- Agendas, schedules
- Checklists & questionnaires
- Participant surveys
- Photographs
Conferences and Meetings
- Field notes from meetings, observations, interviews
- Participant applications & registration lists
- Participant questionnaire/survey
Recruitment of Underrepresented Groups
- Field notes from meetings, observations, interviews
- Participant applications
- Student demographic data
- Questionnaires
- Case Studies
Project Policies and Management
- Field notes from meetings, observations, interviews
- Correspondence, records, budgets, proposals, reports
- University catalogs, requirements, teacher certification
records
|
Methodological
Approach:
Specifies how effectiveness will be judged
|
Although the primary focus for evaluation timelines
will be specific activities with identifiable beginning
and ending dates, nearly all of the activities also
have expected long-term implications and effects.
For most activities, the evaluation strategies will
include follow-up and "post-post" questionnaires
and interviews to assess the long-term effects of
a particular activity on practice, dissemination,
or institutionalization.
|
|
|
Excerpt 12
[Anonymous 2]
|
Methodological
Approach:
Describes scope of the evaluation
|
IV. How is the Evaluation being
done?
We have selected an evaluation design that balances
the need for comprehensive data about the core institute
and all of its participants with the need for more
fine-grained, qualitative data about a smaller, purposeful
sample of district teams. Critical to understanding
Program A is an in-depth look at teachers' learning
about environmental science and at the consequences
of their new knowledge, skills, and professional relationships
for classroom practice, teacher leadership activity,
school or district reform, and students' opportunity
to learn.
|
Data
Collection Procedures & Schedule
|
Thus, the design will rely on intensive observation
at the annual institute, whole population (n=100/year)
data collection (through electronically administered
questionnaires) before and after the institute, as
well as detailed on-site observations and
telephone interviews with a small sample of district
team members (2-4 teachers per team) from years 1997
and 1998. These teams will be tracked throughout their
participation in the core institute, during teacher
outreach activities, and in their classroom, school,
and district work over the life of the project. Data
collected on these district teams will form the basis
of thematically based mini-case studies to be included
in the final report. |
Describes collection of feedback from program
participants
|
In addition, each year, we will convene a focus group
of core institute participants (in the final week
of the institute) to meet with evaluators to provide
formative feedback on key components of the core institute
program and strategy.
|
|
|
Excerpt 13
[City Science Workshop,
City College of New York]
|
Methodological
Approach:
Overviews primary data gathering strategies
|
Evaluation Data Source
The primary source of data for the summative evaluation
came from the project-focused questionnaire which
was used to interview the project participants. Data
was also obtained from ethnographic observations of
the workshops at City College and the classrooms of
some participants. The principal group studied was the
project participants, since they were the focus of the
program.
|
Instruments:
Lists survey topics
|
Instrumentation
For the purpose of conducting a telephone interview
of all participants at the end of the project, a questionnaire
instrument was developed. This was done with the assistance
of the project directors and consultants in the field
of education evaluation. Initial ethnographic classroom
studies provided background information on the varying
teaching methodologies of participants. This information
served as the basis from which the questionnaire was
developed. The questionnaire addressed the
following:
- The teachers' learning of science
- Past teaching practices
- Changes in the amount of time spent on
science
- The goals of the teachers
- The use of material resources and of the
child's environment
- Questions about the participants' teaching
practices
- The degree of child centeredness
- The impact of the project in the
schools
|
Describes pilot testing of survey
instrument
|
This questionnaire was first shown to experienced
professionals in the field of education evaluation.
It was later pilot-tested with the four elementary
school teachers who were also on staff. Following
these pilot tests adjustments were made to the wording
and the sequencing of the probes prior to the instrument
being used with the project participants. |
Data
Collection Procedures & Schedule:
Describes standardized data collection
procedures, issues of confidentiality, and
issues of convenience
|
Letters were sent to the participants prior to the
commencement of the interview study informing them
of the evaluation process and the reasons for such
a study. The letters stressed their anonymity in the
process. In addition to this, announcements were made
in the classrooms informing the participants that
the project directors or staff would not know the
identity of each interviewee. Appointments for interviews
were made with the participants at their convenience.
These efforts helped to create
an atmosphere in which the interviews could be conducted
smoothly. The interviews were done mainly in the evenings
and on some weekends. This timing was convenient for
most participants because of their work schedules. |
Information
Sources & Sampling:
Specifies sample size
|
Interviews were conducted with 61 of the 74 participants,
a response rate of 82%.
|
|
|
Excerpt 14
[TEAMSS, George Washington University]
|
Methodological
Approach:
Identifies the conceptual framework behind the
methodology
|
TEAMSS Evaluation
The naturalistic inquiry research approach was the
methodology selected to understand the ongoing processes
of staff development and to determine the quality
of the TEAMSS program, its impact on the participants
and their home schools, and the extent to which each
of the five objectives has been achieved.
|
Methodological
Approach
Data
Collection Procedures & Schedule:
Describes multiple data sources and multiple
data collection times
|
The primarily qualitative data obtained from multiple
data sources (daily evaluation sheets, daily learning
logs with directed and free writing, observational
field notes, informal interviews, and a formal project
evaluation form) at different times during the year
can be synthesized to convey the relationship among
multiple perspectives. Following are some observations
and preliminary findings of the data collected thus
far.
|
|
|
Excerpt 15
[The Nebraska Economics Fellows Institute]
|
Instruments:
Describes multiple choice exam used to measure
outcomes
|
The Test of Understanding in College Economics (TUCE)
(Saunders, 1991) was used as the primary measure of
teacher knowledge of basic economics. The TUCE is
a two-part multiple choice exam, with one part covering
macroeconomics and the other microeconomics. As stated
in the Examiner's Manual, the TUCE was designed to
meet two objectives: "(1) to serve as a measuring
instrument for controlled experiments in teaching
introductory economics at the college level; and (2)
to enable instructors of particular introductory courses
to compare the performance of their students with
that of students in other colleges and universities"
(p. 1).
The TUCE was considered to be a suitable instrument
for the evaluation for several reasons, despite the
fact that it was designed for use in introductory
economics courses. First, a minimum expectation of
the program was that the Fellows would do better than
undergraduate students in introductory economics courses,
and thus significantly outperform students on the
national norms. Second, the Fellows were being trained
to teach high school economics. The content of the
TUCE covers the basic concepts that they would likely
be teaching and measures their understanding of these
concepts at a high level of complexity. Third, past
studies have found that the TUCE is a valid and reliable
instrument for measuring economic understanding in
advanced courses because the difficulty of the test
provides ample room for measuring change in economic
understanding across a wide achievement range (Walstad,
1984). Fourth, no nationally normed exam was available
at the graduate level that was appropriate for the
group being evaluated. Although the TUCE is not a
perfect instrument for the evaluation of the Institute,
it had desirable measurement properties for assessing
whether the Fellows sufficiently understood basic
college-level economics so that they would be properly
prepared to teach it in secondary schools. |
Documents instrument reliability
|
The TUCE is a reliable instrument for assessing economic
understanding of students taking an introductory economics
course. Reliability often refers to the internal consistency
among test items for measuring an achievement outcome
such as economic understanding. The Cronbach alpha
(or its equivalent, the KR-20) is used to estimate
this type of reliability with tests whose items are
scored right or wrong. The alpha is essentially a coefficient
that provides an estimate of the average correlation
between test scores on all possible split halves of
a test. The coefficient ranges from a low of 0, indicating
no internal reliability, to a high of 1, indicating
perfect reliability. The alpha for the TUCE norming
sample was .76 for macro and .82 for micro. The alpha
for the TUCE used with the Fellows was similar: .73
for macro and .75 for micro. The alpha for a combined
forms version of the TUCE was .85. These results indicate
that the TUCE was a reliable test to use with the
Fellows.
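For readers unfamiliar with the coefficient, the sketch below shows how a Cronbach's alpha (equivalently, the KR-20 for right/wrong items) could be computed from a matrix of item scores. It is illustrative only; the reliabilities reported above come from the TUCE norming data and the Fellows' responses, not from this code, and the demo responses are fabricated purely to exercise the function.

```python
# Illustrative sketch of the alpha computation described in the text:
# k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, one row per examinee, one column per test item."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Demo with made-up 0/1 (right/wrong) responses: 10 examinees x 5 items.
rng = np.random.default_rng(0)
demo = (rng.random((10, 5)) > 0.4).astype(float)
print(f"alpha = {cronbach_alpha(demo):.2f}")
```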
|
|
|
Excerpt 16
[Educational Cooperative Service Unit, MN]
|
Methodological
Approach:
Describes formative feedback process from
evaluators to project
|
FORMATIVE EVALUATION
Because this is a developmental project, the project
director and staff have sought continual feedback
as to what has worked successfully and what has not.
The purpose is to keep the successful components of
the project, and to modify the unsuccessful components,
in order to ensure that the participants will attain
the level of competence that is desired and that the
materials are effective in the classroom. Once the
"formula" for success is identified, the desire is
to replicate the training of the teachers and the
production of materials for other audiences.
|
Describes formative goals and approach
|
To provide this constant feedback, a formative evaluation
model was used during the first year of the project.
The activities of the project were documented using
a process approach. The project director used continual
feedback from evaluation questionnaires completed
by the participants, advice from a project steering
committee that met on a regular basis, and review
sessions with the program participants. The evaluation
questions during the formative stage of the project
included:
|
Relates formative approach to evaluation
questions
|
- What activities were conducted by the
project?
- What materials and training were delivered by
the project?
- Were the timelines met?
- How effective were these activities and materials
in meeting interim project goals?
|
Methodological
Approach:
Describes summative goals and approach
|
SUMMATIVE EVALUATION
The "summative evaluation" of the project began the
second year of the project and gathered some preliminary
information relative to the desired outcome of the
three-year project: to change teacher and student
attitude, knowledge, and behavior related to computational
science. The evaluation questions for the final evaluation
of the project will include: |
Relates summative approach to evaluation
questions
|
- What changes in teacher attitude, knowledge,
and behavior occurred that can be related to
the project?
- What changes in student attitude, knowledge,
and behavior occurred that can be related to
the project?
|
Describes multiple methods of inquiry
|
These questions will be answered by identifying behavioral
outcomes through the use of interviews, surveys, and
observations, as well as assessing knowledge through
the use of the more traditional paper and pencil
assessments.
|
Data
Collection Procedures & Schedule
|
EVALUATION TIMELINE

Activity | Date
Evaluation Plan Approved | July 15, 1995
Administer questionnaire for each training activity | July-August, 1995
Summarize questionnaire data | Sept. 1, 1995
Complete activity profile | Sept. 1, 1995
Complete first year teacher profile | Sept. 1, 1995
First Year Interim Report | November 1, 1995
Develop teacher and student questionnaires | Nov. 15, 1995
Administer teacher and student questionnaires | Jan. 31, 1996
Tabulate and analyze teacher and student questionnaire responses | Feb. 28, 1996
Administer questionnaire for each training activity | July-August, 1996
Tabulate Internet data | August 1, 1996
Summarize questionnaire responses | Sept. 1, 1996
Complete activity profile | Sept. 1, 1996
Complete second year teacher profile | Sept. 1, 1996
Second Year Interim Report | February 1, 1997
Review and site visit sample of Cadre 2 projects | April 15, 1997
Review and site visit sample of Cadre 1 projects | Oct. 15, 1997
Administer teacher and student questionnaires, including information about materials | April 1, 1998
Tabulate and analyze teacher and student questionnaires | April 15, 1998
Compile description of materials developed | April 15, 1998
Final Report | May 1, 1998
|
|
|
Excerpt 17
[Oklahoma Collaborative]
|
Methodological
Approach:
Relates project features to evaluation
strategies
|
It is important to refrain from comparing sites to
one another, because of the diversity of participants
and programs. For example, a site that placed no emphasis
on Inquiry-based teaching would be expected to show
little improvement in that area. Likewise, a site
with many in-service teachers might show less attitudinal
change than sites where the material was presented
to high school students with no educational experience.
Also, data collection methods were implemented differently
at different sites. For example, one site chose to
conduct the attitudinal "pretest" as a retrospective
exercise at the end of their program. Although this
approach is interesting, it makes direct comparisons
with other sites problematic.
Instead of comparison between sites, we suggest that
the data presented here would be most useful for sites
to conduct self-reviews in order to see how the program
results matched the site goals and how the programs
could be improved in the future. Also, if a site performed
particularly well in one area, other sites might wish
to contact the MTIR of that program for suggestions
about how to present material related to the improved
area.
|
|
|
Excerpt 18
[Oregon Collaborative]
|
Meta-Evaluation
|
An on-going concern of the Team is the scope of work
proposed and the feasibility of completing all of it
within current evaluation resources. The Team will continue
to review the scope of work and, in consultation with
the Management Team, make needed adjustments.
|
|
|
Excerpt 19
[Montana Collaborative]
|
Instruments:
Describes survey questionnaire
|
The survey was constructed fall semester 1995 by STEP
graduate research assistants in consultation with project
directors. The final draft was circulated to each of
the five STEP campus coordinator groups for review.
Their suggestions were incorporated into the final version
of the survey.
The survey asked students to provide demographic information
and to answer questions about their class experiences.
Several types of questions were used including: (1)
yes-no-don't know responses; (2) percentages of
time used for various class activities, such as lecture,
discussion, and lab activities; (3) never-rarely-sometimes-frequently-almost
always responses; and (4) two open-ended written response
questions.
|
Data
Collection Procedures & Schedule:
Describes rationale for the schedule
|
Campus coordinators selected March 1996 for survey
administration. This meant the survey would not be confused
with end-of-semester institutional data collection.
By mid-semester students have experienced sufficient
instruction to make complete survey responses, but are
less likely to have "evaluation fatigue" at
this time. |
Describes survey administration procedures and
number of respondents
|
Survey distribution and administration for all sections
of each reform course was coordinated by <name
of person> and <name of person>. Each
instructor was provided (1)
information on the purpose of the survey; (2) instructions
for administration, including a statement to be read
aloud to each class; and (3) post-paid envelopes for
survey return. Forty-five (45) sections of thirty-one
(31) courses were surveyed.
|
|
|
Excerpt 20
[Maryland Collaborative]
|
Data
Collection Procedures & Schedule:
Specifies data collection methods
|
Both numerical and qualitative data are being collected
to address the MCTP research questions. Numerical data
derive from the administration of two Likert-type surveys
developed by the MCTP Research Group: a college student
version and a faculty version of "Attitudes and
Beliefs About The Nature Of And The Teaching Of Mathematics
And Science." Participating faculty and students
in MCTP classes (both MCTP teacher candidates and non-MCTP
students) contribute to this database. Data are analyzed
using the software program SPSS.
Qualitative data derive from semi-structured ongoing interviews
with participants in MCTP classes, MCTP class observations,
participant journals, and MCTP course materials. Standard
qualitative analysis techniques (analytic induction, constant
comparison, and discourse analysis) assist in the interpretation
and presentation of case studies emerging from this rich
data set. The software program NUDIST facilitates the
data analysis.
|
|
|
Excerpt 21
[Philadelphia Collaborative]
|
Methodological
Approach
|
The evaluation committee has focused on a) the implementation
of an overall evaluation plan; b) the development of
evaluation plans within courses and across the project;
c) the collection of baseline demographic, attitudinal,
and academic performance data; and d) the development
of measures and procedures which will allow participants
to measure project outcomes.
|
Describes evaluation design in relation to the
evaluation purposes
|
Several evaluation components are in place to measure
whether the CETP Project is moving towards its goals.
These components include both measures of process and
of outcomes. "Process" evaluation involves monitoring
the match between CETP goals and the methods, timetables,
and procedures in which these goals are developed, implemented,
and then institutionalized. Outcome evaluation involves
monitoring participants' progress towards meeting these
goals by providing evidence gathered through some measures
of performance. Based on these measures, comparisons are
made, either within the participant group (pre-post,
value-added, dosage models) or between an experimental
and a control group, to determine if gains have been
made.
|
|
|
Excerpt 22
[Arizona Collaborative]
|
Methodological
Approach:
Describes evaluation design in relation to
evaluation purposes
|
It is hoped that modeling instruction impacts students
positively not only with respect to the physics subject
matter they are taught, but also with regard to their views
about knowing and learning physics and science in general.
Although the latter impact is expected to show only
in the long run, we were interested in assessing it after
the first year of the project, especially since the
literature abounds with evidence that high school science
instruction has a negative impact on student beliefs
about, and attitudes towards, science (Halloun &
Hestenes, 1996). For this purpose, in 1995-96, we administered
the Views About Sciences Survey (VASS-Form P11) to students
of workshop participants as pretest and posttest. Figure
8 compares the two 1995-96 tests' results. Different
VASS forms were administered only as posttest in the
spring of 1995, and only 10 items were common with the
1995-96 form.
|
|
|
Excerpt 23
[Rocky Mountain Collaborative]
|
|
Objective
- Carry out summative evaluation.
|
Methodological
Approach:
Relates design to evaluation purpose
|
STRATEGY
Provide summative evaluation to assess the extent to which
the objectives of the RMTEC project have been achieved.
Baseline data will be collected during the initial stages
of the project and then assessed yearly toward carrying
out the summative evaluation. Some of these baseline data
involve frequency counts of Teachers-in-Residence, diverse
groupings, course syllabi and specified course content.
Separate studies with different courses will be conducted
to determine student achievement.
|
Data
Collection Procedures & Schedule
|
Activity | Timeline | Responsibility (CSU / UNC / MSCD)
Collection of demographic information specific to CETP students and RMTEC courses. | Fall '96, Spring '96 | J.Gliner / Shaw / G.Gliner
Collection of demographic information specific to the National Science Foundation as formulated by WESTAT and analyzed by Quantum Research. | Spring '96 | J.Gliner / Shaw / G.Gliner
Collect and analyze data from the Student Course Checklist which was filled out by all CETP students taking classes each semester. | Fall '96, Spring '96 | J.Gliner / McDevitt / G.Gliner
Evaluate student achievement in RMTEC courses. | Fall '96, Spring '96 | J.Gliner / Shaw / G.Gliner
Determine efforts to institutionalize RMTEC by conducting RMTEC PI and Department Head/Dean interviews. | Once Per Year | J.Gliner / Shaw / G.Gliner
|
|
|
Excerpt 24
[New York City Collaborative]
|
Methodological
Approach:
Describes and critiques data sources in relation
to evaluation purposes
|
The New York Collaborative for Excellence in Teacher
Preparation (NYCETP) is a project jointly undertaken
by five college campuses of the City University of New
York (CUNY) and New York University (NYU). Traditional
evaluation design calls for pre- and post-intervention
assessment. Some of the formative evaluation activities
have focused on end of the year course evaluations and
pre- and post-course attitude changes. In terms of standardized
student outcome measures, New York State has recently
begun the administration of the first level of the teacher
certification examinations, the Liberal Arts and Science
Test (LAST). Beginning in 1997, each college campus
is provided information from this examination, thus
serving as a potential source of baseline data and a
continuing source of data. Use of these data requires
NYCETP project personnel on each campus to review the
data to identify individuals who have participated in
the program on their campus. Longitudinal subject matter
examinations and videotapes submitted for permanent
certification will be available. However, the usefulness
of data from exams such as the LAST depends upon the
degree to which the courses in Liberal Arts and Sciences
and Teacher Education are changing. Thus, the formative
evaluation practices of the NYCETP have been focused
on facilitating staff development, documenting change
within the Collaborative courses, developing peer reviews
of course documents, and assisting interested faculty
in end of year course evaluations.
|
|
|
Excerpt 25
[Louisiana Collaborative]
|
Methodological
Approach:
Relates design to evaluation purpose
|
During the past year, as the reform effort has become
more institutionalized, LaCEPT's focus has turned
more summative. LaCEPT's evaluation strategy this
past year has been to continue to understand more systematically
the extent to which campuses are attaining their own
and LaCEPT's process outcome goals. LaCEPT continues
to encourage and advise campuses on how they can both
quantitatively and qualitatively assess the impact upon
students in their own reformed courses.
|
Describes relation between evaluation goals and
program goals
Addresses best practices
|
To help reexamine LaCEPT's goals and their implications
for future implementation and evaluation efforts, LaCEPT
envisions that a three-fold strategy will be pursued
in its evaluation efforts. First, more attention and
effort will be placed on helping each CRP implement
its own summative evaluation activities, especially
now that professors have had time to refine many of
their reform course offerings. Second, an evaluation
team will continue to make site visits to assess the
status of reform on the campuses participating in LaCEPT
and to identify "best practices" worthy of
consideration and possible replication at other campuses.
The team will interview project leaders, hold focus
group discussions with faculty members, and observe
reform classes. They will also hold interviews with
university administrators. Consistent with the recommendation
of the NSF Visiting Committee, the evaluation team will
solicit information useful in assessing the factors
that have contributed to or hindered success at the
various campuses. By analyzing information collected
in these site visits and information collected through
surveys of project directors as well as the data provided
to the National Science Foundation, LaCEPT will have
a rich source of data from which to draw
conclusions.
|
Describes data collection strategy to yield
generalizable findings
|
Finally, LaCEPT has begun on a pilot basis this fall
a survey of graduating pre-service teachers to determine
their exposure to reform principles and techniques in
their pre-service programs and their assessment of its
value. Information was also solicited concerning the
graduating students' future career plans and the
extent to which they anticipate employing reform techniques
and principles in their classrooms. In the spring of
1998, LaCEPT plans to expand the survey to other LaCEPT
university campuses. Discussions are under way with
three other CETP projects (Montana, Maryland, and San
Francisco) to collaborate on the graduating student
survey and share results. |
Describes comparison groups
|
Eventually, LaCEPT will analyze State Department of
Education data regarding public school teacher employment
and student norm- and criterion-referenced test scores.
By doing so, LaCEPT will be able to determine the degree
to which reform-prepared pre-service teachers accept
public school teaching jobs, stay in public school teaching,
and have a positive effect on test scores of their own
students. Of particular interest will be how NSF/Regents
Teaching Scholars compare to other students on these
measures. In its supplemental grant proposal as well,
LaCEPT envisions that it will analyze the extent to
which a sample of teachers previously benefiting from
reform pre-service programs is implementing reform and
the factors which affect their degree of
implementation.