Annotations | Plan Excerpts
|
|
Excerpt 1
[Los Angeles Collaborative]
|
Methodological Approach:
Specifies scope of data to be collected
|
The evaluation will utilize both qualitative and
quantitative research activities, with ETI providing
ongoing feedback to all partners. During Year Five,
in addition to documenting program outcomes against
baseline data, the evaluation is designed to provide
extensive documentation for program replication in
other undergraduate institutions.
(...)
|
Addresses cost effectiveness of evaluation
|
Given the limited evaluation funds and the five-year
duration of the study, the activities are designed
to optimize project dollars. ETI has designed an evaluation
which will give the greatest return for the most
cost-effective use of project resources.
(...)
|
|
|
Excerpt 2
[Philadelphia Collaborative]
|
Methodological Approach:
Specifies design using experimental and control
groups
|
The basic evaluation design will involve a comparison
of matched groups of student teachers enrolled in
the new and traditional programs. Students in the
classes taught by student teachers from the new program
will also be compared with students taught by student
teachers from the traditional program.
|
|
|
|
Excerpt 4
[Los Angeles Collaborative]
|
Information Sources & Sampling;
Methodological Approach:
Specifies need for pre- and post-tests
|
ETI will work closely with LACTE staff to determine
the appropriate individuals to be surveyed and interviewed.
Because the number of participants is not currently
known and will vary from campus to campus, ETI will
work closely with LACTE to develop a sampling plan
that adequately covers the population of program participants, including
administrators, faculty master teachers, mentors,
students, and others. In addition, since many program
goals involve changes in values and orientation, a
pre-test and post-test design would be appropriate.
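Where such a pre/post design is used, the analysis itself can be simple. The following is a minimal sketch, not part of the ETI plan, of a paired pre/post comparison of attitude scores; the data, and the choice of a paired t-test, are assumptions for illustration only.

```python
# Illustrative sketch only: paired pre/post comparison of attitude scores.
# Hypothetical data: each participant has a pre- and post-program score
# on the same Likert-type scale.
from scipy import stats

pre_scores = [3.2, 2.8, 3.5, 3.0, 2.6, 3.1, 3.4, 2.9]    # hypothetical
post_scores = [3.9, 3.1, 3.6, 3.8, 3.0, 3.5, 3.7, 3.2]   # hypothetical

# Paired t-test: did mean attitude change from pre to post?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```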
|
|
|
Excerpt 5
[Los
Angeles Collaborative]
|
Methodological
Approach:
Specifies how effectiveness will be judged
|
Proposed Evaluation Activities:
The major evaluation activities during Year One
will be centered on refining the evaluation design
and establishing baseline data against which to
compare subsequent program outcomes.
|
Data Collection
Procedures & Schedule:
Describes clients' role
|
1. Meet with LACTE and finalize evaluation
design. ETI will meet with LACTE staff
to review and revise evaluation activities outlined
in this proposal. In addition, issues such as
deadlines, deliverables, coordination, etc. will
be discussed.
2. Refine evaluation research questions.
The evaluation research questions will drive the
study. ETI proposes to work closely with LACTE
in developing the appropriate questions for the
evaluation.
(...)
|
Checks validity of instruments
|
3. Develop site visit interview guides
that reflect evaluation research questions.
The interview guides will be developed by ETI
and reviewed by LACTE prior to their use.
|
Data Collection
Procedures & Schedule
Methodological
Approach:
Describes use of multiple methods
|
4. Develop and distribute workshop evaluation
instruments to:
- Assess pre-workshop attitudes
and involvement; and
- Evaluate the impact and effectiveness
of the workshops.
ETI proposes to develop workshop evaluation instruments
and to analyze the findings. ETI proposes that
LACTE be responsible for distributing and collecting
the survey instruments.
5. Conduct on-site visits and interviews
with program participants using interview guides.
ETI proposes that all 10 sites be visited. ETI
will attend workshops and interview key staff,
faculty participants, and students individually
and in focus groups. Student/mentor interactions
and meetings will be observed, as will selected
classes.
6. Develop pilot course review instrument
for faculty and students. With LACTE,
ETI will develop a course review to be completed
by both faculty and students at the end of each
pilot course.
7. Review student recruitment materials
and strategy with LACTE staff. ETI will review
the recruitment materials and their use at each
site.
8. Analyze findings from workshop evaluation
instruments and course reviews. All
surveys will be entered into a computer database.
ETI will analyze the data and document findings.
9. Review program records of recruitment
efforts and outcomes for students, faculty, and
master teachers. ETI will collect and
analyze recruitment activities and outcomes at
each campus, including data from the NSF Collaboratives
for Excellence in Teacher Preparation (CETP) annual
information survey. Also, recruitment vs. retention
numbers will be reviewed.
|
Specifies development of findings & recommendations
|
10. Identify where and why recruitment
efforts have been most effective. Based
on the review of program records, ETI will develop
findings and recommendations as to effective and
ineffective recruitment and retention strategies
for students, faculty, and master teachers.
|
Specifies plan to
present recommendations |
11. Present Year One findings to LACTE.
ETI will prepare for LACTE a summary of findings
at the end of Year One.
[Proposed evaluation activities are also listed
for years 2 - 5 of the project.]
|
|
|
Excerpt 6
[Philadelphia Collaborative]
|
Methodological Approach:
Describes multiple sources of information and data
collection procedures in relation to desired project
outcomes
|
Student teachers in the program, as compared to student
teachers in the traditional program, will:

Outcome | Assessed by
have a better understanding of math and science | grades in relevant courses; scores on the NTE specialty exams
perform better in student teaching | ratings from cooperating teachers and supervisors; portfolio assessment (generated during student teaching)
prefer the revised math and science courses | course evaluations; focus groups
have better attitudes toward math and science | scores on math and science attitude questionnaires

Students in the classes taught by the student teachers
from the program, as compared to students taught by
student teachers from the traditional program, will:

Outcome | Assessed by
have a better understanding of math and science | grades in relevant subjects; scores on district-wide math and science final exams
have better attitudes towards math and science | scores on math and science attitude questionnaires; focus groups
have better attendance | attendance during student teaching
have higher graduation rates (with a special focus on minority retention) |
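Where the matched-group design above yields scores on a common measure (for example, the district-wide finals), a two-sample test is one conventional way to compare the groups. A minimal sketch under that assumption follows; the data are invented and the excerpt itself does not prescribe a specific test.

```python
# Illustrative sketch only: comparing pupils taught by new-program
# student teachers with pupils taught by traditional-program student
# teachers on a common exam. All scores are hypothetical.
from scipy import stats

new_program = [78, 85, 80, 74, 88, 82, 79, 84]   # hypothetical scores
traditional = [72, 80, 75, 70, 83, 77, 74, 78]   # hypothetical scores

# Welch's two-sample t-test (does not assume equal group variances)
t_stat, p_value = stats.ttest_ind(new_program, traditional, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```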
|
|
|
Excerpt 7
[Massachusetts Collaborative]
|
Methodological Approach:
Describes multiple methods for formative and summative
evaluation
|
Formative evaluation will be accomplished through
the use of observations, survey instruments, and focus
groups to provide ongoing feedback to the PIs. Periodic
reviews of progress with respect to identified milestones
will inform management of any needed modifications
to their efforts. A major function of the summative
evaluation in the area of recruiting will be the documentation
of the extent to which science and mathematics majors,
especially women and minorities, choose to enter the
teaching profession. Because of the symbiotic nature
of this project, a second function of the summative
evaluation is to determine the effect of the school/college
interactions on both parties.
|
Data
Collection Procedures & Schedule:
Describes evaluator credibility
|
In addition to the evaluation and documentation activities
described above, co-PI Feldman, an experienced program
evaluator, will lead a team that will document the
implementation and evolution of STEMTEC at the organizational
level. Ethnographic and action research methods will
be used to document the ways in which this multi-institutional
reform project works. The information will be of particular
importance to NSF in its attempts to replicate CETP
reform projects.
Evaluation and assessment will be carried out at
several levels by an integrated team directed by the
Donahue Institute. The evaluation, which is described
in detail in Section 14, will focus on summative and
formative evaluation of two major areas: course reform
and teacher recruitment. Summative evaluation will
be done primarily by the Institute. Formative evaluation
will be done by members of the team who can remain
in close contact with teachers and students in the
project. Dr. Eric Heller, Director of Research, Evaluation
and Information Technology at the Donahue Institute,
will oversee the design and implementation of the
evaluation plan. He has extensive experience over
the past decade in program evaluation for a broad
range of client groups and substantive domains. As
an organization reporting to the President of the
five UMass campuses, the institute is positioned to
perform an independent summative evaluation of the
project, and yet is physically located at the Amherst
campus. Dr. Mary Dean Sorcinelli, Director of the
UMass Center for Teaching, will oversee the standard
formative evaluations of the reformed courses. Dr.
Sorcinelli has many years of experience in providing
critical feedback and course revision suggestions
to UMass faculty. The Center will also conduct more
intensive formative evaluations such as videotape
analysis in selected courses. Dr. John Clement will
conduct intensive formative evaluation and research
in selected courses. He has twenty years of experience
in the analysis of student learning difficulties.
His efforts will address generalizable learning issues
which can inform other projects in the nation. Co-PI
Allan Feldman will document the organizational development
of STEMTEC.
|
|
|
Excerpt 8
[Oregon Collaborative]
|
Methodological Approach
|
A multi-method, multi-audience approach will be adopted
that collects both qualitative and quantitative data
and information from students, faculty and administrators
as well as from existing records.
|
Data
Collection Procedures & Schedule
|
Three clusters of evaluation-related activities
are planned. Cluster 1 will involve the Evaluation
Coordinator (EC), working with the Principal Investigator
(PI), the Project Coordinator and the Co-PIs to provide
NSF with the baseline, interim and final year information
they need for their evaluation. Cluster 2 will involve
the Evaluation Coordinator, working with the Research
Team, to develop and implement the evaluation activities
needed to address the major formative and summative
evaluation questions of the project. And Cluster 3
will consist of a series of special studies that are
reviewed and approved by the project leadership and
the Research Team.
(...)
|
Describes data sources
|
Five different groups will be asked to provide evaluative
information over the course of the project: faculty,
students, administrators, Co-PIs and those involved
with sponsoring special programs designed to recruit
students of color and women into the fields of math
and science as well as teacher education.
(...)
This section provides a summary of the major formative
and summative evaluation procedures for the project
and also proposes a series of special studies which
are contingent on the availability of resources.
|
Data
Collection Procedures & Schedule:
Describes multiple methods for collecting
formative data
|
Formative evaluation:
- annual evaluation of the Summer Institute
- late Fall surveys of Faculty Fellows & Mentor
Team members, and focus groups of Teachers-in-Residence
about implementation
- end-of-year surveys of Faculty Fellows, Teachers-in-Residence
and Mentor Team members about activities,
accomplishments, concerns and progress toward developing
a collaborative
- mid-year interviews with Co-PIs about project
activities, accomplishments, concerns and progress
toward developing a collaborative
- mid-year interviews of those involved with diversity-related
initiatives
- end-of-year review of institutional reports, summarized
during the summer, covering progress on project
goals and the kinds of project-related course or
program revisions made (new courses, revised courses,
courses dropped)
- annual summary of various indicators of interest
levels in OCEPT-related activities
- first, third and final year interviews with institutional
administrators about project progress
|
Describes multiple methods for collecting
summative data
|
Summative evaluation:
- a comparison of survey data collected each summer
from Institute participants with survey data collected
from the same individuals during the Winter Term
of the last year of the project
- a comparison of survey data collected Fall 1997
from samples of non-Institute faculty from different
institutions with survey data collected from the
same individuals during Winter Term of the last
year of the project; and a comparison of first-year
and last-year data collected from Institute and
non-Institute faculty (see the sketch after this list)
- a comparison of survey data collected during the
first and last year of the project from a sample
of undergraduates enrolled in math and science courses,
samples of students entering teacher education programs,
samples of students exiting teacher education programs
and samples of novice teachers (those in their first
or second year of teaching)
- a comparison of target project goals (see Cooperative
Agreement) with actual achieved goals, annually
and at the end of the project
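The Institute vs. non-Institute, first-year vs. last-year comparison above lends itself to a difference-in-differences summary, which nets out statewide trends common to both groups. A minimal sketch, assuming mean survey scores are available for both groups at both time points (all numbers hypothetical):

```python
# Illustrative sketch only: difference-in-differences summary of
# first-year vs. last-year survey means for Institute vs. non-Institute
# faculty. All values are hypothetical placeholders.

def mean(xs):
    return sum(xs) / len(xs)

institute_y1, institute_y5 = [3.1, 3.4, 2.9, 3.3], [4.0, 4.2, 3.8, 4.1]
comparison_y1, comparison_y5 = [3.0, 3.2, 3.1, 2.8], [3.2, 3.3, 3.1, 3.0]

institute_change = mean(institute_y5) - mean(institute_y1)
comparison_change = mean(comparison_y5) - mean(comparison_y1)

# Subtracting the comparison group's change removes trends shared by
# both groups, isolating the change associated with the Institute.
did = institute_change - comparison_change
print(f"Institute change: {institute_change:.2f}, "
      f"comparison change: {comparison_change:.2f}, DiD: {did:.2f}")
```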
|
Describes multiple methods for collecting
additional research data beyond the purposes of
the evaluation
|
Special studies:
- 4 or 5 case studies of institutional change: what
supports or hinders change efforts?
- individual case studies of students of color and
women in various stages of interest in and preparation
for a future career in teaching
- individual case studies of a sample of faculty
who teach undergraduate courses in math and science
and who participate in the second and third Summer
Institutes
- a study of misconceptions and critical barriers
to student learning of important concepts in math
and science and what approaches hold promise for
lowering these barriers
- a closer examination of curricular materials and
instructional approaches that may hold promise for
wider dissemination
|
|
|
Excerpt 9
[Montana Collaborative]
|
Methodological
Approach:
Describes data sources
|
Methods and Instruments
Two primary data sources are used by the evaluation:
(1) process data generated via internally developed
instruments and (2) institutional records.
Institutional data are collected primarily through
access to computer files at registrar's offices, student
teaching placement offices, and departments. For information
on Native American students, the databases developed
by the Center for Native American Studies, the Admissions
Office, Student Service Information Systems, and the Montana
Tracks Program have all been important sources.
|
Describes multiple methods
|
Process data will be collected by means of a variety
of approaches including questionnaires, checklists, interviews,
journals, minutes from meetings, calendars, and narrative
reports. The specific data collection strategy and
the types of methods and instruments will depend on
a variety of factors including the complexity of the
activity being evaluated, the timeline, and the need
and purpose for formative or summative evaluation.
Evaluation Strategies
Although each evaluation strategy is developed to suit
the needs of the particular activity, some common approaches
will be used for the major types of project activities.
The following summaries indicate the methods and data
collection strategies to be used: |
Describes multiple methods and data sources
Instruments:
Identifies multiple sources for data about
project activities
|
- Course Revisions:
- Documentation including course syllabi and
materials
- Field notes from interactions, observations,
interviews
- Student demographic data
- Questionnaire on course revision strategies
- Class observations
- Student interviews
- Student surveys
- Faculty interviews
- Faculty survey
(...)
[part of the outline of evaluation questions]
C. Evaluation of evaluation
|
Meta-Evaluation
Describes processes for monitoring the evaluation
of the project
|
- What mechanisms are used for project evaluation?
- How is the evaluation plan assessed and revised?
- What is the nature of the involvement by other
STEP staff in evaluation?
- What problems, changes, or extensions of the evaluation
plan take place?
|
|
|
Excerpt 10
[Boston/Cambridge Collaborative]
|
Data
Collection Procedures & Schedule
|
November 1 - March 1
- Finish complete review of West and Beckwith files
- Attend and track results of CC and committee meetings
- Conduct cross-institution programmatic comparison/analysis
- Research summer institute impact/interview mentors
- Write and submit evaluation report for 3/17 annual
meeting
March 1 - July 1
- Continue to track decisions and work of task committees
- Interview committee chairs, members
- Conduct analysis of organizational change impact
- Select intern/teacher sample for April/May interviews
- Conduct intern teacher interviews
- Submit sketch of Year Four Evaluation Plan to
CC for input
- Attend reverse site visit 4/7 at NSF
July 1 - September 1
- Continue to track decisions and work of task committees
- Analyze teacher/intern data
- Attend summer institute
- Conduct individual mentor interviews
- Finalize Year Four Evaluation Plan
- Submit Evaluation Project Update by September 15
|
|
|
Excerpt 11
[Maryland Collaborative]
|
Data
Collection Procedures & Schedule:
Describes multiple methods and data sources
|
From the perspectives of faculty and students, the
MCTP Research Group continually documents the unique
elements of the program, particularly the instruction
methods that model active, interdisciplinary teaching.
Data collection strategies include regular surveys
of students in MCTP classes; audio-taped and videotaped
interviews of MCTP faculty and students; observations
of selected MCTP classes; and collection of course
materials. Thus far, areas of research have focused
on the following topics:
What data are being collected for MCTP
research?
|
Describes multiple methods and instruments
|
Both numerical and qualitative data are being collected
to address the MCTP research questions. Numerical
data derive from the administration of two Likert-type
surveys developed by the MCTP Research Group: a college
student version and a faculty version of "Attitudes
and Beliefs About The Nature Of And The Teaching Of
Mathematics And Science". Participating faculty
and students in MCTP classes (both MCTP teacher candidates
and non-MCTP students) contribute to this database.
Qualitative data derive from semi-structured ongoing
interviews with participants in MCTP classes, MCTP
class observations, participant journals, and MCTP
course materials.
(...)
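A common first check on Likert-type instruments such as the attitudes-and-beliefs surveys described above is internal consistency. The MCTP excerpt does not specify this statistic; the sketch below shows Cronbach's alpha on hypothetical item responses purely as an illustration.

```python
# Illustrative sketch only: Cronbach's alpha for a Likert-type scale.
# Hypothetical data: rows = respondents, columns = items, values = 1-5.
import numpy as np

responses = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # per-item variances
total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores

# Standard Cronbach's alpha formula
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```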
|
|
|
Excerpt 12
[Oregon Collaborative]
|
Information
Sources & Sampling:
Explains why certain participants will be
targeted for data collection
Describes anticipated differences by sub-groups
and suggests sub-group analyses
|
Several assumptions will influence the evaluation.
First, those involved in the first year Summer Institute
are viewed as a kind of "vanguard" of
math and science educator leadership and
innovation in the State. They have been sought
out and selected precisely because they are such
"standouts". They are expected
to play a role in initiating changes and/or continuing
to initiate changes at their own institutions and
in Statewide organizations and to serve as mentors
to future Institute participants. Pre-assessment profiles
of those who participate during this first year are
expected to differ somewhat from those who participate
in subsequent years and possibly to show less change
on a number of characteristics over the five years
compared to their colleagues.
We expect to see more change in faculty practices
and attitudes at institutions where Institute faculty
continue to be especially active and visible. At one
planning meeting of the Research Team we considered
labeling our evaluation design an
"epidemiology model". Individuals
participate in the Institute, become
"carriers" of new perspectives and
practices and transmit their "virus"
to those in their local host culture. An interesting
sub-study will be one that seeks to understand how
new perspectives and practices take hold to a greater
extent in certain settings than in others. How is
it that some local environments or cultures are more
open or resistant than others to particular kinds
of change?
|
|
|
Excerpt 13
[Montana Collaborative]
|
Data
Collection Procedures & Schedule:
Describes role of key stakeholders in data
collection
|
These reports are "stand alone" documents
which include complete information about an activity,
how it was evaluated, and the results and/or discussion.
Report drafts are to be circulated to the staff member(s)
responsible for developing and/or carrying out the
activity to get input, especially on the description
of the purposes and procedures of the
activity.
|
Describes dissemination of evaluation data
|
The reports or portions of them are used
as a basis for an overall summative evaluation of the
project. They are also used for other purposes such
as developing papers for presentation at local or national
meetings, developing publications, or inclusion in project
reports to NSF. Reports and data are available to all
project staff for these or other purposes. The annual
report will be completed by March 15 each
year.
|
|
|
Excerpt 14
[Louisiana Collaborative]
|
Methodological
Approach:
Describes how project impact will be
ascertained
|
Data collected will provide the basis for evaluation
of individual Collaborative programs and the overall
Collaborative effort. The yardstick to measure success
of the Collaborative will be the extent to which national
standards are being achieved in the implementation
of specific reform programs, as reflected in the MAA's
A Call for Change and the NSTA/NCATE Standards.
A major component of evaluation efforts for the Collaborative
will be the rigorous review of campus-based proposals
by consultants. Their charge will be not only to
assess and review the potential or actual impact of
individual programs, but also to judge the collective
impact of all programs across the Collaborative.
|
|
|
Excerpt 15
[West Tennessee Geography Project]
|
Instruments:
Describes what information will be collected
from a survey
|
Institute Teacher Survey and
Interview
At the completion of the six weeks of instruction, all
participating teachers will be administered a survey
regarding their experiences and reactions.
The survey will employ both closed-ended (Likert-type
ratings) and open-ended items addressing the following
dimensions: (a) quality of instruction (e.g., organization,
clarity, interest level); (b) relevancy/usefulness
of the material; (c) appropriateness/value of each
of the five core concepts; (d) perceived degree of
learning of each core concept, geographic skills,
content, current issues, and instructional strategies
for teachers to use in their own classrooms; and (e)
helpfulness/usefulness of specific components of the
leadership model (sharing the vision, developing expertise,
empowering others through communication, and continued
monitoring and support). The survey will also address
the quality/usefulness of the leadership training
manual and teacher self-evaluations of the effectiveness
of the in-service training seminars that they conduct
at their schools. Demographic information (age, experience,
gender) will also be collected as possible correlates
with survey responses.
|
|
|
Excerpt 16
[Mercy
College]
|
States evaluation goals
Information
Sources & Sampling;
Methodological
Approach:
Describes one-group pre-post design with three
data collection points
Describes stakeholder involvement
Describes iterative process of instrument development
|
Post-Assessment: In our evaluation
component of the seminars/workshops we must test
(a) whether participants have become aware of new
information during the workshops/seminars and
(b) whether they are likely to integrate what they
have learned into their courses at their home
institutions. To gather this information, we will
ask all participants to complete three questionnaires.
One will be part of the workshop application (pre-test)
that will measure the level of participants' awareness
and experience before the workshop. A second will
be part of the workshop evaluation that will be
filled out at the conclusion of the workshop (post-test)
and will measure any growth in awareness. Finally,
there will be a follow-up questionnaire to measure
the integration of the information gleaned by
participants into their own courses. A workshop
evaluation with regard to format and content will
also be administered. We will also be in contact
with the Center for the Study of Ethical Development
as part of our external evaluation process and
we will use other pertinent instruments of evaluation
that they recommend. Ongoing modifications will
be made based on the results of the questionnaires
and instruments of evaluation. The quality and
extent of syllabus changes of the Mercy faculty
will be a component of the evaluation of the project
as will be the quality of the modules developed.
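As a concrete illustration of how responses from the three questionnaires might be tabulated, here is a minimal sketch; the field names and scores are invented, and the excerpt itself does not specify an analysis method.

```python
# Illustrative sketch only: summarizing awareness scores across the
# three collection points (application pre-test, end-of-workshop
# post-test, follow-up). Field names and data are hypothetical.
from statistics import mean

participants = [
    {"pre": 2.5, "post": 3.8, "follow_up": 3.6},
    {"pre": 3.0, "post": 4.1, "follow_up": 4.0},
    {"pre": 2.2, "post": 3.5, "follow_up": 3.9},
]

for point in ("pre", "post", "follow_up"):
    scores = [p[point] for p in participants]
    print(f"{point:>9}: mean = {mean(scores):.2f}")

# Pre-to-post growth suggests awareness gained during the workshop;
# pre-to-follow-up growth suggests integration that persisted.
```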
|
|
|
Excerpt 17
[Maricopa County Community College District]
|
|
The project evaluation plan is outlined below. Column 1 of the plan lists the tasks that involve the project evaluators. Column 2 lists the primary source(s) of the data for each task, with PD indicating project directorate and PE indicating project evaluators. Column 3 shows expected dates that the task will be included in the end-of-semester evaluation reports for the four semesters, beginning with Fall 2000 and ending with Spring 2002.
This plan is supplemented by event schedules for peer mentors, faculty development activities and module development.
Key
PD= Project Directorate
PE= Project Evaluators
|
Data
Collection Procedures & Schedule:
Presents collection schedule & data sources
|
Evaluation Plan

Task | Data Source | Report Dates
Develop project data-collection and evaluation plan | PD, PE | Fall 2000
Survey faculty on use of science reform approaches | PD, PE | Fall 2000
Audit peer mentor tasks | PD, PE | End of each semester
Audit faculty development tasks | PD, PE | End of each semester
Write module format specs | PD, PE | End of summer 2000
Train developers on module specs | PD, PE | Fall 2000 to Fall 2001
Review modules | PD, PE | End of each semester
Develop field-test methods and instruments | PE | Fall 2000
Conduct field tests and submit reports | PD, PE | End of each semester
Audit module development and implementation | PE | End of each semester
Write evaluation report | PE | End of each semester