Teacher Education Embedded Plan 2

A Summer Research Experience For Teachers

This evaluation plan is embedded in a larger proposal prepared by Southwest Texas State University for the Summer Research Experience for Teachers project.

Table of Contents:

  1. Evaluation Plan
    • Evaluation Overview: Evaluation Purposes, Evaluator Credibility
  2. Evaluation of Summer Activities
    • Design: Methodological Approach, Information Sources & Sampling, Instruments, Data Collection Procedures & Schedules
    • Analysis Process: Quantitative Analysis
  3. Evaluation of Activities in the Academic Year following the Summer Research Experience
    • Design: Methodological Approach, Information Sources & Sampling, Instruments, Data Collection Procedures & Schedules
    • Analysis Process: Quantitative Analysis

Evaluation Plan

We will employ formative and summative evaluation of both the cognitive and affective effects of the summer research experience in order to ensure and assess the success of our program. Formative evaluation measures will be employed to continually hone the project to maximize the benefits of the program for mentors, teachers, and teachers' students. Summative evaluations will be used to determine what effect the program is having at these three levels and whether the program is accomplishing its goals. We will consider the program a success if we can demonstrate (1) improvement in teachers' understanding of disciplinary content, (2) an increase in their enthusiasm for science, (3) a transfer of the research experience into the classroom through the development of new teaching exercises based on that experience, (4) the development of a long-term relationship between teachers and the research community, particularly through the use of educational technology, (5) a positive impact on students' appreciation and understanding of science, and (6) dissemination of the program, especially dissemination that results in an increase in the number of applicants.

The evaluation procedure will be designed, implemented and analyzed in consultation with Dr. Rose Asera (Charles A. Dana Center, University of Texas at Austin) and Dr. Paul Raffeld (Testing Research-Support and Evaluation Center, SWT). The Dana Center is home to a number of educational initiatives that work for equity and excellence for all students in Texas. These initiatives include the Texas SSI, the regional center for technical assistance to federal programs, and the Texas Educational Network (TENET), among others. The research and evaluation component of the Dana Center works to incorporate quantitative and qualitative evaluation in all Dana Center program design and implementation activities and to address applied research questions raised in the process of implementing center initiatives. The Testing Research-Support and Evaluation Center at SWT was established to support the faculty with regard to research methods, data analysis, test construction, survey construction and program evaluation. Dr. Raffeld was the first director of the Center, which was established in 1994. He has 22 years of experience in program evaluation at both the public school and university levels, teaches courses in test development and statistics, and is currently assisting in the program assessment of our university departments.

Evaluation of Summer Activities

Formative Evaluation. Formative evaluation measures will be designed to optimize the research experience for participants, both teachers and mentors, and the transferability of activities. As a part of the application process, teachers will be asked to identify and rate areas of interest for their research experience (1 = first choice, most interested; 2 = second choice; 3 = third choice) and will be asked to rate themselves in terms of competence in the subject area (1 = highly competent; 2 = moderately competent; 3 = barely competent). Responses to these questions will be used in matching teachers to mentors, with area of interest being weighted more heavily than perceived level of competence (see C. Recruitment and Placement of Teachers with Mentors). We will try to place teachers in labs most closely corresponding to their greatest area of interest and, where possible, in areas in which they feel moderately competent. The reason for placing teachers where they feel moderately competent is to maximize gains in their learning of disciplinary content while minimizing the anxiety and frustration they are likely to feel if they are thrown into a situation where they feel incompetent. These data may also be used to recruit additional mentors into the program if a given interest area cannot support the number of teachers interested in that area.
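
As a rough illustration only, the weighted matching described above could be sketched in code. Everything below is hypothetical: the teacher names, ratings, lab capacities, and the particular scoring rule are stand-ins, since the exact weighting is left to the project staff.

    from itertools import product

    # Hypothetical inputs, using the application-form scales described above.
    # interest[teacher][lab]: 1 = first choice ... 3 = third choice
    # competence[teacher][lab]: 1 = highly, 2 = moderately, 3 = barely competent
    interest = {
        "Alice": {"biology": 1, "chemistry": 2, "physics": 3},
        "Ben":   {"biology": 2, "chemistry": 1, "physics": 3},
    }
    competence = {
        "Alice": {"biology": 2, "chemistry": 1, "physics": 3},
        "Ben":   {"biology": 3, "chemistry": 2, "physics": 1},
    }
    capacity = {"biology": 1, "chemistry": 1, "physics": 1}  # open mentor slots

    def score(teacher, lab):
        # Lower is better: interest dominates, and "moderately competent" (2)
        # is preferred over both extremes, per the rationale above.
        return 10 * interest[teacher][lab] + abs(competence[teacher][lab] - 2)

    # Greedy assignment: best-scoring (teacher, lab) pairs first, honoring capacity.
    placements = {}
    for teacher, lab in sorted(product(interest, capacity), key=lambda p: score(*p)):
        if teacher not in placements and capacity[lab] > 0:
            placements[teacher] = lab
            capacity[lab] -= 1

    print(placements)  # {'Alice': 'biology', 'Ben': 'chemistry'}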

In order to prepare teachers and mentors for the summer research experience, teachers accepted into the program will be given a short diagnostic examination, the "we're-about-to-learn-really-useful-subjects" (WALRUS) exam, covering the general subject area of their research (biology, chemistry, math, physics, or technology). This cognitive exam will be used to help identify content areas in need of review so that prescriptive measures can be taken during the course of the research experience.

The level of participation of the teachers and their mentors will be evaluated during the course of the research experience. Teachers will keep a journal (separate from any laboratory notebook required by their mentor) in which they will record their daily activities, thoughts and notes on interactions with their mentor (e.g., hours spent with the mentor). A set of questions will be developed to guide teachers on the minimum expected content of their journal entries. The journals and log books will be the property of SWT and will eventually be archived by the PI and co-PI.

Over the course of the project, mentors will be evaluated on the experience they provide to the teacher. This evaluation will be based on information obtained during both the weekly meetings with the participating teachers and periodic focus group meetings of the mentors. The first focus group meeting will occur during the second week of the summer experience and will be led by a professional facilitator. The objective of the meeting will be to have the scientist/mentors present an outline of their teachers' projects for general interest and for quality control purposes, and to avert or address any problems they may have with participating teachers. If irresolvable problems arise between a scientist/mentor and a teacher, the teacher will be reassigned to a different mentor. In the longer term, mentors consistently providing a poor research experience to teachers will be removed from the program. Such determinations will be made as a group by the Key Personnel and will be based on their first-hand knowledge, on teachers' written evaluations of themselves, their mentor, and the program, and on discussions during focus group meetings.

Summative Evaluation. Summative evaluation will begin with data collected on the applicant pool. Demographic information, including age, ethnicity, sex, marital status, family status, major in college, membership in professional organizations, years of experience in teaching the subject area, professional journals read regularly and recent professional development experiences, will be provided at the applicants' discretion and summarized. Marital and family status will be evaluated for two reasons: (1) to determine if married teachers and teachers with children are applying at a rate corresponding to their representation in the population, and (2) to determine whether inclusion of literature on activities and facilities available for children would be helpful in the recruitment process. Effort will be made to maintain parity between the applicant pool and the participants, especially with respect to sex, ethnicity and age. We will not use these data to correlate performance with any of these parameters. Applicants will also be asked to designate areas of interest and their level of comfort/confidence in the designated areas. Additionally, we will summarize how teachers were placed (e.g., 75% were placed with their first choice, 20% with their second choice, etc.) for use during the analysis of the outcome of the program.

Summative evaluation will include cognitive data on the project's success in enhancing participants' disciplinary knowledge. These data will be obtained through pre- and post-testing. Once teachers have been accepted into the program and assigned to a subject area, they will be administered a WALRUS test (see Formative Evaluation) in that area. The same test will be administered to teachers after their research experience to determine whether they have advanced in their knowledge of their area of interest, particularly in the sub-area where they have been doing research. The percent change in exam scores will be summarized for the group by subject area as well as for the whole group. The amount of time spent with the mentor, the amount of time spent on the project (as determined by journal entries), the initial level of interest in the research area to which the teacher was assigned, and the comfort level the teacher felt initially will all be considered when examining percent change in exam scores.
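
The percent-change summary lends itself to a short computation; the sketch below assumes hypothetical teacher IDs, subject areas, and pre/post WALRUS scores purely for illustration.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical records: (teacher, subject area, pre-test score, post-test score).
    records = [
        ("T1", "biology",   52, 78),
        ("T2", "biology",   60, 75),
        ("T3", "chemistry", 45, 72),
        ("T4", "physics",   70, 81),
    ]

    def pct_change(pre, post):
        return 100.0 * (post - pre) / pre

    # Summarize percent change by subject area, then for the whole group.
    by_subject = defaultdict(list)
    for _, subject, pre, post in records:
        by_subject[subject].append(pct_change(pre, post))

    for subject, changes in sorted(by_subject.items()):
        print(f"{subject:10s} mean % change = {mean(changes):5.1f} (n={len(changes)})")

    overall = [c for changes in by_subject.values() for c in changes]
    print(f"{'overall':10s} mean % change = {mean(overall):5.1f} (n={len(overall)})")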

Evaluation of Activities in the Academic Year following the Summer Research Experience

Formative Evaluation. In addition to follow-up activities described earlier, scientist/mentors and teacher participants will keep records during the academic year following the research experience, recording the number and type of interactions they have with each other. Key Personnel will monitor the mentors' log books monthly to evaluate the level of interactions that occur and will evaluate whether changes need to be made in the number or type of interactions.

Teachers admitted into the program will be asked to use questionnaires to survey their students' attitudes toward science prior to the teachers' participation in the program. The questionnaires will document students' knowledge of science, awareness of career opportunities in the sciences, the nature of scientific research, the future of scientific research and more. Teachers will process the surveys during the summer, analyze the results, consider what the desirable responses would be, and then use this information in planning activities designed to transfer the research experience.

Summative Evaluation. The summative evaluation of the activities that occur after the summer research experience will focus primarily on the teacher participants and their students. Teachers will be asked to evaluate changes they perceive in themselves and their students as a result of the experience. Survey instruments used in the evaluation will focus on teachers' excitement about their subject area, their perceived success in transferring their experience from the research laboratory into the classroom, and their perception of the impact on their students. In addition, a catalog of changes made in teaching practices will be developed, in which the changes will be categorized and summarized (e.g., 70% of teachers incorporated new labs; 90% of teachers changed lecture content). These data will also be used to determine what sorts of changes have the greatest impact on students' attitudes and mastery of content.

The initial analysis of the impact of the program on students will be based on changes observed between students' answers to questions on surveys administered by a teacher prior to the research experience (see Formative Evaluation of post-research activities) and students' answers after the research experience. Teachers will administer identical surveys to their students at the end of the year following the research experience. The outcomes of the surveys will be compared and analyzed by the SWT Testing Center.
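
One plausible form this comparison could take is a paired test on per-student attitude scores; the numbers below are hypothetical, and the actual analysis method will be chosen by the Testing Center.

    from scipy import stats  # SciPy assumed available for the paired t-test

    # Hypothetical attitude scores (e.g., summed Likert items per student),
    # before and after the teacher's research experience, paired by student.
    pre  = [18, 22, 15, 30, 24, 19, 27, 21]
    post = [21, 25, 14, 33, 29, 22, 30, 26]

    t_stat, p_value = stats.ttest_rel(post, pre)
    mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
    print(f"mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")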

The impact on students will be assessed further by instruments designed by participating teachers. This evaluation tool would require a written description of the changes to be made in teaching practice and testing of students to evaluate the effects of those changes. For example, if a teacher chose to cover a content area previously left uncovered, she or he would test students on the newly covered area.

Quantitative assessment of the impact of this program on students will be made by tallying the number of students of participating teachers who go on to take advanced courses in science. The post-experience students can be compared to the pre-experience students of a given teacher. Data can also be summarized across the teachers who participated in a given discipline. We are aware that looking at changes in the pattern of courses taken is vulnerable to changes in state mandates about graduation requirements; however, such changes can be easily tracked and taken into account.
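
This pre/post comparison amounts to comparing two proportions; a simple two-proportion z-test, sketched below with hypothetical tallies for a single teacher, is one way it could be carried out.

    from math import sqrt, erfc

    # Hypothetical tallies: students going on to advanced science courses,
    # for one teacher's pre-experience vs. post-experience cohorts.
    pre_advanced,  pre_total  = 12, 90
    post_advanced, post_total = 24, 95

    p1, p2 = pre_advanced / pre_total, post_advanced / post_total
    pooled = (pre_advanced + post_advanced) / (pre_total + post_total)
    se = sqrt(pooled * (1 - pooled) * (1 / pre_total + 1 / post_total))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

    print(f"pre = {p1:.1%}, post = {p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")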

Although not the main goal of the program, an assessment will be made of the effect of the program on the mentor scientists. The tool used for the assessment will include questions or statements employing a five-point Likert scale and will also pose questions to be answered in essay format. These questions will be designed to determine what mentor scientists learn about schools, classrooms, teachers and students, and will also evaluate their ideas on preparing future citizens and scientists.
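
As a small sketch of how the Likert portion of such an instrument might be scored, the item names and responses below are hypothetical; negatively worded items are reverse-coded before averaging, a common convention for Likert instruments.

    from statistics import mean

    # Hypothetical mentor responses to five-point Likert items
    # (1 = strongly disagree ... 5 = strongly agree).
    responses = {
        "learned_about_schools":    [4, 5, 3, 4],
        "ideas_on_future_citizens": [5, 4, 4, 5],
        "program_was_a_burden":     [2, 1, 3, 2],  # negatively worded item
    }
    reverse_coded = {"program_was_a_burden"}

    for item, scores in responses.items():
        adjusted = [6 - s if item in reverse_coded else s for s in scores]
        print(f"{item:26s} mean = {mean(adjusted):.2f}")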

One of the interim measures of programmatic success will be the ways the information is shared and extended, for example with other teachers within the school or district, through the established network, or through professional organizations. We anticipate that excited, engaged teachers will want to describe their work and share it with colleagues. Cataloging whether and how teachers share their summer experience with their colleagues will therefore provide a measure of success. Additionally, we would consider repeat applications from the same teacher, or applications from other teachers at the same school, to be a measure of the program's success.
