
ITEST Annotated Report Excerpts

Return to ITEST Reports

Design

The report excerpts below are accompanied by annotations identifying how each excerpt represents the Design Criteria.


Excerpt 1 [Silicon Prairie Initiative for Robotics in Information Technology (SPIRIT), University of Nebraska, Lincoln]

Instruments
Describes content and structure of survey instrument

Teachers responded to a survey that was given at the beginning of the workshops and then again at the end. The beginning survey asked for basic biographical information, professional qualifications, teaching experience, and professional development. A series of questions also measured perceptions about project-based learning (PBL) and science, technology, engineering, and mathematics (STEM). Another set of questions was designed to measure participants' evolving experiences and expectations, but did not repeat the demographic or background questions. The ending survey did, however, ask three specific open-ended questions about the teachers' experiences in the workshops they had just completed. Responses to the open-ended questions were reviewed and coded into categories.

 

Excerpt 2 [Silicon Prairie Initiative for Robotics in Information Technology (SPIRIT), University of Nebraska, Lincoln]

Instruments
Reports on technical quality in terms of reliability and its implications for conducting analyses

Reliability of the subscale for perceptions about PBL was measured using ten items. Cronbach's Alpha for the PBL scale was .82, which is an acceptable level of reliability. Reliability of the subscale for perceptions about STEM was measured using only 10 of the 13 items administered, as three items did not perform well and were adversely affecting reliability of the scale. Using just the 10 acceptable items, Cronbach's Alpha was .75, which is an acceptable level of reliability.
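For readers unfamiliar with the statistic, here is a minimal sketch of how Cronbach's alpha is computed from an item-response matrix; the data and variable names are hypothetical, not drawn from the SPIRIT survey.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses: 30 teachers x 13 STEM-perception items
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(30, 13)).astype(float)

print(cronbachs_alpha(responses))           # alpha using all 13 items
print(cronbachs_alpha(responses[:, :10]))   # alpha after dropping three weak items
```

Dropping poorly performing items, as the excerpt describes, simply means recomputing alpha on the reduced item set and keeping the subset that yields acceptable reliability.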

 

Excerpt 3 [Urban Ecology, Information Technology, and Inquiry Science for Students and Teachers, Boston College]

Instruments
Describes data used to revise instruments and document their technical quality

Self-Efficacy and Other Attitudes Regarding Career Education, Science Teaching, and Technology Use

In Year 1, EDC had developed and administered a pre-post teacher survey at the 2006 summer institute, and in Year 2, EDC analyzed and reported on the data from that survey to the BC team in November 2007. The findings were encouraging, but for some items on the survey there was a great deal of variability. Therefore, for Year 2 the survey was revised, with items created, modified, and deleted for some scales in order to achieve greater reliability. The survey was tested for reliability with a pilot group of 79 undergraduate education students and adjusted based on the results from Cronbach's alpha and factor analyses. The instrument was sufficiently reliable to be administered during the Year 2 summer institute training in July and August 2007. The findings of the pre-post teacher survey are summarized as follows (survey included in Appendix A).
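A sketch of the kind of item screening this describes, using hypothetical pilot data: alpha is recomputed with each item deleted, and factor loadings are inspected. scikit-learn's FactorAnalysis stands in here for whatever software EDC actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def alpha(x: np.ndarray) -> float:
    # Cronbach's alpha for a (respondents x items) matrix
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# Hypothetical pilot data: 79 students x 8 candidate items for one scale
rng = np.random.default_rng(1)
pilot = rng.normal(size=(79, 8))

# "Alpha if item deleted": an item whose removal raises alpha is a candidate
# for deletion or rewording.
print(f"all items: alpha = {alpha(pilot):.3f}")
for j in range(pilot.shape[1]):
    print(f"drop item {j}: alpha = {alpha(np.delete(pilot, j, axis=1)):.3f}")

# Factor loadings: items that load weakly on the intended factor flag problems
# with the scale's coherence.
fa = FactorAnalysis(n_components=1).fit(pilot)
print(fa.components_)  # loadings of each item on the single factor
```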

 

Excerpt 4 [Urban Ecology, Information Technology, and Inquiry Science for Students and Teachers, Boston College]

Instruments
Summarizes technical quality of subscales for instrument

Self-Efficacy and Other Attitudes Regarding Career Education, Science Teaching, and Technology Use

As mentioned above, items 11-45 had been grouped into six attitude scales. A reliability score (Cronbach's alpha) was calculated for each. Reliability was calculated on the entire twenty-nine-member sample by including the pre-test responses for the twenty-six who completed the pre-test and the post-test responses for the three others. The results below in Table 1 demonstrate sufficient reliability for all scales.

Table 1. Scale Reliabilities for the Pre-Post Teacher Survey

| Domain | Scale Name | Scale Description | Number of Items* | n | Cronbach's Alpha |
|---|---|---|---|---|---|
| Career Education | Career Ed.: Ownership | Educators' perception of the importance of their own role in providing STEM career information to students | 5 | 29 | .745 |
| Career Education | Career Ed.: Competency | Educators' level of knowledge about how to guide students into STEM careers | 4 | 29 | .873 |
| Science Learning and Teaching | Self-Efficacy Teaching Field Investigations | Educators' self-efficacy in teaching science field investigations (comfort with site selection, managing students and equipment outdoors) | 3 | 28 | .927 |
| Technology Use | Attitude: IT to Engage Students in Science Content | Educators' attitude about the usefulness of IT to engage students with scientific content | 5 | 29 | .932 |
| Inquiry Science | Formulating Explanations, Models, and Arguments | Educators' self-efficacy in teaching students to formulate scientific explanations, models, and arguments | 7 | 29 | .967 |
| Inquiry Science | Designing and Conducting Investigations | Educators' self-efficacy in teaching students to design and conduct scientific investigations | 11 | 27 | .986 |

*The actual items comprising each scale are listed in Appendix A, "Educator Survey: Scales and Items."

The next step in analyzing items 11-45 was to compute scale scores. A scale score is the mean of a subject’s responses to the individual items comprising a scale. Then, we averaged the scale scores for each scale, coming up with a set of grand means. A two-tailed, paired-sample t-test was used to check the statistical significance of the differences between the grand means on the pre- and post- administrations of each scale.
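To make that procedure concrete, here is a minimal sketch with hypothetical pre/post data; scipy's ttest_rel performs the two-tailed, paired-sample t-test described, while the sample sizes and response scale below are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical 1-5 Likert responses for one 5-item scale, 26 paired subjects
rng = np.random.default_rng(2)
pre = rng.integers(1, 6, size=(26, 5)).astype(float)
post = np.clip(pre + rng.normal(0.3, 0.5, size=pre.shape), 1, 5)

# Scale score: the mean of a subject's responses to the items in the scale
pre_scale = pre.mean(axis=1)
post_scale = post.mean(axis=1)

# Grand means: scale scores averaged across subjects
print(f"pre grand mean:  {pre_scale.mean():.2f}")
print(f"post grand mean: {post_scale.mean():.2f}")

# Two-tailed, paired-sample t-test on the pre/post scale scores
t_stat, p_value = ttest_rel(post_scale, pre_scale)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```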

 

Excerpt 5 [EcoScienceWorks (Foundation for Blood Research)]

Methodological Approach
Describes the purposes, methods, data collection, and schedule of the formative evaluation

The evaluation consisted of observation as well as survey methodology. Specifically, the evaluator spent two days at Camp Kieve observing the process and recording impressions. An evaluation was conducted at week's end to collect data from teachers about their experience, successes, and concerns. During the second week on Hog Island, the evaluator was in residence the entire week with project staff, teachers, and students. Again, she observed all activities, making notes and recording impressions. Five different curricular units, each with an EcoBeaker: Maine Explorer (EBME) computer simulation component and a field exercise (FE), were taught to the 13 attending students. Students were asked to complete an evaluation of the EBME and FE for each of the units. Additionally, students completed an end-of-week feedback form in which they evaluated the week's activities.

 

Excerpt 6 [Highly Interactive, Fun Internet Virtual Environment in Science (HI-FIVES), North Carolina State University]

Instruments
Overview of evaluation instruments for Cohort 1

Just prior to their program participation, the five ITEST HI-FIVES Kenan Fellows (KF 1) completed a baseline Teacher Leadership with Technology Survey in January 2006. This survey was designed to capture their incoming perceptions regarding their ability to serve in a technology leadership capacity within their school, as well as their own likelihood of using technology to foster problem-based instruction. A professional development evaluation was also administered immediately following the Gaming in Science Workshop; it provided data on whether HI-FIVES Fellows believed the training enhanced their knowledge and comfort level with designing and using video games for teaching science. ITEST HI-FIVES Fellows also recorded (and will continue to record) their leadership behaviors on the leadership profile section of the Kenan Fellows web site in the form of presentations, grant writing, and other leadership activities such as school committee work, curriculum writing, or professional development provided to colleagues. Further, Cohort 1 completed the TAC 3.2B instrument, which was created to measure technology efficacy among the participants.

 

Excerpt 7 [Robotics & CPS/CIS in 4-H: Workforce Skills for the 21st Century, University of Nebraska]

Instruments
Description of the development of a think-aloud protocol for a robotics problem

The second activity involves a “think-aloud” protocol in which students are asked to verbalize how they approached and solved the problems in a robotics lesson on gears and gear ratios. The lesson is built around the concept of using gear ratios to increase the speed and the related distance a robot will travel over three seconds. The lesson includes an experiment with three conditions based on the number of teeth (t) on each gear. The three experimental conditions in the lesson are: 1) a 24t gear meshed with a 24t gear, 2) an 8t gear meshed with a 40t gear, and 3) an 8t gear meshed with a 24t gear. Each experimental condition is tested three times, and data concerning the gear ratio, average speed (cm/sec) per trial, and overall average speed for each condition are recorded. The STEM concepts covered in this example include science (gathering and comparing data), technology (programming the robot to move forward for three seconds), engineering (designing and modifying gears), and mathematics (calculating gear ratios and averages). The student's explanation will be videotaped and analyzed according to a rubric developed by the principal investigators. It is estimated that this activity will take approximately 10 minutes.
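A rough sketch of the arithmetic the lesson asks students to do. The trial distances are invented, and the ratio convention (driver teeth over driven teeth, so values above 1 speed the output up) is an assumption for illustration rather than the project's own definition.

```python
# Hypothetical worked example of the gear-ratio lesson's arithmetic.
# Convention assumed here: ratio = driver teeth / driven teeth, so a ratio
# above 1 spins the driven gear (and the wheel) faster than the motor.
conditions = {
    "24t meshed with 24t": (24, 24),
    "8t meshed with 40t": (8, 40),
    "8t meshed with 24t": (8, 24),
}

# Invented distances (cm) traveled in three 3-second trials per condition
trials_cm = {
    "24t meshed with 24t": [45.0, 44.0, 46.0],
    "8t meshed with 40t": [18.0, 17.5, 18.5],
    "8t meshed with 24t": [30.0, 29.0, 31.0],
}

TRIAL_SECONDS = 3.0

for name, (driver, driven) in conditions.items():
    ratio = driver / driven
    speeds = [d / TRIAL_SECONDS for d in trials_cm[name]]  # cm/sec per trial
    avg_speed = sum(speeds) / len(speeds)                  # overall average
    print(f"{name}: gear ratio {ratio:.2f}, average speed {avg_speed:.1f} cm/sec")
```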

 

Excerpt 8 [SRI Build IT]

Methodological Approach
Overview of methods, types of instruments, and respondents

Methods
The formative evaluation of the Build IT program is an extensive and organic process of data collection by SRI researchers, ongoing communication between sites and SRI, and efforts by the Build IT group leaders to gather and submit information to SRI. The various data sources of the formative evaluation of Unit 1 comprised a set of structured observations at each of the Build IT programs, pre- and post-interviews with key stakeholders (i.e., program manager, program coordinators, and group leaders), weekly group leader feedback forms, opportunistic girl participant feedback forms, and review of the curriculum and artifacts produced by the girls and group leaders.

 

Excerpt 9 [SRI Build IT]

Methodological Approach

Data
We used student self-report survey data to assess girls' technology-related development. Surveys were distributed to Build IT participants at EXPLORE Middle School (Oakland Unified School District [OUSD]) and Muir Middle School (San Leandro Unified School District [SLUSD]) at the beginning and end of the school year. Nearly half (n=25) of the original 55 Build IT participants at these two sites exited the ALL STARS program before completing a post-test (and, in some cases, pre-test) assessment.

The comparison group of non-Build IT middle school girls consisted of a convenience sample of girls at schools in OUSD and SLUSD with which HTA staff were able to secure agreements. Forty-five female students at Bret Harte Middle School were selected as the OUSD comparison group. We had intended to collect information from girls in the after-school program; however, surveys were administered by school personnel to girls in a PE class during the school day. The second comparison group consisted of 45 non-ALL STARS SLUSD after-school program participants at Muir Middle School. The after-school program at Muir served over 100 students during 2005-2006. A variety of activities were offered, and the highlight was the academic intervention run by teachers. According to the activity log submitted in the fall, the after-school program offered year-round academic intervention for 4 hours each week for students who scored Basic or below in either subject. These sessions were taught by certificated math and English teachers. There were also homework sessions, arts & crafts, dance, board games, basketball, and critical thinking sessions, but additional supplies and training were needed to bring the enrichment portion up to par.

 

Excerpt 10 [SRI Build IT]

Instruments

Participants were asked to respond to a pair of surveys: the IT Attitudes Survey covered the topics of academic plans, interest in and attitudes regarding IT careers, and the perceived skills of respondents. The Fundamental IT Concepts Survey asked respondents to read a series of brief vignettes regarding issues and problems that arise in the design process or everyday technology usage and choose the correct response from a multiple-choice selection. The use of multiple-choice items, as opposed to open response, was deemed desirable in order to limit survey administration time and to increase the response rate. Items on both surveys were constructed principally by the Build IT SRI team with the assistance and input of ALL STARS staff and HTA evaluators. It should be emphasized that this report covers the pilot year of both curriculum development and survey development. As such, we will use the results presented herein to assess the adequacy of the survey instruments to measure the attitudes and knowledge of study participants.

HTA staff conducted interviews with the Build IT Girls Inc staff at the end of the school year in order to assess their overall views of the curriculum, the influence of Build IT on staff and participants, and their satisfaction with the program. These interviews consisted of a set of 12 questions, and each interview ran for approximately 1 hour. SRI staff conducted formative interviews with the Girls Inc staff in order to address specific issues relating to each curriculum unit.