ITEST Annotated Report Excerpts

Design

The report excerpts below are paired with annotations identifying how each excerpt represents the Design criteria.

Excerpt 1
[Silicon Prairie Initiative for Robotics in Information Technology (SPIRIT), University of Nebraska, Lincoln]

Annotation: Instruments. Describes the content and structure of the survey instrument.

Teachers responded to a survey administered at the beginning of the workshops and again at the end. The beginning survey asked for basic biographical information, professional qualifications, teaching experience, and professional development. A series of questions also measured perceptions about project-based learning (PBL) and science, technology, engineering, and mathematics (STEM). Another set of questions, given at the end, was designed to measure participants' evolving experiences and expectations; the ending survey did not repeat the demographic or background questions. It did, however, ask three specific open-ended questions about the teachers' experiences in the workshops they had just completed. Responses to the open-ended questions were reviewed and coded into categories.

Excerpt 2
[University of Nebraska, Lincoln]

Annotation: Instrument. Reports on technical quality in terms of reliability and its implications for conducting analyses.

Reliability of the subscale for perceptions about PBL was measured using ten items. Cronbach's Alpha for the PBL scale was .82, which is an acceptable level of reliability. Reliability of the subscale for perceptions about STEM was measured using only 10 of the 13 items administered, as three items did not perform well and were adversely affecting reliability of the scale. Using just the 10 acceptable items, Cronbach's Alpha was .75, which is an acceptable level of reliability.
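
For reference, Cronbach's alpha for a k-item scale is k/(k-1) * (1 - sum of item variances / variance of the summed score). The sketch below illustrates the procedure the excerpt describes, including the "drop items that hurt reliability" step; the data and all names are hypothetical, not from the report.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: np.ndarray) -> list[float]:
    """Alpha recomputed with each item dropped in turn; items whose removal
    raises alpha are candidates for deletion, as in the excerpt."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]

# Hypothetical data: 60 respondents answering 13 Likert-type STEM items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(60, 13)).astype(float)
print(cronbach_alpha(responses))
print(alpha_if_deleted(responses))
```
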
Excerpt 3
[Boston College]

Annotation: Instrument. Describes the instruments and the data used to revise them and to document their technical quality.

In Year 1, EDC had developed and administered a pre-post teacher survey at the 2006 summer institute, and in Year 2, EDC analyzed the data from that survey and reported on it to the BC team in November 2007. The findings were encouraging, but for some items on the survey there was a great deal of variability. Therefore, for Year 2 the survey was revised: items on some scales were created, modified, or deleted in order to achieve greater reliability. The survey was tested for reliability with a pilot group of 79 undergraduate education students and adjusted based on the results of Cronbach's alpha and factor analyses. The instrument was sufficiently reliable to be administered during the Year 2 summer institute training in July and August 2007. The findings of the pre-post teacher survey are summarized as follows (survey included in Appendix A).
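
As an aside on the piloting step mentioned above: reliability work of this kind typically pairs per-scale Cronbach's alpha (see the sketch under Excerpt 2) with an exploratory factor analysis that flags items loading on no factor. A minimal illustration, with entirely hypothetical pilot data; the item count and loading threshold are assumptions, not values from the report.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical pilot data: 79 respondents (the pilot-group size from the
# excerpt) by 35 items (an illustrative count).
rng = np.random.default_rng(1)
pilot = rng.normal(size=(79, 35))

# Fit a six-factor model (one factor per intended scale, as an assumption).
fa = FactorAnalysis(n_components=6, random_state=0).fit(pilot)
loadings = fa.components_.T                     # shape: (items, factors)

# Items whose largest absolute loading is small load on no factor and are
# candidates for revision or deletion before the full administration.
weak = np.where(np.abs(loadings).max(axis=1) < 0.30)[0]
print("Items with no loading above 0.30:", weak)
```
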
Excerpt 4
[Boston College]

Annotation: Instrument. Summarizes the technical quality of the instrument's subscales.

Self-Efficacy and Other Attitudes Regarding Career Education, Science Teaching, and Technology Use

As mentioned above, items 11-45 had been grouped into six attitude scales, and a reliability score (Cronbach's alpha) was calculated for each. Reliability was calculated on the entire twenty-nine-member sample by including the pre-test responses of the twenty-six participants who completed the pre-test and the post-test responses of the three others. The results in Table 1 below demonstrate sufficient reliability for all scales.

Table 1. Scale Reliabilities for the Pre-Post Teacher Survey

| Domain | Scale Name | Scale Description | Number of Items* | n | Cronbach's Alpha |
| --- | --- | --- | --- | --- | --- |
| Career Education | Career Ed.: Ownership | Educators' perception of the importance of their own role in providing STEM career information to students | 5 | 29 | .745 |
| Career Education | Career Ed.: Competency | Educators' level of knowledge about how to guide students into STEM careers | 4 | 29 | .873 |
| Science Learning and Teaching | Self-Efficacy Teaching Field Investigations | Educators' self-efficacy in teaching science field investigations (comfort with site selection, managing students and equipment outdoors) | 3 | 28 | .927 |
| Technology Use | Attitude: IT to Engage Students in Science Content | Educators' attitude about the usefulness of IT to engage students with scientific content | 5 | 29 | .932 |
| Inquiry Science | Formulating Explanations, Models, and Arguments | Educators' self-efficacy in teaching students to formulate scientific explanations, models, and arguments | 7 | 29 | .967 |
| Inquiry Science | Designing and Conducting Investigations | Educators' self-efficacy in teaching students to design and conduct scientific investigations | 11 | 27 | .986 |

*The actual items comprising each scale are listed in Appendix A, "Educator Survey: Scales and Items."

The next step in analyzing items 11-45 was to compute scale scores. A scale score is the mean of a subject's responses to the individual items comprising a scale. The scale scores were then averaged across subjects to produce a grand mean for each scale, and a two-tailed, paired-sample t-test was used to test the statistical significance of the difference between the grand means on the pre- and post-administrations of each scale.
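
A minimal sketch of that computation, using hypothetical response data for a single scale (scipy's `ttest_rel` performs the two-tailed paired-sample t-test; the sample sizes and item count below are illustrative):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pre/post responses for one scale: rows are the subjects who
# completed both administrations, columns are the items comprising the scale.
rng = np.random.default_rng(2)
pre = rng.integers(1, 6, size=(26, 5)).astype(float)
post = np.clip(pre + rng.normal(0.4, 1.0, size=pre.shape), 1, 5)

# Scale score = mean of a subject's responses to the items in the scale.
pre_scores, post_scores = pre.mean(axis=1), post.mean(axis=1)

# Grand means, and a paired-sample t-test (two-sided by default).
print("grand means:", pre_scores.mean(), post_scores.mean())
t, p = ttest_rel(post_scores, pre_scores)
print(f"t = {t:.2f}, p = {p:.4f}")
```
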