Step 5: Determine the optimal sample size.
You learn from the superintendent and the math curriculum committee that they do not
expect to see a big intervention effect when comparing the intervention and control
groups on the standardized test scores. Therefore, they want to be assured that the
statistical tests used in data analysis will detect any true difference, even if it is
small. Otherwise, they will not be able to properly weigh the pros and cons of a full
adoption of the intervention.
There are no previous studies of the effect of Math World on student performance.
Hence, there is no empirical basis for judging how large an effect to expect. You
discuss with the key stakeholders what effect size would satisfy them. Knowing how
others have fared with the intervention in similar schools would also help in
determining the size of the expected effect.
Then you discuss with them the relative consequences of Type I and Type II
errors. You learn that they are more worried about missing an effect (Type II error)
than finding an effect that does not really exist (Type I error). In other words,
they are more interested in having high power than high confidence. They are not as
worried about the latter because there are few risks or expenses associated with
continued use of Math World by interested teachers, even if the detected effect is
erroneous. You decide that you can set the Type I error rate at 10% and use this
value in your sample size and power calculations.
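As an illustration of this trade-off, here is a minimal sketch assuming the two groups are compared with an independent-samples t-test; the effect size (Cohen's d = 0.25) and the group size (100 students per group) are hypothetical values chosen only to show how a looser Type I error rate raises power.

```python
# A minimal sketch of the alpha/power trade-off, assuming the intervention and
# control groups are compared with an independent-samples t-test. The effect
# size (Cohen's d = 0.25, a "small" effect) and the group size (100 students
# per group) are hypothetical values used only to illustrate the calculation.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.25   # assumed small standardized mean difference (Cohen's d)
n_per_group = 100    # assumed number of students per group

for alpha in (0.05, 0.10):
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_per_group,
                           alpha=alpha,
                           ratio=1.0,
                           alternative='two-sided')
    print(f"alpha = {alpha:.2f}  ->  power = {power:.2f}")
```

Under these assumed values, moving the Type I error rate from 5% to 10% noticeably increases the power to detect the same small effect with the same number of students.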
Finally, with the help of a statistician, you explore the effect size that you can
detect with 80% power (and a Type I error rate of 10%). You examine the trade-offs and
reach a decision on an acceptable sample size. The added precision provided by the
pretest in the experimental design makes it possible to keep the sample size
manageably small.
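Below is a minimal sketch of the kind of calculation the statistician might run: for several candidate group sizes it solves for the smallest standardized effect detectable with 80% power at a 10% Type I error rate, and then applies the standard approximation that adjusting for a pretest correlated rho with the posttest reduces the required sample by roughly a factor of (1 - rho^2). The candidate group sizes and the correlation value are assumptions for illustration, not figures from the study.

```python
# A minimal sketch of the sample-size exploration, assuming a two-group
# posttest comparison analyzed with an independent-samples t-test. The
# candidate group sizes and the pretest-posttest correlation (rho) are
# hypothetical values for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
alpha, power = 0.10, 0.80
rho = 0.7  # assumed correlation between pretest and posttest scores

for n_per_group in (50, 100, 150):
    # Smallest standardized effect (Cohen's d) detectable at this group size.
    detectable_d = analysis.solve_power(effect_size=None,
                                        nobs1=n_per_group,
                                        alpha=alpha,
                                        power=power,
                                        ratio=1.0,
                                        alternative='two-sided')
    # Adjusting for the pretest (as in ANCOVA) reduces residual variance by
    # about (1 - rho**2), so roughly that fraction of the sample suffices to
    # detect the same effect with the same power.
    n_with_pretest = n_per_group * (1 - rho**2)
    print(f"n = {n_per_group:>3} per group: detectable d ~ {detectable_d:.2f}; "
          f"about {n_with_pretest:.0f} per group with the pretest adjustment")
```

This is how the pretest keeps the sample manageably small: the same effect can be detected with fewer students once the pretest is used to account for baseline differences among them.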