
Step 7: Check all questionnaire responses for completeness and interpretability before data entry and analysis.

If you are administering a computer-based questionnaire in which the responses are automatically entered into a database, the software usually offers built-in checks to ensure the quality of the data (e.g., not allowing duplicate entries or missing data). If your questionnaire does not have these features, or if you are using a paper form, you should check responses for errors before sending them on for data entry and analysis. This check is best done by evaluation staff members who are familiar with the content and administration of the questionnaire. The check should focus on two issues: completeness and interpretability.
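
As a rough illustration of the kinds of built-in checks described above, the sketch below flags duplicate entries and missing identifying information in a response file before data entry. It assumes the responses have been transcribed to a CSV file with "id" and "date" columns and that the pandas library is available; the file name and column names are illustrative, not part of this module.

    # Illustrative pre-entry quality check; file and column names are assumptions.
    import pandas as pd

    responses = pd.read_csv("questionnaire_responses.csv")

    # Flag duplicate entries (the same respondent ID recorded more than once).
    duplicates = responses[responses.duplicated(subset="id", keep=False)]

    # Flag records missing essential identifying information.
    missing_id = responses[responses["id"].isna()]
    missing_date = responses[responses["date"].isna()]

    print(f"Duplicate IDs: {len(duplicates)}")
    print(f"Records missing an ID: {len(missing_id)}")
    print(f"Records missing a date: {len(missing_date)}")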

Completeness

It is important to make sure that no essential information is missing. For example, does every questionnaire have the proper identifying information (e.g., ID code, date)? Are any answers missing or improperly recorded? Have respondents properly skipped over questions when directed? (For more information on branching in questionnaires, see the discussion of navigational cues in Questionnaire Design.) Evaluators must decide how missing or improper data will be handled, and then apply those decisions consistently. For example, assign the same value to missing data for all variables. Never assign to missing data the same value that you assign to a possible response, unless you can be positive that a nonresponse means the same thing as a particular legitimate response. With well-constructed questions, this should never be the case.
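
One way to apply such a decision consistently is sketched below: a single missing-data code, chosen outside the range of legitimate answers, is assigned to every item. The 1-to-5 scale, the code of -9, and the item names are assumptions made for illustration only.

    # Illustrative handling of missing data; the sentinel value and item names
    # are assumptions, not prescribed by this module.
    import pandas as pd

    MISSING_CODE = -9  # Outside the legitimate 1-5 response range, so it can
                       # never be confused with a real answer.

    responses = pd.read_csv("questionnaire_responses.csv")
    item_columns = ["q1", "q2", "q3"]  # Hypothetical item names.

    # Apply the same missing-data value to every variable.
    responses[item_columns] = responses[item_columns].fillna(MISSING_CODE)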

Missing data of any kind are challenging, because they raise the issue of whether to include individuals who do not provide complete data. Doing so means that the study will have different numbers of respondents for different items. An evaluator's decisions about missing data will depend on factors such as the relative importance of different questions and the larger purpose of the evaluation. Your decision should rest on a clear understanding of which statistics you will need to answer your evaluation questions and how much data is required to generate them.
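
One concrete way to see how incomplete questionnaires would affect those statistics is to tally usable responses per item before deciding whether to include partial respondents. The sketch below is again illustrative only; the item names are assumptions.

    # Illustrative per-item tally of usable (non-missing) responses.
    import pandas as pd

    responses = pd.read_csv("questionnaire_responses.csv")
    item_columns = ["q1", "q2", "q3"]  # Hypothetical item names.

    # Count non-missing answers for each item; items with low counts may not
    # support the statistics your evaluation questions require.
    print(responses[item_columns].notna().sum())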

Large blocks of missing data usually suggest inadvertent omission. It may be worth considering whether the respondent can be asked to fill them in. Sporadic missing data are more likely to denote purposeful omission, which typically occurs when the respondent considers a question confusing or intrusive. Confusion can be minimized by posing clear questions and sets of answer choices that cover the full range of possible responses (see more on this topic in Writing Questionnaires). Intrusive questions, such as ones about salary, are less threatening if they appear at the end of a questionnaire (see Questionnaire Design).

Interpretability

Interpretability is principally a concern for paper questionnaires, where handwritten answers may be ambiguous or illegible: a respondent may check two answers where only one is allowed, or write in a hand that is hard to read. It is advisable for more than one reader to decide independently how to handle each anomaly and then for the group to reach a consensus. If the coding of some answers cannot be resolved, the data should be treated as missing (see the preceding section).