Considerations in Choosing Between Different Pretesting Methods

Convenor: Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1: Ms Astrid Schuessler (Justus-Liebig-University Giessen)
Nowadays, survey researchers can choose among a wide range of pretesting methods. On the one hand, there are qualitative methods such as focus groups, vignettes or card sorts, expert reviews, and cognitive interviewing, which are often used at an early stage of scale development or testing (pre-field techniques). On the other hand, researchers can draw on quantitative methods (so-called field techniques) such as interviewer or respondent debriefing, behavior coding, response latency, split-ballot experiments, and statistical modelling. Each of these methods has its own strengths and weaknesses in identifying question problems. Therefore, questionnaire designers normally use a combination of different methods to design and pretest a (new) questionnaire. This session invites papers that...
(1) exemplify which pretesting method might best be used during which phase of the questionnaire development process;
(2) highlight the relative effectiveness of different pretesting methods in comparison to each other;
(3) demonstrate how different quantitative and qualitative pretesting methods might best be used in combination (best-practice examples).
While cognitive interviewing helps to identify how respondents understand questions and reveals problems in self-completion of the questionnaire, experts contribute additional expertise on survey content, questionnaire design, and fieldwork administration. Beyond this particular purpose, involving experts offers a less time-consuming and more cost-efficient way of applying various qualitative methods from another perspective. In the literature, most attention has been paid to structured expert reviews, which are primarily applied in the early stages of questionnaire testing.
The Federal Statistical Office of Germany (FSO) is currently developing its own strategy for conducting qualitative interviews with external experts at later stages of questionnaire testing, in addition to cognitive interviews with potential respondents. Especially for business surveys, which play a major role in official statistics, the FSO benefits greatly from experts in trade associations and other institutions related to the survey content. Besides their privileged access to essential information about the subject matter and the target population, experts can also give feedback on initial testing results. Combining both viewpoints (respondents' and experts') helps to confirm testing results from different angles and to prevent measurement artefacts by balancing the weaknesses of one method with the strengths of the other.
This presentation focuses on ways to link expert knowledge productively with respondents' experiences, illustrated by pretesting examples from the FSO. The advantages of combining both methods thus become evident. Furthermore, some challenges of implementing this approach are addressed, e.g. how to handle differing findings from the two sources.
Often, preparations for fielding a new survey include testing the questionnaire, the interviewer-respondent interaction, and numerous technical details of the survey, followed by a pilot to see how the whole process and interview setting work.
Given the difficult and sensitive topic, the survey on gender-based violence against women by the European Union Agency for Fundamental Rights (FRA) was pre-tested using (i) cognitive interviews, (ii) behavioural coding, (iii) focus group discussions, (iv) an interviewer feedback questionnaire completed after each interview, and (v) pilot interviews. Consequently, the testing collected information on the reactions and input of the interviewees as well as the interviewers, and of the experts who assessed the interview situation as external observers.
Three topics from the questionnaire were chosen for detailed testing: (a) experiences of harassment and stalking, (b) experiences of psychological violence by a current or previous partner, and (c) experiences of physical and sexual violence by a current or previous partner.
The presentation will discuss the extent to which similar or different problems were identified by these testing methods, and which issues might have gone undetected in the absence of a particular testing approach. The presenters will also examine the ways in which it was possible to improve the questionnaire and to adapt the interviewer training to take into account the potential difficulties demonstrated in the pre-test.
Comparing the results of different pre-test methods can not only validate findings but also provide a fuller understanding of the quality of questionnaire items. During the development of modules for Round 6 of the European Social Survey (ESS), a range of quantitative and qualitative pre-test methods was employed, leading to questions that were further tested in a large-scale, mixed-method, two-nation pilot.
One of the modules selected for ESS Round 6 was 'understandings and evaluations of democracy', developed in collaboration with a team of academics led by Professor Hanspeter Kriesi from the University of Zurich. Focusing on this module, this paper evaluates the use of different pre-test methods within the Round 6 pilot study, triangulating qualitative feedback from respondent and interviewer debriefs with quantitative pilot data. The Cross-national Error Source Typology (CNEST), developed as a tool for improving the effectiveness of cross-national questionnaire design (Fitzgerald et al., 2009), is applied to these pre-test findings to identify and categorise sources of error in the questions. The paper illustrates how this triangulation, framed by the CNEST, increased the effectiveness of the pilot and facilitated the development of improved questions or, where appropriate, the dropping of a concept from the module. The benefits and challenges that accompany the simultaneous use of multiple pre-testing tools are highlighted.
The literature mentions many possible procedures for pretesting survey questions; this presentation considers 13 of them. Given this variety, it makes sense to evaluate the different procedures with respect to their advantages and disadvantages. The presentation discusses several criteria for choosing an evaluation procedure for survey questions. Firstly, a practical criterion: the amount of data collection a procedure requires. Secondly, the distinction between personal judgments and model-based evaluations of questions. Thirdly, it would be attractive if a procedure could evaluate the following aspects of the questions:
(1) the relationship between the concept to be measured and the question as specified;
(2) the effects of the form of the question on its quality, with respect to (a) the complexity of the formulation, (b) the precision, (c) possible method effects, and (d) many other characteristics;
(3) the social desirability of some of the response categories.
Beyond that, it would be attractive if a procedure could indicate how respondents' lack of knowledge about the topic affects their answers. The 13 procedures for evaluating questions are compared with respect to these criteria, and some conclusions are derived from this overview.