Do pretesting methods identify 'real' problems and help us develop 'better' questions? 1

Convenor: Ms Jo D'ardenne (NatCen Social Research)
Coordinator 1: Sally Widdop (City University, London)
It is common practice for new or significantly modified survey questions to be subject to some form of pretesting before being fielded on the main survey. Pretesting includes a range of qualitative and quantitative methods, such as focus groups, cognitive interviewing, respondent debriefing, use of the Survey Quality Predictor program, piloting, behaviour coding and split ballot experiments (for example see Oksenberg et al., 1991; Forsyth & Lessler, 1991). On large-scale surveys pretesting may involve several iterations, possibly drawing on a number of different pretesting methods. Esposito and Rothgeb (1997) proposed 'an idealized quality assessment program' involving the collection of data from interviewers, respondents, survey sponsors and in-field interactions between interviewers and respondents, to assess the performance of survey questions. However, there is relatively little systematic evidence on whether pretesting methods actually detect 'real' problems and, if they do, whether implementing the changes they suggest helps us to produce 'better' questions and more accurate survey estimates (for some examples see Presser & Blair, 1994; Willis et al., 1999; Rothgeb et al., 2001).
We invite papers which present findings from studies that seek to demonstrate:
• whether different pretesting methods used to test the same set of questions come up with similar or different findings and the reasons for this;
• whether the same pretesting method used to test the same set of questions comes up with the same or different findings and the reasons for this;
• whether findings from different pretesting methods are replicated in the survey itself;
• the difference that pretesting makes to the validity and reliability of survey estimates or to other data quality indicators, e.g. item non-response.
Each round of the European Social Survey (ESS) fields two 'rotating' modules that focus on particular topics in depth. The survey is administered face to face (using CAPI or PAPI) in more than 20 countries. During the twenty-month module design process, a range of qualitative and quantitative pre-testing methods is employed.
In ESS Round 6, omnibus testing, cognitive interviewing and Survey Quality Predictor (SQP) coding were used to pre-test items taken from modules measuring personal and social well-being and understandings and evaluations of democracy. Omnibus testing was used to check distributions, item non-response and ordering effects; cognitive interviewing was used to explore how respondents understood the questions; and SQP coding was used to predict the quality of the questions in terms of their validity and reliability. Following this early stage of pre-testing in a range of countries, the items were revised and subsequently fielded in a two-nation quantitative pilot.
This paper has two main purposes. Firstly, it will describe the three techniques employed to pre-test questions measuring well-being and democracy and report on the findings from each method using examples from selected questions. Secondly, it will reflect on whether the methods that were used to test the questions produced similar or different findings. Possible reasons to account for these will also be discussed.
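As a rough illustration of the omnibus-test checks described in the abstract above (response distributions, item non-response and ordering effects), the following minimal Python/pandas sketch compares a single item across two questionnaire orderings. The file name, the item code wellbeing_q1, the grouping variable order_group and the missing-value codes are all invented for illustration; this is not the actual ESS analysis code.

    import pandas as pd

    # Hypothetical omnibus pilot data set
    df = pd.read_csv("omnibus_pilot.csv")

    item = "wellbeing_q1"              # hypothetical item code
    missing_codes = [77, 88, 99]       # assumed refusal / don't know codes

    # Item non-response: share of respondents giving no substantive answer
    nonresponse = df[item].isin(missing_codes).mean()
    print(f"Item non-response for {item}: {nonresponse:.1%}")

    # Ordering effect: compare response distributions across questionnaire versions
    for version, grp in df.groupby("order_group"):
        answered = grp.loc[~grp[item].isin(missing_codes), item]
        print(f"Version {version} distribution:")
        print(answered.value_counts(normalize=True).sort_index())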
Although the use of Cognitive Interviewing (CI) has become common practice in survey research, there is still a pervasive doubt about whether CI detects "real problems" and is really helpful in investigating what survey questions are truly measuring. This lack of confidence can be partly attributed to the absence of a methodological link between the qualitative evidence provided by CI and the quantitative results from field tests. The rationale behind our proposal is to resort to mixed methods research to investigate whether CI findings can contribute to improving the validity of survey estimates. We illustrate how a mixed methods approach that incorporates CI with quantitative methods can begin to address this problem. This paper comes out of a larger study evaluating a set of disability questions for comparability across the US and six Asian countries. For this paper, we draw on data collected in the US and focus our analysis on the particular disability questions intended to measure "pain". Forty interviews were conducted in the US to identify interpretive patterns, calculation processes, and types of response error problems. Based on the analysis of those interviews, follow-up probe questions were developed and placed on a field test questionnaire included in the 2010 National Health Interview Survey. Multivariate logistic regression analyses were then conducted to evaluate the performance of the "pain" question and the follow-up probe questions. The benefits of a mixed methods approach for finding out the extent to which CI findings can improve survey estimates will be discussed.
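To give a concrete sense of the kind of multivariate logistic regression analysis mentioned in the abstract above, the sketch below (Python with statsmodels) regresses a benchmark indicator of pain-related limitation on the tested pain item and the follow-up probes. All variable names (limited, pain_item, probe_frequency, probe_interference, age, sex) and the file name are assumptions made for illustration; they are not the actual 2010 NHIS field-test variables or the authors' model specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical field-test analysis file
    df = pd.read_csv("nhis_field_test.csv")

    # Logistic regression: does the pain item (plus probes) predict the benchmark
    # indicator of limitation, controlling for basic demographics?
    model = smf.logit(
        "limited ~ C(pain_item) + C(probe_frequency) + C(probe_interference) + age + C(sex)",
        data=df,
    ).fit()

    print(model.summary())
    # Odds ratios for easier interpretation of each predictor's association
    print(np.exp(model.params).round(2))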
In this paper, we use a category follow-up probe administered to respondents who initially select the mid-point of the 9-point left-right scale, to determine whether they selected this alternative in order to express a distinct opinion or to indicate that they do not have an opinion on the issue (position). We find, both in a cross-sectional CATI survey and in an access panel survey, that the vast majority of the middle-category responses turn out to be 'don't knows' and that reallocating these responses from the mid-point to the don't-know category significantly alters descriptive inferences. Our findings have important implications for the design and analysis of bipolar rating scales, especially of the left-right political orientation scale.
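A minimal sketch of the reallocation described above, assuming hypothetical variable names (left_right, probe_no_opinion) and a 1-9 scale with mid-point 5: mid-point respondents whose probe answer reveals no opinion are treated as don't knows, and the descriptive estimate is recomputed. This illustrates the type of adjustment, not the authors' actual data or coding.

    import pandas as pd

    # Hypothetical survey data with the left-right item and the follow-up probe
    df = pd.read_csv("lr_scale_survey.csv")

    MID = 5  # assumed mid-point of a 1-9 left-right scale

    # Probe value 1 indicates the mid-point choice was a hidden "don't know"
    hidden_dk = (df["left_right"] == MID) & (df["probe_no_opinion"] == 1)

    share_hidden = hidden_dk.sum() / (df["left_right"] == MID).sum()
    before = df["left_right"].mean()
    after = df.loc[~hidden_dk, "left_right"].mean()  # hidden DKs set aside as non-substantive

    print(f"Share of mid-point answers that are hidden don't knows: {share_hidden:.1%}")
    print(f"Mean left-right position before reallocation: {before:.2f}")
    print(f"Mean left-right position after reallocation:  {after:.2f}")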
There have been rapid changes in the social and survey environment since the Population and Housing Census of Korea began in 1960. In particular, the internet survey newly adopted for the 2010 Census plays a major role in the questionnaire redesign. As a self-administered mode, the internet survey differs from face-to-face interviewing, which is the main way of collecting the Census data.
As the need for questionnaire change grew, the Census Division decided to redesign the questionnaire to make it more user friendly. The Statistical Research Institute identified problems using three common pretesting methods, namely cognitive interviewing, interviewer debriefing and a field test, and suggested a new questionnaire version. This paper compares the results of the three pretesting methods and offers some conclusions about the effectiveness of using each of these methods.