Split Questionnaire Design 2
Session Organisers | Professor Barry Schouten (Statistics Netherlands and Utrecht University), Dr Andy Peytchev (RTI)
Time | Wednesday 17th July, 14:00 - 15:00 |
Room | D18 |
Over the last ten years, many larger surveys have migrated to mixed-mode designs that include the web as a survey mode. In recent years, access to the web has diversified rapidly, and a variety of devices now exist, both fixed and mobile. Online surveys face a range of devices that they may discourage, accept or encourage; such decisions depend on the features of both the survey and the device. Three prominent device features are screen size, navigation and timing. Devices can be as small as smartphones or as large as smart TVs. Navigation ranges from touchscreen to mouse and keyboard. Timing refers to the moment and place at which the devices are used.
It is generally believed that smartphones demand a shorter survey duration, although the empirical evidence is mostly restricted to break-off rates.
This session is about designs that attempt to shorten survey questionnaires without deleting modules for all sample units. So-called split questionnaire designs (SQDs) allocate different sections of the questionnaire to different (random) subsamples. SQDs are not at all new; they were suggested several decades ago. Until now, however, there was never a sufficiently strong business case to implement them. With the emergence of mobile devices, that business case appears to have arrived.
SQD affects both questionnaire design and data analysis. The (planned) missing parts of the questionnaire need to be selected in a sophisticated way, acknowledging both questionnaire logic and the strength of associations between survey variables. Imputation techniques are a natural option, but can be quite advanced for some users.
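As an illustration of the allocation step, a minimal sketch of randomly assigning questionnaire modules to subsamples (the module names and subsample sizes are hypothetical, not taken from any of the studies below):

```python
import random

# Hypothetical SQD allocation: every respondent answers a core module,
# plus a random subset of the remaining modules.
MODULES = ["core", "health", "income", "housing", "attitudes"]

def assign_modules(respondent_ids, n_optional=2, seed=42):
    """Assign each respondent the core module plus n_optional random modules."""
    rng = random.Random(seed)
    optional = [m for m in MODULES if m != "core"]
    return {
        rid: ["core"] + rng.sample(optional, n_optional)
        for rid in respondent_ids
    }

allocation = assign_modules(range(6))
for rid, mods in allocation.items():
    print(rid, mods)
```

In a real design, the random assignment would be refined to respect questionnaire logic and to keep strongly associated variables in the same subsample, as noted above.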
For this session, we invite papers that address one or more aspects of SQD, ranging from questionnaire design to imputation approaches.
Keywords: Smartphone; Adaptive survey design; Imputation
Dr Michael Ochsner (FORS) - Presenting Author
Dr Jessica M. E. Herzing (University of Lausanne)
Mrs Patricia J. Milbert (University of Lausanne)
Face-to-face (f2f) surveys are considered the gold standard for general population surveys for two reasons: (1) the potential for high data quality, and (2) the possibility to conduct long surveys. In recent years, push-to-web surveys of the general population have proven similarly efficient to f2f surveys with regard to data quality. Hence, more and more survey researchers are switching from f2f to web surveys. However, differing conventions on survey length challenge researchers when switching from a f2f survey (about 60 minutes) to a web survey (about 20 minutes). Besides leaving the survey length unchanged when switching modes, one can reduce survey length by using a matrix questionnaire design, in which the original questionnaire is split into parts and sample units are randomly assigned to these parts. Furthermore, conditional on participating in the matrix-design questionnaire, respondents can take part in a follow-up survey consisting of the complement of their first matrix questionnaire. Yet little is known about whether push-to-web surveys with a matrix design can challenge the pole position of long f2f surveys in terms of data quality.
Using the Swiss data of the European Values Study (EVS) from 2017, we investigate the effects of different survey designs on population estimates. For this purpose, we apply the same substantive analysis to data collected via a) a f2f survey, b) a long web survey, c) a web survey with matrix design with complete cases, d) a web survey with matrix design with multiply imputed data, e) a web survey with matrix design with multiply imputed data considering information from the follow-up survey, f) a web survey with matrix design and follow-up survey with complete cases, and g) a web survey with matrix design and follow-up survey with multiply imputed data.
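The multiply-imputed variants (d, e, g) ultimately pool estimates across the completed datasets. The pooling step, Rubin's rules, can be sketched as follows (the estimates and variances in the usage line are hypothetical):

```python
import statistics

def rubin_pool(estimates, variances):
    """Combine m multiply-imputed estimates and variances via Rubin's rules."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)      # pooled point estimate
    ubar = statistics.fmean(variances)      # within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    total = ubar + (1 + 1 / m) * b          # total variance of the pooled estimate
    return qbar, total

# Hypothetical example: three imputed-dataset estimates of a proportion
est, var = rubin_pool([0.52, 0.48, 0.50], [0.0004, 0.0005, 0.0004])
```

The between-imputation component `b` is what captures the extra uncertainty introduced by the planned missingness of the matrix design.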
Dr Andy Peytchev (RTI International) - Presenting Author
Dr Emilia Peytcheva (RTI International)
Dr Trivellore Raghunathan (University of Michigan)
The substantial cognitive and time demands on survey respondents, combined with declining survey participation, motivate the need for methods to reduce survey length. By reducing respondent burden, shortening the survey has the potential to increase response rates, reduce nonresponse bias, and reduce measurement error. Split questionnaire design (SQD) is a method to ask only a subset of the survey questions while still producing a dataset with values for all questions and respondents, through multiple imputation. SQD can affect estimates and their variances. While the approach is promising, few studies have evaluated its benefits and drawbacks.
At the 2017 ESRA conference we presented the design of a study to decompose nonresponse and measurement error through an experimental design, manipulating the number or order of survey modules. We completed data collection in May 2018. Early analyses show that nonresponse is not reduced, but indicate a reduction in measurement error. We will further extend the results by implementing multiple imputation for the omitted modules, evaluating the impact on survey estimates and their variances. Finally, the tradeoff from using an SQD, between the reduction in nonresponse and measurement error and the increase in variance due to imputation, will be evaluated using MSE.
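The MSE comparison rests on the standard decomposition MSE = bias² + variance; a minimal numeric sketch of that tradeoff (all figures hypothetical, for illustration only):

```python
def mse(bias, variance):
    """Mean squared error decomposition: MSE = bias**2 + variance."""
    return bias ** 2 + variance

# Hypothetical comparison: a full-length design with larger nonresponse
# bias but no imputation variance, versus an SQD with reduced bias but
# extra variance from imputing the omitted modules.
full_design = mse(bias=0.03, variance=0.0004)
sqd_design = mse(bias=0.01, variance=0.0007)
```

Under these illustrative numbers the SQD wins on MSE; whether it does in practice is exactly the empirical question the study addresses.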
Dr Jens Ambrasat (German Centre of Higher Education and Science Studies) - Presenting Author
Psycholinguistic rating studies are difficult to implement in general population surveys because the number of items to be rated often exceeds the limits of a survey, especially with repeated-measures designs. Therefore, such studies are usually conducted only with small and mostly homogeneous student populations and are not transferred to the general population. Social research thereby misses out on interesting research methods. If psycholinguistic material is to be investigated in population-representative samples, it must be possible to analyze the various personal characteristics of the subjects in combination with characteristics of the stimulus material. Imputation, however, is only a solution if the amount of missing data is not too great; the literature on planned missing-data designs puts this limit at about a third.
I present a planned missing-data design for a population-wide rating study (N = 2,849) with linguistic material in which this limit is far exceeded. The study contains a total of 900 items, each to be rated on three evaluation dimensions. To keep respondent burden acceptable, each respondent receives only 60 of the 900 items, resulting in a missing-data structure that cannot be eliminated by imputation. I then show how mixed-effects models with crossed random effects offer an analysis strategy in this setting.
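The data structure such crossed random-effects models operate on can be sketched by simulating the design: the dimensions (900 items, 60 per respondent) mirror the abstract, but the effect sizes and the simulation itself are hypothetical. Fitting would typically be done with a mixed-model package (e.g. lme4 in R or statsmodels' MixedLM), which is not shown here.

```python
import random

# Hypothetical simulation of the planned-missing rating design: each
# respondent rates only 60 of 900 items, and ratings follow a crossed
# random-effects model  y = mu + u_respondent + v_item + noise.
N_RESPONDENTS, N_ITEMS, ITEMS_PER_RESPONDENT = 100, 900, 60

def simulate(seed=1):
    rng = random.Random(seed)
    u = [rng.gauss(0, 0.5) for _ in range(N_RESPONDENTS)]  # respondent effects
    v = [rng.gauss(0, 0.8) for _ in range(N_ITEMS)]        # item effects
    rows = []  # long format: one row per (respondent, item, rating)
    for r in range(N_RESPONDENTS):
        for i in rng.sample(range(N_ITEMS), ITEMS_PER_RESPONDENT):
            rows.append((r, i, 3.0 + u[r] + v[i] + rng.gauss(0, 0.3)))
    return rows

data = simulate()
```

Because respondents and items are crossed rather than nested, the model can recover both respondent- and item-level variance from this sparse long-format table without imputing the unobserved cells.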