
ESRA 2019 full program




Mixed-Device Online Surveys: A Total Survey Error Perspective 2

Session Organisers: Dr Olga Maslovskaya (University of Southampton)
Professor Gabriele Durrant (University of Southampton)
Professor Patrick Sturgis (University of Southampton)
Time: Thursday 18th July, 14:00 - 15:30
Room: D13

We live in a digital age in which technologies are in widespread everyday use. Technologies change rapidly and affect all aspects of life, including surveys and their design. Online data collection is now common in many countries, and researchers are adapting online surveys to the requirements of mobile devices, especially smartphones. Mobile devices can also facilitate the collection of new forms of data, such as sensor data. It is therefore important to assess the different sources of error in mixed-device online surveys.

This session welcomes submissions of papers on different sources of error in online surveys in both cross-sectional and longitudinal contexts. The following topics are of interest:

• Coverage issues
• Data quality issues
• Item nonresponse
• Unit nonresponse
• Breakoff rates
• Completion times
• Response styles
• Mobile device use
• Optimisation of surveys and adaptation of question design for smartphones, and the implications for data quality
• Impact of different question designs and presentations on response quality across devices
• New types of data collection associated with mobile devices, such as sensor data, and their data quality

We encourage papers from researchers with a variety of backgrounds and across different sectors, including academia, national statistical institutes and research agencies.

This session aims to foster discussion, knowledge exchange and shared learning among researchers and methodologists around issues related to increased use of mobile devices for survey completion. The format of the session will be designed to encourage interaction and discussion between the presenters and audience.

The session is proposed by the National Centre for Research Methods (NCRM) Research Work Package 1, ‘Data Collection for Data Quality’, which is funded by the UK Economic and Social Research Council (ESRC) and led by a team from the University of Southampton. The project investigates, amongst other topics, mobile device use in mixed-device online surveys.

Keywords: data quality, online survey, total survey error, sensor data, sources of error

Uptake and Data Quality in UK Mixed-Device Online Surveys: Results from Experiments in the ONS Online Household Study

Dr Olga Maslovskaya (University of Southampton)
Professor Gabriele Durrant (University of Southampton) - Presenting Author
Professor Peter Smith (University of Southampton)

Social surveys are increasingly conducted via online data collection, and they have also begun to embrace smartphones. In the UK, there is a significant move towards online data collection, including the ambition to move established household surveys such as the Labour Force Survey (LFS), as well as the 2021 UK Census, online. Little research has so far been conducted in the UK on uptake and response quality among the general population in mixed-device online surveys. This research is timely and fills a knowledge gap in these areas.

We use Office for National Statistics (ONS) data from the LFS online experiments (Test 1 and Test 2), which were conducted in 2017. The main aim of these experiments was to move the LFS online.
Descriptive analysis, followed by linear, logistic or multinomial logistic regression depending on the outcome variable, is used to study data quality indicators associated with different devices in the survey. The following data quality indicators are assessed for the devices used by respondents: break-off rates, response latencies, timeout rates, restart rates and differential reporting.
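Purely as an illustration of how a device-level indicator such as the break-off rate can be computed, here is a minimal TypeScript sketch over a hypothetical respondent-level data model; the field names are assumptions, not the ONS data structure or the authors' analysis code.

```typescript
// Minimal sketch (hypothetical data model, not the ONS analysis code):
// compute break-off rates by device type from respondent-level records.

type Device = "pc" | "tablet" | "smartphone";

interface RespondentRecord {
  device: Device;
  brokeOff: boolean; // started but did not complete the questionnaire
}

function breakOffRates(records: RespondentRecord[]): Map<Device, number> {
  const started = new Map<Device, number>();
  const brokeOff = new Map<Device, number>();
  for (const r of records) {
    started.set(r.device, (started.get(r.device) ?? 0) + 1);
    if (r.brokeOff) {
      brokeOff.set(r.device, (brokeOff.get(r.device) ?? 0) + 1);
    }
  }
  const rates = new Map<Device, number>();
  for (const [device, n] of started) {
    rates.set(device, (brokeOff.get(device) ?? 0) / n);
  }
  return rates;
}

// Example: one of two smartphone starters broke off -> rate 0.5.
console.log(breakOffRates([
  { device: "smartphone", brokeOff: true },
  { device: "smartphone", brokeOff: false },
  { device: "pc", brokeOff: false },
]));
```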
The good news is that we can be less concerned about allowing questionnaire completion on smartphones in contexts where a mobile-first questionnaire design is used, as data quality does not differ greatly across devices.

The findings from this analysis will be instrumental in better understanding data quality issues associated with mixed-device online surveys in the UK in general, and specifically in informing online versions of social surveys and the 2021 UK Census. The results can help improve survey designs and response rates, as well as reduce survey costs and effort.


Designing a Device-Agnostic Online Survey for 17 Year Olds: Experiences of the Millennium Cohort Study

Dr Emily Gilbert (Centre for Longitudinal Studies, UCL) - Presenting Author
Ms Lucy Lindley (Ipsos MORI)


We live in an age in which technologies proliferate and are in widespread daily use. Increasingly, survey respondents expect to be able to complete online questionnaires on a multitude of devices, including PCs, tablets and smartphones. However, this creates challenges in ensuring that the information captured is equivalent across devices, as well as in making sure respondents have a positive experience of completing the survey.

The Millennium Cohort Study (MCS) is a longitudinal cohort study in the UK, following the lives of 19,000 young people born at the turn of the century. As part of the Age 17 Survey, study members were asked to complete an online questionnaire. The questionnaire was optimised for a range of devices, including PCs, tablets and smartphones, as well as for a variety of operating systems and browsers. This involved design choices about question formats, particularly to make all questions smartphone-friendly, as well as decisions about usability of the questionnaire for each device.

This presentation will focus on the development of the device-agnostic questionnaire, and discuss take-up of different devices, selection into devices, and data quality across the different modes of completion.


Looking Up the Right Answer: Errors of Optimization when Answering Political Knowledge Questions in Web Surveys

Dr Jan Karem Höhne (University of Mannheim) - Presenting Author
Dr Carina Cornesse (University of Mannheim)
Mr Stephan Schlosser (University of Göttingen)
Professor Mick P. Couper (University of Michigan)
Professor Annelies Blom (University of Mannheim)

Political knowledge is an important determinant of outcomes in public opinion research and political science and can have a profound impact on governmental decision-making processes. However, some respondents look up the right answer (e.g., on the Internet), which inflates political knowledge scores and can be seen as a kind of “optimizing error” (Yan, 2006) committed by engaged respondents with good intentions. As previous research indicates, this response behavior is detectable in web surveys using indirect methods.

In this study, we investigate optimizing errors when answering political knowledge questions in web surveys by using paradata. More precisely, we use JavaScript “OnBlur” functions, which enable us to record whether respondents switch away from the web survey to search for the correct answer on the Internet using the same device. We conducted a web survey experiment in a German non-probability access panel (N = 3,332) and used a two-step split-ballot design with four groups defined by device type (i.e., PC and smartphone) and question difficulty (i.e., open and closed response format). Our expectation is that looking up the answer is more likely on PCs and with open response formats. Additionally, we measured response times in milliseconds, employed self-report questions, and measured several respondent characteristics.

The preliminary results indicate that respondents indeed switch away from the web survey page to search for the right answer on the Internet. This finding is supported by both the JavaScript “OnBlur” functions and respondents’ self-reports. In line with our expectations, this is more common on PCs and with open response formats. The findings provide new insights into optimizing errors when answering knowledge questions. Furthermore, they suggest that paradata are a promising way to observe response behavior that may lead to incorrect inferences about respondents’ knowledge as measured in web surveys.
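The abstract does not reproduce the instrument code; the following is a minimal TypeScript sketch of how window blur and focus events (addEventListener("blur") being the modern equivalent of the “OnBlur” handlers mentioned above) can be captured as paradata. The “/paradata” endpoint and all identifiers are hypothetical.

```typescript
// Minimal sketch (not the authors' instrument): capture window blur and
// focus events as paradata while a knowledge question is on screen.
// The "/paradata" endpoint and all identifiers below are hypothetical.

interface ParadataEvent {
  type: "blur" | "focus";
  elapsedMs: number; // time since page load, comparable to response times
}

const paradata: ParadataEvent[] = [];

function record(type: ParadataEvent["type"]): void {
  paradata.push({ type, elapsedMs: performance.now() });
}

// "blur" fires when the respondent switches away from the survey page
// (e.g., to a search engine on the same device); "focus" fires on return.
window.addEventListener("blur", () => record("blur"));
window.addEventListener("focus", () => record("focus"));

// On leaving the page, send the collected events alongside the answers.
window.addEventListener("pagehide", () => {
  navigator.sendBeacon("/paradata", JSON.stringify(paradata));
});
```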


Design of Grid Questions for Mixed-Device Surveys

Ms Deirdre Giesen (Statistics Netherlands) - Presenting Author


The classic presentation of grid questions, as seen on paper, PCs and tablets, is not easily transferable to smaller smartphone screens. For Statistics Netherlands surveys, we tested in the lab three possibilities for presenting grid questions on smartphones:
1) a paging design: all items of the grid are presented on separate pages, and the stem of the grid is repeated on each screen;
2) a stem-fix scrolling-by-respondent design: the stem of the grid is fixed, and respondents have to scroll vertically to see all items;
3) a stem-fix auto-scroll design: the stem of the grid is fixed and, after completion of an item, the questionnaire automatically scrolls vertically to the next item (a browser-side sketch of this design follows below).
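Purely as an illustration of design 3 (not Statistics Netherlands’ implementation), here is a minimal TypeScript sketch of browser-side auto-scroll: answering a grid item scrolls the next item into view while the stem stays fixed. The markup and class name are hypothetical.

```typescript
// Minimal sketch of a stem-fix auto-scroll grid (hypothetical markup: each
// grid item is an element with class "grid-item"; the stem is assumed to be
// kept visible with CSS "position: sticky").

const items = Array.from(
  document.querySelectorAll<HTMLElement>(".grid-item")
);

items.forEach((item, index) => {
  // "change" bubbles up from the radio inputs inside each grid item.
  item.addEventListener("change", () => {
    const next = items[index + 1];
    if (next) {
      // Scroll the next item into view; the speed of this (browser-controlled)
      // animation is exactly what the lab test described below found problematic.
      next.scrollIntoView({ behavior: "smooth", block: "center" });
    }
  });
});
```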

The lab test was conducted with 20 respondents of various age groups and backgrounds. We observed and recorded completion of the questionnaire on multiple devices and with multiple grid question designs, including the classic grid design on PC and tablet. With the respondents, we evaluated the validity of their answers and their perception of the response task. The test showed that the tested version of the stem-fix auto-scroll design did not work well; the auto-scroll was too fast and confused respondents. In the short run it was not possible to improve this.
For an experiment in the School Leavers Survey that will run in the first quarter of 2019, we will randomize the presentation of grid questions. Respondents self-select the device they use. On PCs and tablets, respondents will receive either a paging design or the classic grid design; on smartphones, respondents will receive either a paging design or a stem-fix scrolling-by-respondent design. In addition to data on completion times and break-off rates, we will collect information on the respondents’ perception of the questionnaire.
In the presentation, I will report the results of the lab test and the first results of the experiment.