Tuesday 16th July 2013, 14:00 - 15:30, Room: No. 16

Mixed Mode or Mixed Device? Surveying in a new technological era

Convenor: Dr Mario Callegaro (Google)
Coordinator 1: Professor Edith De Leeuw (Utrecht University)

Session Details

Due to growing Internet coverage and increased emphasis on survey costs, web surveys have become an important part of the survey landscape. In recent years several handbooks have been published on designing effective web surveys. However, survey designers now face a new technological challenge. Modern society has become more interactive, and especially the younger generation is now accustomed to being online at will, be it through a laptop, smartphone, or tablet. Web surveys are morphing from a computer-oriented into a multi-device concept.
How should we design quality surveys for this new situation? In the past, attention has been paid to the optimal design of questionnaires for mixed-mode surveys, and we may learn from that work. But a new situation has been created here. We do not have a mixed mode in the traditional sense, where two discrete modes are combined (e.g., a self-administered visual mode vs. an aural telephone mode). We have one overall data collection principle: a self-administered survey meant to be completed on the web, a tablet, or a smartphone. This means that traditional question formats, such as grids or long rating scales, are no longer appropriate, as they add device-specific measurement error. Also, customs associated with the use of different devices (e.g., quick exchange of information through a tweet on a mobile device vs. more detailed information through Facebook or e-mail) may influence questionnaire length, break-offs, and nonresponse.
This session invites presentations that investigate how different devices may be combined and how they influence different sources of survey error. We particularly invite presentations that discuss how different survey errors can be reduced by optimal questionnaire design. Randomized experiments or quasi-experiments in which differences across devices due to self-selection are taken into account in the statistical analysis are welcome.


Paper Details

1. Mobile devices: a way to recruit hard-to-reach groups? Results from a pilot study comparing desktop and mobile device surveys

Dr Vera Toepoel (Utrecht University)
Dr Peter Lugtig (Utrecht University)

According to figures from Statistics Netherlands, 50% of Dutch people aged 12-72 already use mobile devices to access the Internet, and this number will probably increase at a rapid pace over the next couple of years. The literature on web surveys, mode effects, and visual design is still developing, but researchers need to adapt their research strategies in order to investigate how the use of these new mobile devices affects total survey error. This research uses a mixed-device survey. We compare two parallel surveys: a traditional web survey and a survey suitable for mobile devices in which respondents could choose their preferred device. Data come from an online probability-based research panel of Market Response in the Netherlands. We investigate response rates, measurement error indicators, and evaluations of the questionnaire in both groups. In addition, we take into account the use of QR codes, the transmission of GPS coordinates, and feedback from other respondents. The goal of this research is to gain insight into how to design quality surveys in this new situation and whether this approach is successful in recruiting hard-to-reach populations.




2. Mobility and Smartphones: a pilot study of travel data collection among experienced and inexperienced users

Dr Salima Douhou (CentERdata)
Dr Annette Scherpenzeel (CentERdata)

The understanding of mobility is usually based on travel surveys in which only one day is surveyed per respondent. Current technology allows travel data to be collected over a longer period and in a more user-friendly way. An application was developed that uses smartphones to collect travel information and lists it in the app's interface. Respondents adjust their travel information on a regular basis during one month via an online portal with an online-survey look and feel. To allow comparison with earlier travel surveys, respondents also complete a one-day displacement diary online.
CentERdata, a research institute associated with Tilburg University, and the University of Twente have jointly started to collect travel data using smartphones. CentERdata operates the LISS panel, an online panel based on a true probability sample of households. For the pilot study we were especially interested in the suitability of smartphones for this purpose, user experiences, and the performance of the app. We selected 20 smartphone owners and 30 respondents without a smartphone. The latter were provided with an Android smartphone with the app preinstalled; smartphone owners could download the app themselves.
The paper will assess the feasibility of having respondents participate in a travel study through a combination of online surveys and smartphones. In particular, we look at differences between self-reported and registered travel information, the frequency of adjustments on the online portal, and user experiences with this type of data collection.


3. Online Survey Participation via Mobile Devices: implications for nonresponse

Dr Teresio Poggio (Free University of Bozen-Bolzano)
Professor Michael Bosnjak (GESIS Leibniz Institute for the Social Sciences)

Michael Bosnjak (first author) & Teresio Poggio

The diffusion of mobile devices such as tablet computers and smartphones, which enable respondents to participate in self-administered online surveys, creates new challenges for survey methodology in terms of measurement (e.g., the equivalence of mobile versus traditional online instruments) and nonresponse (e.g., response patterns among mobile participants compared with desktop-based respondents).
By analysing several online access panel studies, we address four nonresponse-related research questions:
* How large is the share of mobile participants in online panel surveys overall?
* How can the propensity to choose mobile modes be explained?
* Do mobile participants differ on participation parameters, such as the number of completed questions, and the length of entries to open-ended questions?
* Does mobile participation change as more advanced technological features (such as Flash technology) are embedded?

The results show that (1) a considerable share of online panel members participate using a mobile device; (2) the propensity to choose a mobile device for online surveys is a function of age and gender, with younger respondents and men more likely to participate this way than older respondents and women; (3) mobile respondents did not differ substantially from traditional online survey respondents on an array of participation indicators; but (4) when Flash technology was used, mobile participants showed high dropout rates, roughly twice those of respondents on traditional computers. Implications for survey methodology and future research will be discussed.


4. Comparing text and voice survey modes on smartphones

Professor Frederick Conrad (University of Michigan)
Professor Michael Schober (New School for Social Research)

We conducted an experiment to explore how different interviewing modes on a single device (text and voice on an iPhone), and being able to choose the mode, affect data quality, completion, and satisfaction. 1,268 iPhone users were contacted on their iPhones by either a human or an automated interviewer, via voice or SMS text, creating four modes: Human Voice, Human Text, Automated Voice, and Automated Text. In half of the initial contacts, respondents were able to choose their interview mode (which could be the contact mode); in the remaining half the mode was simply assigned. Text interviewing led to higher quality data than voice interviewing, whether questions were asked by an interviewer or an automated system: less satisficing (fewer rounded numerical answers and less non-differentiation) and more disclosure of sensitive information. Overall, more than half of the mode choices involved a mode switch. But just being able to choose (whether switching or not) improved data quality: when respondents chose the interview mode, there was less satisficing than when the mode was assigned. There was a small loss of participants at the point the choice was made, but those who began the interview in a mode they chose were more likely to complete it than respondents interviewed in an assigned mode. Finally, those who chose their interview mode were more satisfied with the experience than those interviewed in an assigned mode. The results point to clear benefits of text over voice and of mode choice on a single device.