
ESRA 2021 Program at a glance



Survey design and nonresponse

Session Organiser: Dr Christoph Zangger (University of Zurich)
Time: Friday 23 July, 15:00 - 16:30

How can we foster survey participation in times of declining response rates? What survey design aspects help to reduce the risk of unit nonresponse and refusal? This session looks at new techniques and tests established efforts to increase participation rates in cross-sectional and longitudinal social surveys. The papers investigate different design aspects and apply methodological experiments to reduce survey nonresponse through easy-to-implement approaches to survey design in a variety of settings.

Keywords: Survey design, nonresponse, mode effects, incentives

Reducing Respondent Burden with Efficient Survey Invitation Design

Mr Hafsteinn Einarsson (University of Manchester) - Presenting Author
Dr Alexandru Cernat (University of Manchester)
Professor Natalie Shlomo (University of Manchester)

Increasing costs of data collection and decreasing response rates in social surveys have led to a proliferation of mixed-mode and self-administered surveys. In this context, the design and content of survey invitations are increasingly important, as they influence propensities to participate. By reducing the respondents' burden of engaging with the survey invitation, survey organisations can streamline the participation process. Reducing respondent burden through efficient invitation design may increase the number of early respondents, increase the overall number of responses, and reduce non-response bias. This study implemented a randomised experiment in which two design features thought to be associated with respondent burden were manipulated: the length of the invitation text and the location of the survey link. The experiment was carried out in a sequential mixed-mode survey among young adults (aged 18-35) in Iceland, where the design features (text length and survey link location) of mailed letters with links to a web survey were varied. Results show that participants are more likely to participate in the survey when they receive shorter invitation texts with survey links placed in the middle of the letter. Additionally, short letters with links in the middle perform well compared to other letter types in terms of non-response bias and mean squared error.
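A minimal sketch of how such a 2x2 design could be analysed with a logistic regression on participation. All data and variable names below are hypothetical; the abstract does not state the authors' estimation approach.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2x2 experiment: invitation text length x survey link position.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "text_length": rng.choice(["short", "long"], size=n),
    "link_position": rng.choice(["middle", "end"], size=n),
})
# Simulate the direction of the reported findings: short text and a
# mid-letter link raise the probability of responding.
p = (0.35
     + 0.08 * (df["text_length"] == "short")
     + 0.05 * (df["link_position"] == "middle"))
df["responded"] = rng.binomial(1, p)

# Main effects and interaction of the two manipulated design features.
model = smf.logit("responded ~ C(text_length) * C(link_position)", data=df).fit()
print(model.summary())
```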


Assessing the use of incentives and mode effects on a survey among 16-year-olds

Dr Dirk Schubotz (Queen's University Belfast) - Presenting Author

The Young Life and Times (YLT) survey is an annual cross-sectional survey run among random samples of 16-year-olds in Northern Ireland. The survey has been undertaken by ARK - a joint initiative by Queen's University Belfast and Ulster University - since 2003 and has covered a wide range of issues relevant to young people's lives.

Over the years, we have experimented with different incentives and survey modes in order to combat falling response rates to the survey. The falling response rate stands in stark contrast to the growing interest in and use of the survey. In fact, in some survey years we had to run split surveys, as the number of funders and their questions exceeded the space and the number of questions we could reasonably accommodate. During these years we used mode and incentive experiments to establish the best format for the survey to maximise the response rate.

In this presentation I will review the outcome of these experiments with regard to non-response. I will compare how an online survey mode has fared compared to the original paper mode, and how gift vouchers for small amounts of money compare with prize draws offering fewer but more substantial prizes. I will also discuss whether we found any evidence of socio-demographic differences between online and paper respondents, and how different levels of incentives related to respondents' socio-demographic background and to response rates. The evidence from YLT can be taken into consideration when designing surveys for young people.
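A quick sketch of how response rates in two such experimental arms (e.g. paper vs. online mode, or voucher vs. prize draw) can be compared with a two-proportion z-test. The counts are invented for illustration, as the abstract reports no figures.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: completed interviews and issued sample per arm.
count = [620, 540]      # respondents in arm A and arm B
nobs = [1500, 1500]     # invited sample in each arm

z, p_value = proportions_ztest(count=count, nobs=nobs)
print(f"response rates: {count[0]/nobs[0]:.1%} vs {count[1]/nobs[1]:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```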


Effects of Expected and Actual Interview Duration on Survey Participation

Dr Helge Emmler (Hans-Böckler-Stiftung) - Presenting Author


In survey research, there is an ongoing debate about the effects of actual and expected interview duration on survey participation. While the expected (or announced) interview duration is one of the things potential survey respondents ask about most frequently, the actual interview duration seems to have only a very limited impact on both interview termination and participation in subsequent waves (Schnell 1997).

With the WSI works councils survey, we have a rich data source for analyzing the effects of both the actual and expected interview duration:

- In 2007, about 2100 respondents were assigned to a 50-minute interview (response rate 50.6%) and 1800 to a 10-minute interview (62.5%). All respondents were recontacted in 2008.
- In 2015, about 2000 respondents were assigned to a 40-minute interview (response rate 50.2%) and 2100 to a 25-minute interview (56.1%). All respondents were recontacted in 2016.

For both 2007/2008 and 2015/2016, we find not only that the response rate is lower for the longer interviews (which is not very surprising), but also that the longer interview results in higher response rates in the following waves, so that the negative impact of interview duration diminishes over time in a panel study. Possible reasons for this finding are the following:

(1) The net sample for the longer interviews in, e.g., 2007 is highly selective and consists of respondents who are more interested in the survey.
(2) A longer interview is more rewarding for the respondent, which results in higher response rates in subsequent waves.

While it is impossible to disentangle these two possibilities completely, we can address the problem in several ways. First, we can control for stratification variables such as establishment size and industry. Second, we can compare the expected (i.e., announced) interview duration to the actual duration and check whether the response rate is lower when interviews run longer than expected. We find that the opposite is the case: the longer the actual interview (across all subgroups), the higher the response rate in the following wave. Hence, we assume that an interview is more pleasant when a sufficient number of questions is asked, i.e., when the topic of the questionnaire is more salient to the respondent.
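One way to operationalise this analysis is a logistic regression of next-wave participation on actual interview duration with the stratification variables as controls. The sketch below uses simulated data and hypothetical variable names; the authors' exact model is not given in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "duration_min": rng.normal(40, 10, n).clip(10, 90),       # actual duration
    "estab_size": rng.choice(["small", "medium", "large"], n),  # stratum
    "industry": rng.choice(["manufacturing", "services"], n),   # stratum
})
# Simulate the reported pattern: longer interviews raise next-wave response.
p = 1 / (1 + np.exp(-(-1.0 + 0.03 * df["duration_min"])))
df["responded_next_wave"] = rng.binomial(1, p)

# Next-wave response modelled on duration, net of stratification variables.
model = smf.logit(
    "responded_next_wave ~ duration_min + C(estab_size) + C(industry)",
    data=df,
).fit()
print(model.summary())
```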


Advantages of representative face-to-face surveys for adjusting phone survey biases

Mr Kevin McGee (World Bank) - Presenting Author
Mr Alemayehu Ambel (World Bank)
Mr Asmelash Tsegay (World Bank)

Several developing countries are currently implementing phone surveys in response to immediate data needs for monitoring the socioeconomic impact of COVID-19. However, phone surveys are often subject to coverage and non-response bias that can compromise the representativeness of the sample and the external validity of the estimates obtained from the survey. These biases can be especially relevant in developing countries, where a considerable share of the population lacks access to a phone and connectivity problems are pervasive.

Using data from high-frequency phone surveys in Ethiopia, Malawi, Nigeria, and Uganda first implemented in April 2020, this study investigates the magnitude and source of biases present in these four surveys and explores the effectiveness of techniques applied to reduce bias. Substantial coverage bias is found in Ethiopia, Malawi, and Uganda, while coverage bias is minimal in Nigeria, largely reflecting the relatively low mobile phone penetration in the first three countries compared with Nigeria. However, a more serious problem in Ethiopia, Malawi, and Nigeria is non-response bias due to unsuccessful contact with the respondent. In Uganda, where contact and response rates were very high, non-response bias was much more limited and dwarfed by the coverage bias. The successfully contacted samples in all four countries were biased towards wealthier households with higher living standards. Left unaddressed, this bias would yield estimates from the interviewed sample that do not fully reflect the situation of poorer households in the country, a population of critical interest to policy makers since poorer households are likely most vulnerable to the negative impacts of the COVID-19 crisis.

However, phone survey biases can be substantially reduced by applying survey weight adjustments using information from the representative survey from which the sample is drawn. Applying these methods to the four surveys resulted in a substantial reduction in bias, though the bias was not fully eliminated. This highlights the advantages of drawing phone survey samples from existing representative face-to-face surveys over random digit dialling or lists from telecom providers, where such robust adjustment methods would not be possible.
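The abstract does not spell out the adjustment method; a common implementation is inverse response-propensity weighting, where each phone-survey respondent's design weight from the face-to-face frame is divided by an estimated probability of response. A sketch under that assumption, with simulated data and hypothetical variable names:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 3000
# Face-to-face frame: covariates known for every sampled household.
frame = pd.DataFrame({
    "urban": rng.integers(0, 2, n),
    "wealth_index": rng.normal(0, 1, n),
    "hh_size": rng.integers(1, 9, n),
    "design_weight": rng.uniform(0.5, 2.0, n),
})
# Simulate the reported bias: wealthier, urban households respond more often.
p_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * frame["wealth_index"]
                           + 0.6 * frame["urban"])))
frame["responded"] = rng.binomial(1, p_true)

# Estimate response propensities from frame covariates.
X = frame[["urban", "wealth_index", "hh_size"]]
propensity = LogisticRegression().fit(X, frame["responded"]).predict_proba(X)[:, 1]

# Adjusted weight: design weight / estimated response propensity,
# applied to the responding (phone-interviewed) subsample only.
frame["adj_weight"] = frame["design_weight"] / propensity
respondents = frame[frame["responded"] == 1]
print(respondents[["design_weight", "adj_weight"]].describe())
```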


Interview or interviewer effect? Impact of interviewer characteristics and interview context on item nonresponse to the income question in the European Social Survey 2002-2018

Dr Piotr Jabkowski (Adam Mickiewicz University, Poznan)
Dr Aneta Piekut (Sheffield Methods Institute, University of Sheffield, UK) - Presenting Author

Apart from unit nonresponse (i.e., a complete failure to obtain data from a sample unit), item nonresponse (i.e., a failure to obtain an answer to a particular survey question) is a crucial factor determining measurement quality. It decreases the effective sample size and increases the risk of significant nonresponse bias if the missingness mechanism is not random. According to data from 15 countries participating in all nine rounds of the European Social Survey (ESS, 2002-2018), the question on a household's total net income has the highest level of item nonresponse among all core variables.

Our presentation will unpack the mechanisms driving item nonresponse to the income question in the ESS. We will provide a cross-national and longitudinal comparison of two types of nonresponse, i.e., 'do not know' answers and 'refusals', and identify potential reasons for differences in their prevalence across studied countries and survey rounds. Two key questions guide our analysis: (a) what is the impact of interviewer- and interview-related characteristics on the odds of income item nonresponse; (b) is this impact the same across all ESS countries and rounds? Using multilevel analysis (respondents nested within interviewers), we will test which factors matter more for income nonresponse: interview-related characteristics, such as the number of item nonresponses preceding the income question, the duration of the interview, the respondent's involvement in the interview process, and the presence of others during the interview, or interviewer characteristics, i.e., age, gender, and interviewer workload. Finally, we will explore how the interview and interviewer effects differ between 'do not know' answers and 'refusals'.
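A minimal sketch of a random-intercept logistic model with respondents nested within interviewers, in the spirit of the multilevel analysis described above. The data are simulated and all variable names are hypothetical; the authors' exact specification may differ.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(7)
n_int, per_int = 100, 20  # interviewers and respondents per interviewer
df = pd.DataFrame({
    "interviewer_id": np.repeat(np.arange(n_int), per_int),
    "interviewer_age": np.repeat(rng.normal(45, 10, n_int), per_int),
    "prior_item_nr": rng.poisson(2, n_int * per_int),   # item NR before income q.
    "others_present": rng.integers(0, 2, n_int * per_int),
})
# Simulate an interviewer-level random effect plus interview-level effects.
u = np.repeat(rng.normal(0, 0.5, n_int), per_int)
p = 1 / (1 + np.exp(-(-1.5 + 0.3 * df["prior_item_nr"]
                      + 0.4 * df["others_present"] + u)))
df["income_missing"] = rng.binomial(1, p)

# Random intercepts for interviewers as a variance component.
vc = {"interviewer": "0 + C(interviewer_id)"}
model = BinomialBayesMixedGLM.from_formula(
    "income_missing ~ prior_item_nr + others_present + interviewer_age",
    vc, df,
)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())
```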