ESRA 2025 Preliminary Program
All time references are in CEST
Exploring non-respondents in self-completion surveys 2

Session Organisers:
Dr Michèle Ernst Stähli (FORS, Swiss Centre of Expertise in the Social Sciences)
Mr Alexandre Pollien (FORS, Swiss Centre of Expertise in the Social Sciences)
Dr Michael Ochsner (FORS, Swiss Centre of Expertise in the Social Sciences)
Dr Marlène Sapin (FORS, Swiss Centre of Expertise in the Social Sciences)
Mrs Karin Nisple (FORS, Swiss Centre of Expertise in the Social Sciences)

Time: Thursday 17 July, 13:30 - 15:00
Room: Ruppert 040
Although face-to-face surveys are still considered the ‘gold standard’ in comparative probability-based surveys, there has been a notable shift towards self-completion surveys, especially combinations of web and paper (push-to-web or concurrent modes). These modes are distinguished by the absence of fieldworker involvement in the recruitment process: invitations are mainly sent by postal letter or, in some cases, by email or SMS. This means that little or no information is available about target persons who do not react to such invitations or do not participate in the survey. In face-to-face surveys, by contrast, the recruitment process allows for the collection of a substantial amount of “paradata” that help describe both respondents and non-respondents and thus allow researchers to identify risk factors for non-response. This includes information recorded by the interviewer, such as the time of the visit, the reaction of the target person when contacted, and the environment. In self-completion surveys, there is no information on the reasons for and forms of non-participation; it is even impossible to distinguish between refusals and non-contacts. As self-completion becomes the main survey mode, survey researchers should pay more attention to the non-respondents of such surveys, both to better understand the mechanisms of participation and non-participation in current settings and to find ways to improve response rates and/or mitigate non-response bias.
This session invites contributions that provide insights into non-respondents in today’s self-completion surveys. This can be done through non-respondent surveys, qualitative investigations of the reception of invitation letters, or any other method to examine non-respondents in self-completion surveys. Presentations on methods to monitor non-respondents during fieldwork are also highly welcome.
Keywords: non-response, self-completion modes
Papers
Understanding the Nonresponse Process of Secondary Respondents: Evidence from a Self-administered Multi-Actor Survey
Ms Almut Schumann (University of Mannheim) - Presenting Author
The shift from face-to-face to self-completion surveys has significant practical implications for conducting multi-actor surveys. The recruitment of secondary respondents, such as the partner of the primary respondent, must be organized without the assistance of an interviewer, making the cooperation of both primary and secondary respondents crucial for collecting multi-actor data. The recruitment of secondary respondents is a multistep process: First, the primary respondent must identify the target person and provide consent and contact information for interviewing the secondary respondent before the latter can decide whether to participate. Each of these steps can contribute to nonresponse among secondary respondents, and the reasons for nonresponse may depend on the individual situation of each respondent as well as on dynamics within their relationship. Using data from the German family demography panel study FReDA, a self-completion multi-actor survey, this study identifies the steps that lead to the highest dropout rates among secondary respondents and investigates whether individual characteristics of both actors, dyadic aspects of their relationship, and design-specific elements of the contact method can predict nonresponse during the process. The results suggest that dyadic characteristics of the relationship, such as low levels of commitment and closeness, lead to lower consent and participation rates, thereby increasing nonresponse throughout the process. Individual factors, including sociodemographic characteristics associated with a higher respondent burden, exhibit varying effects at different stages. Furthermore, the method of establishing contact with secondary respondents seems to play an important role: sending invitations in later batches and through primary respondents, rather than contacting each secondary sample member directly, results in higher nonresponse rates. Overall, the findings identify risk factors for nonresponse in the collection of self-administered multi-actor data, helping to increase response rates and reduce sample selectivity in these data.
Exploring Response Dynamics throughout Twelve Years of Online Survey Panel Activity in a Changing Context – The Role of Panellists’ Digital Practices and the Type of Device Used to Respond
Mr Malick Nam (CDSP - Sciences Po / CNRS) - Presenting Author
Dr Blazej Palat (CDSP - Sciences Po / CNRS)
ELIPSS is a panel of adults living in ordinary households in mainland France. Recruited in four successive waves conducted every three to four years from 2012, the panellists have taken part in monthly surveys implemented entirely online. Although they were provided with 4G-connected tablets in the first phase of the panel’s activity, they have had to use their own devices to respond from 2020 onwards. Since the panel’s establishment, around one hundred surveys have been conducted on a wide variety of themes, involving a total of 5,000 panellists (with the number of respondents varying between around 900 and 3,000, depending on the study). We have access to rich socio-demographic data and information on the use of digital tools for all the panellists, thanks to two surveys replicated across several waves: the Annual Survey and the Digital Practices Survey.
As part of this analysis, we focus on the propensity of the panellists to respond from two angles:
● Analysis of response rates according to socio-demographic variables and variables linked to the use of digital tools.
● Comparison of monthly response accumulation curves according to these same variables.
The aim of this research is to improve our understanding of the factors influencing the dynamics of panellists’ response behaviour according to their profiles as users of digital tools. More generally, we aim to find out to what extent this and similar online research infrastructures are inclusive in terms of such uses and therefore truly representative of their parent populations.
Who are we losing? An analysis of “skips” and “drop-offs” in United States government surveys
Dr Gwen Gardiner (Internal Revenue Service) - Presenting Author
Dr Scott Leary (Internal Revenue Service)
Dr Nick Yeh (Internal Revenue Service)
Ms Brenda Schafer (Internal Revenue Service)
The United States Internal Revenue Service (IRS) administers annual surveys to nationally representative samples of taxpayers to gather data about their federal income tax filing experiences. However, these surveys are relatively lengthy and require significant recall, leading to skipped questions and incomplete responses. Approximately 15% of respondents do not answer enough critical questions to be included in the final estimates, which can affect the representativeness of the findings. This analysis focuses on identifying patterns among respondents who begin but do not complete the surveys. First, we analyze demographic factors such as age and income to determine which groups are most likely to drop off. Then we examine metadata, including time spent on survey pages and total clicks per page, to uncover behavioral trends that may predict survey drop-offs. Lastly, we will examine the patterns and characteristics of those who do complete the survey but skip many of the critical questions, to determine whether they share characteristics with those who drop off. By exploring why participants disengage from the survey, we expect to identify improvements to the survey design that reduce question skipping and drop-offs, thereby enhancing data quality and ensuring a more representative sample.