
ESRA 2023 Glance Program


All time references are in CEST

Exploring non-respondents in self-completion surveys

Session Organisers: Dr Michèle Ernst Stähli (FORS, Swiss Centre of Expertise in the Social Sciences)
Mr Alexandre Pollien (FORS, Swiss Centre of Expertise in the Social Sciences)
Dr Michael Ochsner (FORS, Swiss Centre of Expertise in the Social Sciences)
Dr Marlène Sapin (FORS, Swiss Centre of Expertise in the Social Sciences)
Mrs Karin Nisple (FORS, Swiss Centre of Expertise in the Social Sciences)
Time: Tuesday 18 July, 09:00 - 10:30

Although face-to-face surveys are still considered the ‘gold standard’ in comparative probability-based surveys, there has been a notable shift towards self-completion surveys, especially combinations of web and paper (push-to-web or concurrent modes). These modes are distinguished by the absence of fieldworkers in the recruitment process: invitations are mainly sent by postal letter or, in some cases, by email or SMS. This implies that little or no information exists about target persons who do not react to such invitations or do not participate in the survey. In face-to-face surveys, the recruitment process allows for the collection of a substantial amount of “paradata” that help describe both respondents and non-respondents and thus make it possible to identify risk factors for non-response. This includes information recorded by the interviewer, such as the time of the visit, the reaction of the target person when contacted, and the environment. In self-completion surveys, there is no information on the reasons for and forms of non-participation; it is even impossible to distinguish between refusals and non-contacts. As self-completion becomes the main survey mode, survey researchers should pay more attention to the non-respondents of such surveys in order to better understand the specific mechanisms of participation and non-participation in current settings and to find ways to improve response rates and/or mitigate non-response bias.
This session invites contributions that provide insights into non-respondents in today’s self-completion surveys. This can be done through non-respondent surveys, qualitative investigations of the reception of invitation letters, or any other method to examine non-respondents in self-completion surveys. Presentations on methods to monitor non-respondents during fieldwork are also highly welcome.

Keywords: non-response, self-completion modes

Papers

Understanding the Nonresponse Process of Secondary Respondents: Evidence from a Self-administered Multi-Actor Survey

Ms Almut Schumann (University of Mannheim) - Presenting Author

The shift from face-to-face to self-completion surveys has significant practical implications for conducting multi-actor surveys. The recruitment of secondary respondents, such as the partner of the primary respondent, must be organized without the assistance of an interviewer, making the cooperation of both primary and secondary respondents crucial for collecting multi-actor data. The recruitment of secondary respondents is a multistep process: first, the primary respondent must identify the target person and provide consent and contact information for interviewing the secondary respondent before the latter can decide to participate. Each of these steps can contribute to nonresponse among secondary respondents, and the reasons for nonresponse may depend on the individual situation of each respondent as well as on dynamics within their relationship. Using data from the German family demography panel study FReDA, a self-completion multi-actor survey, this study identifies the steps that lead to the highest dropout rates among secondary respondents and investigates whether individual characteristics of both actors, dyadic aspects of their relationship, and design-specific elements of the contact method can predict nonresponse during the process. The results suggest that dyadic characteristics of the relationship, such as low levels of commitment and closeness, lead to lower consent and participation rates, thereby increasing nonresponse throughout the process. Individual factors, including sociodemographic characteristics associated with a higher respondent burden, exhibit varying effects at different stages. Furthermore, the method of establishing contact with secondary respondents seems to play an important role: sending invitations in later batches and through primary respondents, rather than contacting each secondary sample member directly, results in higher nonresponse rates. Overall, the findings identify risk factors for nonresponse in the collection of self-administered multi-actor data, helping to increase response rates and reduce sample selectivity in such data.


Exploring Response Dynamics throughout Twelve Years of Online Survey Panel Activity in Changing Context – the Role of the Panellists’ Digital Practices and the Type of Device Used to Respond

Mr Malick Nam (CDSP - Sciences Po / CNRS) - Presenting Author
Dr Blazej Palat (CDSP - Sciences Po / CNRS)

ELIPSS is a panel of adults living in ordinary households in mainland France. Recruited in four successive waves conducted every three to four years from 2012, the panellists have taken part in monthly surveys implemented entirely online. However, even though they were provided with 4G-connected tablets in the first phase of the panel's activity, they have had to use their own devices to respond from 2020 onwards. Since the panel’s establishment, around one hundred surveys have been conducted on a wide variety of themes, involving a total of 5,000 panellists (with the number of respondents varying between around 900 and 3,000, depending on the study). We have access to rich socio-demographic data and information on the use of digital tools for all the panellists, thanks to two surveys replicated in several waves: the Annual Survey and the Digital Practices Survey.
As part of this analysis, we focus on the propensity of the panellists to respond from two angles:
● Analysis of response rates according to different socio-demographic variables and those linked to the use of digital tools.
● Comparison of monthly response accumulation curves, according to these same variables.
The aim of this research is to improve our understanding of the factors influencing the dynamics of panellists’ behaviours according to their profile as users of digital tools. More generally, our aim is to find out to what extent this and similar online research infrastructures are inclusive in terms of such uses and therefore truly representative of their parent populations.


Understanding Non-Response in Self-Completion Surveys: Innovations and Future Directions from ESS Round 12 Data Collection

Mr Niccolo Ghirelli (European Social Survey HQ (City St George's, University of London)) - Presenting Author

The European Social Survey (ESS) Round 12 introduces a ground-breaking framework for analysing contacts to and from sample units in self-completion surveys. To be implemented in about 30 countries between September 2024 and April 2025, this framework aims to systematically capture detailed information on how each case in the gross sample is contacted, in order to improve understanding of survey participation dynamics.

Data will be collected from diverse sources, including postal invitations, visits to the sampled addresses, and interactions with the helplines, enabling comprehensive documentation of contact attempts, explicit and implicit refusals, and participation barriers.

The Round 12 Contact Data framework intends to capture key drivers of response and non-response in self-completion modes, such as logistical challenges, language barriers, and the role of incentives. The ability to track and analyse patterns systematically across all the participating countries provides an unparalleled opportunity to test and refine engagement strategies.

This presentation will delve into the potential of the Round 12 framework to advance non-response research in cross-national surveys. Key areas of focus will include the application of collected data to identify emerging trends, the design of experiments to evaluate targeted engagement approaches, and opportunities for future research and innovation to optimize self-completion data collection strategies.

By proposing the collection of standardised fieldwork paradata in all ESS countries, this presentation highlights how such data can contribute to improving data quality and inclusivity in self-completion surveys, and it aims to provide a forward-looking perspective on how these findings can inform methodological advancements in large-scale survey research.


Who are we losing? An analysis of “skips” and “drop-offs” in United States government surveys

Dr Gwen Gardiner (Internal Revenue Service) - Presenting Author
Dr Scott Leary (Internal Revenue Service)
Dr Nick Yeh (Internal Revenue Service)
Ms Brenda Schafer (Internal Revenue Service)

The United States Internal Revenue Service (IRS) administers annual surveys to nationally representative samples of taxpayers to gather data about their federal income tax filing experiences. However, these surveys are relatively lengthy and require significant recall, leading to skipped questions and incomplete responses. Approximately 15% of respondents do not answer enough critical questions to be included in the final estimate, which can impact the representativeness of the findings. This analysis focuses on identifying patterns among respondents who begin but do not complete the surveys. First, we analyze demographic factors such as age and income to determine which groups are most likely to drop off. Then we examine metadata, including time spent on survey pages and total clicks per page, to uncover behavioral trends that may predict survey drop-offs. Lastly, we examine the patterns and characteristics of those who do complete the survey but skip many of the critical questions, to determine whether they share characteristics with those who drop off. By exploring why participants disengage from the survey, we expect to identify potential improvements to the survey design that reduce question skipping and drop-offs, thereby enhancing data quality and ensuring a more statistically representative sample.


Understanding Participation Challenges in the French Health Barometer Survey: Insights from a Qualitative Study

Mrs Maria El Haddad (Santé publique France) - Presenting Author
Mrs Axelle Quiviger (Santé publique France)
Mrs Guillemette Quatremère (Santé publique France)
Mrs Noémie Soullier (Santé publique France)
Mr Jean-Baptiste Richard (Santé publique France)
Mrs Leïla Saboni (Santé publique France)

In self-completion surveys, understanding and addressing non-response remains a persistent challenge, particularly among populations that are less likely to participate. Recent experiences suggest that concerns about fraud are becoming increasingly prevalent among the population, contributing to heightened scepticism and non-response to surveys.
We present the results of a qualitative study conducted as part of the development of the French Health Barometer, a repeated cross-sectional survey on opinions, behaviours and knowledge related to health, conducted by the French Public Health Agency.
This qualitative research aims to test and refine survey materials—including invitation letters, emails, and informational content—by engaging participants with diverse profiles and backgrounds, with a particular focus on subgroups known to have lower response rates. The study covers both mainland France and the overseas departments, where survey participation is usually lower.
The qualitative survey combines individual interviews and focus groups, conducted between December 2024 and January 2025, to explore participants' perceptions of the survey materials. Key themes include concerns about the survey’s authenticity and the credibility of its investigator, the clarity and relevance of the communication, and factors influencing trust and engagement.
By identifying actionable insights to improve communication and foster trust, the study aims to improve the survey materials, making them more suitable for individuals with lower levels of literacy or those who are more distrustful, and helping recipients distinguish the survey from fraudulent or commercial approaches. The findings will also help tackle obstacles to survey participation and reduce scepticism.


Understanding Non-Respondents in Establishment Self-Completion Surveys: The Case of Germany’s Job Vacancy Survey

Professor Nicole Gürtzgen (IAB)
Dr Alex Kübis (IAB)
Dr Andre Pirralha (IAB) - Presenting Author

As self-completion surveys increasingly supplant traditional interviewer-based methods, understanding non-response mechanisms becomes vital for mitigating biases and improving data quality. This presentation investigates non-response dynamics in the German Job Vacancy Survey (IAB-JVS), an establishment survey that recently shifted to a push-to-web data collection mode. While the JVS continues to rely on postal invitations, respondents are now encouraged to complete the survey online, with paper-based options still available as a residual mode of participation. Against this methodological backdrop, our study offers a quantitative assessment of non-response to identify key factors influencing non-participation among establishments in Germany.
We further examine the role of questionnaire length in shaping non-response outcomes through an embedded experimental design. Establishments were randomly assigned to versions of the JVS questionnaire that varied in length, allowing us to test the hypothesis that shorter instruments would improve response rates and reduce non-response bias. Preliminary findings, however, suggest that questionnaire length does not significantly affect non-response bias, indicating that other, more complex elements—such as survey topic salience, organizational capacity, or the perceived burden of survey participation—may exert greater influence on participation decisions.
By integrating these analyses, this research highlights both the opportunities and limitations associated with self-completion modes in establishment surveys. Limited information about non-respondents complicates efforts to identify specific drivers of non-response, underscoring the need for more robust approaches to engaging these groups. The findings contribute to the broader discourse on self-completion survey methods, offering empirically grounded recommendations to enhance response rates and address non-response bias in establishment surveys.


When survey mode aligns with subject matter: investigating non-respondents in digitally conducted surveys on digital society

Mr Alexandre Pollien (FORS) - Presenting Author
Dr Michèle Ernst Stähli (FORS)
Dr Michael Ochsner (FORS)
Dr Marlène Sapin (FORS)

The transition from face-to-face to web/paper surveys comes with very limited information on non-respondents. Contacts with non-respondents are generally limited to a minimal number of returned invitations and a few phone inquiries. MOSAiCH was a bi-annual face-to-face survey until 2017, when its design changed to an annual web/paper survey. Moreover, in 2024 the MOSAiCH survey focused on the digitalization of societies.
This raises the question of whether participation in self-completion web/paper surveys is strongly related to internet and communication skills, including literacy and technical proficiency, which could in turn affect answers to questions on the digitalization of societies.
To address this issue, we conducted a non-response follow-up survey (NRS) in paper mode, incorporating key questions on respondents’ interactions with digital technologies. The questions in the NRS aim to capture aspects of literacy and technical proficiency that may affect respondents’ ability to engage with the survey. We will perform comparative analyses between non-response survey participants and the overall main survey sample. Although the sequential design of the main survey does not allow for a complete separation of paper and web modes, we will examine respondents from the web and paper modes separately to identify similarities with non-respondents: do non-respondents look more like paper respondents?
To the best of our knowledge, this study represents the first application of a non-response follow-up survey within a web/paper context. It serves as a preliminary exploration of the potential risk of bias, aiming to evaluate the feasibility of obtaining information from non-respondents – individuals who remain inaccessible to survey administrators if they choose not to participate.