All time references are in CEST
Adapting survey mode in a changing survey landscape: Experiences from repeat cross-national, cross-sectional, and general social surveys
Session Organisers: Ms Jodie Smylie (NORC at the University of Chicago), Dr René Bautista (NORC at the University of Chicago), Professor Rory Fitzgerald (ESS HQ; City St George's, University of London), Mr Tim Hanson (ESS HQ; City St George's, University of London), Mr Nathan Reece (ESS HQ; City St George's, University of London), Ms Daphne Ahrendt (Eurofound)
Time: Tuesday 18 July, 09:00 - 10:30
Room:
Studies to measure attitudes, opinions, and behaviors are critical to understanding societies around the world. In the face of social developments, changing trends in respondent recruitment methods, budget constraints, national infrastructure disruptions, and public health concerns, many repeat cross-sectional social surveys are experimenting with self-completion and mixed-mode approaches. The European Social Survey (launched 2001), United States’ General Social Survey (launched 1972), and the European Quality of Life Surveys (launched 2003) are examples of longstanding studies collecting data to inform research on changes over time and now exploring and transitioning to new modes. This session brings together cross-sectional social surveys to share experiences in survey mode transition.
The session's aims include: (1) Share results and lessons from recent mode experiments and mixed-mode applications by general social studies, and potential ways to improve methods. (2) Highlight how different cross-sectional studies have recently modified survey protocols to adapt to changing public conditions. (3) Provide space for data creators, data users, and survey practitioners to discuss methodological and statistical challenges for cross-sectional studies considering such moves. (4) Discuss integrity and comparability of data collected using new data collection methods with the existing time-series. (5) Explore applications of emergent technologies to new modes.
We invite submissions from those involved in transitioning repeat, cross-sectional, and cross-national social surveys to new data collection approaches. Topics of interest include: results from pilots or feasibility studies based on self-completion or mixed-mode approaches; findings from experimental research testing aspects of self-completion/mixed-mode designs (e.g., incentive and mailing strategies, survey length adaptations, sequential vs. concurrent designs); impacts of mode switches on measurement and survey time series; and discussions of experiences and challenges with adapting cross-sectional surveys to new modes across different cultural/national contexts.
Keywords: general surveys, survey methodology, data collection, data collection modes, mixed mode, self-administration
Miss Sara Finnbogadottir (Aarhus University) - Presenting Author
Dr Hafsteinn Einarsson (University of Iceland)
Repeated cross-sectional surveys, such as election studies, benefit from maintaining methodological continuity over time but face increasing challenges in the form of declining response rates and rising costs. This has spurred many longstanding surveys to transition from single-mode to mixed-mode data collection. Here, we explore the effects of transitioning the Icelandic National Election Study, which has been fielded after every national election since 1983, from telephone to mixed-mode data collection. In 2021, web response was offered to two sample sub-groups: those who were not associated with known telephone numbers and those who refused to participate over the telephone. This survey design maintained telephone as the primary response mode while recruiting low-propensity respondents through other modes. We explore survey outcomes in terms of response rates, sample composition, and measurement quality and equivalence between telephone and web responses. Our findings suggest that the changes to the survey design were not sufficient to halt the decline in response rates over time but nevertheless yielded more responses than a telephone-only design would have. Web responses were associated with higher rates of non-substantive responses than telephone responses, indicating lower data quality. Overall, these findings suggest that transitioning to a mixed-mode data collection protocol provided some benefits to ICENES but is not, on its own, sufficient to mitigate the trend of declining response rates over time.
Mr Curtis Jessop (The National Centre for Social Research) - Presenting Author
Ms Sarah Butt (The National Centre for Social Research)
Mr Simon Moss (The National Centre for Social Research)
Ms Jo D'Ardenne (The National Centre for Social Research)
Since 1986 the Skills and Employment Survey has collected robust survey data on the skills and employment experiences of people aged 20-65 working in Britain. In response to changes in societal expectations about how people provide data, and to future-proof the study, in 2023 the decision was made to explore the possibility of transitioning to an online design. Data were collected in parallel both via a face-to-face interview with a fresh sample drawn from the Postcode Address File (PAF) and via a sequential mixed-mode (web/telephone) design with a sample from a probability-based panel.
This paper will summarise the two designs and the approach taken to transitioning the questionnaire, and outline findings from the parallel run. It will look first at how the different sampling and fieldwork designs affected response rates and the profile of the participating sample. It will then look at the extent to which the two arms of the study produced different estimates across the survey. It will also examine whether those differences can be explained by differences in the sample profile, and whether there are patterns in the differences: are certain types of questions more likely to produce different estimates than others, and how does this align with the current literature on the impact of survey mode on measurement?
These findings will provide evidence on how transitioning from a face-to-face survey conducted with a fresh sample to an online panel survey may impact estimates, and what types of questions may be more or less at risk of mode effects.
Miss Hajar GAD (Verian) - Presenting Author
Mrs Tanja Kimova (Verian)
Mr Gijs van Houten (Eurofound)
The European Foundation for the Improvement of Living and Working Conditions (Eurofound) entrusted Verian with conducting the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024. To future-proof its surveys, Eurofound conducted the EWCS 2024 as a parallel-run study, combining face-to-face and online data collection in all 29 European countries, using telephone and postal push-to-web recruitment methods.
The online component featured various tests, including of questionnaire duration. Respondents were randomly assigned to one of two conditions: a full questionnaire (40–45 minutes) or an abridged version (20–25 minutes). To maximise the number of respondents answering the full set of questions, those assigned to the abridged version (M1) were invited to complete an additional module (M2) comprising the questions from the full questionnaire that were omitted from the abridged version.
In a self-completion survey there is no interviewer to ensure that respondents complete the survey in one sitting, so it is crucial to understand how questionnaire duration affects break-off rates and to identify specific questions at which respondents are more likely to drop out. For this analysis, we define a respondent as having "started the questionnaire" if they completed the screener and the first two substantive questions. Break-off rates refer to respondents who began the survey but stopped partway through, rather than those who failed to engage at all.
To investigate the impact of questionnaire length on break-off rates, we will compare rates in the full-length online questionnaire to those in the M1 questionnaire (excluding break-offs from respondents who started the M2 questionnaire). Additionally, we will analyze break-off rates within modules of the short questionnaire. We anticipate that respondents who opt to proceed to M2 are less likely to break off compared to those completing the equivalent section in the full-length questionnaire.
This modular comparison will provide insights into how questionnaire length and modular designs affect break-off behaviour in self-completion surveys.
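For readers less familiar with this type of analysis, the comparison described above can be sketched in a few lines of Python. This is a minimal illustration with invented column names and data, not the authors' analysis code:

```python
import pandas as pd

# Hypothetical respondent-level data: questionnaire condition ("full" vs "M1"),
# whether the respondent "started" (screener + first two substantive questions
# completed), and whether they reached the end of the questionnaire.
df = pd.DataFrame({
    "condition": ["full", "full", "full", "M1", "M1", "M1"],
    "started":   [True,   True,   False,  True, True, True],
    "completed": [True,   False,  False,  True, True, False],
})

# Break-off rate: among those who started, the share who did not finish.
started = df[df["started"]]
breakoff_rates = (~started["completed"]).groupby(started["condition"]).mean()
print(breakoff_rates)  # one rate per questionnaire-length condition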
Dr Jamie Moore (ISER, University of Essex) - Presenting Author
Dr Pablo Cabrera-Álvarez (Institute for Social and Economic Research)
Professor Gabriele Durrant (University of Southampton)
Professor Annette Jackle (ISER, University of Essex)
Dr Jon Burton (ISER, University of Essex)
Professor Peter Smith (University of Southampton)
Many social surveys are facing significant challenges, such as increasing nonresponse and survey costs. One approach to minimising costs is to shift from face-to-face (F2F) or telephone interviews to less costly web interviews. Research has shown that while web mode may bring benefits in terms of responses from individuals not reached by F2F or telephone modes, non-response biases are generally minimised by offering both. Consequently, some surveys have adopted designs in which sample members are first offered web mode, and non-respondents are then followed up by F2F or telephone. However, in recent years the proportion of the population using the internet has increased markedly and people have become less willing to welcome interviewers into their homes. Hence, it is possible that the benefits of F2F or telephone follow-ups in terms of reducing non-response biases, which are the justification for such designs, have changed. We investigate this question in datasets from Understanding Society: the UK Household Longitudinal Study (UKHLS). We focus on the Innovation Panel component of the study, in which a subset of sample members has been offered web interviews with F2F or telephone follow-up of non-respondents since 2012. We evaluate (1) the impact of following up web non-respondents on how well respondent datasets reflect sample datasets (dataset representativeness), and (2) how the impact of these follow-ups has changed over time. For each survey wave, we use coefficients of variation of response propensities to quantify the representativeness of web-only and web-plus-F2F-or-telephone respondents compared to the sample in terms of survey-measured characteristics. In addition, we repeat our analyses using the UKHLS main survey dataset, whose more representative composition enables consideration of whether patterns differ for hard-to-reach population elements.
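For readers unfamiliar with the measure, the coefficient of variation of response propensities can be computed roughly as follows. This is an illustrative sketch with simulated data and generic covariates, not the UKHLS variables or the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated sample file: one row per issued sample member, with a response
# indicator and characteristics observed for respondents and non-respondents.
rng = np.random.default_rng(2012)
n = 2000
sample = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
})
p_true = 0.35 + 0.003 * (sample["age"] - 50)   # response more likely with age
sample["responded"] = rng.binomial(1, p_true)

# Estimate response propensities from characteristics known for the full sample.
X = sm.add_constant(sample[["age", "female"]])
propensities = sm.Logit(sample["responded"], X).fit(disp=0).predict(X)

# Coefficient of variation of the propensities: 0 would mean every sample member
# is equally likely to respond; larger values indicate less representative response.
cv = propensities.std() / propensities.mean()
print(f"CV of response propensities: {cv:.3f}")
```

Comparing this CV for web-only respondents with that for web-plus-follow-up respondents, wave by wave, is the kind of contrast the paper describes.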
Dr Takayuki Sasaki (Tsuda University) - Presenting Author
It has become difficult to maintain sample representativeness with traditional survey methods, such as direct visits to the homes of respondents. As a result, many existing repeated surveys have shifted their survey modes from the gold standard to mixed-mode approaches. In the midst of a survey mode transition, there are still important questions that are left unanswered. Are there ways to increase respondents’ willingness to participate without face-to-face interactions? The aim of the present study is to investigate the effects of invitation letters on response rates by using a randomized controlled trial.
Traditionally, Japanese researchers send an invitation letter by mail to a sample randomly drawn from the Basic Resident Registration. Approximately one week later, interviewers ask individuals to participate in the survey by explaining the research purposes during a home visit. With new survey modes, however, researchers have little opportunity to explain the importance of each individual's participation in the survey. Thus, I argue that principal investigators should speak to participants directly to increase their willingness to participate (WTP). One way to do this is to include in the invitation letter a QR code linking to a video clip.
Data from the National Survey on Family-friendly Society (NSFS), conducted by Tsuda University in February 2025, will be used for this study. A national sample of 3,200 individuals is randomly split in half. One group receives an invitation letter with the PI's video clip, and the other group receives an invitation letter without it. Respondents are asked to answer the questionnaire using their own smartphones. The sampling method and many questions parallel the Japanese General Social Survey, so response rates and response patterns are comparable. Findings from this study should highlight the pros and cons of the new survey design.
Dr Gijs van Houten (Eurofound) - Presenting Author
Dr Tanja Kimova (Verian)
Ms Hajar Gad (Verian)
Mr Christopher White (Eurofound)
Mr Jamie Burnett (Verian)
The European Foundation for the Improvement of Living and Working Conditions (Eurofound) commissioned Verian to conduct the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024 in 37 countries. As part of Eurofound’s strategy for future-proofing its surveys, the EWCS 2024 was conducted both face-to-face and online in all EU Member States (using a telephone push-to-web or a postal push-to-web approach), and the implementation of the online component included a range of test elements.
A key test element is the duration of the online questionnaire. The questionnaire for the face-to-face segment of the EWCS takes around 45 minutes to complete. Ideally, respondents completing the survey online would complete the same questionnaire. However, there are concerns about the impact of questionnaire duration on survey yield, sample performance, and data quality.
To assess these impacts, respondents in the online segment of the survey were randomly allocated to one of two questionnaire duration conditions: full (ca 40-45 minutes) or abridged (ca 25-30 minutes). The sample split was informed by the results of a pilot test carried out in all countries, with a view to maximising the likelihood of achieving an even split in the net sample.
In this paper we will discuss the results of this test, assessing the differences between the questionnaire duration conditions in terms of yield, sample composition, and data quality. The pilot test suggested a negative impact of questionnaire duration on yield in most countries. Analysis of the mainstage data will allow us to assess this overall reduction in yield, and the impact on cost per item, against the impact on response bias and data quality.
Dr Tom Huskinson (Ipsos) - Presenting Author
The Childcare and early years survey of parents is a high-quality random probability face-to-face survey of around 6,000 parents per year in England. Commissioned by the Department for Education, it started in 2004 and is published as an Official Statistic. In line with increasing pressure for face-to-face surveys to transition to predominantly online data collection, this research investigated the extent to which survey estimates could be collected using a push-to-web methodology, and the implications for maintaining trend data and value for money.
The face-to-face questionnaire was adapted to online administration following ‘Mobile First’ design principles.
Two features of the push-to-web survey were experimentally manipulated to explore the optimal design: incentivisation (none, £5, £10 or £15) and the deadline for completion (stated vs not stated). In addition, split-ballot experiments were embedded in the questionnaire to inform aspects of questionnaire design, including collecting continuous data via open numeric responses versus banded pre-codes, displaying versus hiding "Don't know" codes, and varying the position of certain response options. Measures of respondent experience were collected at the end of the questionnaire. The face-to-face survey was fielded as usual, providing a 'parallel run' against which survey estimates could be compared.
Incentivisation raised the response rate from 12% (none) to 27% (£15), increased representativeness, and delivered value for money. Making the survey deadline explicit reduced response slightly. Open numeric data were of high quality, but deciding whether to display "Don't know" codes online remains a challenge. Changes to response-option positions had major implications for response distributions. A comparison of weighted key survey estimates between the two modes found significant differences for most, with the extent of these differences varying by whether the questions measured awareness, perceptions, preferences, or behaviours.
Dr Jan-Lucas Schanze (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Mrs Caroline Hahn (GESIS - Leibniz Institute for the Social Sciences)
Dr Oshrat Hochman (GESIS - Leibniz Institute for the Social Sciences)
While a sequential, push-to-web mode sequence is very well established in survey research and commonly used in survey practice, many social surveys still prefer to contact older target persons with a concurrent design, offering a paper questionnaire alongside a web-based questionnaire from the first letter onwards. In our presentation, we compare the performance of a sequential design with a concurrent design for target persons older than 60 years. We analyse response rates and compare the sample composition and distributions in substantive items within resulting net samples.
The data stem from the 10th Round of the European Social Survey (ESS), carried out in self-completion modes (CAWI/PAPI) in some countries in 2021. In Germany, a mode-choice sequence experiment was implemented for all target persons older than 60 years. 50% of this group was invited with a push-to-web approach, with a paper questionnaire offered in the third mailing. The control group was invited with a concurrent mode sequence, offering both modes from the beginning.
Results show similar response rates for the concurrent design and the sequential design (AAPOR RR2: 38.4% vs. 37.3%). This difference is not statistically significant. In the concurrent group, 21% of respondents answered the questionnaire online, while in the sequential group this was the case for 50% of all respondents. Even among target persons older than 75 years, every third respondent took part in the web questionnaire when pushed to the web. The resulting net samples are very comparable. Looking at various demographic, socio-economic, attitudinal, and behavioural items, no significant differences were found between the sequential and concurrent groups. We conclude that a significant share of the elderly German population can be pushed to the web without negative consequences for response rates or sample composition.
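For illustration, the kind of significance test behind the statement above can be run as follows; the group sizes used here are placeholders for exposition, not the actual German ESS sample sizes:

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder group sizes; substitute the real numbers of issued 60+ cases per arm.
n_concurrent, n_sequential = 1500, 1500
successes = [round(0.384 * n_concurrent), round(0.373 * n_sequential)]  # AAPOR RR2

stat, pvalue = proportions_ztest(count=successes, nobs=[n_concurrent, n_sequential])
print(f"z = {stat:.2f}, p = {pvalue:.3f}")  # a ~1 point gap is not significant at these sizes
```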
Professor Rory Fitzgerald (European Social Survey ERIC) - Presenting Author
Mr Tim Hanson (European Social Survey ERIC)
Professor Olga Maslovskaya (Southampton University)
Professor Peter Lynn (University of Essex)
Dr Ruxandra Comanaru (European Social Survey ERIC)
Dr Cristian Domarchi (Southampton University)
Dr Nhlanhla Ndebele (City St George's, University of London)
Surveys aim to provide estimates of the behaviour, social conditions or attitudes of the population they seek to represent. To do this well, the total survey error needs to be as small as possible; otherwise conclusions might reflect methodological artefacts of data collection rather than true population values. Since modern surveys of the general population were first established, the best way to collect high-quality data has been felt to be face-to-face interviewing of probability samples of households or individuals. More recently, however, face-to-face data collection has seen declining response rates, increasing interviewer effects, rising costs and a reduction in the number of commercial providers of this service, casting doubt on whether it remains the 'gold standard'. At the same time, self-completion surveys offer an increasingly convincing alternative, with increased web penetration and digital literacy, no interviewer effects, relative cost efficiency, and promising response rates and representativeness.
This paper compares face-to-face data collection in the 10th round of the European Social Survey in Great Britain to an experimental self-completion survey. The paper finds that the self-completion approach achieved a considerably higher response rate than the face-to-face survey, similar representativeness, and a substantially lower cost per interview, whilst being completed far more quickly than face-to-face data collection. In terms of data comparability between the modes, the authors find that whilst there are differences in point estimates between the data collected face-to-face and by self-completion, the correlations between variables are similar regardless of data collection approach. The paper concludes that self-completion data collection, combining web and paper approaches, offers a high-quality alternative to face-to-face data collection, with the potential to offer a new gold standard.
Mrs Alexandra Asimov (GESIS) - Presenting Author
General social surveys are traditionally conducted face-to-face and have long time series for monitoring trends in public opinion. To maintain comparability, these surveys generally minimize changes to their design over time. However, face-to-face surveys have suffered from declining response rates and rising survey costs in recent decades. As a result, self-administered mixed-mode surveys (mail, web) are gaining popularity in general social research because they can circumvent these challenges. However, changing the mode of data collection is a significant methodological shift that could affect estimates of public opinion. It is therefore important to examine whether differences in estimates of public opinion are due to changes in actual opinion and/or to changes in the data collection mode.
One method that enables a comprehensive understanding of the causes of changes in public opinion estimates is to field two subsamples simultaneously: the default face-to-face design and the self-administered mixed-mode design. In 2023, the German General Social Survey (ALLBUS) embedded an experiment in which cases were randomly assigned to one of three protocols: (1) face-to-face interviews, (2) sequential self-administered mixed-mode (mail and web), and (3) concurrent self-administered mixed-mode (mail and web).
This presentation examines the differences in measurement and selection between the face-to-face and the two self-administered mixed-mode designs of ALLBUS 2023. Selection bias is examined through variations in response propensities and sample composition, while measurement bias is examined at the variable level, which allows us to examine whether differences are systematic across question characteristics (use of showcards in the face-to-face interview, sensitivity of questions, and type of question (attitudinal vs. factual)).
Mr Jamie Burnett (Verian) - Presenting Author
Miss Alexandra Cronberg (Verian)
Mr Gijs van Houten (Eurofound)
Despite ever-growing reliance on survey data and an increasing shift to self-completion formats, little research exists on mode effects between face-to-face and self-completion surveys that use probability-based offline modes of recruitment, with the exception of the work done in round 10 of the European Social Survey. We present results from a nationally representative, population-based mode experiment across the EU27 countries. The European Foundation for the Improvement of Living and Working Conditions (Eurofound) commissioned Verian to conduct the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024. As part of Eurofound's strategy for future-proofing its surveys, the EWCS 2024 was conducted both face-to-face and online in all EU Member States (using either a telephone push-to-web or a postal push-to-web approach).
In this paper we will discuss the results of this experiment, investigating the impact of a mode switch on response rates, sample composition (e.g. gender, age, level of education, occupation and industry) and data quality (e.g. item nonresponse, filter errors, response patterns). The results of this experiment will help inform researchers and users of research on the future development of viable push-to-web designs across Europe.
Mr Jamie Burnett (Verian) - Presenting Author
Mr Gijs van Houten (Eurofound)
Miss Alexandra Cronberg (Verian)
The European Foundation for the Improvement of Living and Working Conditions (Eurofound) commissioned Verian to conduct the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024 in 37 countries. As part of Eurofound's strategy for future-proofing its surveys, the EWCS 2024 was conducted both face-to-face and online in all 27 EU Member States (using either a telephone push-to-web or a postal push-to-web approach), and the implementation of the online component included a range of test elements. We will focus on the experiment that examined when to ask individuals whether they would like to take part in an online survey.
For many countries in Europe, accessing an address-level or person-level register for a push-to-web survey design is not feasible without a sponsor's support (e.g. a government department or academic partner). If a probability-based approach to recruitment for an online survey is to be maintained, an RDD telephone sample frame is often the only viable option. However, recruitment costs by telephone often dwarf those of using an address or named register and a postal letter, so it is important to consider what measures can be taken to reduce these costs. In this paper we examine the response rates and associated costs of an 'ask first' versus 'ask last' design for recruiting the working population to an online survey.
Dr Mónica Méndez-Lago (CIS-Centro de Investigaciones Sociológicas) - Presenting Author
This presentation explores the transformation of survey administration modes for the International Social Survey Programme (ISSP) and the European Social Survey (ESS) in Spain from 2014 to the present. In 2014, the ISSP Citizenship module was administered both face-to-face and in a mixed self-completion mode with a "push-to-web + mail" approach. As far as we know, this was the first time a social/political attitude survey in Spain used the latter design. At that time, Internet penetration in Spain stood at approximately 76%, with significantly lower figures among the population aged 50 and over. Since then, Internet penetration has risen to 95% of the overall population, with significant reductions in disparities related to Internet usage and familiarity. However, a notable gap remains among people aged 65 and older, who continue to have lower digital coverage.
Since 2020, ISSP surveys in Spain have been conducted exclusively using a self-completion mixed-mode design (push-to-web + mail). This presentation will analyze key aspects of this shift over the last decade, including the evolution of overall response rates and specific trends within different population groups, with a particular focus on individuals over 65 and/or with low educational attainment, comparing these figures with other modes of administration. It will also present comparative evidence from the ESS, which has traditionally been conducted face-to-face, except for round 10 (2022), when Spain adopted a push-to-web + mail design due to the COVID-19 pandemic.
The presentation will also deal with a central question in the current design of self-completion surveys, namely whether offering paper-based response options remains necessary, and for what purpose. This case study of Spain in the last decade offers interesting insights into the broader challenges and opportunities facing survey research.
Dr Marek Muszyński (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
Professor Piotr Jabkowski (Faculty of Sociology, Adam Mickiewicz University, Poznan)
Rising costs and challenges in interviewer-administered surveys necessitate the development of mixed-mode studies. Cross-mode comparisons are essential to maintain consistent measurement quality across modes. Research indicates that mode differences affect response behaviours, including attentiveness (Daikeler et al., 2024; Kim et al., 2019) and response styles (Aichholzer, 2013; Hope et al., 2022; Liu, 2015).
The COVID-19 pandemic disrupted the European Social Survey (ESS) Round 10, planned for late 2020 and early 2021, leading some countries to adopt self-completion methods instead of face-to-face interviews. This shift created a natural experiment, enabling the comparison of response behaviours across different survey modes. Our study capitalises on this opportunity to analyse response patterns such as inattentive responding and response styles in ESS Rounds 9 and 10, contrasting the primarily face-to-face mode in Round 9 with the mixed modes of Round 10, where about a quarter of the countries used self-completion modes (postal and web surveys). This is further compared to Round 11, which returned to face-to-face mode.
We employed multilevel regression models to examine cross-mode response behaviours. Outcome variables included straightlining indices and measures of response styles (extreme, midpoint, and acquiescent), with survey mode and socio-demographics (age, gender, education, etc.) as moderators.
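To make these outcome variables concrete, here is a minimal sketch of how such indices are commonly computed from an item battery; the data and item names are invented for illustration and are not taken from the ESS:

```python
import pandas as pd

# Invented answers to a three-item battery on an 11-point (0-10) scale.
items = pd.DataFrame({
    "trust_a": [5, 0, 10, 7],
    "trust_b": [5, 1, 10, 3],
    "trust_c": [5, 0, 10, 8],
})
scale_min, scale_max, scale_mid = 0, 10, 5

indices = pd.DataFrame({
    # Straightlining: identical answers to every item in the battery.
    "straightlining": items.nunique(axis=1).eq(1),
    # Extreme response style: share of answers at either scale endpoint.
    "extreme_share": items.isin([scale_min, scale_max]).mean(axis=1),
    # Midpoint response style: share of answers at the scale midpoint.
    "midpoint_share": items.eq(scale_mid).mean(axis=1),
})
print(indices)  # one row per respondent; these feed into the multilevel models
```

Acquiescence indices are built analogously from the share of agreeing answers to agree-disagree items.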
Preliminary findings indicate that respondents in self-completion modes exhibit less straightlining and fewer acquiescent and midpoint responses than in face-to-face modes, but yield more extreme responses and more outliers identified by Mahalanobis distance. These differences are more pronounced with 11-point than with 5-point rating scales. Mode effects held when controlling for participants' socio-demographic characteristics. Interaction effects between mode and socio-demographics were mainly non-significant.
The findings are relevant for the design of mixed-mode international surveys and for the theoretical understanding of mode differences in response behaviours.
Mr Marcus Weissenbilder (The SOM-institute, University of Gothenburg) - Presenting Author
Ms Cornelia Andersson (The SOM-institute, University of Gothenburg)
Dr Sebastian Lundmark (The SOM-institute, University of Gothenburg)
Ms Elisabeth Falk (Nordicom, University of Gothenburg)
Declining response rates in surveys are a well-documented issue. One potential explanation is survey fatigue, as the number of market research surveys administered to each adult is thought to have increased immensely (Groves, 2006; Kreuter, 2013; Peytchev, 2013; Leeper, 2019). To stand out amid this flood of market surveys, government agencies, universities, and research institutes might be able to increase the likelihood of respondents opening their envelopes by printing the logos of trusted and well-known government agencies sponsoring the survey.
Since 1986, the SOM Institute at the University of Gothenburg has conducted annual mailed paper-and-pencil surveys using probability samples. On each of those surveys, the SOM Institute has printed its logo. In this paper, one preregistered pilot and two preregistered replications of an experiment are presented, varying whether an additional survey sponsor's logo is printed on the envelope. Printing the logo of a more well-known and trusted government agency or university alongside the Institute's own may increase the likelihood of respondents opening and completing the questionnaire.
In 2023, 9,000 respondents in Gothenburg were mailed a survey, where half of them received an envelope showing the SOM Institute logo, whereas the other half received an envelope showing both the institute logo and the Gothenburg municipality logo. In 2024, the experiment was directly replicated in Gothenburg and extended to assess the impact of another government logo (Region Västra Götaland).
The pilot study showed that adding a second, more well-known government sponsor logo increased response rates by 2.8% and decreased the number of reminders that had to be sent, albeit without decreasing nonresponse bias. Data collection for the two replications will be completed by the end of 2024, and preliminary results of those studies are not yet known.
Miss Ilaria Francesca Lunardelli (NIDI - KNAW) - Presenting Author
The second round of the Generations and Gender Survey (GGS) introduced Computer-Assisted Web Interviewing (CAWI) as the primary data collection mode, marking a significant shift towards digital methods in sensitive survey research.
While most countries implemented CAWI as the sole mode for GGS-II Wave 1, a subset opted for mixed-mode data collection, incorporating CAWI alongside other modes such as Computer-Assisted Telephone Interviewing (CATI), Paper and Pencil Interviewing (PAPI), or Computer-Assisted Personal Interviewing (CAPI).
The choice to use multiple modes may enhance sample diversity and inclusion, yet it also introduces potential mixed-mode effects. These effects are particularly critical when examining sensitive indicators, such as fertility and well-being metrics, in a longitudinal, cross-country framework.
To examine the impact of mixed-mode effects, this study will employ a combination of descriptive analyses, regression models and statistical tests within a cross-country comparative framework. The analysis will investigate how mixed-mode effects manifest across different geographical and cultural contexts, such as France and Uruguay, to understand whether these settings influence the extent and nature of mode-related biases. Special attention will be given to sensitive indicators, as they are particularly susceptible to mixed-mode effects and hold central importance in the GGS.
Preliminary results suggest that, overall, mixed-mode data collection has contributed to higher response rates and greater diversity among respondents. However, mode effects can still be detected, with variation across countries and a greater impact on sensitive indicators.
The GGP Central Hub Team aims to provide insights into the implications of mixed-mode effects for accuracy and comparability, and to contribute to the optimization of future data collection.
Dr Piotr Jabkowski (Adam Mickiewicz University, Poznan) - Presenting Author
Dr Piotr Cichocki (Adam Mickiewicz University, Poznan)
Dr Aneta Piekut (The University of Sheffield)
The COVID-19 pandemic forced a shift in data collection methods, with many surveys adopting self-completion modes to overcome social distancing constraints. This study examines the consequences of the transition toward self-completion protocols in the European Social Survey (ESS) and aims to answer whether the transition affects survey quality. Notably, we focus on item nonresponse rates in complex and sensitive questions. Our analysis draws on data from 23 countries participating in ESS round 9 (2018, face-to-face modes), round 10 (2020, self-completion and face-to-face protocols), and round 11 (2022, face-to-face modes).
We focus on item nonresponse in two types of questions: complex items, such as household composition (gender, age and relationship to the respondent), and sensitive items, such as income and several measures of political trust. Using multilevel regression models, we predicted the probability of item nonresponse across self-completion (web, paper) and face-to-face modes of data collection (PAPI, CAPI, video: web), controlling for individual-level predictors (e.g. age, education, gender) and several country-level characteristics.
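Written out generically, the kind of random-intercept (country-level) logistic model described above takes the following form; the notation is ours, introduced only to make the specification explicit, and is not taken from the paper:

```latex
% y_{ij} = 1 if respondent i in country j leaves the item unanswered
\operatorname{logit}\Pr(y_{ij}=1)
  = \beta_0
  + \beta_1\,\text{SelfCompletion}_{ij}
  + \mathbf{x}_{ij}^{\top}\boldsymbol{\gamma}   % individual-level predictors
  + \mathbf{z}_{j}^{\top}\boldsymbol{\delta}    % country-level characteristics
  + u_j, \qquad u_j \sim \mathcal{N}(0,\sigma_u^2)
```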
Our results show differences in nonresponse patterns between face-to-face and self-completion modes. Compared to rounds 9 and 11, countries that utilised self-completion protocols in round 10 exhibited lower nonresponse rates for sensitive items but higher nonresponse rates for complex questions. In contrast, countries that consistently used face-to-face modes across the three rounds did not show any significant differences in item nonresponse rates over time.
Our findings highlight the need for tailored methodological adaptations when moving to self-completion protocols, particularly addressing potential biases introduced by item nonresponse. Our analysis contributes to survey methodology by providing insights for optimising mixed-mode designs, ensuring data quality, and maintaining data comparability over time.
Mrs Axelle Quiviger (Santé publique France) - Presenting Author
Mrs Noémie Soullier (Santé publique France)
Mr Jean-Baptiste Richard (Santé publique France)
Mrs Leïla Saboni (Santé publique France)
Mrs Maria El Haddad (Santé publique France)
The French Health Barometer is a repeated cross-sectional survey interviewing the population living in France about their opinions, behaviours and knowledge related to health. In 2024, the survey changed its methodology, transitioning from exclusively telephone interviews to a mixed-mode data collection combining internet and telephone. To assess the existence and extent of potential measurement effects, a pilot survey was conducted in 2023 in mainland France among individuals aged 18 to 85. This survey tested several protocols simultaneously, including one using only telephone interviews and another using only internet interviews. This random assignment to a mode of data collection allows us to operate within the experimental framework described by Rosenbaum and Rubin (1983).
Approximately thirty indicators were studied, grouped into four categories depending on which measurement effect was expected: sensitive questions, non-sensitive subjective questions, difficult questions, and factual questions. Three methods were implemented for each indicator: multivariate logistic regressions with the data collection mode as an explanatory variable, propensity score matching, and multivariate logistic regressions weighted by the inverse of the propensity score. To account as much as possible for selection on observables, a large number of explanatory variables were included in the models.
Our results demonstrate measurement effects consistent with the literature (satisficing bias, social desirability bias, etc.), but we also found unexpected effects, for which we suggest plausible explanations. The aim of this paper is also to disentangle measurement effects from selection effects and to provide questionnaire design recommendations to minimize measurement effects.
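As an illustration of the third method listed (logistic regression weighted by the inverse of the propensity score), a minimal sketch in Python with simulated data; the variable names and the indicator are invented, and the actual pilot models include far more covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated pilot data: 'web' = 1 for the internet-only protocol, 0 for telephone.
rng = np.random.default_rng(2023)
n = 3000
df = pd.DataFrame({
    "web": rng.integers(0, 2, n),
    "age": rng.integers(18, 86, n),
    "female": rng.integers(0, 2, n),
    "daily_smoker": rng.integers(0, 2, n),   # example (sensitive) indicator
})

# 1. Model the propensity of responding via the web, given observables.
Xp = sm.add_constant(df[["age", "female"]])
ps = sm.Logit(df["web"], Xp).fit(disp=0).predict(Xp)

# 2. Stabilised inverse-propensity weights balance observables across modes.
p_web = df["web"].mean()
df["ipw"] = np.where(df["web"] == 1, p_web / ps, (1 - p_web) / (1 - ps))

# 3. Weighted logistic regression of the indicator on mode: the mode coefficient
#    is then read as a measurement effect net of selection on observables.
Xo = sm.add_constant(df[["web", "age", "female"]])
model = sm.GLM(df["daily_smoker"], Xo, family=sm.families.Binomial(),
               freq_weights=df["ipw"])
print(model.fit().summary())
```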
Mrs Tanja Kimova (Verian) - Presenting Author
Mr Gijs van Houten (Eurofound)
Mr Christopher White (Eurofound)
Miss Hajar GAD (Verian)
Mr Jamie Burnett (Verian)
The European Foundation for the Improvement of Living and Working Conditions (Eurofound) commissioned Verian to conduct the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024 in 37 countries. As part of Eurofound's strategy for future-proofing its surveys, the EWCS 2024 was conducted both face-to-face and online in all EU Member States (using a telephone push-to-web or a postal push-to-web approach), and the implementation of the online component included a range of test elements.
A key test element is the approach to incentives. As part of the pilot test, different approaches to incentives were trialled: a combination of a small unconditional and a larger conditional incentive, an "early bird" approach to conditional incentives in which the value of the incentive was reduced after a certain period, and different values for the conditional incentive. It was found that the unconditional incentive and the early-bird approach did not sufficiently increase the yield to warrant the additional cost and complexity. It was also found that offering a higher conditional incentive did improve cost efficiency. Therefore, in mainstage fieldwork in most countries only the higher-value conditional incentive was offered. In seven countries, respondents in the online segment of the mainstage survey were randomly allocated to this higher value or to an even higher incentive value, allowing further calibration of the most effective incentive level in terms of yield, response profile, and data quality.
In this paper we will discuss the results of the pilot test – mainly focusing on the cost efficiency of the different incentive strategies – and the results from mainstage fieldwork – assessing the effect of the different values of the conditional incentive on yield, as well as on sample composition and data quality.