ESRA 2025 Preliminary Program
All time references are in CEST
Satisficing in Self-Completion Modes: Theoretical Understanding, Assessment, Prevention and Consequences for Data Quality 2
Session Organisers
Dr Daniil Lebedev (GESIS - Leibniz Institute for the Social Sciences, Germany); Dr May Doušak (University of Ljubljana, Slovenia)
Time: Wednesday 16 July, 16:00 - 17:30
Room: Ruppert D - 0.24
Self-completion surveys, which are increasingly preferred over face-to-face modes, present unique challenges. Rising costs, declining response rates, and interviewer effects make face-to-face surveys less viable. However, self-completion modes (web, mail, or mixed) introduce their own data quality challenges. Without an interviewer, respondents face a higher cognitive load, which can lead to satisficing – providing suboptimal answers – especially among those with lower cognitive ability or motivation. This behaviour increases measurement error and lowers data quality.
Survey methodologists have developed various indicators to assess response quality by detecting undesired respondent behaviour, such as straightlining and acquiescence, along with paradata-measured response styles like speeding, multitasking, motivated misreporting, and others. Questions assessing respondents' subjective enjoyment, cognitive effort, and time investment also help identify satisficing propensity. These tools can be used for detection and prevention through immediate feedback or adaptive survey designs based on survey data, paradata, or probing questions.
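The indicators mentioned above can be operationalised as simple screening rules. The sketch below is a minimal illustration only, with hypothetical variable names and thresholds; published studies typically use more refined measures (e.g. scale-point variance for non-differentiation, item-level timestamps for speeding):

```python
def is_straightlining(grid_answers):
    """Flag a grid question as straightlined when every item
    receives the same response option (the simplest definition)."""
    return len(set(grid_answers)) == 1

def is_speeding(completion_seconds, median_seconds, fraction=0.5):
    """Flag a respondent as speeding when their completion time
    falls below a chosen fraction of the sample median.
    The 0.5 cutoff here is an illustrative assumption."""
    return completion_seconds < fraction * median_seconds

# Hypothetical respondent record
respondent = {"grid": [4, 4, 4, 4, 4], "seconds": 110}
print(is_straightlining(respondent["grid"]))       # True
print(is_speeding(respondent["seconds"], 300))     # True
```

Flags like these can feed the adaptive designs described above, e.g. triggering an immediate feedback prompt when a respondent straightlines or speeds through a grid.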
This session focuses on theoretical advances in understanding satisficing and response styles in self-completion surveys, whether computer-assisted or paper-based, as well as research on survey-data-based indicators, paradata, and probing questions for assessing and preventing satisficing. Developing and implementing satisficing propensity models and tools, and evaluating satisficing's impact on data quality in self-completion modes, are also key topics. Contributions may address these areas and related research topics.
Keywords: satisficing, response styles, self-completion, web surveys, mail surveys, paradata, data quality indicators, probing questions, experiments, motivation, cognitive effort, cognitive ability, respondent engagement
Papers
Response Styles in Face-to-face and Self-completion Mode - Experimental Evidence from ESS Round 10
Dr Diana Zavala-Rojas (RECSM - Pompeu Fabra University)
Mr David Moreno-Alameda (Complutense University of Madrid) - Presenting Author
Ms Hannah Schwarz (RECSM - Pompeu Fabra University)
Many surveys are currently dealing with the switch from face-to-face to self-completion modes, the European Social Survey being an important example. There is extensive literature assessing whether there are mode effects on measurement, for example comparing face-to-face and self-completion modes. Our research aims to go beyond this and assess whether measurement could be substantively altered because response styles are elicited to differing extents in face-to-face versus self-completion modes. We study six types of response styles: acquiescence, mid-point and extreme response styles, primacy and recency response styles, and straightlining. We use experimental data from the European Social Survey Round 10, available for two countries: Finland and the UK. Preliminary results show that acquiescence is lower in self-completion modes.
Exploring the Role of Need for Cognition and Question Difficulty in Survey Data Quality: Insights from Satisficing Theory
Miss Dörte Naber (University of Granada) - Presenting Author
Dr Patricia Hadler (GESIS - Leibniz Institute for the Social Sciences)
Professor Jose-Luis Padilla (University of Granada; Mind, Brain and Behavior Research Center (CIMCYC))
Satisficing Theory has been extensively applied in survey methodology to investigate how question and respondent characteristics influence satisficing behavior and data quality. Specifically, respondent motivation has emerged as a key predictor of data quality, with survey fatigue and topic interest commonly used as proxies. However, beyond these context-specific indicators, broader motivational traits, such as Need for Cognition (NFC), may provide deeper insights into the role of motivation in response behavior. While NFC is well-studied in other fields, its impact on survey responses remains underexplored. In this study, we address a critical gap in the literature by investigating the effect of NFC on data quality, with a specific focus on question difficulty – a dimension Satisficing Theory predicts will amplify the role of motivation but has not yet been explored in this context. To address this, we investigate how NFC impacts data quality across varying levels of question difficulty at different stages of the question-and-answer process. Specifically, we analyze variations at the response stage by comparing open-ended and closed-ended questions, and at the retrieval stage by contrasting concurrent and retrospective recall tasks, as implemented in Web Probing.
We analyze data from a 2020 experimental Web Probing study using the access panel of respondi/bilendi, involving a sample of 2,184 respondents from Germany. Participants answered three specific probes on quality-of-life aspects presented in one of four experimental conditions: open-ended or closed-ended format and concurrent or retrospective design, reflecting varying levels of question difficulty. Additionally, respondent characteristics such as education and age were included. Data quality was assessed through nonresponse and speeding, enabling comparability between open-ended and closed-ended questions. Our findings contribute to a more nuanced understanding of respondent motivation and its implications for survey design and data quality.
The Impact of Question Wording and Device Choice on Survey Response Patterns Indicating Satisficing
Dr Yfke Ongena (University of Groningen) - Presenting Author
Dr Marieke Haan (University of Groningen)
Agree-disagree (AD) items are assumed to evoke more satisficing behavior than construct-specific (CS) items (i.e., items with a different response scale for each item, depending on the response dimension being evaluated). In this study we assess the effects of AD versus CS items on response patterns that are assumed to be indicators of satisficing, such as straightlining. In earlier research, straightlining has been shown to be more prevalent in surveys that are completed on a PC than on smartphones. However, effects of device use for self-completion have not been researched extensively in previous studies comparing AD and CS items.
Our survey was conducted in November 2024, with 3,500 flyers distributed across a neighborhood in a large Dutch city with subsequent face-to-face recruitment by students. The flyers included a QR code and URL for survey access, and respondents were incentivized with a locally produced cake for their participation. The survey was filled out by 543 individuals (completing at least 50% of the questions), yielding a 13% response rate at household level. A smartphone was used by 85% of the participants, whereas 15% used a PC.
Respondents were randomly assigned to four blocks of either five AD items or five CS items. Straightlining occurred more frequently in AD items than in CS items, with 18% of respondents in the AD condition showing straightlining, as opposed to 5% of respondents in the CS condition. PC respondents were more likely than smartphone respondents to straightline in battery items phrased as AD items, but this effect was not found when items were phrased as CS items. This shows that using CS items might be more beneficial when the questionnaire is filled out on a computer than on a smartphone.
Does the Effectiveness of Interactive Feedback in Web Surveys Depend on Satisficing as a Personality Trait?
Ms Hannah Schwärzel (TU Darmstadt) - Presenting Author
Professor Marek Fuchs (TU Darmstadt)
Satisficing behavior is a threat to data quality in surveys. Web surveys offer the technical possibilities to detect satisficing behavior while respondents answer the questions and to prompt them to improve their answers. Previous studies have demonstrated that prompts in the questionnaire can be used to mitigate, for example, speeding, non-differentiation, and item nonresponse. However, some respondents ignore the prompts. In this study, we ask whether this unresponsiveness to feedback particularly applies to respondents with a general tendency to satisfice. Therefore, we compare the effectiveness of interactive feedback that aims at a reduction of don't know answers for respondents with a low, intermediate, or high tendency to satisfice. To determine this tendency, we use the satisficing scale of the Maximization Inventory (Turner et al. 2012), which provides a measure of satisficing as a personality trait.
Results of a randomized field-experimental web survey indicate that interactive feedback in reaction to initial don't know answers reduces the prevalence of final don't know answers. Interestingly, the mitigating effect of the interactive feedback shrinks for respondents whose personality exhibits high levels of satisficing.
In the discussion we propose that don’t know responses as an indication of satisficing response behavior are caused in part by fluctuating respondent motivation and also by varying levels of task difficulty depending on the questions posed to respondents. Using prompts has the potential to reduce the prevalence of don’t know answers due to low motivation or high question difficulty and is thus able to mitigate the negative effects of situational satisficing on data quality. By contrast, satisficing response behavior in surveys caused by a rather stable personality trait resists the influence of interactive feedback.
Identifying Plausible and Implausible Straightlining: The Impact of Question Characteristics on Survey Response Behavior
Ms Çağla E. Yildiz (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Professor Henning Silber (University of Michigan)
Dr Jessica Daikeler (GESIS - Leibniz Institute for the Social Sciences)
Ms Fabienne Kraemer (GESIS - Leibniz Institute for the Social Sciences)
Dr Evgenia Kapousouz (NORC at the University of Chicago)
Satisficing response behavior, including straightlining, can threaten the reliability and validity of survey data. Straightlining refers to selecting (nearly) identical response options across multiple items within a question, potentially compromising data quality. While straightlining is often interpreted as a sign of low-quality responses, there is a need to distinguish between plausible and implausible straightlining (see Schonlau and Toepoel, 2015; Reuning and Plutzer, 2020). We introduce a model that classifies straightlining into plausible and implausible patterns, offering a nuanced understanding of the conditions under which straightlining indicates optimized response behavior (plausible straightlining) vs. satisficing response behavior (implausible straightlining). For instance, straightlining is plausible when answering attitudinal questions with items worded in the same direction, but it becomes implausible when items are reverse-worded. This study examines how question characteristics (grid size, design, straightlining plausibility) influence straightlining behavior.
For our analyses, we use the German GESIS Panel, a mixed-mode (mail and online), probability-based panel study, leveraging a change in the panel's layout strategy in 2017 that shifted grid questions from matrix to single-item designs, offering a unique quasi-experimental setup. Our initial multilevel regression analyses, using data from 1,917 respondents and 19 grid questions from the waves before and after the design switch, show that matrix designs are associated with higher levels of straightlining compared to single-item designs. Our preliminary analyses, based on coding by five survey methodology experts, classify 26.3% of these questions as exhibiting plausible straightlining, with the remainder showing implausible patterns.
Further analyses investigate how these classifications correspond to conditions under which straightlining reflects optimized versus satisficing response behavior, offering deeper insights into the role of question characteristics. This research enhances questionnaire design and the accurate identification of low-quality responses, addressing gaps in linking question characteristics to straightlining plausibility.