
ESRA 2023 Glance Program


All time references are in CEST

Optimizing Grid Questions in Mixed-Mode Surveys Across Devices: Challenges and Solutions

Session Organisers: Dr Ellen Ebralidze (Leibniz Institute for Educational Trajectories (LIfBi))
Mrs Annette Trahms (Institute for Employment Research (IAB))
Time: Tuesday 18 July, 09:00 - 10:30
Room:

Mixed-mode surveys face the challenge of ensuring comparability across different modes and devices, including PCs, tablets, and smartphones. A significant difficulty is optimizing grid question layouts to suit all screen sizes while maintaining consistency across modes.
This session will explore constraints and solutions in designing grid formats. We invite practitioners, especially those involved in large-scale surveys, to share their approaches, focusing on mixed-mode designs, mobile-friendliness, potential question format effects, and lessons learned. Specifically, the following questions can be addressed:

• What modes are used for your survey, and how do they interact? Are you using a concurrent mixed-mode design with two or more modes offered at the same time, a sequential mixed-mode design where you start with one mode and let others follow, or a combination? What is the reference mode in your mix?
• In panel studies, how have mixed-mode designs evolved, and have there been changes in the reference mode?
• Is your grid format mobile-friendly, and does the layout differ across devices? How does it compare to the grid display in other modes you may use (e.g. CATI, PAPI)? What question format effects have you observed or anticipate?
• What compromises have you made, and what lessons have you learned?

We welcome case studies from researchers, survey methodologists and survey institute experts on this topic.

Keywords: Mixed-Mode Surveys, Grid Questions

Papers

Exploring Smartphone-Friendly Alternatives for Grids

Ms Irina Bauer (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author
Dr Tanja Kunz (GESIS – Leibniz Institute for the Social Sciences)

In web surveys, questions that belong to a battery, such as rating scales, are often displayed as a grid: with this format, the question stem and answer options do not have to be repeated for each item, which saves space. Given the increasing number of respondents who participate in surveys on their smartphones, where horizontal scrolling would be required to answer grid questions, web survey software often automatically decomposes grids into an item-by-item format on smaller screens. As a result, the presentation of the question differs depending on the device used.

For our study, we collected data in two surveys in which we experimentally varied the presentation of item batteries in order to compare grids to an item-by-item format as well as to two alternatives to traditional grids that can be displayed on smaller screens without losing the space-saving benefits of grids. To this end, we randomly assigned respondents of a non-probability access panel (n=4,011) and respondents of a probability-based general population survey in Germany (n=19,464) to one of four groups: (1) grid, (2) item-by-item format, (3) accordion, (4) carousel.

We investigate different aspects of satisficing response behavior (e.g., nondifferentiation, item nonresponse), the internal consistency of the answers, as well as respondents’ assessments of the usability of the question layout.


Star Rating Scales in Online Grid Questions: Better than Verbally Labelled Scales?

Ms Lea Königer (LMU Munich) - Presenting Author

Optimizing response scales for mobile devices is becoming increasingly important, as the proportion of respondents participating in online surveys via mobile phones continues to grow. This study evaluates the use of 5-star pictorial rating scales compared to traditional verbal Likert scales, focusing on their usability and response quality in online surveys across modes and devices.

We present data from an experiment conducted in an online survey of the German Longitudinal Environmental Study (GLEN), a large-scale probability-based mixed-mode panel started in 2024. In a split-ballot experiment, participants are randomly assigned either a 5-star scale or a conventional five-point verbalized Likert scale to rate residential quality. To assess the measurement quality of the two response formats, we employ two approaches: First, we compare responses over time as the question was included in two waves of GLEN (late 2024 and early 2025). Second, we compare respondents’ subjective ratings with linked objective geospatial neighborhood data.

We benefit from the mixed-mode approach in GLEN: Following a mixed-mode recruitment survey (paper and web), the second survey is conducted exclusively online. This design allows us to analyze differences between response formats both within and across devices and modes. Our findings have important implications for mixed-mode survey design, which often faces the challenge of optimizing instruments for all available modes and devices.


Optimizing Mobile Survey Design: An Experimental Test in a U.S. Government Survey

Dr Scott Leary (Internal Revenue Service) - Presenting Author
Dr Nick Yeh (Internal Revenue Service)
Dr Gwen Gardiner (Internal Revenue Service)
Mr Kris Pate (Internal Revenue Service)
Mrs Brenda Schafer (Internal Revenue Service)

The United States Internal Revenue Service (IRS) administers the Individual Taxpayer Burden (ITB) Survey annually to a sample of approximately 40,000 individuals. This nationally representative web-only survey collects data on the time and money taxpayers spend fulfilling their federal tax reporting responsibilities. Individuals can complete the survey on a computer or a mobile device. For the 2022 survey, approximately 30% of respondents used a mobile device.

Initial analysis found significant data quality discrepancies between mobile device and computer users. Specifically, item nonresponse was significantly higher for mobile users, particularly on critical questions about the time and money they spent on tax-related activities. Furthermore, we found a correlation between the size of the mobile device and the likelihood of skipping these questions. These findings suggest that the data quality issues for “mobile-friendly” survey questions may in part be due to the layout of the survey questions on mobile screens.

We therefore created a customized “mobile-optimized” version of the critical time and money questions. Among other improvements, we eliminated all left-right scrolling and used default numeric-only keypads. We experimentally tested this design during two waves of our Tax Year 2022 survey, administered to individuals who filed after December 22 of the year the return was due, by randomly assigning all respondents to either the optimized or the standard version. We hypothesized that this “mobile-optimized” design would significantly decrease both incomplete surveys and item nonresponse on the time and money questions for mobile device users. Most studies examining data quality by device type are based on observational (non-experimental) designs. This study will advance our understanding by experimentally testing the optimized format on most mobile devices.