
Short Courses

Please note that only short courses with at least 10 participants will take place.

Monday 14th July morning (exact time TBD)

Björn Rohr & Barbara Felderer: Representation Bias in Probability and Non-Probability Surveys – Theoretical Considerations and Practical Applications Using the sampcompR R-Package

Sunghee Lee: Respondent Driven Sampling: Overview and Practical Tools

Thom Volker: Increasing the accessibility of private survey data: Synthetic data generation and evaluation in R

Felicia Loecherbach and Niek de Schipper: Collecting digital trace data through data donation

Liam Wright, Richard Silverwood, Georgia Tomova: Methods for Handling Mode Effects in Analyses of Mixed-Mode Survey Data

Laura Fumagalli and Thomas Martin: Survey experiments: principles and applications

Hanne Oberman: Open Science

Caroline Roberts: Integrating Large Language Models in the Questionnaire Design Process

Monday 14th July afternoon (exact time TBD)

Mariel McKone Leonard: Developing Distress Protocols for Survey Research Respondents

Lydia Repke & Christof Wolf: Collecting Data on Networks and Social Relationships with Social Surveys

Lisa de Vries & Zaza Zindel: Queering Survey Research: Integrating Queer Perspectives in Questionnaire Design, Sampling and Analyses

Susanne de Vogel, Nele Fuchs, Heike Thöricht: Building FAIR Foundations: A LEGO®-Inspired Journey to Better Survey Data Management

Joris Frese: Quasi-Experiments with Surveys: The Unexpected Event during Survey Design

Joshua Claassen & Jan Karem Höhne: Web tracking: Augmenting web surveys with data on website visits, search terms, and app use

Matthias Roth & Daniil Lebedev: Survey response quality assessment: Conceptual approaches and practical indicators

Larissa Pople: Understanding Young Voices: Engagement, Ethics and Measurement in Surveys

Course Descriptions

Representation Bias in Probability and Non-Probability Surveys – Theoretical Considerations and Practical Applications Using the sampcompR R-Package

Instructors:
Björn Rohr and Barbara Felderer, GESIS

Time:
Morning

Room:
TBD

Course Description:
This short course discusses the emergence and analysis of representation bias in surveys. The first part of the course covers the errors that can occur in every step of a (non-)probability survey and how these errors may lead to representation bias in analyses of survey data. A special focus will be on non-probability surveys and the question of when they are fit for purpose. The second part of the course covers bias analysis, introducing commonly used measures and their application in R. In the practical part of the course, bias analysis will be conducted on univariate, bivariate, and multivariate levels using the sampcompR R-package, which was specifically written to study representation bias. A synthetic dataset will be provided for the exercises, but participants are welcome to bring their own datasets to conduct bias analysis. The course is aimed at the beginner to intermediate level. Experience with R is helpful but not required.
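For readers unfamiliar with this type of analysis, the minimal base-R sketch below shows the idea behind a univariate bias comparison of the kind that sampcompR wraps into convenience functions (the package also covers bivariate and multivariate comparisons); the data, variable names and benchmark values here are hypothetical.

```r
# Hypothetical illustration of a univariate representation-bias check: compare
# survey estimates against known benchmark values (e.g., from a census).
set.seed(1)
survey <- data.frame(
  female   = rbinom(500, 1, 0.45),
  employed = rbinom(500, 1, 0.70)
)
benchmark <- c(female = 0.51, employed = 0.63)   # hypothetical population values

estimate     <- colMeans(survey)
bias         <- estimate - benchmark
rel_bias_pct <- 100 * bias / benchmark

round(cbind(estimate, benchmark, bias, rel_bias_pct), 2)
```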

Bio:
Björn Rohr is a member of the survey statistics team at GESIS – Leibniz Institute for the Social Sciences. His research focuses on survey methodology, more specifically the comparison of surveys with regard to bias. He has a special focus on comparing non-probability and probability surveys.

Dr Barbara Felderer is the head of the survey statistics team at GESIS – Leibniz Institute for the Social Sciences. The first focus of her research is survey methodology, especially nonresponse and nonresponse bias. The second is (survey) statistics, currently in particular causal machine learning methods and their application to improving survey quality.

Respondent Driven Sampling: Overview and Practical Tools 

Instructor:
Sunghee Lee, University of Michigan

Time:
Morning

Room:
TBD

Course Description:
There is no clear, practical, cost-effective solution in probability sampling when data collection targets rare, hidden and/or elusive populations. Even with unlimited resources, minoritized groups’ low participation presents another challenge. Respondent driven sampling (RDS) has been proposed and used to fill this gap. RDS is feasible because of human nature—we are connected to other people and form stronger ties with others who share similar characteristics. The peril in RDS implementation, however, is that recruitment success depends on participants’ willingness to recruit others and their subsequent follow-through. These are not under researchers’ control and are difficult to predict before the fieldwork begins. This course is designed for survey researchers and social scientists with varying levels of familiarity with RDS, including those without direct experience implementing RDS or without theoretical knowledge about RDS. Participants will learn the theoretical and practical premises of RDS, design options for RDS studies through existing applications of RDS, and how to optimize RDS data collection. The course will also discuss topics related to RDS data confidentiality and data analysis, including its statistical properties and types of analyses unique to RDS data. Throughout the course, emphasis will be placed on the practical aspects of RDS, from implementation to analysis. In addition to the course slides, participants will gain hands-on experience through group and individual activities related to RDS sample management, RDS data collection monitoring and RDS data analysis, including data visualization, using easy-to-use tools developed by the instructor and her team together with toy data sets.
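As a small illustration of the statistical side of RDS (not necessarily the tools used in the course), the sketch below computes the RDS-II (Volz–Heckathorn) prevalence estimate, which weights each respondent inversely to their reported network size; the data are simulated.

```r
# Toy illustration of the RDS-II (Volz-Heckathorn) estimator: respondents are
# weighted by the inverse of their self-reported network size (degree).
set.seed(42)
n      <- 200
degree <- sample(2:30, n, replace = TRUE)   # self-reported personal network size
y      <- rbinom(n, 1, 0.3)                 # binary trait of interest

w       <- 1 / degree                       # inverse-degree weights
p_rds2  <- sum(w * y) / sum(w)              # RDS-II prevalence estimate
p_naive <- mean(y)                          # unweighted sample proportion

c(naive = p_naive, rds2 = p_rds2)
# Dedicated packages (e.g., the RDS package on CRAN) implement this and other
# estimators together with diagnostics and variance estimation.
```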

Bio:
Sunghee Lee is a Research Associate Professor and the Director of the Program in Survey and Data Science at the Institute for Social Research, University of Michigan. She is a survey methodologist whose research interest revolves around improving data quality through inclusivity, which has profound implications for equity in social programs and policy decisions. Specifically, she has examined two angles of data quality: representation and measurement. Her research focuses on identifying and addressing error sources that affect inclusivity, including sampling, coverage, nonresponse, translation, question order and response style, often at the intersection with cultural norms. She leads the sampling aspect of the Health and Retirement Study and is the principal investigator of multiple federally funded studies that apply respondent driven sampling for recruiting hard-to-reach populations and population subgroups.

Increasing the accessibility of private survey data: Synthetic data generation and evaluation in R

Instructor:
Thom Volker, Department of Methodology and Statistics, Utrecht University

Time:
Morning

Room:
TBD

Course Description:
Due to data collectors’ impressive efforts, more data than ever is available to facilitate scientific progress on key societal challenges. However, large fractions of these data risk never seeing the light of day due to privacy regulations. Synthetic data can be an excellent solution to this problem: the real data is kept safe, but a “fake” version is made available. If created properly, the synthetic set shares many characteristics with the observed data: it contains the same variables, and distributional characteristics, including relationships between variables, can be preserved. As such, the synthetic data can be used for many purposes. Typically, it is used as an intermediate step while researchers are awaiting access to the full data, allowing them to gain insight into the data at hand, explore it and build models. Additionally, synthetic data can be used for replication purposes or even direct inferences. This short course covers three crucial aspects of synthetic data generation and evaluation. The first part outlines the concept of synthetic data and introduces state-of-the-art approaches to synthetic data generation. In the second part, we detail how to obtain valid inferences from synthetic data. The third part covers the evaluation of synthetic data in terms of the privacy-utility trade-off. We discuss methods to quantify remaining disclosure risks and the analytical validity of the synthetic data. All parts contain short practical sessions that allow participants to get hands-on experience with the material. Some experience with R and a basic understanding of linear models are required. After this course, participants will:
1. understand the concept of synthetic data;
2. understand the advantages and disadvantages of synthetic data;
3. be able to generate synthetic data from own data in R;
4. be able to evaluate the quality of the synthetic data in terms of utility and disclosure risks.
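As a minimal, purely illustrative sketch of the idea behind model-based synthesis (the course itself covers dedicated, state-of-the-art tools), the example below synthesises two hypothetical variables sequentially: age is resampled from its observed distribution, and income is then drawn from a regression model fitted on the observed data.

```r
# Toy sketch of sequential synthetic data generation: resample age, then draw
# income from a model of income given age, adding residual noise.
set.seed(123)
age      <- round(runif(300, 18, 80))
observed <- data.frame(age = age,
                       income = 1500 + 25 * age + rnorm(300, 0, 400))

fit <- lm(income ~ age, data = observed)

synthetic        <- data.frame(age = sample(observed$age, 300, replace = TRUE))
synthetic$income <- predict(fit, newdata = synthetic) + rnorm(300, 0, sigma(fit))

# crude utility check: compare marginal distributions of observed vs synthetic income
summary(observed$income)
summary(synthetic$income)
```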

Bio:
Thom Volker is a PhD candidate at the Department of Methodology and Statistics at Utrecht University and the Methodology Department of Statistics Netherlands. His research focuses on data privacy and integrates statistics and computer science techniques to enhance the generation and evaluation of realistic, safe and sharable synthetic data. He also works on Bayesian methods for research synthesis and multiple imputation of missing data. 

Collecting digital trace data through data donation

Instructors:
Felicia Loecherbach and Niek de Schipper, University of Amsterdam, Amsterdam School of Communication Research

Time:
Morning

Room:
TBD

Course Description:
This intermediate-level workshop introduces researchers to a workflow for digital trace data donation. Participants will learn how to request and process digital trace data for academic research, adhering to GDPR regulations. The process involves downloading a Data Download Package (DDP) from platforms like Google, Meta, or X. Data is processed locally on participants’ devices, ensuring only relevant parts are shared after obtaining informed consent. By the end of the workshop, attendees will understand key principles of data donation and GDPR-compliant data handling. They will receive information on how to design a data donation study, learn how to develop Python scripts to extract only relevant data from DDPs (data minimization) and will be equipped to deploy a data donation study.  The workshop includes hands-on activities. Participants will design a data donation study using the open-source software Port, focusing on study setup and recruitment strategies. They will practice writing and customizing Python scripts to selectively extract specific digital traces from DDPs. Additionally, attendees will manage the practical aspects of deploying a data donation study, including hosting the study and configuring data storage once participants donate their data. A case study using WhatsApp data, with templates and materials, will be used for hands-on practice.  The workshop comprises four parts:
1. Introduction to data donation – Overview of the concept and key methodology.
2. Study design preparation – Configuring a study and combining digital trace data with other sources.
3. Determining which digital traces to collect – Writing Python scripts with ethical considerations.
4. Deploying a data donation study – Managing participant engagement and data storage.  
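The workshop’s extraction scripts are written in Python and run within the Port framework; purely to illustrate what data minimization means in practice, the hypothetical sketch below (in R, with an assumed chat-export format) keeps only the date of each message and discards senders and message content.

```r
# Hypothetical illustration of data minimisation: from a chat export, retain only
# the number of messages per day and discard senders and message content.
chat_lines <- c(
  "12/05/2024, 14:33 - Alice: See you tomorrow?",
  "12/05/2024, 14:35 - Bob: Yes, at 10.",
  "13/05/2024, 09:02 - Alice: Running late!"
)

dates   <- sub("^(\\d{2}/\\d{2}/\\d{4}),.*$", "\\1", chat_lines)  # keep the date only
donated <- as.data.frame(table(date = dates))                     # messages per day
donated
```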

Practical Information:

  • Participants should bring laptops with internet access and installation rights. Instructions for Port will be sent in advance.
  • Audience: Researchers with experience in data collection with human participants and in programming with Python.
  • Duration: 3 hours. Capacity: 30 participants.

Bio:
Felicia Loecherbach works as an assistant professor in political communication and journalism at the Amsterdam School of Communication Research. Prior to this, she was a postdoctoral fellow at the Center for Social Media and Politics at NYU and a PhD student at the Vrije Universiteit Amsterdam. Her research interests include (the diversity of) online news consumption, the use of computational methods in the social sciences, and the impact that changes in online environments have on the understanding and usage of news. Specifically, she uses computational approaches to study when and where users come across different types of news – collecting digital trace data via data donations and analyzing different dimensions of diversity of the content and how it affects users’ perceptions and attitudes. Apart from this, she has been involved in studying the challenges of different modes of news access, for example via news recommender systems, private messaging, and smart assistants.

Niek de Schipper works as a Research Engineer at the University of Amsterdam and Utrecht University, where he contributes to projects on digital data donation. De Schipper is an integral part of the Digital Data Donation community, helping many researchers conduct their data donation studies. He obtained his PhD in 2021 at Tilburg University in the Methods and Statistics department.

Methods for Handling Mode Effects in Analyses of Mixed-Mode Survey Data

Instructors:
Liam Wright, Richard Silverwood, Georgia Tomova, CLS, UCL

Time:
Morning

Room:
TBD

Course Description:
Surveys are increasingly adopting mixed-mode methodologies. Due to differences in how items are presented, responses can differ systematically between modes, a phenomenon referred to as a mode effect. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data. Several methods for handling mode effects have been developed but these have mainly appeared in the technical literature and vary in their ease of implementation. Further, the assumptions these methods make (typically, no unmodelled selection into mode) can be implausible. To improve adoption of methods for handling mode effects, in this interactive short course we will provide background on the problem of mode effects by placing it within a simple and intuitive Causal Directed Acyclic Graphs (DAGs) framework. Using this framework, we will then describe the main methods for handling mode effects (e.g., regression adjustment, instrumental variables, and multiple imputation) and introduce a promising but underutilised approach, sensitivity analysis, which uses simulation and does not assume no unmodelled selection into mode. Finally, we will show users how to implement sensitivity analysis with a hands-on R tutorial using real-world mixed-mode data from the Centre for Longitudinal Studies’ (CLS) birth cohort studies. By the end of the session attendees will:
• Understand why mode effects can cause bias in analyses of mixed-mode data.
• Be able to draw DAGs that represent assumptions about mode effects.
• Use DAGs to design an analysis of mixed-mode data and to identify the biases that may appear in such an analysis.
• Understand methods for handling mode effects, including sensitivity analysis.
• Be able to implement sensitivity analysis within the software package R.
Activities will include:
1. Exercises drawing and interpreting DAGs that illustrate the issue of mode effects.
2. An R practical on implementing methods for handling mode effects using CLS cohort data.
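To give a flavour of the simulation-based sensitivity analysis described above, here is a simplified sketch with simulated data (not the CLS implementation or the cohort data used in the practical): posit a range of hypothetical mode-effect sizes, remove each from the responses of web-mode cases, and track how the substantive estimate changes.

```r
# Simplified sketch of a sensitivity analysis for mode effects: subtract a range of
# hypothesised mode-effect sizes (delta) from web-mode responses and re-estimate
# the substantive coefficient each time. Simulated data for illustration only.
set.seed(7)
n    <- 1000
x    <- rnorm(n)                                # substantive predictor
mode <- rbinom(n, 1, plogis(0.8 * x))           # selection into web mode depends on x
y    <- 2 + 0.5 * x + 0.3 * mode + rnorm(n)     # outcome contaminated by a mode effect

deltas  <- seq(0, 0.6, by = 0.1)
effects <- sapply(deltas, function(d) {
  y_adj <- y - d * mode                         # remove the assumed mode effect
  coef(lm(y_adj ~ x))["x"]
})
data.frame(delta = deltas, estimated_effect_of_x = round(effects, 3))
```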

Bio:
Liam Wright is Lecturer in Statistics and Survey Methodology at the Centre for Longitudinal Studies (CLS), University College London. He is Principal Investigator on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects. Liam has experience creating tutorials on methods for handling mode effects, as well as teaching programming skills. Most recently he has co-authored user-friendly guidance (with Richard Silverwood) on accounting for mixed-mode data collection for users of CLS’ cohort data.

Richard Silverwood is Associate Professor in Statistics at CLS. In addition to researching and producing guidance on mode effects, Richard is Chief Statistician for CLS’ cohort studies. He has wide-ranging expertise across many aspects of survey methodology, most notably missing data. Richard leads training at CLS and oversees the production of methods guidance for CLS’s data users. He is also co-investigator on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects.

Georgia Tomova is Research Fellow in Quantitative Social Science at the Centre for Longitudinal Studies, University College London, where she works on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects. Georgia’s previous experience includes both methodological and applied research in the nutrition domain, with a particular focus on the theory and application of causal inference methods. She also has extensive teaching experience, including lecturing on the renowned Introduction to Causal Inference Course in Leeds.

Survey experiments: principles and applications

Instructors:
Laura Fumagalli and Thomas Martin, University of Essex and University of Warwick

Time:
Morning

Room:
TBD

Course Description:
Survey experiments are becoming increasingly popular in many disciplines, such as survey methodology, economics, sociology, and politics. This course aims to equip participants with the skills to independently design and conduct high-quality survey experiments in their fields of research or industry.
Learning Objectives:
By the end of the course, participants will:
1. learn the key principles of survey experiments, including how to use them to carry out causal inference.
2. learn how to elicit individuals’ subjective beliefs and analyse the role they play in decision making.
3. engage with key references from recent literature with a particular focus on information provision experiments.
4. learn how to practically implement a survey experiment from design and survey creation to data analysis and write-up of results.
Activities:
• Lecture: provide theoretical foundations and applications through existing examples of survey experiments across various fields in social sciences.
• Workshop: engage students in designing, implementing, and analysing their own survey experiments. Students will be introduced to survey platforms (e.g., Qualtrics) and statistical software (e.g., Stata) to analyse experimental data.
Level:
This is an intermediate-level course, appropriate for researchers with experience of introductory research methods. No prior experience with survey experiments is required, but participants should be familiar with statistical concepts such as regression analysis.
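The workshop itself uses Stata; as a language-agnostic illustration of the core estimation step in an information-provision experiment, the hypothetical R sketch below randomises an information treatment and estimates its average effect on posterior beliefs, adjusting for prior beliefs.

```r
# Hypothetical sketch of analysing an information-provision experiment: estimate the
# average effect of randomly assigned information on posterior beliefs (0-100 scale),
# controlling for prior beliefs to gain precision. Simulated data.
set.seed(99)
n     <- 800
prior <- pmin(pmax(rnorm(n, 50, 15), 0), 100)    # elicited prior belief
treat <- rbinom(n, 1, 0.5)                       # random assignment to information
post  <- 15 + 0.6 * prior + 20 * treat + rnorm(n, 0, 10)

summary(lm(post ~ treat + prior))                # coefficient on treat ~ the average effect
```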

Bio:
Laura Fumagalli is a Research Fellow at ISER, where she has been part of the Understanding Society (the largest panel survey in the UK) team for over 10 years. She has taught courses in public economics, statistics, survey methods, and panel data using Understanding Society. She has publications in multi-disciplinary journals, including the Journal of the Royal Statistical Society A, The Economic Journal, the Journal of Economic Behavior and Organization, and Labour Economics.

Thomas Martin, Department of Economics, University of Warwick. Thomas is an Associate Professor (Teaching Track) and has worked at Warwick for over 10 years teaching Econometrics and Development Economics both at the Undergraduate and Postgraduate level. He has publications in multi-disciplinary journals such as World Development.  

Open Science

Instructor:
Hanne Oberman, Methodology and Statistics, Utrecht University

Time:
Morning

Room:
TBD

Course Description:
This short course introduces survey researchers to Open Science principles and equips them with practical tools to enhance transparency and reproducibility in their work. Designed as an interactive session, the course showcases how Open Science practices can improve the credibility of survey-based research by promoting openness at every stage of the ‘open empirical cycle’ (e.g., study design, data collection, analysis, and reporting).
Learning Objectives:
• Understanding Open Science: Participants will gain understanding of the core principles of Open Science and its importance in addressing the “reproducibility crisis” in research.
• Practical Tools for Transparency and Reproducibility: Participants will learn how to integrate Open Science practices into their workflow, such as pre-registration, sharing datasets and analysis code, and open publishing.
• Hands-On Experience: Participants will work with tools for open data and reproducible analysis workflows (e.g. OSF and Quarto, respectively). This hands-on approach helps researchers implement Open Science methods in their own research.
• Addressing Ethical and Practical Challenges: Finally, we will discuss the ethical considerations of Open Science in survey research, including maintaining participant privacy and navigating data sharing agreements, while ensuring compliance with legal and ethical guidelines.
Activities:
Through the lecture, hands-on activities, and interactive discussions, participants will be equipped to incorporate Open Science practices in their own survey research, making their studies more credible, transparent, and open to collaboration.

Bio:
Hanne Oberman is a junior assistant professor in Methodology & Statistics at Utrecht University. Alongside teaching and research roles, Hanne is appointed as chair of the Faculty Open Science Team and ambassador of the Open Science Community Utrecht. They regularly organize events and workshops to promote Open Science practices among students and staff.

Integrating Large Language Models in the Questionnaire Design Process

Instructor:
Caroline Roberts, Institute of Social Sciences, University of Lausanne, Switzerland

Time:
Morning

Room:
TBD

Course Description:
Effective questionnaire design remains one of the greatest challenges in survey research, requiring a mix of scientific expertise and artistic skill, as well as evaluation and testing. Questionnaires – including how they are administered and how respondents interpret and respond to them – constitute a major source of survey error, but one that can be addressed at relatively low cost. An extensive literature on survey methodology provides guidance on the various pitfalls of poor question formulation, on optimal design choices to improve measurement quality, and on methods available for ensuring research objectives are met while the burden on respondents is minimised. Added to this, recent advances in the field of generative artificial intelligence (GenAI) – notably, increasingly powerful Large Language Models (LLMs) and chatbots – now provide a new, and ever-expanding, range of tools that can be integrated into different phases of questionnaire development. These not only offer researchers opportunities to save time, but also the potential to optimise the formulation of survey questions. However, as research to validate the effectiveness of such tools remains in its infancy, their integration in the questionnaire design process should be handled in a critical way, based on background knowledge of both the scientific principles and the craft of effective social measurement. This course, aimed primarily at beginners, aims: 1) to present an overview of principal questionnaire design challenges, best-practice guidelines for writing effective questions, and frameworks for evaluating potential sources of error; and 2) to introduce available AI tools and ways they can be integrated at different stages of questionnaire development. Participants will work on practical examples of different types of survey question to evaluate question problems and identify ways to improve them.

Learning objectives:
At the end of the course, students should be able to:
1. Describe the major challenges of writing effective survey questions, based on theoretical frameworks for identifying potential sources of error;
2. Complete steps involved in writing and evaluating survey questions drawing on best-practice guidelines aimed at minimising measurement error.
3. Integrate AI tools at different stages of questionnaire development and evaluate outputs critically.

Bio:
Caroline Roberts is a senior lecturer in survey methodology and quantitative research methods in the Institute of Social Sciences at the University of Lausanne (UNIL, Switzerland), and an affiliated survey methodologist at FORS, the Swiss Centre of Expertise in the Social Sciences. At UNIL, she teaches courses on survey research methods, questionnaire design, public opinion formation and quantitative methods for the measurement of social attitudes. She has taught a number of summer school and short courses on survey methods, questionnaire design, survey nonresponse, mixed mode surveys, and data literacy. At FORS, she conducts methodological research in collaboration with the teams responsible for carrying out large-scale academic surveys in Switzerland. Her research interests relate to the measurement and reduction of survey error. Her most recent research focuses on attitudinal and behavioural barriers to participation in digital data collection in surveys, and ways to leverage generative AI in questionnaire design, evaluation and testing. Caroline is currently Chair of the Methods Advisory Board of the European Social Survey and was President of the European Survey Research Association from 2019-2021. 

Developing Distress Protocols for Survey Research Respondents

Instructor:
Mariel McKone Leonard, German Institute for Economic Research (DIW Berlin)

Time:
Afternoon

Room:
TBD

Course Description:
When conducting sensitive survey research or working with vulnerable populations, it may be necessary to have a safety protocol in place should research participants experience distress due to the topics discussed. This short course will provide participants with an introduction to developing distress protocols for survey respondents, including:
• When and why survey research might need a distress protocol.
• Basic elements of a distress protocol.
• Overview of best practices for developing distress protocols.
• National helplines and resources to include as supporting materials.
• Distress protocols and supporting materials to include in an ethics/IRB review package.
Throughout the course, participants will have opportunities to share their own experiences and lessons learned, as well as work in small groups to outline elements required for their own distress protocols. By the end of the course, participants will be able to: (1) determine if their project should include a distress protocol; (2) outline components of a distress protocol appropriate for their project based on best practices; and (3) prepare distress protocol materials for inclusion in an ethics/IRB review package using provided guidelines and materials. Anyone who is considering conducting a survey on sensitive topics or with vulnerable populations should attend this course. Prior experience with sensitive topics, vulnerable populations, or distress protocols is not necessary but is welcome.

Bio:
Dr. Mariel McKone Leonard is a researcher with more than ten years’ experience in mixed methods survey research. Her main areas of research are improving the representation of minority groups in studies, including methods of probability and non-probability sampling of special populations, and conducting research within sensitive contexts. In both areas, her work focuses on innovating and improving methods in an ethical and participant-focused manner.

Collecting Data on Networks and Social Relationships with Social Surveys

Instructors:
Lydia Repke and Christof Wolf, GESIS

Time:
Afternoon

Room:
TBD

Course Description:
This workshop introduces participants to essential concepts and methodologies for collecting data on social networks and relationships using social surveys. The course is structured into three parts. 

1. Introduction.

Participants will first explore basic conceptual aspects of social networks, including egocentric versus sociocentric networks, network composition, and structure. In addition, common theoretical concepts, such as transitivity and the strength of weak ties, will be covered. This part also highlights potential research areas and questions where social networks play a central role.

2. Data collection for egocentric networks in surveys.

This part focuses on best practices for collecting egocentric network data and deriving analytical measures. First, we will discuss different name-generation approaches, their advantages, and limitations. Next, participants will learn about name and edge interpreter items and get practical advice on their selection and design. Then, we will demonstrate how to derive compositional measures, structural measures, and a combination of both and provide examples of how this data can be used in empirical social network research.

3. Further measures for social networks and relationships.

Collecting egocentric network data requires a comparatively large number of items and a lot of questionnaire time, making it impractical for some studies to incorporate these instruments. Moreover, some research may focus on other aspects of social embeddedness, such as social support. Therefore, we will highlight some established scales for measuring these concepts, offering alternatives when egocentric network data collection is neither feasible nor necessary.

By the end of the workshop, participants will understand the theoretical, conceptual, and empirical aspects of collecting data on social networks and relationships. They will be equipped to critically assess the merits and limits of different methodological approaches and apply them in their research projects.
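As a small illustration of the derived measures discussed in part 2, the hypothetical sketch below computes one compositional measure (the share of kin among an ego’s alters, from a name interpreter item) and one structural measure (network density, from alter–alter ties collected with edge interpreter items).

```r
# Hypothetical single-ego example: derive a compositional measure (share of kin)
# and a structural measure (density) from name and edge interpreter data.
alters <- data.frame(
  alter = c("A", "B", "C", "D"),
  kin   = c(TRUE, FALSE, TRUE, FALSE)     # name interpreter: is the alter a relative?
)

# symmetric alter-alter tie matrix (1 = the two alters know each other)
ties <- matrix(c(0, 1, 0, 0,
                 1, 0, 1, 0,
                 0, 1, 0, 0,
                 0, 0, 0, 0), nrow = 4, byrow = TRUE)

share_kin <- mean(alters$kin)                                      # composition: 0.5
density   <- sum(ties[upper.tri(ties)]) / choose(nrow(alters), 2)  # structure: 2 of 6 possible ties

c(share_kin = share_kin, density = round(density, 2))
```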

Bio:
Dr. Lydia Repke is a social scientist leading the Survey Quality Predictor (SQP) project and heading the Scale Development and Documentation team at GESIS. She is a member of the Young Academy of the Academy of Sciences and Literature | Mainz, Germany. Her research interests include data quality of survey questions, egocentric networks, and multiculturalism.

Dr. Christof Wolf studied sociology at Hamburg University and obtained his doctorate in sociology from the University of Cologne. He is currently President of GESIS and Professor of Sociology at Mannheim University. His research interests include social networks and health.

Queering Survey Research: Integrating Queer Perspectives in Questionnaire Design, Sampling and Analyses

Instructors:
Lisa de Vries and Zaza Zindel, German Institute for Adult Education and German Centre for Integration and Migration Research & Bielefeld University

Time:
Afternoon

Room:
TBD

Course Description:
Political and social advancements have enhanced the acceptance and visibility of sexual and gender minorities in many Western countries. However, the ongoing challenge of accurately addressing their unique experience in survey research remains. Researchers and survey providers often struggle to incorporate queer perspectives. This course offers a comprehensive introduction to the integration of sexual and gender diversity within survey research. It focuses on four key areas:
1) Measurement of Sexual Orientation and Gender Identity: Exploring nuanced approaches for respectful and inclusive data collection on sexual orientation and gender identity.
2) Integrating Queer Perspectives: Learning effective strategies to craft survey questions that resonate with and capture the experiences of sexual and gender minorities.
3) Sampling Methods: Gaining insights into strategies and techniques for effectively reaching and engaging sexual and gender minority populations in survey research.
4) Data Preparation and Analysis: Equipping participants with the skills to sensitively manage and analyze data collected from diverse populations to draw valuable insights.
This dynamic workshop combines informative presentations, group discussions, and hands-on exercises, ensuring participants leave with the confidence and skills to successfully integrate sexual and gender diversity into their research projects.
Learning Objectives:
1) Develop Inclusive Research Practices: Equip participants with the knowledge and skills needed to design surveys and questionnaires that inclusively represent sexual and gender minority perspectives, ensuring that research outcomes reflect their experiences.
2) Enhance Outreach Strategies: Enable participants to employ effective strategies for locating and engaging sexual and gender minority populations in survey research, facilitating meaningful data collection and ensuring that no voices go unheard.
3) Empower Competent Data Analysis: Provide participants with the tools to prepare, analyze, and draw valuable insights from data collected from sexual and gender minority communities, fostering a deeper understanding of their unique experiences.

Bio:
Dr. Lisa de Vries is a Research Associate at the German Institute for Adult Education. She completed her PhD at Bielefeld University on the effect of discrimination on the career opportunities and job preferences of sexual and gender minorities. Her further research interests include discrimination and diversity, LGBTQI* people, and the measurement of sexual orientation and gender/sex.

Zaza Zindel is a Research Associate at the German Centre for Integration and Migration Research, as well as a PhD candidate at Bielefeld University. Her research focuses on survey research innovations, such as social media recruitment and survey experiments, aimed at studying rare, hard-to-reach, understudied, and marginalized populations.

Building FAIR Foundations: A LEGO®-Inspired Journey to Better Survey Data Management

Instructors:
Susanne de Vogel, Nele Fuchs, Heike Thöricht, Data Science Centre, University of Bremen

Time:
Afternoon

Room:
TBD

Course Description:

In survey research, adhering to the FAIR principles – Findable, Accessible, Interoperable, and Reusable – transforms data from being merely collected to becoming a valuable asset for future research endeavors. By enhancing the discoverability and usability of data through proper documentation and metadata, survey researchers foster transparency, collaboration, and reproducibility. Applying these principles in survey data management helps ensure that datasets are easily accessible for reanalysis, comparison, and integration across platforms, ultimately maximizing the impact and utility of research outcomes.

This interactive workshop introduces researchers to the principles of making data FAIR. Using LEGO® bricks as an innovative tool, participants will engage in activities that illustrate the importance of proper documentation, metadata, and data management. The course covers practical tools and best practices to integrate FAIR principles into survey research projects.

Participants will:
• Understand FAIR principles and their application in survey data.
• Learn practical tools for effective data management.
• Collaborate through engaging LEGO® activities.

After a brief introduction, participants will engage in a hands-on LEGO® challenge designed to illustrate the importance of the FAIR principles. This fun, interactive activity will serve as an entry point for understanding how these principles can be applied in research data management. The second part of the workshop introduces practical tools, resources, and best practices to integrate FAIR principles into your projects. You will learn effective data documentation, use of metadata standards, and how to ensure data interoperability and reusability. The session concludes with an open discussion and Q&A to address specific challenges.

This course is suitable for survey researchers of all disciplines and career stages. No prior knowledge is required.

Bio:
Susanne de Vogel studied Social Sciences at the University of Cologne and Utrecht University, and earned her Ph.D. in Sociology from the Martin Luther University Halle-Wittenberg. From 2013 to 2024, she worked as a research associate at the German Centre for Higher Education Research and Science Studies (DZHW) in Hannover. There, she was involved in building and conducting two nationwide panel studies, the DZHW PhD Panel and the National Academics Panel Study (Nacaps). Since 2024, she has worked as a Data Scientist at the Data Science Center (DSC), University of Bremen, where she supports researchers in the social sciences in expanding their skills in data collection, preparation, analysis, and management, with a particular focus on survey data.

Nele Fuchs completed her Bachelor’s degree in Philosophy and Material Culture: Textiles at Carl von Ossietzky University Oldenburg in 2019 and a Master’s degree in Transcultural Studies at the University of Bremen in 2023. From 2020 to 2023, she led the editorial team for the series ‘Studien zur Materiellen Kultur’, the in-house (Open Access) publishing venture of the Institute for Material Culture. Between 2023 and 2024, she worked as a freelance editor for researchers at all career stages, and since 2015 she has been involved in extracurricular educational work. She currently works as a Data Scientist at the Data Science Center, University of Bremen, where she connects and advises researchers in the humanities, providing training to enhance data literacy in the (digital) humanities.

Heike Thöricht earned her sociology diploma from the University of Hamburg in 2008. She then worked as a consultant at MSR Consulting Group GmbH, focusing on market studies and customer surveys. From 2018 to 2019, she supported the project “e-infrastructures Austria Plus” at the University of Innsbruck, where she contributed insights on research data management through 50 semi-structured interviews with scientists from different disciplines and worked on selecting a research data repository system. From 2020 to 2022, she was part of the “FAIR Data Austria” project and led efforts to establish a research data service point. As a data steward at the DSC, she advises researchers in the social sciences and humanities on research data management, from project proposals to sustainable archiving.

Quasi-Experiments with Surveys: The Unexpected Event during Survey Design

Instructor:
Joris Frese, European University Institute

Time:
Afternoon

Room:
TBD

Course Description:
The Unexpected Event during Survey Design (UESD) has taken the social sciences by storm. The 2020 article by Munoz, Falco-Gimeno & Hernandez introducing the method is already one of the most cited articles in Political Analysis, and research based on this method has now been published in all the top political science journals (and in many other disciplines such as economics and sociology). The basic premise of the UESD is simple: you analyse survey data that was fielded shortly before and after an unexpected and influential event (such as a terrorist attack). Under certain conditions, respondents interviewed right before and right after the event can be assumed to differ systematically only in their (exogenous) exposure to this event. If all the relevant assumptions are met, researchers can then estimate the causal effects of exposure to this event on relevant (political or social) attitudes. For example, many political scientists have used this method to demonstrate “rally-around-the-flag” effects following terrorist attacks.

In this short course, I will walk the participants through the established workflow for UESD projects. We start by discussing some high-profile UESD applications of recent years. Next, I showcase the basic assumptions of this method and how to test them. Finally, we conduct an original UESD analysis based on publicly available survey data to learn the basic empirics and the most common robustness checks step by step. I will showcase the analysis steps in R, but participants are also free to follow along with Stata or other software.

After the course, all participants will be equipped to conduct their own UESD projects from start to finish. The course is aimed at beginners who have never used this method before and at intermediate users who want to broaden their knowledge of the state of the art for UESDs.
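As a minimal illustration of the estimation step (simulated data, not the course materials), a basic UESD analysis compares respondents interviewed within a bandwidth of days before and after the event.

```r
# Toy UESD sketch: regress the attitude of interest on an indicator for being
# interviewed after the event, within a bandwidth of days around the event.
set.seed(2020)
n   <- 2000
day <- sample(-30:30, n, replace = TRUE)            # interview day relative to the event
d   <- data.frame(
  day      = day,
  post     = as.integer(day >= 0),                  # 1 = interviewed after the event
  attitude = 5 + 0.8 * (day >= 0) + 0.01 * day + rnorm(n)
)

bw <- 7                                             # bandwidth in days
summary(lm(attitude ~ post, data = subset(d, abs(day) <= bw)))
# common robustness checks: vary the bandwidth and add a time trend
summary(lm(attitude ~ post + day, data = subset(d, abs(day) <= bw)))
```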

Bio:
Joris Frese is a PhD candidate in political science at the European University Institute. In his dissertation, he makes empirical and methodological contributions to the causal analysis of public opinion dynamics following political scandals and catastrophes. He frequently uses the Unexpected Event during Survey Design in his research and is also writing several methodological papers about this method. One of these papers has recently been published in Research & Politics, while another one has been conditionally accepted at Political Science Research and Methods.

Web tracking: Augmenting web surveys with data on website visits, search terms, and app use

Instructors:
Joshua Claassen and Jan Karem Höhne, Leibniz University Hannover, German Centre for Higher Education Research and Science Studies (DZHW), Department of Research Infrastructure and Methods

Time:
Afternoon

Room:
TBD

Course Description:
Web surveys frequently fall short in accurately measuring digital behavior because they are prone to recall error (i.e., biased recall and reporting of past behavior), social desirability bias (i.e., misreporting of behavior to comply with social norms and values), and satisficing (i.e., providing non-optimal answers to reduce burden). New advances in the collection of digital trace (or web tracking) data make it possible to directly measure digital behavior in the form of browser logs (e.g., visited websites and search terms) and app logs (e.g., duration and frequency of app use). Building on these advances, we will introduce participants to web surveys augmented with web tracking data. In this course, we provide a thorough overview of the manifold new measurement opportunities introduced by web tracking. In addition, participants obtain comprehensive insights into the collection, processing, analysis, and error sources of web tracking data, as well as its application to substantive research (e.g., determining online behavior and life circumstances). Importantly, the course includes applied web tracking data exercises in which participants learn how to …

1) … operationalize and collect web tracking data,

2) … work with and process web tracking data,

3) … analyze and extract information from web tracking data.

The course has three overarching learning objectives: Participants will learn to a) independently plan and conceptualize the collection of web tracking data, b) decide on best practices for the handling and analysis of data on website visits, search terms, and app use, and c) critically reflect upon the opportunities and challenges of web tracking data and its suitability for empirically based research in the social and behavioral sciences. Previous knowledge of web tracking data or programming skills is not required (beginner level). Participants should bring a laptop for the data-driven exercises.
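As a hypothetical illustration of a typical processing step with tracking data (the log format here is assumed), the sketch below extracts the domain from each visited URL and aggregates time spent per domain.

```r
# Hypothetical web-tracking log: aggregate raw website visits into seconds per domain.
visits <- data.frame(
  url      = c("https://news.example.com/article1",
               "https://search.example.org/results?q=term",
               "https://news.example.com/article2"),
  duration = c(120, 15, 300)                         # seconds per visit
)

visits$domain <- sub("^https?://([^/]+)/.*$", "\\1", visits$url)   # keep the domain only
aggregate(duration ~ domain, data = visits, FUN = sum)             # seconds per domain
```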

Bio:
Joshua Claassen is a doctoral candidate and research associate at Leibniz University Hannover in association with the German Centre for Higher Education Research and Science Studies (DZHW). His research focuses on computational survey and social science with an emphasis on digital trace data.

Dr. Jan Karem Höhne is a junior professor at Leibniz University Hannover in association with the German Centre for Higher Education Research and Science Studies (DZHW). He is head of the CS3 Lab for Computational Survey and Social Science. His research focuses on new data forms and types for measuring political and social attitudes.

Survey response quality assessment: Conceptual approaches and practical indicators

Instructors:
Matthias Roth and Daniil Lebedev, GESIS

Time:
Afternoon

Room:
TBD

Course Description:
This short course will introduce participants to conceptual and practical approaches to assessing survey response quality, focusing on commonly used response quality indicators such as response patterns, response styles, response times and others. The course covers approaches to assess interviewer behaviour, inattentive or careless responding, and satisficing in both face-to-face and self-completion surveys. These frameworks will be presented through different dimensions of survey data quality – accuracy, representativity, validity and reliability. We will explore the theoretical foundations for evaluating response quality using survey data, probing questions, and paradata, aligned with these frameworks.

In addition to understanding the theoretical underpinnings of response quality, participants will engage in practical exercises that include working with the R packages resquin, psych and others to calculate response quality indicators using real-world datasets. Participants will learn how to create graphical representations of the calculated response quality indicators and how to flag low-quality responses. Special attention will be given to the strengths and limitations of response quality indicators in different survey modes.
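As a base-R illustration of the kind of indicators computed in the exercises (the resquin package provides convenience functions for such indicators; the data and thresholds here are hypothetical), the sketch below flags straightlining in a grid of items and implausibly fast response times.

```r
# Hypothetical illustration of two response quality indicators for a 6-item grid:
# straightlining (identical answers to all items) and very fast completion times.
set.seed(5)
grid      <- as.data.frame(matrix(sample(1:5, 6 * 100, replace = TRUE), ncol = 6))
resp_time <- rexp(100, rate = 1 / 60)                       # seconds spent on the grid

straightlined <- apply(grid, 1, function(r) length(unique(r)) == 1)
too_fast      <- resp_time < 2 * ncol(grid)                 # under ~2 seconds per item

table(flagged = straightlined | too_fast)                   # respondents flagged for review
```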

The workshop is designed for researchers and practitioners in survey methodology who seek to improve the accuracy, representativity, and overall quality of their data. By the end of the course, participants will have gained insights and skills to apply response quality assessment techniques in their own survey research.

This course is targeted at an intermediate level, ideal for those with a basic understanding of survey methodology and R. 

Bio:
Matthias Roth is a doctoral researcher in the Team Scale Design and Documentation at GESIS – Leibniz Institute for the Social Sciences in Mannheim, Germany. In his thesis, he focuses on psychometric approaches to survey data harmonization and measurement. Additionally, he develops the R package resquin which provides survey researchers with convenient functions to calculate response quality indicators. 

Dr. Daniil Lebedev is a Postdoctoral Researcher in the Cross-Cultural Survey Methods team at GESIS – Leibniz Institute for the Social Sciences in Mannheim, Germany. He works on quality reporting and fieldwork monitoring for the European Social Survey as part of the ESS Core Scientific Team. His research focuses on data quality in web surveys, response patterns, and the use of paradata to study respondent behavior during survey completion as well as on the effect of the mode of data collection on data quality. Daniil has been a member of the European Survey Research Association (ESRA) Board since 2021. 

Understanding Young Voices: Engagement, Ethics and Measurement in Surveys

Instructor:
Larissa Pople, CLS, UCL

Time:
Afternoon

Room:
TBD

Course Description:
Background: With respect to children’s perspectives and experiences, it is increasingly recognised that self-reported data from children should be considered the ‘gold standard’. There is accumulating evidence that child and parental accounts do not always coincide, especially in relation to children’s thoughts and feelings, or risky behaviours that might be concealed from parents. Thus, direct surveys of children provide valuable insights into the reality of children’s lives.
Learning objectives: This course is an introduction to the fundamentals of designing surveys for children and young people. It focuses on three key aspects of the survey design process that require careful consideration in surveys involving children: questionnaire design and measurement, participant engagement and ethics. Participants will be provided with an overview of key steps of the survey design process that differ when respondents are children as opposed to adults, including: using qualitative methods to explore the relevance and suitability of topics; formulating well-worded questions and response scales; considering question order and flow; evaluating sources of measurement error; selecting appropriate mode(s) for data collection; developing age-appropriate participant materials; engaging ‘hard-to-reach’ groups; considering the role of parents and other adults as gatekeepers; and developing ethical practice that enshrines key principles such as informed consent, confidentiality, safeguarding and participant well-being.
Activities: Interactive elements will enable course participants to reflect upon key issues inherent in surveying children and young people, such as how to collect high-quality data that will be used by researchers and policy analysts, and how to address the real-life ethical challenges that can arise when children are involved as survey participants.
The Millennium Cohort Study (age 7, 11, 14 and 17 sweeps) will be used as the main survey example – alongside other studies that have foregrounded children’s voices in data collection – to illustrate key considerations central to the design process. 

Bio:

Dr Larissa Pople is a Senior Research Fellow / Survey Manager at the UCL Centre for Longitudinal Studies, where she is also a seminar leader on the Survey Design module within the MSc Social Research Methods programme. Previously she worked for over 10 years as a Senior Researcher at The Children’s Society, the Police Foundation and UNICEF, where she led research programmes on well-being and childhood poverty, and co-authored numerous policy-focused publications, including several Good Childhood Reports, a book on children’s experiences of problem debt and chapters on youth crime and antisocial behaviour. Her research expertise lies in survey and qualitative research with children, as well as child-reported indicators of socioeconomic disadvantage, family relationship quality and subjective well-being, which was the topic of her PhD, gained from the University of Essex.