All time references are in CEST
The International Social Survey Programme (ISSP) - data quality issues in cross-national perspective 2

Session Organisers: Professor Stephanie Steinmetz (FORS and University of Lausanne), Dr René Bautista (NORC at the University of Chicago), Dr Evi Scholz (GESIS Leibniz Institute for the Social Sciences), Professor Markus Hadler (University of Graz)
Time: Friday 21 July, 09:00 - 10:30
Room: U6-01a
In an increasingly interdependent world, learning about the attitudes and behaviour of populations around the globe is essential. While many cross-national surveys are one-shot or single-topic initiatives, continuous global social surveys with high quality standards are rare. The International Social Survey Programme (ISSP) is one of the exceptionally large and continuous programmes designed to offer both cross-national data and long time series. Since 1985, the ISSP has conducted annual surveys on topics important for social science research. Methodological issues are discussed in a dedicated committee of ISSP experts, where national sample designs are checked for compliance with ISSP requirements and signed off for individual ISSP members. The ISSP is also exceptional as an organisation based on the democratic cooperation of equal partners, which offers a special atmosphere for all kinds of discussion and flexibility in adapting to new challenges and unforeseen issues. In addition, ISSP data, questionnaires and documentation, e.g., on data collection or questionnaire development, are free of charge for all researchers interested in cross-national data. The ISSP is thus an invaluable resource for research areas in need of global survey data meeting high methodological standards.
This session aims to foster exchange about the ISSP among data providers and researchers already using, or planning to use, ISSP data in their research. The session will focus in particular on methodological challenges related to the data quality of cross-national surveys; however, substantive papers using one or several ISSP modules are also welcome.
Keywords: ISSP, cross-cultural research, comparability, data quality, measurement, survey mode, questionnaire development, cognitive testing
Mr Julian Urban (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Isabelle Schmidt (GESIS - Leibniz Institute for the Social Sciences)
Ms Katharina Groskurth (GESIS - Leibniz Institute for the Social Sciences)
The ISSP Occupational Commitment Scale (ISSP-OCC) aims to measure a person’s affective attachment to their occupation with only two items. Two ISSP Work Orientation modules (ISSP, 1997, 2015) included the ISSP-OCC scale, alongside a three-item scale on organizational commitment. In the ISSP 1997, occupational and organizational commitment were assessed as belonging to a single construct. Based on theoretical assumptions and empirical findings, the two commitments were assessed with separate scales in the ISSP 2015. We investigated the psychometric quality of the ISSP-OCC scale used in ISSP 2015 across countries regarding factorial validity, construct validity, cross-country comparability, and reliability.
Results from confirmatory factor analyses point to factorial validity (i.e., a correlated two-factor model of occupational and organizational commitment fits well). We excluded six of the 38 countries, namely Chile, Georgia, India, Israel, the Philippines, and Venezuela, for which the correlated two-factor model did not hold.
Reliability and construct validity analyses indicate acceptable psychometric quality, except for Mexico, which we also excluded. We then evaluated the cross-country comparability of the ISSP-OCC by testing for measurement invariance across the remaining 31 countries. We accepted partial scalar measurement invariance.
For 31 of the 38 countries, the ISSP-OCC is an appropriate research instrument. Thus, the ISSP-OCC achieves its goal of measuring occupational commitment economically in large surveys in these countries. Cross-cultural comparison of latent (co)variances is possible across all 31 countries, while comparison of latent means is possible for 19 of them. Nonetheless, the measurement itself and cross-cultural comparisons remain questionable for some countries. We discuss possible reasons for the lack of model convergence, factorial or construct validity, and cross-country comparability, such as the level of economic development, cultural differences, or errors in the data.
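As a rough illustration of the screening step described above, the following is a minimal sketch of a per-country check of the correlated two-factor model, using the semopy SEM library in Python. The item names (q1-q5) and the fit-index cutoffs are hypothetical placeholders, not the actual ISSP item codes or the authors' criteria; formal scalar-invariance testing would additionally require constrained multi-group models, which the sketch omits.

```python
# Minimal sketch: per-country fit of a correlated two-factor CFA.
# q1-q2 (occupational commitment) and q3-q5 (organizational commitment)
# are hypothetical item names; df is assumed to hold one row per
# respondent, with a `country` column.
import pandas as pd
import semopy

MODEL_DESC = """
occ =~ q1 + q2
org =~ q3 + q4 + q5
occ ~~ org
"""

def fit_by_country(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for country, group in df.groupby("country"):
        model = semopy.Model(MODEL_DESC)
        model.fit(group)                  # ML estimation by default
        stats = semopy.calc_stats(model)  # one-row table of fit indices
        rows.append({"country": country,
                     "CFI": stats["CFI"].iloc[0],
                     "RMSEA": stats["RMSEA"].iloc[0]})
    return pd.DataFrame(rows)

# Countries failing conventional cutoffs (e.g., CFI < .95 or RMSEA > .08)
# would be flagged, mirroring the exclusion step described above.
```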
Dr Ondrej Buchel (Institute for Sociology of the Slovak Academy of Sciences) - Presenting Author
Dr Miloslav Bahna (Institute for Sociology of the Slovak Academy of Sciences)
Public opinion surveys can serve as barometers of the dynamics of prevailing attitudes in societies. Participation in surveys may be seen as especially useful in democratic societies, where the governing elites and their contenders presumably monitor the demand side of the electoral equation. Analysts use various weighting strategies to correct for biases arising at the design and collection stages (e.g., coverage or non-response errors). It is more difficult, however, to adjust for low response quality. One source of low response quality is careless or insufficient-effort (low-effort) responding, that is, a lack of engagement with the questions.
We explore the proposition that one reason for contemporary rates of low-effort responding in Eastern European countries could be the following: after the initial surge of optimism about the nature of democracy and expectations of elite responsiveness in the early post-communist era, surveys stopped being seen as a potentially useful novelty; as the troubles of transition settled in, people became disillusioned and perhaps even cynical about the value of carefully considering their answers.
In the empirical part of the paper, we analyse the available ISSP waves on multiple themes, covering the period from the late 1980s to 2020. Using multiple measures of low-effort responding, we look at the shares of low-effort responders across particular batteries of questions on particular topics and across whole surveys. We also consider the mode of survey administration, reasoning that respondents may be motivated to respond differently depending on the immediate situation in which they consider their answers. We then compare these shares over the years, controlling for a number of individual- and country-level measures, and describe the patterns that point to sources of differences in low-effort responding.
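One widely used indicator of low-effort responding is straightlining within an item battery. The following is a minimal sketch of how such a share could be computed, assuming a pandas DataFrame of ISSP responses; the column names are hypothetical, not actual ISSP variables, and the authors' actual measures may differ.

```python
# Minimal sketch: share of straightliners in a Likert battery.
# `battery` lists the battery's item columns; names are hypothetical.
import pandas as pd

def straightlining_share(df: pd.DataFrame, battery: list[str]) -> float:
    """Share of respondents who give the identical answer to every item
    in the battery (respondents with any missing item are ignored)."""
    complete = df[battery].dropna()
    return (complete.nunique(axis=1) == 1).mean()

# Usage with a hypothetical five-item attitude battery:
# share = straightlining_share(issp, ["att1", "att2", "att3", "att4", "att5"])
```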
Dr Annika Lindholm (University of Lausanne) - Presenting Author
Professor Stephanie Steinmetz (University of Lausanne)
Dr Marlène Sapin (FORS - Swiss Center of Expertise in the Social Sciences)
Web-based data collection is becoming an increasingly popular alternative to traditional survey modes, particularly in the wake of the Covid crisis. Web surveys have an advantage in terms of time efficiency and cost. The web mode also responds to decreasing response rates and takes advantage of increasing computer literacy among the population. As a consequence, some ISSP countries have started implementing a “push-to-web”, sequential mixed-mode (web and paper) design. A successful transition to web surveying hinges on properly estimating the selection bias of the online mode relative to the alternative mode (and subsequently correcting for it). However, estimating selection effects across respondents who answered in different modes is a complex task, because they are confounded with the additional measurement error introduced by mixing modes, possibly increasing Total Survey Error as a result.
In this research, we examine coverage and measurement in the web and paper modes in the ISSP 2020 data from Switzerland and Finland. We also explore mode differences in the measurement of selected attitudinal variables. About three quarters of respondents participated online in both countries (CH: 77% web (N=3,298), 23% paper (N=982); FI: 75% web (N=848), 25% paper (N=289)). Access to register data from these countries allows us to use them as a benchmark for comparing sample representativeness in terms of key sociodemographic indicators. We find that sample bias is generally higher in the paper mode. Age and education account for selection into mode in both countries, while gender and income show different patterns in CH and FI. Preliminary analyses also suggest that the measurement of attitudes differs across modes. The influence of mode mixing on data quality should thus be carefully assessed when transitioning to web-based data collection in future ISSP and other comparative surveys.
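As an illustration of the two steps involved, the sketch below first compares one mode's sample distribution with a register benchmark and then models selection into the web mode, using pandas and statsmodels in Python. The DataFrame resp and the variable names (web, age, education, gender, income) are hypothetical placeholders, not the authors' actual specification.

```python
# Minimal sketch: (1) representativeness of one mode against a register
# benchmark, (2) logistic model of selection into the web mode.
# `resp` is an assumed respondent-level DataFrame; columns are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

def sample_bias(sample: pd.Series, benchmark: pd.Series) -> float:
    """Total variation distance between the sample's category shares and
    the register benchmark (0 = identical, 1 = disjoint)."""
    shares = sample.value_counts(normalize=True)
    return shares.sub(benchmark, fill_value=0).abs().sum() / 2

# e.g., bias of the paper subsample on an education variable:
# sample_bias(resp.loc[resp["web"] == 0, "education"], register_edu_shares)

# Selection into mode: which sociodemographics predict answering online?
result = smf.logit("web ~ age + C(education) + C(gender) + income",
                   data=resp).fit()
print(result.summary())
```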
Mr Harry Ganzeboom (VU University Amsterdam) - Presenting Author
Survey data (such as the ESS and ISSP) are often accompanied by post-stratification weights, which purport to repair the biased representation of these data relative to the population they are sampled from. A general experience with post-stratification weights is that they hardly ever make a difference, which could lead either to the conclusion not to use them, or to use them anyway. In this contribution I explain why weights hardly ever make a difference, and what is best done when they do. I also explain why using weights generally harms the validity of data analysis: the hidden injuries that are generally overlooked.
The basic reasons why weights do not make a difference are that the weighting criteria are uncorrelated or only weakly correlated with the variables of interest (often true for regional variables), or that the weighting criteria are included in the predictor variable set (often true for age, gender, and education). In either case, weighting does not change the point estimates, which are what most users are interested in. However, post-stratification weights do harm even when they do not affect the point estimates, because they damage the efficiency of the sampling design; this can be shown using survey estimation procedures, such as those available via svy in Stata.
I illustrate these points empirically with ISSP and ESS data on religious attitudes, which are strongly differentiated by age, education, and gender. Results indicate that weighting makes no difference to the point estimates but reduces efficiency by 25% or more. I recommend not using post-stratification weights, but rather using the weights (or their constituting variables) as controls in analytical models.
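The efficiency loss referred to above can be quantified with Kish's design effect for unequal weights, deff = 1 + cv²(w). The sketch below computes it in Python; the lognormal weights are simulated stand-ins, not real ISSP or ESS post-stratification weights.

```python
# Minimal sketch: Kish's design effect and the implied effective sample
# size for a vector of unequal weights.
import numpy as np

def kish_deff(w: np.ndarray) -> float:
    """deff = n * sum(w^2) / (sum w)^2, equivalently 1 + cv(w)^2."""
    n = len(w)
    return n * np.sum(w**2) / np.sum(w)**2

rng = np.random.default_rng(0)
w = rng.lognormal(sigma=0.5, size=2000)   # illustrative simulated weights
deff = kish_deff(w)
print(f"deff = {deff:.2f}, effective n = {len(w) / deff:.0f} of {len(w)}")

# A deff of about 1.33 corresponds to the ~25% efficiency loss reported
# in the abstract, since n_eff / n = 1 / deff ≈ 0.75.
```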
Dr Insa Bechert (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Kerstin Beck (GESIS - Leibniz Institute for the Social Sciences)
Ms Ivet Solanes Ros (GESIS - Leibniz Institute for the Social Sciences)
In Medieval Latin, the term “curatus” denotes a parish priest, someone “responsible for the care (of souls)”. Those inclined to poetry could say that, as data curators, we are responsible for the data’s souls. The question is: how successful are we in our efforts?
In the course of an international survey data life cycle, a study wave is planned, and data are collected within the countries. They are processed by the principal investigator’s team according to the survey program’s standards and, eventually, the national data set is deposited with the final data curators. Here, all national data sets are further processed, and detailed test procedures are applied to the data. They are harmonized, integrated into international data sets, documented, and prepared for release. The aim is to achieve the highest possible data quality across countries.
For the ISSP and the EVS, final data curation is conducted by the Team International Studies in the GESIS department Survey Data Curation (in the case of the EVS, together with the Central Team at Tilburg University). In this presentation, we will give an overview of the most common types of errors and their (sometimes country-specific) peculiarities by contrasting one wave of the ISSP with one wave of the EVS. First, we will introduce a framework that helps to quantify and categorize potential data quality problems and thereby assess the quality of the data deposited by the national teams. Second, we will explore the means data curators have at hand to increase data quality at this late curation stage. Third, by presenting an illustrative example, we will empirically demonstrate the implications for data quality and research results if certain data errors remain undetected.
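The kind of automated deposit checks described above could, for instance, look like the following minimal sketch in Python; the codebook entries, column names, and checks are hypothetical illustrations, not the actual GESIS curation procedures.

```python
# Minimal sketch: automated checks on a deposited national data file.
# The codebook of valid codes and the column names are hypothetical.
import pandas as pd

CODEBOOK = {
    "sex": {1, 2},
    "age": set(range(15, 100)),
    "party_vote": {1, 2, 3, 4, 8, 9},
}

def check_deposit(df: pd.DataFrame) -> dict[str, int]:
    """Count duplicate IDs and out-of-range codes, quantifying the
    data quality problems found in one deposited file."""
    issues: dict[str, int] = {}
    dup = df["id"].duplicated().sum()
    if dup:
        issues["duplicate_ids"] = int(dup)
    for var, valid in CODEBOOK.items():
        bad = df[var].notna() & ~df[var].isin(valid)
        if bad.any():
            issues[f"out_of_range:{var}"] = int(bad.sum())
    return issues
```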