Total Survey Error and Cross-Cultural Survey Research: Methodological Challenges and Coping Strategies
Session Organiser | Dr Aneta Piekut (University of Sheffield) |
Time | Tuesday 16th July, 11:00 - 12:30 |
Room | D01 |
Cross-national surveys such as the WVS, the ISSP or the ESS include respondents from multiple countries. However, representative national surveys also include a significant share of people from different countries, particularly immigrants and their descendants. The comparability of data collected in different cultures and countries is critical in cross-cultural research. The theoretical constructs under investigation must, to the highest degree possible, have the same meaning and significance in different contexts. Furthermore, the data should have the same level of accuracy, precision of measurement, validity, and reliability in all countries and cultures. Lastly, people's responses to the questionnaire must be equivalent. However, recent research indicates that many cross-cultural studies still have major drawbacks due to a lack of methodological rigor. As a result, concepts are misunderstood or cannot be compared, and/or estimates are biased.
Contributions to this session will address these methodological challenges for cross-cultural survey data, the consequences such challenges have for empirical research, or innovative strategies for overcoming these problems. Of course, papers may also address combinations of these points.
We particularly welcome contributions that address the following topics:
- Specification error (e.g. theoretical and measured concepts only match for specific countries or immigrant groups)
- Measurement error (e.g. cross-cultural variations in satisficing, acquiescence, extremity tendencies and their consequences)
- Nonresponse error (e.g. respondents do not complete the questionnaire, an item battery, or specific items)
- Questionnaire design, construction, and translation
Keywords: total survey error, specification error, measurement error, nonresponse, non-sampling errors
Dr Aneta Piekut (Sheffield Methods Institute, University of Sheffield) - Presenting Author
The question of why some people reply ‘Don’t know’ or ‘Refuse to answer’ to survey questions lies at the heart of survey design. Item nonresponse is never random: some respondents are less likely to answer certain questions. Such data missingness might be related to respondents’ characteristics, such as gender, age or education level, but also to the cognitive effort needed to answer the question, which depends on personal experience and knowledge. Moreover, interviewer effects and country-to-country differences might affect the propensity to avoid answering a question.
This paper will explore patterns of item nonresponse in the European Social Survey (ESS), the largest European cross-national survey. I will focus on selected items asked in all 8 waves (2002-2016), such as media use, attitudes towards immigration, political trust and mental wellbeing, and investigate which characteristics of respondents, interviewers (age, gender) and countries are associated with item nonresponse. Specifically, the analysis will try to improve our understanding of the following: 1) whether there is any similarity in item nonresponse across various types of measures (i.e. demographics, factual knowledge vs. opinions); 2) what role the interviewer and country contexts play in item nonresponse; and 3) how item nonresponse in the ESS has changed over time.
Dr Katharina Meitinger (Utrecht University) - Presenting Author
Professor Timothy Johnson (University of Illinois at Chicago)
A potential source of non-comparability in cross-cultural survey research is the risk of differential rates of item nonresponse, which may be attributed to differences in data collection procedures as well as to differences in cultural perceptions of question meaning or sensitivity. A modest body of literature has confirmed variability in item nonresponse propensity across cultures (Owens, Johnson & O’Rourke, 2001; Lee, Li & Hu, 2017). Largely unexplored to date are potential sociological sources of cultural variability in item nonresponse that are more clearly connected to aspects of power. One such source may be the effect of minority social status on willingness to report sensitive information during survey interviews. We address this possibility using three years of ISSP data (2014-2016). Using hierarchical linear models (HLM), we examine the effects of minority status, uniquely defined within each participating nation, on rates of nonresponse after adjusting for individual-, survey- and country-level characteristics. Implications for the interpretation of survey findings and recommendations for best practices will be discussed.
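As background for this modelling approach, a minimal sketch of a two-level (respondents nested in countries) logistic specification of the general kind described above is given below; the reduction to two levels and the variable names are illustrative assumptions, not the authors' exact model:

\[
\operatorname{logit}\Pr(y_{ij}=1) \;=\; \beta_{0j} + \beta_{1}\,\mathrm{minority}_{ij} + \boldsymbol{\beta}_{2}^{\top}\mathbf{x}_{ij},
\qquad
\beta_{0j} \;=\; \gamma_{0} + \boldsymbol{\gamma}_{1}^{\top}\mathbf{z}_{j} + u_{j},
\quad u_{j}\sim\mathcal{N}(0,\tau^{2}),
\]

where \(y_{ij}\) indicates item nonresponse for respondent \(i\) in country \(j\), \(\mathbf{x}_{ij}\) collects individual- and survey-level covariates, \(\mathbf{z}_{j}\) holds country-level characteristics, and \(u_{j}\) is a country random intercept capturing residual between-country variation in nonresponse propensity.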
Dr Stephen Quinlan (GESIS Leibniz Institute for the Social Sciences)
Professor Ian McAllister (Australian National University) - Presenting Author
Election turnout is central to the health of a democracy. However, measuring election turnout in opinion surveys is problematic due to misreporting: research shows that survey respondents over-report their turnout so as to give a socially desirable answer. Studies show that over-reporting is more prevalent among partisan respondents and those who are more politically interested; other studies have concentrated on nonresponse bias as a cause of turnout misreporting. However, most existing research is country-specific and does not take into account system characteristics or survey administration. Using 20 years of data from the Comparative Study of Electoral Systems (CSES), covering 150 elections across 40 states, this paper examines the importance of the political system and of survey administration in the over-reporting of turnout. The results suggest that over-reporting occurs less often in more democratic states, in countries that have compulsory voting, and in countries where the survey is conducted by personal interview. Conversely, over-reporting is more prevalent when elections are held in the summer, in countries with more parties, and in election studies that use telephone surveys. Our results have implications for how we understand and correct for turnout over-reporting in national election studies.
Dr Matthias Bluemke (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Miss Katharina Groskurth (GESIS - Leibniz Institute for the Social Sciences)
Religiosity is an essential part of human culture, and items on religiosity are therefore part of international survey programs. However, although religiosity is one of the prime markers of strong cultural differences, the comparability of data on religious beliefs collected across different cultures and countries can be severely hampered. The dissimilarity of religious world views has so far prevented meaningful comparisons of mean and covariance structures in cross-cultural research. It is fair to say that item bias is not only highly likely, but often self-evident. The problem is concealed, and thus aggravated, when immigrants and their descendants are surveyed not in their country of origin but in their new environments, because the assumption of a homogeneous population is then violated even for single-group analyses.
Accordingly, analyzing the supernatural belief items of the International Social Survey Programme (ISSP) reveals inadequate levels of measurement invariance, reliability, and validity across countries and religious groups. The Supernatural Belief Scale (SBS-6; Jong & Halberstadt, 2017) was recently developed to address the measurement of religious beliefs from an etic perspective while allowing adaptations from an emic perspective, without giving up the functional equivalence of the items. Even without cultural adaptations, it shows better levels of measurement equivalence, reliability and validity across countries and religious groups than the ISSP scale.
The implications for cross-cultural comparisons in a domain that is difficult to analyze quantitatively will be discussed.
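For readers unfamiliar with the invariance terminology used above, the standard multi-group confirmatory factor analysis hierarchy can be summarized as follows; this is generic notation, not the authors' specific models. For item \(i\) in group (country or religious group) \(g\),

\[
x_{i}^{(g)} \;=\; \tau_{i}^{(g)} + \lambda_{i}^{(g)}\,\xi^{(g)} + \varepsilon_{i}^{(g)},
\]

where configural invariance requires only the same factor structure across groups, metric invariance additionally constrains the loadings (\(\lambda_{i}^{(g)}=\lambda_{i}\)), allowing comparisons of covariance structures, and scalar invariance further constrains the intercepts (\(\tau_{i}^{(g)}=\tau_{i}\)), allowing comparisons of latent means across groups.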
Professor Jürgen H. P. Hoffmeyer-Zlotnik (Institute of Political Science, Justus Liebig University, Giessen)
Dr Uwe Warner (Methodenzentrum Sozialwissenschaften, Georg-August-Universität, Göttingen) - Presenting Author
The Total Survey Error (TSE) framework is a valuable tool for quantifying survey quality. Nevertheless, comparative surveys face more quality criteria than those defined in TSE. Cross-cultural surveys, surveys across different countries and nations, and very often polls across time suffer from inadequate national standardization and international harmonization.
In our contribution we illustrate errors arising from incomplete or misleading standardization and harmonization. These errors are not included in the TSE. Our examples are selected sociodemographic measures in social surveys.
We demonstrate the wrong selection of the reference statistics needed to develop answer options using “total net household income”. “Private household” is our example of the impact of different national and cultural understandings of measurement concepts. With the question about the respondent’s “highest level of education” we discuss the weak transfer of the commonly agreed measurement concept into national measures. We illustrate the incomplete application of the measurement concept in the field with the “labor force status” questions.
We conclude that social, institutional, economic and cultural differences across societies hamper cross-cultural and international comparison of survey data, and that these error sources are not part of TSE.