Ensuring Validity and Measurement Equivalence through Questionnaire Design and Cognitive Pretesting Techniques 2
Session Organisers |
Dr Natalja Menold (GESIS)
Mr Peyton M Craighill (Office of Opinion Research | U.S. Department of State)
Ms Patricia Hadler (GESIS)
Ms Aneta G. Guenova (Office of Opinion Research | U.S. Department of State)
Dr Cornelia Neuert (GESIS)
Dr Patricia Goerman (U.S. Census Bureau)
Time: Friday 19th July, 11:00 - 12:30
Room: D30
According to the total survey error framework, validity refers to the degree to which survey results can be interpreted with respect to the concepts under investigation, i.e., particular opinions, behaviors, abilities, and competencies. Validity also concerns interpretations of differences or changes in these concepts, such as comparisons between respondent groups, across time, or across cultures. Survey researchers conduct studies in various languages and cultures within one country or across several countries and gather demographic, administrative, and social data in these multi-cultural contexts, constantly striving to improve the accuracy of these measurements. The comparative aspects of validity are referred to as measurement equivalence. Many researchers address measurement equivalence during data analysis, after data collection, and often find a lack of measurement invariance. However, the sources of this non-invariance are more likely to be associated with questionnaire design and data collection processes.
Difficult questions, overloaded instructions or visual design elements can affect validity and measurement equivalence. This session aims to discuss methods of developing measurement instruments and their effect on validity and measurement equivalence. The goal is to better understand the corresponding sources of measurement error and to present methods which help to increase validity in comparative research. In particular, generic, multi-method approaches are of interest. Such approaches can include expert reviews by subject matter experts, cognitive interviews and pilot interviews with respondents who represent the main demographic groups of the target countries. In addition, quantitative analyses of findings, e.g., from experiments comparing different questionnaire versions, can help to evaluate the sources of decreased validity and deficient measurement equivalence.
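For orientation, measurement equivalence is commonly formalized in a multi-group common factor model; the following sketch is added here for clarity and is not part of the session abstract. The responses of person i in group g are modeled as

x_{ig} = \tau_g + \Lambda_g \eta_{ig} + \varepsilon_{ig},

where \Lambda_g are the factor loadings and \tau_g the item intercepts of group g. Configural invariance requires the same pattern of loadings in all groups, metric invariance additionally requires \Lambda_g = \Lambda, and scalar invariance additionally requires \tau_g = \tau; only under (at least partial) scalar invariance can latent means be meaningfully compared across groups.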
Keywords: Questionnaire Design, Cognitive Interviewing, Validity, Measurement Equivalence, Pretesting, Question Evaluation
Dr Natalja Menold (GESIS) - Presenting Author
Mrs Patricia Hadler (GESIS)
Dr Cornelia Neuert (GESIS)
Mrs Verena Ortmanns (GESIS)
Measurement equivalence means that the comparability of survey data is not biased by respondents' group membership. Researchers addressing measurement equivalence in cross-cultural survey research have often found that exact measurement invariance can rarely be achieved. A line of research has therefore been concerned with more liberal methods of statistically testing measurement equivalence. However, if questionnaires are not comparable because survey participants differ in how they comprehend and respond to the questions, wrong conclusions about comparability can be drawn when relying on ex-post methods of data analysis alone. We therefore address the question of how the comparability of questionnaires can be ensured prior to data collection in order to avoid wrong conclusions afterward. Our research is conducted in German and in American English. First, we test and revise the questions based on the findings of three pretesting methods: 1) cognitive interviews, 2) evaluation by questionnaire design experts, and 3) web probing. For this purpose, we use questions for which exact measurement equivalence (cross-cultural comparability) was not supported in the available data, as well as instruments whose measurement invariance properties were not previously known. After piloting the instruments and revising them on the basis of the findings of the three methods, we collect quantitative data for both the original and the improved instruments. The different instrument versions are then compared with regard to measurement invariance across countries. The results are discussed with respect to their implications for future research and practice in cross-cultural research.
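As an illustration of the kind of cross-group comparison described above, the following Python snippet is a minimal sketch, not the authors' actual analysis: it fits a one-factor model separately per language group on simulated data and flags items whose loadings diverge. All data, item names and group labels are hypothetical; a formal evaluation would use multi-group confirmatory factor analysis with nested model comparisons (configural, metric, scalar).

# Minimal illustrative sketch with hypothetical data: per-group one-factor fits
# as a rough screen for loading (metric) non-equivalence.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(42)

def simulate_group(loadings, n=500):
    """Generate item responses from a one-factor model with the given loadings."""
    factor = rng.normal(size=n)
    noise = rng.normal(scale=0.6, size=(n, len(loadings)))
    items = np.outer(factor, loadings) + noise
    return pd.DataFrame(items, columns=[f"item{i + 1}" for i in range(len(loadings))])

# Hypothetical German (DE) and US-English (US) versions of a four-item scale;
# item2 is simulated to function differently in the US version.
groups = {
    "DE": simulate_group([0.8, 0.7, 0.6, 0.75]),
    "US": simulate_group([0.8, 0.4, 0.6, 0.75]),
}

loadings = {}
for name, data in groups.items():
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(data)
    # absolute values guard against the arbitrary sign of the extracted factor
    loadings[name] = np.abs(fa.loadings_.ravel())

comparison = pd.DataFrame(loadings, index=groups["DE"].columns)
comparison["abs_diff"] = (comparison["DE"] - comparison["US"]).abs()
print(comparison.round(2))  # large abs_diff flags items to revisit in pretesting

In practice, the simulated data would be replaced by the collected pilot data, and items flagged by such a screen (or by a formal multi-group CFA) would be fed back into cognitive interviewing and revision.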
Ms Danuta Zyczynska-Ciolek (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Ms Weronika Boruc (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
The paper discusses the role of pre-testing in consecutive waves of a panel survey investigating how attitudes and opinions change over long periods of time. On the one hand, accounting for this change requires that the phrasing of questions remain unaltered in order to maintain comparability. On the other hand, pre-testing may reveal that respondents have difficulties understanding some items due to shifts in meaning that have occurred over time, which can significantly affect measurement equivalence. The paper discusses the dilemma of retaining or changing questionnaire items, using the results of the pre-testing of the Polish Panel Survey POLPAN 1988–2018, conducted in March 2018. The questionnaire items selected for analysis deal with (1) the determinants of life success, (2) the intensity of social-group conflicts, and (3) the self-assessment of social position. We analyze the types of problems that emerged during pre-testing and present examples of concepts whose meaning has evolved because of differences in the social and political context, e.g. before and after 1989. We conclude that pre-test results may not only lead to modifications of questionnaire items and improvements in fieldwork instructions for interviewers, but should also be taken into account in the interpretation of the main-survey results.
Professor Patrick Brzoska (Witten/Herdecke University, Faculty of Health, School of Medicine) - Presenting Author
Introduction: Obtaining survey data on migrants involves various challenges, one of which concerns language. Given their often limited proficiency in the host country's language, surveys of migrants usually need to be conducted in their mother tongue. Available translations of questionnaires, however, may be difficult for migrants to understand, because their use of language may differ syntactically and lexically from the same language as spoken in their countries of origin. Consequently, questionnaires need to be re-adapted to achieve functional equivalence. Using the assessment of illness and medication beliefs among Turkish migrants in Germany as an example, this study illustrates how cognitive and expert interviews may serve this purpose.
Methods: The study examines the comprehensibility of the Turkish versions of the Illness Perceptions Questionnaire and the Beliefs about Medicines Questionnaire in a sample of Turkish migrants in Germany. Fifteen patients were surveyed through cognitive interviews using a think-aloud approach. Additionally, interviews were conducted with experts experienced in research with this population group. The interviews focused on the clarity of items, potentially ambiguous wordings, and the appropriateness of the language style, as well as on suggestions for improvement.
Results: The interviews showed that several items of both Turkish-language questionnaires were misunderstood by Turkish migrants because of complex and ambiguous item wording. Furthermore, confusion arose over the (apparent) similarity of certain items. Experts also identified items that they considered difficult for this population to understand because of their formal wording.
Discussion: Questionnaires developed for native populations (such as Turks in Turkey) may be difficult for migrants to understand. This may be attributable to a diverging development of language over time. These language differences need to be addressed through thorough testing and re-adaptation when research instruments are to be used across both populations. Qualitative interviews may support this process.