Questionnaire Translation: Achievements and New Challenges Ahead
Session Organisers |
Ms Brita Dorer (GESIS - Leibniz Institute for the Social Sciences)
Dr Alisú Schoua-Glusberg (Research Support Services, Chicago)
Time | Wednesday 17th July, 11:00 - 12:30 |
Room | D23 |
Questionnaire translation is a field within cross-cultural survey research that has received increasing interest over the past decades, as the importance of high-quality, comparable questionnaire translations has become ever more apparent for 3MC studies, in both cross-national and within-country cross-cultural research. While in earlier stages approaches such as simply back-translating translated questionnaires and comparing them to the source text were routinely applied, ever more sophisticated methods have been developed over the last 10-20 years. Team or committee approaches, as implementations of Janet Harkness's TRAPD model (e.g. by the European Social Survey (ESS) or SHARE), have become the norm in many multilingual projects. New approaches for assessing and improving translation quality have been added, and research on how to make use of new technological developments in the translation sciences, such as Computer-Assisted Translation (CAT) tools, Machine Translation (MT), or speech recognition, is under way.
This session invites papers on a wide variety of aspects of questionnaire translation. Topics may include research questions from within the classical linguistic and translation research fields, such as particular translation issues or linguistic patterns in individual language pairs; existing or new methods and tools for assessing or improving translation quality; aspects of the source questionnaire that affect its translation into multiple language versions; the interplay between translation and adaptation in the context of questionnaire translation; or intercultural factors affecting questionnaire translation in the cross-cultural survey context.
Keywords: Questionnaire translation, comparability, translation tools, translation quality
Professor Ioannis Andreadis (Aristotle University of Thessaloniki) - Presenting Author
It is common practice for multi-national survey projects targeting the general population in each country to translate the common core questionnaire into the national languages. Projects in this category include the Comparative Study of Electoral Systems and the World Values Survey. Even when the target group is a country's political elite (as in the Comparative Candidates Survey), it is typical to use translated questionnaires. On the other hand, many international survey projects with participants in multiple countries do not translate their questionnaires. In this category we find many expert surveys, e.g. the Chapel Hill Expert Survey and the Electoral Integrity Project, as well as surveys conducted by international projects, organizations, associations and other multi-national groups with members all over the world. The usual argument for not translating the questionnaire is that the respondents of these surveys are highly educated (academics, health and other professionals, researchers, etc.) and most of them are expected to be comfortable using English.
In this paper, I present the findings of an experiment I conducted among political scientists in Europe. The questionnaire is part of a web survey conducted for the COST Action PROSEPS, and it was translated into French, German, Greek, Hungarian, Italian, Polish, Russian and Spanish. Half of the sampled native speakers of one of these languages were sent an invitation to the survey in their own language, while the other half received an English-language invitation. The paper explores the impact of using a translated questionnaire on the response rate and on the time spent on the questionnaire items.
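The abstract does not state how the comparison is analysed; purely as an illustrative sketch, a split-sample design of this kind could be evaluated along the following lines. All counts, variable names, and test choices below are assumptions for illustration, not the author's actual method or data.

```python
# Hypothetical sketch of a split-sample analysis: response rates via a
# two-proportion z-test, item completion times via a rank-based test.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.proportion import proportions_ztest

# Invented example counts: completed interviews / invitations sent
completes = np.array([412, 351])   # [translated invitation, English invitation]
invited = np.array([1000, 1000])

# Two-proportion z-test on response rates
z, p_rate = proportions_ztest(count=completes, nobs=invited)
print(f"response rates: {completes / invited}, z={z:.2f}, p={p_rate:.3f}")

# Hypothetical per-respondent time (seconds) spent on a questionnaire item
rng = np.random.default_rng(0)
time_translated = rng.lognormal(mean=3.0, sigma=0.4, size=412)
time_english = rng.lognormal(mean=3.2, sigma=0.4, size=351)

# Completion times are typically right-skewed, so a rank-based test is safer
u, p_time = mannwhitneyu(time_translated, time_english)
print(f"Mann-Whitney U={u:.0f}, p={p_time:.3f}")
```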
Dr Alisú Schoua-Glusberg (Research Support Services) - Presenting Author
Cognitive testing of translated questionnaires can be considered part of translation assessment. By eliciting patterns of interpretation, we can assess not only the translation quality but also the validity of the translated questions, and see the extent to which they are interpreted as the designers of the source-language version intended.
There are different approaches to probing in cognitive testing. Some researchers prefer a highly scripted approach, with a protocol that lists a number of specific probes. They craft these probes based on their expert review of the instrument and an a priori selection of possibly problematic question formulations. Other researchers prefer to elicit a narrative that, by itself and supplemented by spontaneous probes, reveals how the respondent's answer to the survey question relates to the respondent's reality. This shows how the respondent interpreted the question and whether their response was appropriately selected to fit that reality. Both approaches, whether with highly scripted probes or with narrative elicitation and spontaneous probing, include specific probes asked to further elucidate the respondent's thinking.
Focusing on the specifics of the cross-cultural, multi-language context, this presentation will discuss a classification of probes based on whether the probe asks about the question itself (or some of its features) or about the respondent's answer. While this way of classifying probes is quite different from more traditional classifications, it can shed light on specific aspects of probing and on the right combination of probes to use in a comparative context when pretesting translated questions.
Dr Dorothée Behr (GESIS) - Presenting Author
Dr Katja Hanke (GESIS)
Back translation is one of the oldest testing or assessment methods for questionnaire translation (Brislin, 1970). Even though its shortcomings were already listed in seminal papers (Brislin, 1970), they have long been overlooked. While major survey programs nowadays no longer rely on back translation as an assessment method, it is still widely used in the field. There has been considerable discussion of the pros and cons of back translation, so knowledge of what the method can and cannot achieve is available, at least in theoretical terms; what is missing is empirical evidence on how it actually performs compared with other assessment methods. The health field has been active in this regard, producing evidence in favor of different kinds of team-based assessment approaches (e.g. Epstein et al., 2015; Hagell et al., 2010). However, this research is still scarce and does not include, for example, comparisons of the best-practice TRAPD approach with back translation. In this paper, first results of an experiment comparing TRAPD with back translation are presented.
Dr Lydia Repke (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Ms Brita Dorer (GESIS - Leibniz Institute for the Social Sciences)
To what extent do different translation approaches (i.e., close and adaptive) shape the data we collect in multinational survey projects? Are there specific topics or linguistic patterns where translators feel the need to adapt? Triggered by the common belief that close translation yields more comparable data than adaptive translation, this paper systematically examines the different translation versions, and the resulting responses, from a translation experiment conducted within the cluster project SERISS. For this experiment, three translation teams in Estonia and three in Slovenia were instructed to translate 60 English source items by applying both competing translation approaches. The source and translated questionnaires were fielded in the CROss-National Online Survey (CRONOS), Wave 5, with participants from Great Britain, Estonia, and Slovenia.
For the analysis, native speakers of both target languages (i.e., Estonian and Slovene) first assessed whether the translations were adaptive or close and provided explanatory back-translations into English. Next, they evaluated the overall translation potential of each source item (i.e., the theoretical translation space containing all possible and meaningful translations) on a 5-point Likert scale ranging from -2 (close) to 2 (adaptive). Finally, they assessed the translation score of each translation (i.e., the realized translation) on a 7-point Likert scale ranging from -3 (overly close) to 3 (overly adaptive). They did this separately for each question and answer scale. Based on this information, we analyzed the resulting translation patterns and combined them with the response data.
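To make the two-scale rating scheme concrete, the hypothetical sketch below encodes the ratings described above (translation potential from -2 to 2, translation score from -3 to 3) and flags translations rated at the extremes of the realized-translation scale. The record layout, field names, and flagging rule are illustrative assumptions, not the authors' actual coding scheme.

```python
# Hypothetical encoding of the rating scheme described in the abstract;
# field names and the flagging rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TranslationRating:
    item_id: str
    team: str                 # e.g. "EE-1" for the first Estonian team
    part: str                 # "question" or "answer_scale"
    potential: int            # -2 (close) .. 2 (adaptive), per source item
    score: int                # -3 (overly close) .. 3 (overly adaptive)

    def __post_init__(self):
        assert -2 <= self.potential <= 2, "translation potential out of range"
        assert -3 <= self.score <= 3, "translation score out of range"

def is_overly_translated(r: TranslationRating) -> bool:
    """Flag translations rated at either extreme (overly close or adaptive)."""
    return abs(r.score) == 3

ratings = [
    TranslationRating("B31", "EE-1", "question", potential=-1, score=0),
    TranslationRating("B31", "EE-2", "question", potential=-1, score=3),
]
flagged = [r for r in ratings if is_overly_translated(r)]
print(f"{len(flagged)} of {len(ratings)} translations flagged")
```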
Our preliminary analysis shows that it is not always possible to apply both approaches to all items. This speaks to the importance of not having a “one-size-fits-all” translation strategy. For example, not all items can be translated closely; instructing translators to do so anyway may lead to poor or incorrect translations. We are currently developing recommendations on how to handle adaptive translation approaches in multilingual surveys.