Response Option Order Effects in Cross-Cultural Contexts

Coordinator 1 | Dr Ana Villar (Facebook)
Coordinator 2 | Dr Yongwei Yang (Google)
The order of response options may affect how people answer rating scale questions. A response option order effect is present when changing the order of a rating scale's response options leads to differences in the distribution or functioning of individual questions or groups of questions. Theoretical accounts, notably satisficing, memory bias, and anchoring-and-adjustment, have been used to explain and predict this effect under different conditions. Recent advances in visual design principles concerning “interpretive heuristics” (especially “left and top mean first” and “up means good”) add insight into how the positioning of response options may affect answers when scales are presented visually. A number of studies have investigated the direction and size of response option order effects, but they present a complex picture, and most were conducted in a single cultural context.

The presence and extent of response option order effects may, however, depend on cultural factors in several ways. First, interpretive heuristics such as “left means first” may work differently under different reading conventions (e.g., right-to-left in Arabic or Hebrew). Furthermore, people in cultures with multiple primary languages (e.g., Hebrew and English) or multiple reading conventions (e.g., Japan, where text may be read horizontally left-to-right, top-down, or vertically top-down, right-to-left) may respond to positioning heuristics differently. Respondents from different countries may also have varying degrees of exposure and familiarity with a particular visual design. Considering how the internet is consumed across countries, with much web content presented left-to-right regardless of language, it is conceivable that heavy users of online content are less susceptible to the influence of reading conventions.

This session will bring together papers investigating rating scale response option order effects across countries with different reading conventions and industry norms for answer scale design, considering device of completion as a moderating factor. The research will consider the effects of vertical vs. horizontal display, survey topic area, number of scale points, age, gender, and education level. The papers will reflect a range of analytical approaches: distributional comparisons, latent structure modeling, and analysis of response latencies. The session will provide guidance on best practices for presenting rating scales on smartphones, as well as for comparative analysis of data obtained with different rating scale response option orders.
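As a simple illustration of the distributional comparisons mentioned above, the sketch below contrasts the response distributions obtained under two response-option orders (ascending vs. descending) using a chi-square test of independence. The data frame, column names, and values are hypothetical and are not drawn from the studies in this session.

```python
# Illustrative sketch: compare rating distributions between two
# response-option order conditions with a chi-square test.
# All data below are made up for demonstration purposes.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent-level data: one row per respondent, recording
# the scale order shown and the rating given on a 5-point scale.
responses = pd.DataFrame({
    "scale_order": ["ascending"] * 6 + ["descending"] * 6,
    "rating": [1, 2, 3, 4, 5, 5, 5, 4, 4, 3, 2, 5],
})

# Cross-tabulate rating by scale order and test whether the two
# distributions differ more than chance alone would suggest.
table = pd.crosstab(responses["scale_order"], responses["rating"])
chi2, p_value, dof, expected = chi2_contingency(table)

print(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```

In practice, such comparisons would be run per question and per country or language group, and complemented by the latent structure and response latency analyses noted above.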