Use of Eye Tracking in Survey Research

Convenor: Dr Timo Lenzner (GESIS - Leibniz Institute for the Social Sciences)
Eye tracking, the recording of people's eye movements, has recently emerged as a promising tool for studying the cognitive processes underlying question answering. Over the last few years, the method has been used in several research studies investigating issues of question wording and question layout, response order effects, and mode effects. Moreover, it has been argued that eye tracking could be useful for pretesting survey questions.
While these previous studies have highlighted the potential merits of eye-tracking methodology, they have also pointed out several challenges in collecting and interpreting eye-tracking data and have emphasized the need for additional research.
This session seeks to provide a forum for the latest research findings on the use of eye tracking in survey research. It aims to bring together survey researchers who conduct, or are considering conducting, eye-tracking studies and offers an opportunity to share experiences and insights on how to use eye tracking in survey research.
In survey research, numerous studies deal with best practices for the optimal design of rating scale items, which are commonly arranged in grid formats. Drawing on mostly indirect indicators of data quality, such as response times, nondifferentiation, and other response styles, various recommendations exist regarding the appropriate construction of rating scales (e.g., in terms of the number and labeling of scale points, or whether to offer a midpoint or a 'don't know' option). Field-experimental results concerning these design features, however, are often mixed, and the cognitive processes underlying respondents' answers to rating scales remain unclear. Eye tracking may help to gain a better understanding of the cognitive processing of rating scales and its interaction with different design features. In a lab experiment, university students from various disciplinary backgrounds (n=150) are randomly assigned to different rating scale designs presented as part of a Web questionnaire. By analyzing respondents' eye movements, different scale formats (e.g., 5-point vs. 9-point scale, with or without a midpoint, with or without a nonsubstantive option) are compared with respect to the amount of attention and cognitive effort respondents spend on processing the various components of the rating scale. The eye-tracking data are assessed to ascertain whether different design properties of rating scales evoke differences in respondents' visual attention and scanpaths that can ultimately account for differences in the survey responses they provide.
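To make this kind of analysis concrete, the following minimal Python sketch shows one way fixations could be aggregated per area of interest (AOI) to compare dwell time and fixation counts across scale designs. The data layout, AOI names, and coordinates are illustrative assumptions, not the study's actual pipeline.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen coordinates in pixels (assumed format)
    y: float
    duration_ms: float

# Hypothetical AOIs for one rating-scale item: (left, top, right, bottom).
AOIS = {
    "question_text": (50, 100, 700, 180),
    "response_scale": (50, 200, 700, 260),
}

def aoi_of(fix: Fixation):
    """Return the AOI a fixation falls into, or None if outside all AOIs."""
    for name, (l, t, r, b) in AOIS.items():
        if l <= fix.x <= r and t <= fix.y <= b:
            return name
    return None

def attention_per_aoi(fixations):
    """Aggregate total dwell time and fixation count per AOI."""
    stats = {name: {"dwell_ms": 0.0, "count": 0} for name in AOIS}
    for fix in fixations:
        name = aoi_of(fix)
        if name is not None:
            stats[name]["dwell_ms"] += fix.duration_ms
            stats[name]["count"] += 1
    return stats

# Example: compare dwell time on the response scale between two hypothetical
# respondents assigned to different conditions (e.g., 5-point vs. 9-point).
five_point = [Fixation(300, 220, 250), Fixation(400, 230, 180)]
nine_point = [Fixation(300, 220, 310), Fixation(500, 240, 290), Fixation(350, 225, 200)]
print(attention_per_aoi(five_point)["response_scale"])
print(attention_per_aoi(nine_point)["response_scale"])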
In questionnaire pretesting, cognitive interviewing is a well-established method and eye tracking a promising new one. Both, however, exhibit weaknesses when applied individually. These shortcomings might be mitigated if the two methods are used in combination.
We investigate this conjecture in an experimental study with two conditions. In the control condition (n=42), a cognitive interview was conducted using a standardized interview protocol that included pre-defined probing questions for about one fourth of the questions in a 52-item questionnaire. In the experimental condition (n=42), participants' eye movements were tracked while they completed an online version of the questionnaire, and their reading patterns were simultaneously monitored for evidence of response problems. After completing the online survey, a cognitive interview was conducted using the same interview protocol as in the control condition. In both conditions, probing questions beyond those specified in the interview protocol were asked only if participants showed signs of difficulty in answering a question (e.g., a long silence) or if peculiar reading patterns were observed during the eye-tracking session (e.g., re-readings of specific words or text passages).
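As an illustration of how such peculiar reading patterns might be flagged, the sketch below marks AOIs (e.g., words or text passages) that the gaze re-enters repeatedly in an ordered fixation sequence. The data format and threshold are assumptions for illustration, not the study's operationalization.

def detect_rereadings(aoi_sequence, min_revisits=2):
    """Return AOI ids the gaze returned to at least `min_revisits` times
    after first leaving them (entries = returns + 1)."""
    visits = {}
    last_aoi = None
    for aoi in aoi_sequence:
        if aoi != last_aoi:                 # count re-entries, not dwelling
            visits[aoi] = visits.get(aoi, 0) + 1
            last_aoi = aoi
    return {aoi for aoi, n in visits.items() if n > min_revisits}

# Example: the gaze enters AOI 3 three times, so it is flagged as a
# potential problem that could prompt an additional probing question.
sequence = [1, 2, 3, 4, 3, 5, 3, 6]
print(detect_rereadings(sequence))  # {3}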
Two main questions are addressed: First, do both approaches identify the same questions as problematic? Second, do they reveal the same or different types of problems? Preliminary findings indicate that combining eye tracking and cognitive interviewing is effective for evaluating and improving survey questions.
In recent years, there has been an increase in experimental and quasi-experimental research in the social sciences. Alongside this development, more and more researchers use factorial surveys to measure complex constructs such as attitudes. While there is already some methodological research on factorial surveys, many questions could not be answered with the methods used so far. This research shows that the number of dimensions and the number of vignettes per respondent influence the quality of the measured attitudes, but questions about order effects of the vignette dimensions remain unanswered. Eye-tracking studies can help to overcome this problem. Using eye tracking to gather data on respondents' attention while they respond to a vignette module enables researchers to optimize the presentation of vignettes with regard to both the order of dimensions and the order of vignettes. Defining vignette dimensions as areas of interest allows the researcher to reconstruct respondents' attention patterns and to measure which specific dimensions influence their judgments. First insights on how to optimize the presentation of vignettes, based on eye-tracking data from a vignette experiment with students focusing on income justice, will be presented. Theoretical considerations as well as first experimental results and implications for further research will be discussed.
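The following sketch shows, under assumed data structures, how an attention pattern over vignette dimensions could be reconstructed from a sequence of AOI hits: the order in which dimensions are first fixated and the transitions between them. The dimension names and sequence are hypothetical.

from collections import Counter

def first_fixation_order(dim_sequence):
    """Order in which vignette dimensions are first fixated."""
    seen, order = set(), []
    for dim in dim_sequence:
        if dim not in seen:
            seen.add(dim)
            order.append(dim)
    return order

def transition_counts(dim_sequence):
    """Count gaze transitions between consecutive, distinct dimensions."""
    return Counter(
        (a, b) for a, b in zip(dim_sequence, dim_sequence[1:]) if a != b
    )

# Hypothetical AOI-hit sequence for one respondent reading one vignette
# on income justice (dimensions such as occupation, income, household size).
seq = ["occupation", "income", "occupation", "household", "income"]
print(first_fixation_order(seq))   # ['occupation', 'income', 'household']
print(transition_counts(seq))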
At the ESRA 2011 conference, we presented findings from an eye-tracking experiment that examined whether placing the input fields (i.e., radio buttons or check boxes) to the left or to the right of the answer options in closed-ended questions enhances usability and facilitates the response process. The results indicated that respondents required less cognitive effort (operationalized by fixation times, fixation counts, and the number of gaze switches between answer options and answer boxes) to select an answer when the input fields appeared to the left of the answer options. In that experiment, however, the white space between answer boxes and answer options was not equal across conditions: while the boxes were close to the answer options in the "left" condition, more space separated the boxes and the answer options in the "right" condition, mimicking the layout of paper-based questionnaires. To alleviate this potential confound and to strengthen the internal validity of the experiment, we re-ran it with a larger sample (N=81), keeping the distance between boxes and answer options identical across conditions. Respondents were randomly assigned to one of three layouts: answer boxes to the left of left-aligned answer options, answer boxes to the right of left-aligned answer options, or answer boxes to the right of right-aligned answer options. In the analyses, we will again examine the cognitive effort indicators mentioned above and discuss the new findings in light of our previous results.
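For concreteness, the sketch below shows one plausible way the gaze-switch indicator could be computed from a sequence of AOI hits. The AOI labels and the example sequence are illustrative assumptions, not the authors' implementation.

def count_gaze_switches(aoi_hits, a="option", b="box"):
    """Count direct transitions between the two AOI types, in either direction,
    ignoring hits on other screen regions."""
    relevant = [h for h in aoi_hits if h in (a, b)]
    return sum(1 for x, y in zip(relevant, relevant[1:]) if x != y)

# Example: a respondent alternates between reading an answer option and
# locating its input field; more switches indicate higher cognitive effort.
hits = ["option", "box", "option", "option", "box", "other", "box"]
print(count_gaze_switches(hits))  # 3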