Benefits and Challenges of Open-ended Questions 1

Chair: Dr Evi Scholz (GESIS)
Coordinator 1: Mrs Cornelia Zuell (GESIS)
Equivalence in survey design and implementation is one of the core issues in cross-national survey research. Equivalence is required at various stages of the design and implementation process and may relate, for example, to sampling, to survey mode, or to the understanding of question texts and answer scales.
Construct equivalence deals with the theoretical validity of concepts measured by survey questions and item batteries. Construct equivalence is a prerequisite for meaningful cross-national analyses and comparisons, where respondents are socialized in different political, social and cultural contexts; the same interpretation of concepts can therefore not be taken for granted.
Construct equivalence can be tested in several ways, for example by country-specific expert judgement, by focus groups, by cognitive interviews, by statistical tests of item batteries, or by asking respondents, via open-ended questions, what they associate with the terms in question.
This paper addresses the construct equivalence of the left-right scale in a cross-national perspective. The left-right scale is a standard question used in many surveys to measure ideological orientation in a minimalist way. However, the theoretical concepts related to left and right might differ across countries. Variation in the understanding of left and right is a problem for survey research if it varies systematically with other variables and across contexts. Systematically different understanding might result in incomparable self-placements on the left-right scale and thus challenge its validity.
While cognitive or focus group interviews are valuable sources for identifying comprehension problems, they might not be sufficient to find all problems because of the low number of interviewees. Using open-ended questions in a survey with several hundred respondents offers additional options.
To test for construct equivalence and whether the left-right scale is understood in a similar way in a cross-national context, we asked about respondents’ individual associations with the terms left and right, using open-ended probe questions in an experimental online survey fielded in Canada, Denmark, Germany, Hungary, Spain, and the U.S. in 2011, with more than 3,800 respondents in total. We automatically coded the open-ended answers using an extensive coding scheme covering more than 250 different aspects associated with left and right. We then tested whether the same empirical relations and ideological dimensions can be found across countries; similarity in this respect is interpreted as evidence supporting the hypothesis of measurement equivalence.
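As a minimal sketch of how such dictionary-based automatic coding of open-ended answers might work (the categories, keywords and function name below are illustrative assumptions, not the authors’ actual coding scheme):

    # Illustrative sketch: dictionary-based coding of open-ended answers.
    # The categories and keywords are invented examples; the real coding
    # scheme covers more than 250 aspects and is not reproduced here.
    CODING_SCHEME = {
        "social_justice": ["equality", "welfare", "solidarity"],
        "economic_liberalism": ["free market", "low taxes", "competition"],
        "tradition": ["family values", "religion", "nation"],
    }

    def code_answer(answer: str) -> set[str]:
        """Return the set of category codes whose keywords occur in the answer."""
        text = answer.lower()
        return {
            category
            for category, keywords in CODING_SCHEME.items()
            if any(keyword in text for keyword in keywords)
        }

    # Example: code one respondent's association with "left".
    print(code_answer("For me, left means equality and a strong welfare state."))
    # -> {'social_justice'}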
In a first step of the cross-national analyses we concentrate on the ranking of answer frequencies and on the link between left-right self-placement and the open-ended questions.
Results of this analysis show that respondents from different countries do not have the same ideas in mind when considering what left and right mean to them. These results challenge a direct comparison of responses to the left-right scale across countries: because responses have different meanings in different cultural contexts, conclusions based on such comparisons might be wrong.
Young Life and Times (YLT) and Kids Life and Times (KLT) are annual cross-sectional postal surveys of 16-year-olds and 10/11-year-olds respectively, undertaken in Northern Ireland since 2003. Both surveys are run by ARK, a joint initiative between the two universities in Northern Ireland, and are widely used by government and voluntary sector organisations to monitor policy and young people’s attitudes on a wide range of issues.
This presentation uses YLT and KLT survey findings to discuss the methodological issues associated with identifying the true extent of caring among children and young people in public attitudes surveys. We focus on question wording to highlight some of the challenges involved and the approaches taken to address this important issue.
Questions about the extent and nature of caring by children and young people were included in YLT in 2010 and in KLT in 2011. Responses indicated that a higher percentage of younger children than older adolescents identified themselves as young carers with caring responsibilities. Investigation of the open-ended responses describing the tasks carried out by KLT respondents who defined themselves as carers indicated that their understanding of what constituted ‘caring’ might not fall within the definition of a ‘young carer’ that the survey wished to capture. It was therefore decided that any future questions on caring would involve young carers in the survey design.
This was the case when caring questions were included in YLT and KLT in 2015. A group of young carers was consulted on how best to introduce the questions on caring and on the actual question wording, so that what we meant by ‘young carers’ would be more clearly understood by both 16-year-olds and 10/11-year-olds. The suggestions put forward by the young carers were used to refine the questions included in the 2015 KLT and YLT surveys, and to formulate an introduction to the caring questions suitable for both age groups.
Despite the consultation with the young carers group, responses indicated that a lack of clarity about what constitutes a young carer remained. This was reflected in an even greater number of younger children than older adolescents identifying as young carers in 2015, compared with the earlier surveys. This presentation reflects on these findings.
There is growing interest in factors that contribute to vulnerability across the life course, yet the concept of vulnerability is difficult both to define and to measure. A person is vulnerable when they are at risk of experiencing a source of stress (e.g., a major life event) while lacking the resources to cope and recover. For this reason, open-ended questions work particularly well, offering deep insight into the experiences that mark a person’s life course. Still, their use in surveys poses methodological challenges, placing greater cognitive demands on respondents than most closed-ended questions. This may lead some respondents to skip the question or to give only cursory responses. The mode of data collection also has an impact on the reporting of sensitive information: for example, respondents often report fewer negative events when talking to an interviewer than when completing the questionnaire themselves, while self-administered modes require greater effort to respond carefully and honestly. Such effects could interact with respondent characteristics to either exacerbate or attenuate differences in responses across modes.
Our aim in this paper is to investigate the impact that mode effects have on the quality of responses given to sensitive open-ended questions. Using data from a mixed-mode experiment, we examine differences in the length and the detail of respondents’ answers across web, paper and telephone modes. Moreover, we explore whether respondents who skip open-ended questions differ significantly from those who respond, and whether the content is substantively different (more positive or more negative) across modes. Preliminary results show that the telephone sample has a lower level of item nonresponse than the self-completion samples. In addition, telephone respondents give less complete answers that tend to be more positive than those given by respondents in other modes. We discuss these results in relation to respondents’ characteristics.
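A rough illustration of how answer length and item nonresponse might be compared across modes (the data frame and column names below are assumptions made for the example, not the study’s actual variables):

    import pandas as pd

    # Illustrative data: interview mode and verbatim open-ended answer;
    # None marks item nonresponse. Column names are assumed for the example.
    df = pd.DataFrame({
        "mode":   ["telephone", "web", "paper", "telephone", "web"],
        "answer": ["My mother fell ill.", None, "Lost my job and moved twice.",
                   "Nothing special.", "Divorce, then a long illness."],
    })

    df["nonresponse"] = df["answer"].isna()
    # Answer length in words; missing answers stay NaN and are ignored in the mean.
    df["length_words"] = df["answer"].str.split().str.len()

    # Item-nonresponse rate and mean answer length by mode.
    print(df.groupby("mode")[["nonresponse", "length_words"]].mean())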
Declining response rates clearly show that people’s willingness to respond to surveys is falling. But does that also mean that people’s engagement when they do take part in surveys is lower? One of the most common comments about a questionnaire on young people’s life and health was a wish for more open-ended questions, to give respondents the opportunity to explain and reflect more deeply on the issues covered by the questionnaire.
Open-ended questions may shift some of the power from the researcher to the respondent and give the respondent better opportunities to provide correct and truthful information. This may have a positive effect on the relationship between respondent and researcher. But it may also increase the response time and the cognitive burden for respondents. The researcher gains qualitative information that may increase the value of the survey results but that also requires more effort to analyze.
This presentation will give examples of open-ended questions used in two different surveys answered by more than 30,000 young people and adults. What kinds of open-ended questions were most successful, and how did respondents react and respond to them? Did people respond differently to an identical open-ended question in 2002 and in 2016? Did open-ended questions work differently, or give different answers, in paper and web formats?