Effect of Respondent’s Age on Survey Research
Session Organiser | Dr Susanne Vogl (University of Vienna)
Time | Wednesday 17th July, 09:00 - 10:30 |
Room | D23 |
Insights into survey methodology are often based on middle-aged, middle-class, white, and well-educated members of “mainstream” society. However, in times of globalization, increased international mobility, and aging Western societies, the effects of respondent characteristics on different stages of the survey lifecycle have to be considered. Arguably, respondents’ age can affect survey research in multiple ways: cognitive, verbal, and interactive skills change over the life course and thus affect the question-answer process. Moreover, lifestyles, living conditions, and position within the societal hierarchy also change and affect the definition of target populations, sampling frames, recruitment procedures, interview modes, interviewer behaviour, and so on.
The session aims to bring together expertise on age-related changes in cognitive and communicative processes, as well as motivational and lifestyle changes, and their interrelation with research method effects. Age-related differences in cognitive functioning, memory, text comprehension, communication, and speech exert differential method effects for younger and older respondents. Furthermore, changes in sensory, cognitive, and motivational aspects over the life course go hand in hand with different effects of research instruments and settings and give rise to age-sensitive context effects.
Presentations can range from laboratory research to field research, re-analyses of existing data sets, meta-analyses, or theoretical discussions. Contributions can focus on one specific age group, such as children or the elderly, or can be based on comparisons across the life course. We particularly welcome contributions on, but not limited to: age effects on cognitive processes; data quality (e.g., question order and response effects, interviewer bias, social desirability, satisficing); item and unit non-response; and motivation for survey participation. We are also interested in the interrelation of age with mode effects and interviewer characteristics, as well as in innovative approaches to adapting methods to specific age groups.
We hope to stimulate a fruitful discussion to gain a better understanding of age-related effects on the applicability of survey research methods and techniques.
Mr Patrick Lazarevic (Vienna Institute of Demography) - Presenting Author
Background: A key foundation of any substantial research is the comparability of its measurements. In particular, survey research utilizing self-rated health (SRH), a widely used generic health measure, might be prone to biases resulting from a lack of measurement invariance, e.g., due to age-specific health standards or aspirations. Yet, it is unclear how respondents incorporate health information into an overall rating and how this process is affected by age. This paper proposes a theoretical model describing the cognitive process of health ratings and uses it to investigate the influence of age on SRH.
Method: Using data from 15,178 Canadian NPHS respondents from the general population and 61,027 SHARE participants aged 50 years and older, SRH is analyzed with linear regression models. The independent variables are grouped into five health domains: functioning, diseases, pain, depression, and behavior. Via dominance analysis, their individual contributions to explaining SRH are compared across gender, age groups, and countries.
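A minimal sketch of the grouped dominance analysis described above, under the assumption of a pandas DataFrame `df` with a numeric `srh` column and illustrative predictor columns for the five health domains (all variable and column names are hypothetical): each domain’s general dominance weight is its incremental R² averaged first within and then across subsets of the remaining domains.

```python
# Sketch of a dominance analysis over grouped predictors (hypothetical column names).
from itertools import combinations

import statsmodels.api as sm

# Hypothetical grouping of predictor columns into the five health domains.
DOMAINS = {
    "functioning": ["adl_limitations", "mobility_problems"],
    "diseases":    ["n_chronic_conditions"],
    "pain":        ["pain_level"],
    "depression":  ["depression_score"],
    "behavior":    ["smoking", "physical_inactivity"],
}

def r_squared(df, predictors):
    """R² of an OLS regression of SRH on the given columns (0 for the empty model)."""
    if not predictors:
        return 0.0
    X = sm.add_constant(df[list(predictors)])
    return sm.OLS(df["srh"], X).fit().rsquared

def general_dominance(df):
    """Average incremental R² of each domain, averaged within and then across subset sizes."""
    weights = {}
    for domain, columns in DOMAINS.items():
        others = [d for d in DOMAINS if d != domain]
        size_means = []
        for k in range(len(others) + 1):
            increments = [
                r_squared(df, [c for d in subset for c in DOMAINS[d]] + columns)
                - r_squared(df, [c for d in subset for c in DOMAINS[d]])
                for subset in combinations(others, k)
            ]
            size_means.append(sum(increments) / len(increments))
        weights[domain] = sum(size_means) / len(size_means)
    return weights
```

Running `general_dominance` separately on subsamples defined by gender, age group, and country would yield the per-domain contributions that the abstract compares.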
Results: Overall, functioning and diseases were the two main contributors to explaining SRH. While there were hardly any gender differences in how health domains influenced SRH, there were some notable differences by age: among Canadians, the importance of functioning, diseases, and pain for health ratings increased steadily with age. For older Europeans, however, functioning was more important at older ages, while diseases lost part of their relevance with age and the impact of pain on SRH was relatively stable. The relevance of health behaviors decreased with age in both surveys, and the weight of mental health, albeit smaller in Canada, was rather stable.
Conclusion: This paper demonstrates that SRH is highly susceptible to age-specific response behaviors among both Canadians and Europeans. This suggests that SRH, when taken at face value, may produce biased results in age comparisons.
Ms Danuta Zyczynska-Ciolek (Institute of Philosophy and Sociology, Polish Academy of Sciences) - Presenting Author
Dr Marta Kolczynska (Institute of Philosophy and Sociology, Polish Academy of Sciences)
Many survey questionnaires include, at the end, a set of questions for interviewers. For example, the European Social Survey (ESS) has, from its very beginning, contained the following question addressed to interviewers: 'Overall, did you feel that the respondent understood the questions?'. In our paper, we use data from ESS Round 8 to investigate how interviewers answer this question. In the first part, we analyse general differences between European countries in this respect. In the second part, we focus on interviewers’ assessments of elderly respondents’ ability to understand the questions. The ESS offers a convenient opportunity to conduct such an analysis, as there is no upper age limit for participants: the oldest respondent in 2016 was 100 years old. The ESS data set also contains interviewer characteristics that we use as control variables.
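A minimal sketch of the descriptive part of such an analysis, assuming an ESS Round 8 extract loaded into a pandas DataFrame; the file name, the column names (`country`, `resp_age`, `understood`), and the coding of the interviewer item are all hypothetical placeholders, not the actual ESS variable names.

```python
# Descriptive sketch: interviewer-rated question comprehension by country and respondent age
# (hypothetical file and column names for an ESS Round 8 extract).
import pandas as pd

ess8 = pd.read_csv("ess8_extract.csv")  # hypothetical extract

# Band respondents into age groups, with an open-ended 75+ category.
ess8["age_group"] = pd.cut(
    ess8["resp_age"],
    bins=[14, 29, 44, 59, 74, 120],
    labels=["15-29", "30-44", "45-59", "60-74", "75+"],
)

# Share of interviews in which the interviewer judged that the respondent
# understood the questions, by country and age group.
ess8["understood_all"] = ess8["understood"].eq("understood all")  # hypothetical coding
share = (
    ess8.groupby(["country", "age_group"])["understood_all"]
        .mean()
        .unstack("age_group")
)
print(share.round(2))
```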
Dr Andraž Petrovčič (University of Ljubljana, Faculty of Social Sciences) - Presenting Author
Dr Alexander van Deursen (University of Twente, Department of Communication Science)
Dr Vesna Dolničar (University of Ljubljana, Faculty of Social Sciences)
Mr Tomaž Burnik (University of Ljubljana, Faculty of Social Sciences)
Dr Darja Grošelj (University of Ljubljana, Faculty of Social Sciences)
The ever-evolving internet technology and its recently mounting adoption among older adults have made the evaluation of internet skills in general social surveys not only an important research topic but also a policy-oriented objective for digital inclusion initiatives. While prior research has advanced the field by developing various theoretical frameworks of digital skills (i.e., digital literacy theory), considerably less attention has been given to the empirical validation of survey measures for the self-assessment of internet skills in different age groups. In particular, little empirical evidence exists about the criterion validity and measurement invariance of internet skills scales, which are of utmost importance when comparing levels of skills between population groups.

Hence, using data from the 2018 Slovenian Public Opinion survey, conducted as part of the ISSP in Slovenia, the aim of this study was twofold: (1) to determine the construct and criterion validity of the Internet Skills Scale (ISS; van Deursen et al. 2016) and (2) to test its measurement invariance between younger and older internet users. The short ISS is a 20-item reflective construct measuring four types of internet skills (Operational, Information Navigation, Social, Creative) on a 5-point Likert-type scale. While high construct validity estimates for ISS scores have been obtained in the past, their criterion validity and measurement equivalence for different age groups have yet to be determined.

The results of the confirmatory factor analysis in this study showed excellent internal consistency of the ISS. Moreover, Operational and Information Navigation scores demonstrated adequate criterion validity when correlated with the corresponding types of internet use. Interestingly, the data supported configural and metric invariance, whereas scalar equivalence between younger and older internet users was refuted for Creative skills. These findings are discussed in the context of providing specific recommendations for further instrument refinement in terms of scale items and response options.
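The multi-group CFA and invariance tests described above are usually estimated in dedicated SEM software. As a minimal, simplified illustration of two of the building blocks mentioned (internal consistency and criterion validity), the sketch below computes Cronbach’s alpha per ISS subscale and correlates simple subscale mean scores with a matching internet-use measure; all column names, and the five-items-per-subscale layout, are assumptions made purely for illustration.

```python
# Simplified checks of internal consistency and criterion validity
# (hypothetical column names; not the full multi-group CFA described in the abstract).
import pandas as pd

# Hypothetical ISS items per subscale and a matching internet-use criterion column.
SUBSCALES = {
    "operational":     (["iss_op1", "iss_op2", "iss_op3", "iss_op4", "iss_op5"], "use_operational"),
    "info_navigation": (["iss_in1", "iss_in2", "iss_in3", "iss_in4", "iss_in5"], "use_information"),
    "social":          (["iss_so1", "iss_so2", "iss_so3", "iss_so4", "iss_so5"], "use_social"),
    "creative":        (["iss_cr1", "iss_cr2", "iss_cr3", "iss_cr4", "iss_cr5"], "use_creative"),
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    items = items.dropna()
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def validity_report(df: pd.DataFrame) -> pd.DataFrame:
    """Alpha and criterion correlation for each subscale."""
    rows = []
    for name, (items, criterion) in SUBSCALES.items():
        score = df[items].mean(axis=1)  # simple mean score per respondent
        rows.append({
            "subscale": name,
            "alpha": cronbach_alpha(df[items]),
            "r_with_use": score.corr(df[criterion]),
        })
    return pd.DataFrame(rows)
```

Running `validity_report` separately for younger and older internet users gives a first descriptive comparison; the configural, metric, and scalar invariance tests reported in the abstract still require a formal multi-group CFA.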
Dr Sven Stadtmüller (Frankfurt University of Applied Sciences) - Presenting Author
Mrs Andrea Giersiefen (Frankfurt University of Applied Sciences)
Mr Robert Lipp (Frankfurt University of Applied Sciences)
The repeated measurement of attitudes and behaviors in longitudinal studies poses some serious problems not present in cross-sectional surveys. Apart from selective panel attrition, changes in the observed values from one wave to another may be (partly) the result of the repeated measurement of the very same questions, known as panel conditioning effects. These effects may also affect data quality: respondents who participate in the very same survey repeatedly may, for instance, grow increasingly bored with the questions and be inclined to reduce their burden, either by manipulating the survey instrument (e.g., by answering filter questions in a way that purposefully avoids follow-up questions), by employing response styles, or by simply skipping questions. As a result, the quality of the surveyed data may decrease in subsequent panel waves. On the other hand, data quality may also increase, since respondents gain a better understanding of the meaning of the questions in later waves. Such “age effects” may hold particularly true for adolescents: as they grow older, they may become more knowledgeable and therefore better able to answer the survey questions.
Our contribution aims at analyzing the effects of repeated measurement on data quality in a longitudinal survey of adolescents. Data come from the German survey “Health Behaviour and Injuries During School Age”, a panel survey of roughly 10,000 pupils. We started surveying these pupils in the 5th grade and track them until they are in the 10th grade. Our analysis covers the first four annual waves (ages 11 to 15) and relies on various indicators of data quality (e.g., item non-response, response styles, response latencies, and measures of scale reliability) in order to test whether data quality is affected positively or negatively when adolescents are surveyed repeatedly.
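As an illustration of how two of the data-quality indicators mentioned above could be computed per wave, the sketch below derives an item non-response rate and a crude straight-lining (response-style) flag from a hypothetical long-format pupil panel file; the file name and all column names are assumptions.

```python
# Sketch: per-wave item non-response and straight-lining rates
# (hypothetical long-format panel data, one row per pupil and wave).
import pandas as pd

panel = pd.read_csv("pupil_panel_long.csv")  # hypothetical file
item_cols = [c for c in panel.columns if c.startswith("item_")]  # hypothetical item battery

# Item non-response: share of missing answers across the item battery.
panel["item_nonresponse"] = panel[item_cols].isna().mean(axis=1)

# Straight-lining: identical answers across the whole battery (a crude response-style flag).
panel["straightlining"] = panel[item_cols].nunique(axis=1).eq(1)

quality_by_wave = panel.groupby("wave").agg(
    mean_item_nonresponse=("item_nonresponse", "mean"),
    share_straightlining=("straightlining", "mean"),
)
print(quality_by_wave.round(3))
```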
Mrs Annette Trahms (Institute for Employment Research) - Presenting Author
Achieving panel participation that is as high as possible is one of the great challenges in longitudinal surveys. Respondents’ experience of the previous interview is one of the main determinants of survey participation in subsequent waves (Laurie et al., 1999). The aim of this paper is to analyse the effect of interview length, as one aspect of respondents’ experience with previous interviews, on panel participation.
In general, the methodological literature suggests reducing interview length, since time-consuming interviews increase the respondents’ burden and decrease the respondents’ cooperation (Dillman et al., 2009; Groves et al., 2009; Schnell, 2012). However, Lynn (2014) as well as Liebeskind et al. (2017) find no effect of interview length on subsequent survey participation. In a similar vein, Kleinert, Christoph & Ruland (2015) report that varying the duration of cognitive testing during a survey has no direct impact on subsequent survey participation.
The analysis is based on data from the National Educational Panel Study (NEPS) adult starting cohort, which has been surveyed annually since 2009. The sample was drawn from residents’ registration offices and represents individuals living in private households in Germany who were born between 1944 and 1986.
Using this data set, we are able to measure the effect of interview length on panel participation over a panel duration of eight waves. First results show that older respondents participate in subsequent panel waves with a higher probability. Furthermore, especially for older cohorts, a small positive effect of interview length on panel participation is found.
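A minimal sketch of the kind of model that could underlie such results, assuming a person-wave data set with hypothetical variables for participation in the next wave, interview length in minutes, birth cohort, and wave; this is an illustrative specification, not the authors’ actual model.

```python
# Sketch: model of next-wave participation as a function of interview length
# (hypothetical person-wave data and variable names).
import pandas as pd
import statsmodels.formula.api as smf

pw = pd.read_csv("neps_person_waves.csv")  # hypothetical extract

# Logistic regression of participation in the next wave on interview length,
# birth cohort, and wave; interview length is interacted with cohort so that
# its effect may differ for older cohorts.
model = smf.logit(
    "participated_next ~ interview_minutes * C(birth_cohort) + C(wave)",
    data=pw,
).fit()
print(model.summary())
```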