Comparative Survey Research Methodology using the European Social Survey
Chair | Dr Kathrin Thomas (City, University of London)
Coordinator 1 | Professor Rainer Schnell (City, University of London)
The use of the Total Survey Error (TSE) framework reflects that the precision of a survey estimate does not depend only on problems related to sample size and non-response, but also on non-sampling errors. While it is hardly feasible to apply the TSE model in practice, since it is difficult to separate the individual error components in the available data, it is nevertheless possible to estimate some elements of the TSE framework indirectly.
One approach is the use of design effects: design effects describe the increase in estimated standard errors relative to those estimated on the basis of a simple random sample of the same size (Kish, 1965). They can be estimated from the interviewer intra-class correlation coefficient, which in turn captures the homogeneity of responses within the set of respondents assigned to an interviewer. However, when looking at an individual survey it is difficult to determine whether response homogeneity is due to the spatial (social) homogeneity of the primary sampling unit (PSU) or to the interviewer.
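For reference, the textbook approximation linking this design effect to the interviewer intra-class correlation (following Kish, 1965) can be sketched as

\[ \mathrm{deff}_{\mathrm{int}} = 1 + (\bar{b} - 1)\,\rho_{\mathrm{int}}, \]

where \(\bar{b}\) denotes the average number of respondents assigned to an interviewer and \(\rho_{\mathrm{int}}\) the interviewer intra-class correlation, i.e. the share of total response variance attributable to interviewers; the notation is generic textbook notation and not taken from the paper itself.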
Variance induced by the interviewer is at least partially due to formal question characteristics (Fowler, 1991; Fowler & Mangione, 1990; Mangione, Fowler, & Louis, 1992; Schnell & Kreuter, 2005): for instance, studies have identified differences between open- versus closed-ended, factual versus attitudinal, sensitive versus non-sensitive, and easy versus difficult questions.
This paper relies on the design effects calculated for 28 ESS countries using all ESS rounds, merged with a classification of the formal question characteristics of the ESS core module based on an improved scheme of the above-mentioned attributes. The data are analysed with multi-level models predicting interviewer homogeneity.
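As a minimal sketch of how such an interviewer-level multi-level model could be specified (illustrative only; the data frame df and the variable names item_score, question_type and interviewer_id are hypothetical placeholders, not the paper's actual coding):

# Minimal sketch: random-intercept model with interviewers as the grouping factor,
# used to estimate the interviewer intra-class correlation for a single item.
# Assumes a pandas DataFrame `df` with columns item_score, question_type, interviewer_id.
import pandas as pd
import statsmodels.formula.api as smf

def interviewer_icc(df: pd.DataFrame) -> float:
    # Fixed effect for a formal question characteristic, random intercept per interviewer.
    model = smf.mixedlm("item_score ~ question_type", data=df,
                        groups=df["interviewer_id"])
    result = model.fit()
    var_interviewer = result.cov_re.iloc[0, 0]  # between-interviewer variance
    var_residual = result.scale                 # residual (within-interviewer) variance
    return var_interviewer / (var_interviewer + var_residual)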
The results indicate varying degrees of design effects across ESS countries and rounds, which point to strong and worrying interviewer effects.
In the countries participating in the European Social Survey, three main types of survey sample have been used: (1) individual-name samples, (2) household samples and (3) address samples. This distinction stems from the characteristics of the sampling frames available in those countries and influences the fieldwork phase of the research. While individual-name frames allow the selection of individuals uniquely identified by name, household frames require a within-household selection of target persons among the individuals inhabiting the sampled households; frames consisting of addresses of buildings additionally require a randomised selection of apartments followed by a within-household selection of individuals.
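As an illustration of the within-household selection step described above (a sketch under the assumption of a simple equal-probability rule; the ESS prescribes its own selection procedures, and the names below are hypothetical):

# Illustrative only: equal-probability selection of one target person within a
# household, the step whose correct execution by interviewers is hard to verify
# in household and address samples.
import random

def select_target_person(household_members, seed=None):
    """Pick one eligible household member with equal probability."""
    rng = random.Random(seed)
    return rng.choice(household_members)

# In a two-person household, each member should be selected with probability 0.5.
print(select_target_person(["person_A", "person_B"], seed=42))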
However, the main challenge in the fieldwork execution of address and household samples lies in the limited capacity for effective control over the quality of interviewers' work, especially with respect to the quality of the selection of target persons. Supervision is mainly constrained by the fact that, unlike in the case of individual-name samples, it is not enough to ascertain that the interview was conducted with the designated person; one must also corroborate that the interviewer selected the person who should have been selected. Thus, in household and address samples there is a markedly higher risk of illegal substitution, i.e., the practice (prohibited in the ESS) of replacing the persons who should have been selected with those who are more available (e.g., because they are more often at home) and more willing to participate. The risk of illegal substitution also exists in individual-name samples; yet, given that the respondent is known by name, such substitution requires outright cheating on the part of the interviewer.
Our paper aims to explore the relationship between sample type and the quality of fieldwork execution. The empirical analyses will be based on the first six waves of the European Social Survey. Sample quality (the prevalence of substitutions) will be analysed according to the procedure proposed by Sodeur (1996) and Kohler (2007), which consists of evaluating the statistical significance of the difference between the fraction of women living with a heterosexual partner in two-person households and the yardstick fraction of 50% of women living in such households. In our presentation we will report a meta-analysis based on the assumptions of Borenstein et al. (2007) in order to compare the selection bias and effect sizes occurring in the three groups of countries distinguished by their sample types. We will also investigate the relationship between selection bias and survey outcome rates, demonstrating the different character of this relationship in samples of different types. We will provide evidence that illegal substitutions are much more prevalent in address and household samples than in individual-name samples. The use of the former therefore requires supervision not only of the quality of the interviews conducted, but also of the standards of within-unit selection of target individuals by interviewers.
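The core of the Sodeur (1996) / Kohler (2007) check can be sketched as a simple test of a proportion against the 50% benchmark (the counts below are hypothetical, not ESS results):

# Sketch of the Sodeur/Kohler benchmark test: among respondents living with a
# heterosexual partner in a two-person household, about 50% should be women if
# within-household selection was carried out correctly. Counts are hypothetical.
from scipy.stats import binomtest

n_respondents = 1200  # respondents from two-person heterosexual-partner households
n_women = 690         # of whom women

test = binomtest(n_women, n_respondents, p=0.5, alternative="two-sided")
print(f"share of women: {n_women / n_respondents:.3f}, p-value: {test.pvalue:.4f}")
# A share significantly above 0.5 is read as evidence of selection bias, e.g.
# interviewers substituting the more readily available partner.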
Response rates have been declining over the years. In order to compensate for potential nonresponse bias, high-quality academic and government surveys have adapted and further developed their fieldwork procedures. In recent years, it has been suggested to use adaptive designs (Wagner, 2008) or responsive designs (Peytchev et al., 2010) in order to target sample units belonging to underrepresented groups. However, many large-scale surveys still adopt a more general fieldwork strategy. Often, the overall number of contact attempts has been increased in order to reduce nonresponse and nonresponse bias. Several studies have demonstrated (Heerwegh et al., 2007; Kreuter et al., 2014) that an increasing number of contact attempts helps boost response rates. Nevertheless, even after multiple contact attempts, post-stratification and raking procedures are deemed necessary to compensate for nonresponse bias, because increasing response rates do not guarantee a corresponding decline in nonresponse bias. Moreover, multiple contacts may attract already overrepresented groups, and thus, despite the additional fieldwork effort, nonresponse bias may remain stable or even intensify. Consequently, researchers have to decide whether additional contact attempts actually pay off in terms of nonresponse bias reduction.
This paper is based on data from the European Social Survey (ESS), a biennial face-to-face survey of the general population in more than 30 participating countries. We used information from the contact forms on which the outcome of each interviewer visit is recorded. Previous analyses using socio-demographic variables (Fuchs et al., 2013) provided preliminary evidence that additional contact attempts generally increase response rates but, at the same time, also have the potential to increase nonresponse bias. In this paper, we assessed the effects of multiple contact attempts on nonresponse bias for a set of substantive variables. We tested whether, and to what extent, elevated response rates actually contribute to a reduction of nonresponse bias in attitudinal variables. Findings showed that in recent years response rates have declined and that additional contact attempts yield smaller increases in response rates. Although a higher number of contact attempts increases response rates, which in turn results in lower nonresponse bias, the relationship between response rates and nonresponse bias is rather weak. The increase in response rates due to additional contact attempts, and the reduction of nonresponse bias due to increased response rates, are less pronounced in later contact attempts (from the fifth contact attempt onwards). Further analyses take a closer look at changes in the effort (number of contacted cases) required per completed interview over the course of the first few contact attempts. Preliminary results suggest that the fieldwork effort per completed interview increases dramatically in later contact attempts. The ultimate purpose of this presentation is a better understanding of the cost-benefit ratio of additional contact attempts and nonresponse bias reduction.
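As an illustration of the effort measure discussed here, a sketch of how cumulative response rates and fieldwork effort per completed interview could be derived from contact-form-style records (the column names case_id, attempt_no and outcome are hypothetical, not the ESS contact-form coding):

# Sketch: cumulative response rate and fieldwork effort per completed interview
# by contact attempt. Assumes a pandas DataFrame `contacts` with one row per
# contact attempt and columns case_id, attempt_no, outcome (hypothetical names).
import pandas as pd

def effort_by_attempt(contacts: pd.DataFrame, max_attempt: int = 10) -> pd.DataFrame:
    n_cases = contacts["case_id"].nunique()
    rows = []
    for k in range(1, max_attempt + 1):
        upto_k = contacts[contacts["attempt_no"] <= k]
        completes = upto_k.loc[upto_k["outcome"] == "complete", "case_id"].nunique()
        attempts_made = len(upto_k)
        rows.append({
            "attempt": k,
            "response_rate": completes / n_cases,
            "attempts_per_complete": attempts_made / completes if completes else float("nan"),
        })
    return pd.DataFrame(rows)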
International surveys are unique sources of data for comparing opinions and behaviours between countries, as well as for assessing the validity of the measurement tools applied. A no-opinion answer option – 'Don't know' – reduces the pressure to express non-existent attitudes for some respondents; yet others might still feel uncomfortable revealing their lack of attitudes.
Taking European Social Survey wave 7 (2014/15) as an example, this paper will discuss the comparability of the attitudinal measures used by looking at the percentage of 'Don't knows' for particular questions measuring anti-immigration attitudes. Specifically, I will consider whether the same people – in terms of socio-demographics (e.g. age, gender, education), experiences with difference (e.g. having contact with people of a different ethnic background, living in ethnically mixed neighbourhoods) and other attitudes (e.g. satisfaction with governmental institutions, political trust) – express non-attitudes, and how this pattern differs across European countries.
The paper also explores the potential contribution of country-level differences in recent immigration flows and in the minority ethnic groups residing in particular countries to the variation in no-opinion responses. Finally, using classification techniques, I develop a typology of respondents depending on the type of no-opinion in their attitudinal responses.
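A minimal sketch of the first descriptive step, the share of 'Don't know' answers per item and country (the item list is an example from the ESS round 7 immigration battery, and the 'Don't know' code is an assumption that must be checked against the codebook):

# Sketch: percentage of 'Don't know' answers per country for a set of attitude
# items. The DK code below is an assumption; ESS missing codes differ by item
# and should be taken from the documentation.
import pandas as pd

ATTITUDE_ITEMS = ["imbgeco", "imueclt", "imwbcnt"]  # example ESS-7 immigration items
DK_CODE = 88                                        # assumed 'Don't know' code for 0-10 items

def dk_share_by_country(ess: pd.DataFrame) -> pd.DataFrame:
    dk = ess[ATTITUDE_ITEMS].eq(DK_CODE)
    return dk.groupby(ess["cntry"]).mean().mul(100).round(1)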
The present study employs the European Social Survey (ESS) 2010 to investigate gender-of-interviewer effects on reported gender role attitudes across countries. The central question examined is: are these effects stronger in countries where gender issues are more salient?
While the study of interviewer-gender effects on reported gender ideology is not new, it has been subject to a number of limitations. First, research outside the United States has been strikingly scarce; the few exceptions are isolated studies in Australia, Mexico, Morocco, and the Netherlands. To the best of the authors' knowledge, the present study is the first Europe-wide study. Second, due to the financial costs associated with conducting face-to-face surveys, most studies have employed telephone surveys instead; others have made use of experiments, and only recently of face-to-face surveys. The result has been a limited number of respondents and interviewers, mostly students, which altogether decreases the potential to generalize findings. The large, nationally representative samples of the European Social Survey show more promise in this respect.
Third, and most relevant for this study, past research has looked at interviewer-gender effects in isolation, within single countries. Given that gender issues are not equally salient across countries, it is possible that this macro context interacts with the micro context, i.e., the interviewer-respondent interaction, ultimately strengthening or weakening response effects. In societies where gender inequalities are more prominent, respondents may react more strongly to interviewers' gender. Only recently has this been acknowledged conceptually, and it is yet to be examined empirically.
Answering the present question is important for understanding how such reporting bias might affect substantive conclusions, particularly in light of the increasing use of cross-country face-to-face surveys for comparing countries or ethnic groups. If, for instance, the effect of male interviewers eliciting less egalitarian responses is stronger in countries with more inequality, one might find that respondents in these countries hold less egalitarian views. In actuality, however, this finding would (partly) be a function of contextual inequalities.
Based on social desirability theory and power relations theory, respectively, this study hypothesized that female interviewers elicit more egalitarian gender role attitudes than male interviewers, and that gender-of-interviewer effects are stronger for female respondents. Furthermore, we expected that female interviewers (as opposed to male interviewers) would elicit more egalitarian gender role attitudes especially in countries where gender issues are more salient, i.e., countries where gender inequalities are greater. Random effects models were estimated to explain between-country variation in reported gender ideology. None of the hypotheses were confirmed, thereby bringing good news to the interviewer bias front.
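A minimal sketch of how the cross-level hypothesis could be specified as a random-intercept model with an interviewer-gender by country-inequality interaction (the variable names egal_score, int_female, resp_female, gii and the data frame ess are hypothetical placeholders, not the study's actual operationalisation):

# Sketch: country random intercept with a cross-level interaction between
# interviewer gender and a country-level gender-inequality indicator, plus the
# interviewer-by-respondent gender interaction. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_cross_level_model(ess: pd.DataFrame):
    model = smf.mixedlm(
        "egal_score ~ int_female * gii + int_female * resp_female",
        data=ess, groups=ess["cntry"])
    return model.fit()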