Assessing the Quality of Survey Data 3
Convenor: Professor Joerg Blasius (University of Bonn)
For complex surveys, one of the most effective tools for reducing survey error is interviewer comments. In the Survey of Consumer Finances (SCF), this process is extraordinarily time-consuming, requiring months of careful analysis by trained editors. To reduce this time cost, a system was designed to incorporate survey data, automatically generated financial sheets, interviewer comments, and a series of data checks into a single, easy-to-use program called the Editor Assistant (EA). The EA was fully employed for the 2013 SCF and is credited in large part with the six-month reduction in required editing time.
The ESS aims to control the sample designs used by specifying sampling guidelines to be followed in each country. The main requirements are the use of probability sampling and the achievement of a minimum effective sample size. The latter is determined by gross sample size, ineligibility rate, nonresponse rate, inclusion probabilities and clustering effects.
The sampling requirements have not always been well satisfied. In this presentation we will outline and discuss key problems that have been encountered to date. We will present summary statistics on sample design parameters, highlighting trends that include increasing ineligibility rates and nonresponse rates.
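The relationship between the design parameters listed above and the effective sample size can be sketched as follows. This is an illustrative calculation only, not the ESS's own specification; all parameter values are made up, and the overall design effect is assumed to summarise both unequal inclusion probabilities and clustering.

```python
# Hypothetical sketch of an effective-sample-size check from design
# parameters. Figures below are illustrative, not actual ESS values.

def effective_sample_size(gross_n, ineligibility_rate, response_rate, deff):
    """Net achieved interviews divided by the design effect (deff),
    where deff is assumed to combine unequal inclusion probabilities
    and clustering effects."""
    net_n = gross_n * (1 - ineligibility_rate) * response_rate
    return net_n / deff

# Example: 4,000 issued addresses, 5% ineligible, 60% response rate,
# overall design effect of 1.5:
neff = effective_sample_size(4000, 0.05, 0.60, 1.5)
print(round(neff))
```

Under these assumptions, the gross sample of 4,000 yields roughly 1,520 effective interviews; rising ineligibility and nonresponse rates therefore require either larger gross samples or lower design effects to meet a fixed effective-sample target.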
One possible way of reducing item nonresponse in surveys is for interviewers to probe “Don’t Know” responses, encouraging respondents to give a substantive answer if possible. There is a risk, however, that too much probing will lead to measurement error. This paper examines these different possible effects using data from an experiment on the use of probing in the innovation sample of the European Social Survey in three European countries. The data are analysed using latent variable models designed to disentangle the different possible impacts of probing in a multi-item survey setting.
In large cross-national surveys such as the ESS, implementing standardized interviewing techniques and preventing interviewer effects is a major challenge. An evaluation of interviewer-related variance must therefore be considered an essential part of data quality assessment in cross-national research. In the first part of the paper we assess interviewer-related variance for 51 substantive variables from the sixth round of the ESS. The results show differences in IICs between countries. In the second part of the paper, interviewer effects on latent variables are evaluated.
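The interviewer-related variance assessed above is typically summarised as an intraclass correlation: the share of total response variance attributable to differences between interviewers. The sketch below uses a standard one-way ANOVA estimator on fabricated data; it is not the authors' code, and the function name and example responses are hypothetical.

```python
# Illustrative one-way ANOVA estimator of the interviewer intraclass
# correlation: between-interviewer variance as a share of total variance.
from statistics import mean

def interviewer_icc(groups):
    """`groups` is a list of lists, one list of responses per interviewer."""
    k = len(groups)                          # number of interviewers
    n = sum(len(g) for g in groups)          # total number of responses
    # Adjusted average workload for unbalanced group sizes:
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    grand = mean(x for g in groups for x in g)
    # Mean squares between and within interviewers:
    msb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n - k)
    var_between = max((msb - msw) / n0, 0.0)
    return var_between / (var_between + msw)

# Three hypothetical interviewers, five respondents each, on a 1-5 item:
groups = [[4, 5, 4, 5, 4], [2, 2, 3, 2, 2], [3, 4, 3, 3, 4]]
print(round(interviewer_icc(groups), 2))
```

A high value on this toy data reflects responses clustering tightly within interviewers; in real cross-national comparisons, per-country estimates of this kind would underlie the reported differences in IICs between countries.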
In this paper we discuss various possible strategies interviewers might employ to fabricate parts of their interviews, such as asking only one or two questions from a battery of items and then “generalizing” the answers to the entire set. Our guiding hypothesis is that the cross-national prevalence of such data fabrication is a direct function of the pervasiveness of the perceived corruption in each country. Applying anomie theory and rational choice theory, we argue that both expected costs and normative commitment are correlated with the perceived corruption in the country.