Interviewers’ Deviations in Surveys 1

Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Professor Peter Winker (University of Giessen)
Previous studies on falsifications in survey data collection indicated that fabricated data can be detected on the basis of formal indicators. Most of these studies relied on experimentally obtained data. The aims of this study were to apply a set of formal indicators and to analyse differences between real and falsified data detected in a real setting. The data are derived from the Household Need Assessment survey in Croatia. Through the control process, 24.7% of the interviews were identified as incorrect. Different statistical procedures were applied to the formal data indicators, but no clear and consistent evidence of the possibility of detecting falsified data was found.
In this paper we discuss various strategies interviewers might employ to fabricate parts of their interviews. Among other strategies, interviewers could ask only one or two questions from a battery of items and then “generalize” these answers to the entire set. Our interest is twofold: we try to explain why interviewers fabricate parts of their interviews, and we estimate the effects caused by deviant interviewers. As an example we use the German Social Survey 2008, which is among the best data sets available to the social sciences in Germany.
Falsified interviews represent a serious threat to empirical research. Applying cluster analysis to a set of indicators has been shown to allow identification of suspicious interviewers when a substantial share of their interviews are complete falsifications. The analysis is extended to the case when only a share of the questions within the interviews provided by an interviewer is faked. Based on a unique experimental dataset, it is possible to construct many synthetic data sets with the required properties. A bootstrap approach is used to evaluate the robustness of the method for varying shares of falsifications within interviews.
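The abstract does not specify the formal indicators or the clustering procedure used. As a rough illustration of the underlying idea only, the following sketch clusters interviewers into two groups on two hypothetical indicator values per interviewer (e.g. share of non-differentiated answer patterns and share of extreme responses), assuming that falsifiers form a separable cluster; the indicator names, data, and the simple 2-means routine are invented for illustration.

```python
import random

def two_means(points, iters=50, seed=0):
    """Minimal 2-means clustering on 2-D indicator vectors.

    points: one (x, y) tuple per interviewer, e.g.
    (non-differentiation share, extreme-response share).
    Returns a list of cluster labels (0 or 1), one per interviewer.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, 2)  # two interviewers as initial centroids
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each interviewer to the nearest centroid
        labels = [
            min((0, 1), key=lambda k: (p[0] - centroids[k][0]) ** 2
                                      + (p[1] - centroids[k][1]) ** 2)
            for p in points
        ]
        # recompute each centroid as the mean of its cluster
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centroids[k] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return labels

# Hypothetical per-interviewer indicator profiles (made-up values):
honest = [(0.10, 0.15), (0.12, 0.18), (0.09, 0.12)]
suspect = [(0.55, 0.60), (0.60, 0.58)]
labels = two_means(honest + suspect)
```

With well-separated profiles like these, the two suspect interviewers end up in their own cluster, which is the flagging principle the paper builds on; the actual method uses a richer indicator set and established clustering software.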
Face-to-face interviews are an important mode of data collection. The interviewer plays a central role, but data falsification can seriously compromise data quality. We analyse differences between real and falsified data. Our database consists of two datasets: real interviews and falsified interviews fabricated in the lab. We use both datasets to conduct multivariate analyses and compare the results. We model the effects of political values and attitudes on political participation. The results are discussed in light of theories of social cognition and interviewers’ motivation, as well as with respect to the identification of falsified data.