Fieldwork in interview surveys - professional guidelines and field observations
Convenor: Mr Wojciech Jablonski (University of Lodz)
This session invites presentations dealing with different aspects of fieldwork in interview surveys, both in person (PAPI/CAPI) and over the telephone (CATI). In particular, we are interested in two issues. On the one hand, we will focus on the fieldwork procedures, guidelines, and sets of rules implemented to keep the research process standardized and to achieve high-quality survey data. On the other hand, we will investigate how well these principles are complied with during fieldwork.
Topics that might come under this theme include (but are not limited to):
- innovative methods of interviewer training (general, project-specific, and refresher);
- procedures of monitoring and evaluating interviewers' job performance (in particular, detecting deviations from the standardized protocol);
- analysis of interviewers' behaviour during survey introduction and while asking questions / recording answers;
- interviewers' attitudes toward their job (specifically, the difficulties they encounter while administering surveys and the solutions they adopt to overcome these problems).
A specific problem of fieldwork in interview surveys is deviant interviewer behaviour, which might affect data quality. An extreme form of such behaviour is interviewers fabricating data instead of conducting the interview. The extent of such behaviour ranges from interviewers answering a few questions themselves (e.g. sensitive ones) to questionnaires for which the interviewer collects only some basic socio-demographic information and completes all remaining questions him- or herself. While references in the literature are still rare, the prevalence of such behaviour should not be underestimated, given its potential impact on data quality. In past work, we presented a multivariate statistical procedure that identifies "at risk" interviewers based on specific characteristics of their data, so-called indicators. In a large-scale experiment, both real and falsified data were obtained, enabling an evaluation of the discriminatory power of individual indicators as well as of the multivariate approach. In addition, the experiment varied the payment scheme: one subgroup of interviewers was paid by the hour, while the other received a payment per completed questionnaire, for both real and falsified interviews. We assess to what extent the quality of real and falsified data is affected by this experimental setting and whether it has an impact on the discriminatory power of the multivariate statistical analysis. These results might help improve fieldwork guidelines and reduce the prevalence of this type of deviant behaviour.
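For illustration, the multivariate identification step described in this abstract might be sketched as follows: per-interviewer indicator values are standardized, clustered, and the more anomalous cluster is flagged as "at risk". This is a hedged sketch, not the authors' procedure; the indicator names and the choice of k-means clustering are assumptions.

```python
# Hypothetical sketch: flag "at risk" interviewers by combining several
# per-interviewer indicators into a multivariate grouping.
# Column names (extreme_share, item_nonresponse, benford_dev) are
# illustrative assumptions, not the indicators used in the paper.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def flag_at_risk(indicators: pd.DataFrame, n_clusters: int = 2) -> pd.Series:
    """Cluster interviewers on standardized indicator values and label
    the cluster with the more anomalous centroid as 'at risk'."""
    z = StandardScaler().fit_transform(indicators)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(z)
    # The cluster whose centroid lies furthest from the overall mean
    # (the origin in z-space) is treated as the suspicious group.
    dist = np.linalg.norm(km.cluster_centers_, axis=1)
    risky = int(np.argmax(dist))
    return pd.Series(km.labels_ == risky, index=indicators.index, name="at_risk")

# Usage with hypothetical per-interviewer indicators:
# indicators = pd.DataFrame({"extreme_share": ..., "item_nonresponse": ...,
#                            "benford_dev": ...}, index=interviewer_ids)
# at_risk = flag_at_risk(indicators)
```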
This paper describes the multi-stage field process developed to deal with data fabrication and falsification identified during the second wave of a nationally representative survey (NIDS). Data collection through CAPI, combined with random quality call-backs, allowed fabrication to be detected early, during fieldwork rather than afterwards (as is usually the case). This created the opportunity for repair, nuanced in ways that maximised confidence in the data while reducing respondent burden. First, we describe how suspicious interviewers were identified, how falsification was identified in the office and confirmed in the field, and the different types of falsification found. Four types of falsification were identified: false non-responses, fake interviews with no contact, fake interviews with contact, and partial fabrications. Each type required a different remedial action, all of which are described in detail in the paper. Although not easy to undertake in the field, the approach of customised re-interview instructions paid dividends. All unverified interviews by suspicious interviewers were removed from the dataset before publication; only verified data were included in the published dataset. This paper intends to provide practical lessons and guidance for remedial action when encountering fieldworker falsification of data.
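As an illustrative aside, the publication filter mentioned above (drop every unverified interview by a suspicious interviewer, keep the rest) could be sketched as below. The data structure and column names are hypothetical; the actual NIDS processing pipeline is not reproduced here.

```python
# Minimal sketch of the publication filter, under assumed column names.
import pandas as pd

def publishable(interviews: pd.DataFrame, suspicious_ids: set) -> pd.DataFrame:
    """Keep an interview if its interviewer was not flagged as suspicious,
    or if the interview itself was verified by a quality call-back."""
    keep = (~interviews["interviewer_id"].isin(suspicious_ids)) | interviews["verified"]
    return interviews[keep]

# Usage (hypothetical):
# published = publishable(wave2_interviews, suspicious_ids={"INT041", "INT112"})
```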
The general decrease in telephone survey response rates leads to potential selection and estimation biases. As non-respondents can be broken down into non-contacts and refusals, different strategies can be deployed: (1) increasing the number of call attempts before abandoning a number, and (2) calling back refusals/abandonments to persuade them to participate.
A two-stage random probability sample, selected using random digit dialling, comprised 8,645 individuals aged 15-49 in 2010 for a survey on sexual and reproductive health (SRH). We compared the effects of the two strategies: including hard-to-contact respondents (more than 20 call attempts, with no upper limit) and including respondents from two successive waves of call-backs among initial refusals/abandonments. Comparisons were based on sociodemographic bias, differences in SRH behaviours, multivariate logistic modelling of SRH behaviours, post-calibration weighting, and cost estimation.
The sociodemographic profile of hard-to-contact and call-back respondents differs from that of easy-to-interview respondents. Including hard-to-contact respondents decreases the sociodemographic bias of the sample, while including call-back respondents increases it. There are several significant differences in SRH behaviours between easy-to-interview and hard-to-contact respondents, but none between first-wave and call-back respondents. Nevertheless, the determinants of SRH behaviours among call-back and hard-to-contact respondents differed from those among easy-to-interview respondents.
The trade-off between bias and financial costs suggests that the best protocol would be to mix the two strategies but
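For illustration, the multivariate logistic modelling described in this abstract could be sketched as a regression of a binary SRH behaviour on respondent group alongside sociodemographic covariates. This is a hedged sketch, not the authors' code; the variable names (behaviour, group, age, sex) and the use of Python/statsmodels are assumptions.

```python
# Illustrative sketch: does group membership (easy-to-interview,
# hard-to-contact, call-back) predict a binary SRH behaviour after
# adjusting for basic sociodemographics? Variable names are hypothetical.
import statsmodels.formula.api as smf

def compare_groups(df):
    """Fit a logistic regression; the group coefficients indicate whether
    hard-to-contact or call-back respondents differ from the reference
    'easy' group, net of age and sex."""
    model = smf.logit(
        "behaviour ~ C(group, Treatment(reference='easy')) + age + C(sex)",
        data=df,
    )
    return model.fit(disp=False)

# result = compare_groups(survey_df)
# print(result.summary())
```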
Survey researchers often regard effective interviewer control as a practical tool for securing high data quality, especially with respect to the unbiasedness of data. Usually, the response rate is used as a proxy for data quality. Based on rational choice theory I hypothesize, however, that studies with extensive interviewer control show lower response rates than surveys that do not invest as much effort in supervising and monitoring their interviewers.
In order to test this hypothesis, I collected aggregated paradata on German population surveys, using field reports as the major source. Where field reports were unavailable or incomplete, I inspected articles, books, and homepages, and contacted the principal investigators of the surveys to collect the relevant information. Taken together, these sources provide, after extensive coding, a database with information on fieldwork, interviewer control, and response rates for all major German survey studies carried out between 1990 and 2010. The data are analyzed using OLS regression models (with multiply imputed data, because of the still rather high proportion of missing information). In my presentation, I will discuss the theoretical background, outline the results, which largely support the postulated hypothesis, and discuss possibilities for further research exploiting field reports and aggregated paradata.
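As a hedged illustration of the stated analysis (OLS on multiply imputed study-level data), the following sketch uses the MICE implementation in statsmodels, with estimates pooled across imputations. The variable names (response_rate, control_intensity, field_year) are hypothetical stand-ins for the coded survey characteristics.

```python
# Illustrative sketch: impute missing study-level covariates with chained
# equations, then fit and pool an OLS model of the response rate.
import statsmodels.api as sm
from statsmodels.imputation import mice

def pooled_ols(df):
    """Run MICE over the study database and return pooled OLS estimates
    (combined across imputed datasets)."""
    imp = mice.MICEData(df)  # df: one row per survey, numeric columns
    model = mice.MICE("response_rate ~ control_intensity + field_year",
                      sm.OLS, imp)
    return model.fit(n_burnin=10, n_imputations=20)

# result = pooled_ols(study_df)
# print(result.summary())
```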