
ESRA 2019 full program




New Strategies of Assessing Data Quality within Interviewer-Administered Surveys 1

Session Organisers: Dr Laura Silver (Pew Research Center)
Mr Kyle Taylor (Pew Research Center)
Ms Danielle Cuddington (Pew Research Center)
Dr Patrick Moynihan (Pew Research Center)
Time: Wednesday 17th July, 14:00 - 15:00
Room: D17

International survey researchers are no strangers to the difficulties inherent in assuring high-quality data, particularly in a post-GDPR environment where access to audio files -- a key mechanism to verify the caliber of interviewing -- may be severely restricted. Moreover, closely monitoring or investigating every sampled case is unlikely given resource constraints (e.g., limited time, budget and capacity), driving researchers to base evaluations on aggregate measures of data quality, such as interview length (in its entirety or by sections), extreme item nonresponse and other related substantive and paradata indicators.
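To make this concrete, here is a minimal sketch of such aggregate flagging, assuming a hypothetical case-level paradata table with columns for interview length and item-nonresponse rate; all names and thresholds are illustrative, not drawn from any particular study.

```python
import pandas as pd

def flag_cases(cases: pd.DataFrame,
               z_cutoff: float = 2.0,
               nonresponse_cutoff: float = 0.30) -> pd.DataFrame:
    """Flag interviews that are unusually short or long, or that show
    extreme item nonresponse, relative to the rest of the field period."""
    out = cases.copy()
    mean, sd = out["duration_min"].mean(), out["duration_min"].std()
    out["length_flag"] = (out["duration_min"] - mean).abs() / sd > z_cutoff
    out["nonresponse_flag"] = out["dk_refused_rate"] > nonresponse_cutoff
    out["needs_review"] = out["length_flag"] | out["nonresponse_flag"]
    return out

# "Too short" differs by country, language and questionnaire length,
# so flags are best computed within each country separately:
demo = pd.DataFrame({
    "country": ["A"] * 5 + ["B"] * 5,
    "duration_min": [32, 35, 8, 31, 34, 55, 50, 52, 120, 54],
    "dk_refused_rate": [0.02, 0.05, 0.45, 0.03, 0.04,
                        0.01, 0.06, 0.02, 0.03, 0.05],
})
flagged = demo.groupby("country", group_keys=False).apply(flag_cases)
print(flagged[["country", "needs_review"]])
```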

For survey practitioners, this raises critical questions: Which data-quality indicators are most valuable for identifying problems in the field -- and, by extension, low-quality interviewing? Are certain indicators better suited to detecting certain problems? What thresholds distinguish a case worth a routine check from one requiring deeper investigation? More broadly, how do these issues play out across comparative data as well as between locations and modes?

Once potential problems are identified, determining the best course of action can itself be a challenge. Resolution can involve anything from simple case deletion (with requisite re-weighting, as applicable), to deletion of all interviews conducted by a given interviewer or observed by a given supervisor, to complete re-fielding.

The goal of this session is to bring together researchers to discuss the measures they use to assess data quality, the thresholds they apply and the actions they take to resolve problematic cases. Topics may include but are not limited to:

Assessing the validity of cases flagged as “low quality” across different indicators;
Setting thresholds for quality control – that is, what counts as “too short” or “too long”, and how to determine that across different countries, languages, and modes;
Research that tackles new and innovative ways to expose “curbstoning” and other practices that lead to low-quality data;
Methods used to verify proper in-home selection;
Strategies used to detect respondent confusion, satisficing, and discomfort;
Research evaluating when and how to replace low-quality data, including issues of substitution and implications for data quality and final data representativeness.

We will limit this particular session to face-to-face and telephone interviewing, rather than online interviewing. We invite academic and non-academic researchers as well as survey practitioners to contribute.

Keywords: data quality, paradata, speeding, curbstoning, replacement, in-home selection

I [Don’t] Think We’re Alone Now: Third-Party Presence in European Face-to-Face Surveys

Ms Stacy Pancratz (Pew Research Center) - Presenting Author
Mrs Martha McRoy (Pew Research Center)
Dr Patrick Moynihan (Pew Research Center)

With face-to-face interviewing, responses can be influenced by many confounding factors. While the interviewer’s presence can introduce social desirability bias on its own, an additional effect can arise from the presence of a third party during the interview, potentially adding to measurement error.

This is concerning in a multinational survey context, as interviewers are not always able to secure privacy for the entire interaction across all locations. Moreover, third-party presence can matter when the third party’s relationship to the respondent overlaps with the survey topic. For example, the third party’s position within the family (e.g., spouse or parent) may influence how the respondent answers certain questions in some countries, while in others the third party’s presence is essential to securing participation at all.

To better examine the potential effects of third-party presence, we included measures of interview privacy and of the third party’s relationship to the respondent (as reported by the interviewer) on Pew Research Center’s cross-national Global Attitudes Survey. We also included interviewer-reported measures of respondent candor and engagement. Our analysis focuses on the four face-to-face surveys fielded in Europe (Greece, Hungary, Italy and Poland) in 2017 and 2018. We examine respondent characteristics by third-party presence in order to identify those most likely to have a third party present. We also analyze differences in responses by third-party presence and by the nature of that relationship, across substantive topics and by country.
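A hedged sketch of the kind of comparison this implies, assuming a hypothetical respondent-level dataset; the column names and the choice of a chi-square test of independence are illustrative, not the authors’ actual analysis.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent-level data: one attitude item and an
# interviewer-recorded indicator of third-party presence.
survey = pd.DataFrame({
    "third_party_present": [0, 1, 0, 1, 0, 1, 0, 0, 1, 1] * 10,
    "q_attitude": ["agree", "agree", "disagree", "disagree", "agree"] * 20,
})

# Does the response distribution differ when a third party is present?
table = pd.crosstab(survey["q_attitude"], survey["third_party_present"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# In a multi-country design the same test would be run per country,
# e.g. via survey.groupby("country").
```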


Effects of Interviewer Gender on the Interviewer-Respondent Interaction: Findings from a Household Survey in Taiwan

Dr Ruoh-rong Yu (research fellow) - Presenting Author

Existing studies of interviewer gender effects have focused on respondents’ answers to survey questions and their response styles. Whether interviewer gender matters for the interaction between interviewer and respondent remains largely unexplored. In a longitudinal survey, the quality of this interaction is related not only to the data quality of the completed interview but also to the likelihood of locating and interviewing the same respondent in the follow-up wave. This study explores whether interviewer-respondent gender matching affects that interaction, using paradata from a refreshment sample of a longitudinal household survey in Taiwan.

Measures of the interviewer-respondent interaction are constructed from two sources of paradata. The first is a self-administered questionnaire the interviewer completes after each finished interview, covering his or her subjective evaluation of the trustworthiness of the respondent’s answers, the degree of the respondent’s trust in the interviewer, additional effort spent explaining questions or obtaining answers, and so on. The second source is re-interview data: in the re-interview, the respondent evaluates the attitudes and behaviors of the interviewer, in addition to answering some survey questions a second time. From these paradata, measures of the interviewer’s and the respondent’s perceived interaction quality are constructed and used as dependent variables. Multilevel models are used to analyze the interactive effects of interviewer and respondent gender, controlling for other relevant interviewer- and respondent-level variables. The analysis also examines whether the interaction quality perceived by the interviewer is strongly associated with that perceived by the respondent.
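An illustrative specification of such a multilevel model, assuming respondents nested within interviewers; all variable names are hypothetical and the synthetic data merely stand in for the survey’s paradata.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey's paradata: respondents nested
# within interviewers, with a perceived-interaction-quality score.
rng = np.random.default_rng(0)
n = 400
paradata = pd.DataFrame({
    "interviewer_id": rng.integers(0, 20, n),
    "respondent_female": rng.integers(0, 2, n),
    "respondent_age": rng.integers(20, 80, n),
})
paradata["interviewer_female"] = paradata["interviewer_id"] % 2
paradata["interaction_quality"] = (
    3.0
    + 0.3 * paradata["interviewer_female"] * paradata["respondent_female"]
    + rng.normal(0, 1, n)
)

# Random intercept per interviewer; the gender-matching effect is the
# coefficient on the interaction term.
model = smf.mixedlm(
    "interaction_quality ~ interviewer_female * respondent_female"
    " + respondent_age",
    data=paradata,
    groups=paradata["interviewer_id"],
)
print(model.fit().summary())
```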


Understanding Interviewer Fatigue and Interview Travel Time in the General Social Survey (GSS)

Mr Benjamin Schapiro (NORC at the University of Chicago) - Presenting Author
Dr Rene Bautista (NORC at the University of Chicago)

Interviewer effort and fatigue remain mostly unexamined within the Total Survey Error paradigm. While care can be taken to alleviate respondent fatigue, no comparable care is taken with interviewers. Questionnaire design and training can mitigate fatigue effects during the interview itself, but they do not necessarily address potential effects on successful recruitment. Building on the interviewer GPS work of Olsen & Wagner (2015), we explore the hypothesis that travel time has a negative effect on successful recruitment, and that interviewers who travel for shorter times or distances have higher success beginning interviews. Using 2016 and 2018 General Social Survey data, we examine interviewer contacts and travel times, as well as interviewer ratings of cooperativeness, comprehension, case difficulty, and incentives. Preliminary analysis shows that while cooperativeness and comprehension do not appear to be affected by travel time, incentive use and value increase with travel time, suggesting that interviewers rely more readily on incentives as a means of recruitment when they have had to travel farther and are more fatigued.
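A minimal sketch of the hypothesized relationship, assuming hypothetical contact-attempt data; this is not the authors’ GSS analysis, and the variable names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for contact-attempt records.
rng = np.random.default_rng(1)
n = 500
contacts = pd.DataFrame({
    "travel_minutes": rng.uniform(5, 120, n),
    "incentive_value": rng.choice([0.0, 25.0, 50.0], n),
})
# Build in a negative travel-time effect for the demonstration.
logodds = 0.5 - 0.02 * contacts["travel_minutes"]
contacts["recruited"] = rng.binomial(1, 1 / (1 + np.exp(-logodds)))

# Probability of successful recruitment as a function of travel time;
# a negative travel_minutes coefficient is consistent with the
# fatigue hypothesis.
fit = smf.logit("recruited ~ travel_minutes + incentive_value",
                data=contacts).fit()
print(fit.params)
```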