
ESRA 2019 full program




Fieldwork Monitoring Tools for Large-Scale Surveys: Lessons from the Field 3

Session Organisers: Dr Michael Bergmann (Munich Center for the Economics of Aging (MEA))
Dr Sarah Butt (City, University of London)
Dr Salima Douhou (City, University of London)
Mr Brad Edwards (Westat)
Mr Patrick Schmich (Robert Koch Institut (RKI))
Dr Henning Silber (GESIS)
Time: Wednesday 17th July, 11:00 - 12:30
Room D17

A key challenge in survey research is how to manage fieldwork to maximise sample sizes and balance response rates whilst ensuring that fieldwork is completed on time, within budget and to the highest possible standards. This is particularly demanding when monitoring fieldwork across multiple surveys simultaneously, across time, or across different countries. Effective monitoring requires access to real-time information which can be collected by and shared with multiple stakeholders in a standardised way and responded to in a timely manner. This process often involves researchers who may be several steps removed from the fieldwork, located in a different organisation and even a different country.

Increasingly, fieldwork agencies and survey infrastructures have access to detailed progress indicators collected electronically in the field. Making use of this information requires streamlining the information received by means of fieldwork monitoring systems or dashboards. Developing an effective fieldwork monitoring system is not as straightforward as it may at first appear and raises both methodological and operational questions. Methodological questions include which indicators to use for monitoring, how to combine paradata from multiple sources to generate indicators, and how to use the indicators during fieldwork. Operational questions include how to most effectively present the agreed indicators, which stakeholders the monitoring system should cater for, how often updates should be received, how to implement timely interventions, and how the feedback loop between the monitoring team and interviewers in the field should be managed. Thinking about these questions is crucial to ensuring that increased access to real-time data leads to improvements in monitoring and responding to issues in the field. The data should facilitate informed decision-making about how best to realise the often competing goals of fieldwork delivery, quality and costs.

This session aims to bring together those working with fieldwork monitoring dashboards or other tools to share their experiences and learning. We welcome presentations from different stakeholders (fieldwork agencies, research infrastructures, academic institutes, etc.) and survey types (national and cross-national, cross-sectional and longitudinal, conducted in any mode). Presentations could outline how the types of methodological and practical issues raised above have been addressed and demonstrate how the choices made have had an impact, good or bad, on capacity to monitor fieldwork. Presentations may also provide systematic reviews of one or more performance indicators, results of implementing adaptive and responsive designs informed by monitoring dashboards, and experimental and simulation studies on fieldwork monitoring.

Keywords: fieldwork monitoring; dashboards; progress indicators; adaptive design

Fieldwork Monitoring for Complex Survey Designs Using an Interactive Web Dashboard

Mr Joe Murphy (RTI International) - Presenting Author
Mr Brian Burke (RTI International)
Dr Rebecca Powell (RTI International)
Dr Paul Biemer (RTI International / University of North Carolina)
Dr Kathleen Mullan Harris (University of North Carolina)
Dr Carolyn Tucker Halpern (University of North Carolina)
Dr Robert Hummer (University of North Carolina)

Monitoring data collection is essential for survey designs requiring real-time interventions. This includes responsive, adaptive and tailored designs and continuous data collections. For survey designs with a mix of data collection modes and protocols with error, cost, and schedule constraints, effective monitoring systems can be quite complex. There are often multiple stakeholders (e.g., sponsors, managers, supervisors), several layers of data (e.g., event, case, interviewer, region) and competing risks and objectives that must be optimized.

The National Longitudinal Study of Adolescent to Adult Health (Add Health) exemplifies these requirements and challenges for fieldwork monitoring. Add Health requires monitoring and visualizing data from multiple sources to track experimental, multimode, and longitudinal survey designs in near-real time to enable rapid design decisions during data collection. These decisions ultimately affect data quality (e.g., response rate, representativeness), the cost of data collection, and the timing of data release for public use.

To manage the multiple objectives of fieldwork monitoring, we employ an Adaptive Total Design (ATD) Dashboard. This platform standardizes the approach to, and production of, easily interpreted data visualizations and reports using the R Shiny web application framework. Critical-to-quality indicators are prominently displayed while extraneous information is minimized, using best practices of visual design. Users can interactively select from an array of display options and mechanisms for categorizing, subsetting, and aggregating data. No programming knowledge or statistical software is required of the user.
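
To make the general idea concrete, below is a minimal sketch of an interactive monitoring display built with the R Shiny framework named in the abstract. It is not the ATD Dashboard itself: the indicator shown (cumulative completes by region) and all data are invented for illustration.

```r
# Minimal Shiny sketch of an interactive fieldwork-progress view.
# Illustrative only: data and indicator names are invented, not Add Health's.
library(shiny)
library(ggplot2)

set.seed(1)  # reproducible fake data
daily <- rpois(60, lambda = 12)            # hypothetical daily completes
progress <- data.frame(
  day    = rep(1:30, times = 2),
  region = rep(c("Northeast", "South"), each = 30)
)
# cumulative completes computed within each region
progress$completes <- ave(daily, progress$region, FUN = cumsum)

ui <- fluidPage(
  titlePanel("Fieldwork progress (illustrative)"),
  selectInput("region", "Region", choices = unique(progress$region)),
  plotOutput("trend")
)

server <- function(input, output) {
  output$trend <- renderPlot({
    ggplot(subset(progress, region == input$region),
           aes(x = day, y = completes)) +
      geom_line() +
      labs(x = "Field day", y = "Cumulative completes")
  })
}

shinyApp(ui, server)
```

As in the dashboard the abstract describes, the user only interacts with display controls (here, a region selector); no programming or statistical software knowledge is required to read the output.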

This presentation will include examples from Add Health Wave V (2016-18) and design plans for future waves to use the ATD Dashboard for regular monitoring and decision making during fieldwork. We will discuss lessons learned about the process and development of the monitoring approach that can be applied to other large-scale, complex surveys with real-time monitoring requirements.


From Raw Interaction Traces to Actionable Indicators: Lessons Learned

Ms Simon Dellac (Sciences Po)
Ms Geneviève Michaud (Sciences Po)
Mr Romain Mougin (Sciences Po)
Ms Elodie Pétorin (Sciences Po) - Presenting Author

ELIPSS (Étude Longitudinale par Internet Pour les Sciences Sociales) is a probability-based Internet panel dedicated to the social sciences, inspired by the Dutch LISS Panel. Each panel member is provided with a touch-screen tablet, through which academic surveys are delivered monthly. The study is in its sixth year and has hit its stride, with around 2,500 panel members remaining and 69 surveys conducted so far.

During the pilot study, the ELIPSS team designed specific management processes and interaction strategies to maximize participation and minimize attrition, in support of which custom software tools were developed, ranging from an Android questionnaire delivery application to web services facilitating fieldwork management. This growing ELIPSS toolbox was extended in 2017 with a real-time analytics dashboard.

Sustaining a daily flow of multi-channel interactions with panel members is a focus point of the panel management team’s efforts. Phone calls, electronic and traditional mail, SMS, and lastly in-app notification messages were incrementally adopted and combined. Both individual and batch delivery are supported where relevant.

Detailed traces of all these activities (connection times, communication histories, answering tracks, assistance requests…) were collected along the way, yielding a wealth of behavioral metadata. However, tapping into this potential to provide useful metrics and indicators remains a challenge for several reasons. First, the adaptive rather than rigidly pre-planned design of tools and workflows introduced heterogeneities in data shape and content over time that must be reconciled. Second, "quick-fix" solutions were sometimes unavoidable to pragmatically meet the needs of continuously ongoing fieldwork, and these introduced numerous idiosyncrasies in the data model.

This presentation will focus on the lessons learned from our efforts in the experimental curation, documentation and aggregation of these raw interaction traces to produce a usable paradata set capable of providing useful indicators to inform real-time decisions.
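
To illustrate the kind of reconciliation such curation involves, here is a minimal sketch in R, assuming two invented log formats; none of the field names or conventions are taken from the ELIPSS systems.

```r
# Illustrative sketch (not ELIPSS code): reconciling two heterogeneous
# interaction logs into one standardized contact-event table, from which
# per-member indicators can be derived. All field names are invented.

# Legacy phone-call log: one shape and set of conventions...
calls <- data.frame(
  member  = c("P001", "P002"),
  date    = as.Date(c("2018-03-02", "2018-03-04")),
  outcome = c("answered", "no answer")
)

# ...later in-app notification log: another shape, other conventions
notifications <- data.frame(
  panelist = c("P001", "P003"),
  sent_on  = c("2018/03/05", "2018/03/06"),
  opened   = c(TRUE, FALSE)
)

# Map both to a common schema: member, date, channel, responded
events <- rbind(
  data.frame(member    = calls$member,
             date      = calls$date,
             channel   = "phone",
             responded = calls$outcome == "answered"),
  data.frame(member    = notifications$panelist,
             date      = as.Date(notifications$sent_on, format = "%Y/%m/%d"),
             channel   = "in_app",
             responded = notifications$opened)
)

# Example indicator: contact attempts and response share per member
aggregate(responded ~ member, data = events,
          FUN = function(x) c(attempts = length(x), share = mean(x)))
```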


Monitoring Survey Success Using Dashboards

Ms Victoria Vignare (Westat) - Presenting Author
Mrs Wendy Hicks (Westat)
Mr Jerome Wernimont (Westat)

Surveys that include field data collection often involve several different categories of stakeholders, each with their own variation on the definition of success. The interviewers themselves, their supervisors, the survey leadership staff and, in contract work, the survey sponsor each represent different stakeholder groups. Dashboards provide an opportunity for each stakeholder group to have visibility into the data and metrics at different levels of aggregation that speak to their decision-making needs, and ultimately their definition of survey success. Survey sponsors require data that help them identify risks to their organization's ability to meet key performance indicators, such as how well estimates from a survey conform to standard benchmarks. Survey leaders need data that quickly identify departures from the contractually required levels of performance, such as the overall survey response rate, the mean interview time, or respondent representativeness. Field managers need data that facilitate their ability to manage the field staff's productivity and efficiency, with the ability to monitor at lower levels of aggregation, such as the region or PSU level, or, for panel surveys, by panel cohort and various levels of geography. Additionally, field managers need visibility into individual interviewer performance in order to monitor and control their work from the ground up, including supporting tasks such as incentive management and receipting of supplementary materials such as hard-copy questionnaires. Each of these needs draws on much of the same source data but reflects a different view into those data. A dashboard, and the underlying architecture to support central database and control processes, provides a vehicle for each of the stakeholders to gain near-real-time visibility into metrics that allow them to manage risks and support their decision-making. This presentation will walk through dashboard "views", each tailored to a stakeholder or user group, that together provide a comprehensive picture of survey success.
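
To make the "one source, many views" idea concrete, a minimal sketch in R with invented case-level data shows the same records aggregated at the levels different stakeholders might need; the metrics and groupings are illustrative, not Westat's.

```r
# Sketch: one case-level dataset, three stakeholder views. Data are invented.
cases <- data.frame(
  interviewer = c("A", "A", "B", "B", "C", "C"),
  region      = c("East", "East", "East", "West", "West", "West"),
  complete    = c(1, 0, 1, 1, 0, 1),
  minutes     = c(42, NA, 39, 51, NA, 45)  # interview length, completes only
)

# Sponsor / leadership view: overall response rate and mean interview time
overall <- c(response_rate = mean(cases$complete),
             mean_minutes  = mean(cases$minutes, na.rm = TRUE))

# Field-management view: the same completion metric by region
by_region <- aggregate(complete ~ region, data = cases, FUN = mean)

# Supervisor view: productivity by individual interviewer
by_interviewer <- aggregate(complete ~ interviewer, data = cases, FUN = sum)
```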


Monitoring Tools Tracing the Fieldwork Practice for Large-Scale Surveys

Mrs Maud Reveilhac (FORS) - Presenting Author

Drawing on considerations about how to improve the effectiveness and responsiveness of fieldwork management, as well as documentation and timely communication between survey team members and clients, we developed a series of tools that work together to trace fieldwork practice and allow for the implementation of responsive survey designs. These tools aim to provide a daily overview of response rates whilst ensuring that fieldwork is completed on time, within budget and to the highest possible standards. They further make it easier to monitor fieldwork across multiple surveys simultaneously by providing access to real-time information, and are adapted to multimode survey designs, especially combined paper-web surveys.
On the one hand, we developed an intuitive fieldwork monitoring tool that collects and combines information from web and paper modes and provides a daily overview of the status of sampled individuals in conformity with AAPOR standards. On the other hand, we developed a tool for tracking the cashing of unconditional incentives, an important factor in the total cost of our surveys, and for comparing the results with the current response rate. A dashboard links and presents all the information in a concise way and provides a feedback loop that facilitates informed decision-making about how best to realise fieldwork delivery, balancing quality and costs. The tools do not require special coding abilities and have a user-friendly interface. We will present lessons from fieldwork where the tools were used in practice.
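
As an illustration of the kind of AAPOR-based status overview the abstract describes, the sketch below computes AAPOR's Response Rate 1 (RR1) from disposition counts in R; the counts are invented and the code is not the FORS tool.

```r
# Illustrative AAPOR-style daily status summary (invented counts).
# Per AAPOR Standard Definitions:
#   RR1 = I / (I + P + R + NC + O + UH + UO)
dispositions <- c(
  I  = 430,  # complete interviews
  P  = 25,   # partial interviews
  R  = 160,  # refusals / break-offs
  NC = 120,  # non-contacts
  O  = 15,   # other eligible non-respondents
  UH = 40,   # unknown if household / occupied housing unit
  UO = 10    # other unknown eligibility
)

rr1 <- dispositions["I"] / sum(dispositions)
round(unname(rr1), 3)  # daily headline figure for the dashboard
```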