Towards Strong Evidence-Based Survey Methodology using Replications, Systematic Reviews, or Bayesian Approaches
Session Organisers | Dr Bernd Weiss (GESIS - Leibniz-Institute for the Social Sciences); Ms Jessica Daikeler (GESIS - Leibniz-Institute for the Social Sciences); Dr Henning Silber (GESIS - Leibniz-Institute for the Social Sciences); Professor Michael Bosnjak (Leibniz Institute for Psychology Information)
Time | Tuesday 16th July, 14:00 - 15:30 |
Room | D31 |
The conscientious, explicit, and judicious use of the current best empirical evidence in making decisions is the central paradigm of the evidence-based practice movement. This movement originated in the medical sciences and has found its way into other disciplines such as economics and social work and, to a much lesser extent, into survey methodology. The quality of evidence can be evaluated according to various rating systems that distinguish between different levels of evidence. A common characteristic of these systems is that they rank accumulated evidence based on full or partial replications higher than evidence based on a single (and singular) study. Furthermore, within the category of replicative studies, systematic reviews and meta-analyses are considered to provide the best available evidence, with the gold standard for causal inference being systematic reviews and meta-analyses based on experimental studies. A related, but often overlooked, question is how to appropriately incorporate existing evidence into a prospective survey methodological study. Usually, this is done in a qualitative and subjective manner by citing and discussing previous work. However, Bayesian approaches can make the incorporation of past evidence more rigorous and formal: prior findings are encoded as informative priors, which are then contrasted and updated with the current study and the data at hand.
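As a minimal illustration of this kind of formal updating, the sketch below encodes hypothetical past evidence about a response rate as an informative Beta prior and updates it with (equally hypothetical) new survey data; all numbers are invented:

```python
# Minimal sketch of Bayesian updating with an informative prior (invented numbers).
# Suppose prior studies suggest a response rate around 0.55; we encode that
# belief as a Beta(55, 45) prior and update it with new survey data.
from scipy import stats

a_prior, b_prior = 55, 45             # informative prior from past evidence (assumed)
respondents, nonrespondents = 40, 60  # current study data (hypothetical)

# Beta-Binomial conjugacy: posterior is Beta(a + successes, b + failures)
a_post = a_prior + respondents
b_post = b_prior + nonrespondents
posterior = stats.beta(a_post, b_post)

print(f"Posterior mean response rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```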
The overall aim of this session is to promote evidence-based survey methodology, that is, studies aimed at systematically aggregating high-quality evidence on issues relevant to preparing, implementing, and analyzing survey-based research. We encourage the submission of methodological papers, applications, and software/tool demonstrations. Eligible contributions may address, but are not limited to, the following topics:
- Applications and challenges of replicative survey methodology, e.g., issues of pre-registration and determinants of replicability in survey methodology
- Replicability in the context of big data
- Systematic reviews, gap maps, and meta-analyses in survey methodology
- Using Bayesian approaches to incorporate previous research findings
- Software and tools supporting replications, systematic reviews and meta-analyses, or Bayesian approaches in survey methodology contexts
Keywords: Survey methodology, Meta-analysis, Systematic review, Replication, Bayesian statistics
Dr Rebecca Kuiper (Utrecht University) - Presenting Author
Mr Oisín Ryan (Utrecht University)
In science, the gold standard for evidence is an empirical result that is consistent across multiple studies. Meta-analysis techniques allow researchers to combine standardized parameters or effect-size measures from multiple studies. Due to the increasing availability of longitudinal data, standardized lagged effects are increasingly becoming the target of meta-analytic studies. Lagged effects are the effects of variables at one measurement wave on variables at the next; they can come from cross-correlations, cross-lagged panel models, and/or first-order vector autoregressive models.
While lagged-effects models can be viewed as simple SEM or regression models, they typically ignore the well-known problem of time-interval dependency: parameter estimates change depending on the time that elapses between measurement waves. This means that studies using different uniform time intervals between observations can arrive at very different parameter estimates, and conclusions, about the same underlying process. For example, the effect of current stress on anxiety one hour later differs from the effect of current stress on anxiety three hours later.
Time-interval dependency therefore presents a challenge to the meta-analysis of lagged effects and must be explicitly modelled to avoid misleading or inconsistent conclusions. In this presentation, I will describe a continuous-time approach to meta-analysis, which explicitly models lagged effects as a non-linear function of the time interval. I examine the performance of this new meta-analytic approach against current best practice in the field: treating the time interval as a linear or step-wise moderator of the lagged effect. I will also demonstrate novel tools, in the form of Shiny apps, that will aid researchers in applying these meta-analytic techniques.
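As a sketch of the underlying idea: in a continuous-time model, the lagged-effect matrix for an interval dt is Phi(dt) = expm(A*dt), where A is the drift matrix of the process, so lagged effects are a non-linear function of the measurement interval. The drift values below are invented for illustration:

```python
# Hypothetical sketch: the same continuous-time process implies different
# lagged effects for studies with different measurement intervals, because
# Phi(dt) = expm(A * dt) is a non-linear function of dt.
import numpy as np
from scipy.linalg import expm

# Assumed drift matrix for a bivariate process (e.g., stress and anxiety)
A = np.array([[-0.8,  0.3],
              [ 0.4, -0.6]])

for dt in (1.0, 3.0):  # e.g., one-hour vs. three-hour measurement intervals
    phi = expm(A * dt)  # discrete-time lagged-effect matrix for this interval
    print(f"dt = {dt}: cross-lagged effect of stress on anxiety = {phi[1, 0]:.3f}")
```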
Ms Sonila Dardha (City, University of London) - Presenting Author
Ms Jessica Daikeler (GESIS Leibniz Institute for the Social Sciences)
Dr Kathrin Thomas (Princeton University)
Dr Bernd Weiss (GESIS Leibniz Institute for the Social Sciences)
Interviewer effects are a crucial source of error in interviewer-administered surveys, predominantly affecting the measurement side of the Total Survey Error. A substantial body of literature looks at interviewer variance and, further, at interviewer traits and how they influence respondents' answers. The present study is a systematic review and meta-analysis of gender-of-interviewer effects on survey responses, aiming to examine differences in the data collected by male and female interviewers.
We first present the study scope and search strategy. We then searched for relevant studies in book publications, edited volumes, journals, and grey literature, i.e., conference presentations, theses, and other unpublished work. The literature search was conducted between November 2017 and March 2018. Finally, we catalogued the available literature using specific inclusion and exclusion criteria. After careful screening of the collected manuscripts, we selected over a hundred relevant studies for further scrutiny.
Four researchers independently coded moderators, i.e., study characteristics such as topic, mode of data collection, and sampling technique, and calculated effect sizes for each eligible study. The meta-analysis assesses the overall gap between the survey responses collected by male and female interviewers and attempts to explain the heterogeneity found in the literature with respect to various study moderators. Tables listing references and forest plots are included to summarise the studies and depict the results.
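As an illustration of the pooling step, here is a minimal random-effects meta-analysis using the DerSimonian-Laird estimator, with invented effect sizes; this is a generic recipe, not the study's actual estimation code:

```python
# Illustrative sketch (made-up numbers): pooling gender-of-interviewer effect
# sizes with a DerSimonian-Laird random-effects model.
import numpy as np

y = np.array([0.10, 0.25, -0.05, 0.18, 0.08])  # per-study effect sizes (hypothetical)
v = np.array([0.02, 0.03,  0.01, 0.04, 0.02])  # their sampling variances

w = 1 / v                                       # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)                # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)                 # heterogeneity statistic
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)                   # between-study variance (DL)

w_re = 1 / (v + tau2)                           # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"Q = {Q:.2f}, tau^2 = {tau2:.3f}, pooled effect = {y_re:.3f} (SE {se_re:.3f})")
```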
Ours is the first systematic review and meta-analysis of gender-of-interviewer effects in surveys, and our findings may have implications for future survey practice and designs. In addition, they provide important insights into the extent to which interviewer gender may affect survey estimates across a large variety of studies and topics, and they help researchers and study designers better judge when to control for gender-of-interviewer effects to improve data analyses, or how to assign interviewers in the field.
Dr Rebecca Kuiper (Utrecht University) - Presenting Author
Mr Lion Behrens (University of Mannheim)
In science, the gold standard for evidence is an empirical result that is consistent across multiple studies. In current aggregation approaches, such as meta-analysis or Bayesian updating, effect sizes are combined. Notably, these approaches are restricted to estimates that stem from homogeneous statistical models sharing a common functional form. However, in applied research, the main variables of interest are often operationalized in different ways and thus measured on different scales, and/or the statistical models used to relate variables often differ between studies. This presentation shows how one can aggregate evidence for informative (i.e., theory-based) hypotheses even if the underlying models and operationalizations are diverse.
To combine results from multiple studies of different types and designs that address one theoretical concept, we introduce an evidence aggregation method in which we combine the evidence for informative hypotheses using Bayes factors and posterior model probabilities. In this method, the hypotheses do not address specific parameters but cover an underlying effect of which the parameters (e.g., regression parameters) are indicative. The method can be employed to evaluate informative hypotheses of various complexities that go far beyond the traditional null hypothesis significance framework, and it quantifies the evidence for the hypotheses of interest being true in all the studies. To apply the method, a researcher merely needs the parameter estimates and their covariance matrix from each study, which adequately summarize all the information in the data that is relevant to the effect.
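A stylized sketch of the aggregation logic, with invented Bayes factors: per-study Bayes factors for each informative hypothesis (each against a common unconstrained reference) are multiplied across studies and normalized into posterior model probabilities, assuming equal prior probabilities. This is a simplified rendering of the idea, not the authors' implementation:

```python
# Sketch under assumed numbers: aggregating evidence for competing informative
# hypotheses across four studies via products of Bayes factors.
import numpy as np

# Rows: studies; columns: hypotheses H1, H2, Hu (hypothetical Bayes factors)
bf = np.array([[4.2, 1.1, 1.0],
               [3.5, 0.8, 1.0],
               [5.0, 1.3, 1.0],
               [2.8, 0.9, 1.0]])

combined = bf.prod(axis=0)        # aggregated evidence per hypothesis
pmp = combined / combined.sum()   # posterior model probabilities (equal priors)
for name, p in zip(["H1", "H2", "Hu"], pmp):
    print(f"{name}: PMP = {p:.3f}")
```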
The method will be explained and illustrated by combining the evidence from four studies on how trust depends on previous experience. In addition, I will demonstrate a Shiny app that can be used to combine evidence from multiple studies with diverse designs.
Mr Marcus Maher (Ipsos) - Presenting Author
Dr Alan Roshwalb (Ipsos)
Dr Rob Petrin (Ipsos)
Tracking studies often treat each month's results as one would a cross-sectional study, where each month's results stand on their own. Such studies typically consider preceding months' results only when examining trends and capturing changes over time. The tracking studies we conduct collect data continuously but mostly report results monthly; the delay is mainly to allow for the collection of sufficient sample sizes and the calculation of statistical weights that adjust for design and non-response effects. Regardless, users of the data need intra-cycle reporting and indicators of change in month-to-month results and trends.
Previously reported work examined using past data to help identify changes within a Bayesian updating process, with the goal of reducing the gap in the reporting cycle. That research used the past month's data and multiple months' data to estimate the parameters of a Beta-Binomial conjugate prior for use in Bayes factor analyses. The parameter estimation considered the past month's data alone, aggregated data from the past ten months, and a time-series regression to estimate the prior's parameters.
This paper extends those efforts by estimating the parameters with Bayesian Structural Time Series (BSTS). This machine-learning approach can capture trends, seasonality, and regressors even without the benefit of sufficient historical data, and it allows for spike-and-slab priors to perform variable selection among a large set of correlated regressors. We apply past Reuters-Ipsos state-level survey data to identify a current week's changes and flag potentially substantive and significant shifts in sentiment. State-level BSTS models separate trends for Red, Blue, and Purple states, along with other economic and political data. The study data are Reuters-Ipsos political sentiment results from the months leading up to the 2018 US mid-term elections.
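As a rough illustration of the conjugate-prior step described above (not the paper's BSTS estimation), one can fit Beta prior parameters to past months' proportions by the method of moments and update them with the current week's counts; all numbers below are invented:

```python
# Illustrative sketch (invented numbers): estimating Beta prior parameters
# from past months' approval proportions, then updating with the current week.
import numpy as np

past = np.array([0.44, 0.46, 0.45, 0.47, 0.43, 0.46, 0.45, 0.44, 0.46, 0.45])
m, s2 = past.mean(), past.var(ddof=1)

# Method-of-moments Beta fit: matches the sample mean and variance
common = m * (1 - m) / s2 - 1
a_prior, b_prior = m * common, (1 - m) * common

approve, n = 230, 500  # current week's responses (hypothetical)
a_post = a_prior + approve
b_post = b_prior + (n - approve)
print(f"Prior Beta({a_prior:.1f}, {b_prior:.1f}); "
      f"posterior mean = {a_post / (a_post + b_post):.3f}")
```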