Different Methods, Same Results? – How Can We Increase Confidence in Scientific Findings?
Coordinator 1 | Dr Thorsten Kneip (Max Planck Institute for Social Law and Social Policy, MEA)
Coordinator 2 | Dr Gerrit Bauer (LMU Munich)
Coordinator 3 | Professor Elmar Schlueter (Justus-Liebig-University Giessen)
Coordinator 4 | Professor Jochen Mayerl (Chemnitz University of Technology)
This session follows up on earlier discussions at last year's ESRA conference on the fruitful use of multiple methods. We are interested in how to increase confidence in scientific findings in the light of mixed evidence on the one hand and seemingly established findings that have failed to replicate on the other (the "replication crisis"). While recent years have seen an ever-increasing proliferation of methods for survey data collection and analysis, there is still a lack of standards for aggregating findings. Too easily, convergence of findings is taken as indicative of a "true" effect, when it may well reflect repeated systematic errors. And while replicability and reproducibility are fundamental for empirical research, replication studies may confirm not only true but also false results. In a similar vein, diverging results across methods are often to be expected, as different methods aim to identify different effects or rely on different assumptions, the violation of which leads to different forms of bias.
The common problem seems to be rooted in a lack of awareness and transparency regarding implicit decisions made in the process of analysis, and in the lack of explication and discussion of model assumptions. We invite researchers to submit papers discussing the consequences of applying alternative methods of survey data analysis to the same research question. A focus should be placed on making explicit all assumptions related to the chosen method(s). Examples include:
- studies comparing at least two different estimation approaches, addressing different potential sources (and directions) of bias;
- extensive robustness checks varying theoretically undetermined parameters (e.g. functional form of control variables, definition of the analytic sample), as sketched after this list;
- replication studies critically reflecting on or challenging decisions made throughout the research process;
- crowd research.
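To make the second example concrete, the sketch below (in Python with statsmodels; the simulated data, variable names, and cut-offs are illustrative assumptions, not part of this call) re-estimates the same treatment effect while varying two theoretically undetermined choices: the functional form of a control variable and the definition of the analytic sample. Because the true effect is fixed at 0.5 by construction, divergence across specifications reflects the analytic decisions themselves.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2_000

# Simulated survey-style data: age confounds a binary treatment, and the
# true age effect on the outcome y is nonlinear (quadratic), so the
# functional form chosen for the age control genuinely matters.
df = pd.DataFrame({"age": rng.uniform(18, 80, n)})
df["treatment"] = rng.binomial(1, (df["age"] - 18) / 80)
df["y"] = (0.5 * df["treatment"]                 # true effect = 0.5
           + 0.002 * (df["age"] - 50) ** 2       # nonlinear confounder
           + rng.normal(0, 1, n))

# Three specifications differing only in theoretically undetermined
# choices: functional form of the control and the analytic sample.
specifications = {
    "linear age, full sample":    ("y ~ treatment + age", df),
    "quadratic age, full sample": ("y ~ treatment + age + I(age**2)", df),
    "linear age, age < 65 only":  ("y ~ treatment + age", df[df["age"] < 65]),
}

for label, (formula, data) in specifications.items():
    fit = smf.ols(formula, data=data).fit()
    est = fit.params["treatment"]
    lo, hi = fit.conf_int().loc["treatment"]
    print(f"{label:28s} effect = {est:5.2f}  95% CI [{lo:5.2f}, {hi:5.2f}]")
```

Reporting such estimates side by side makes explicit how much any single coefficient hinges on implicit choices; a full specification-curve or multiverse analysis would extend the same idea to all defensible combinations of decisions.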