
ESRA 2023 Glance Program


All time references are in CEST

Unexpected Research Findings - Reasons and Consequences

Session Organisers: Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories)
Dr Detlev Lück (Federal Institute for Population Research (BiB))
Time: Tuesday 18 July, 09:00 - 10:30

In social science research, we do not always find what we expect; sometimes we obtain surprising or disappointing results. Instead of the theoretically predicted effects, we may find unpredicted ones or no effect at all. Sometimes results even completely contradict our hypotheses. Given that such outcomes are unpleasant and difficult to publish, we assume that they occur much more often than we hear about them.
We do not (only!) want to complain about this problem, but also to discuss possible reasons as well as possible ways to deal with this issue. Reasons may include limited data quality, unavailable information, or insufficient analytical potential of our data. For example, we may not be able to consider multilevel structures due to low case numbers or because statistical requirements are not met. Problems of data availability may include indicators not included in the questionnaire or unobserved third variables. Unexpected research results may also occur when fieldwork procedures or the research design are documented insufficiently.
Regarding the issue of how to deal with unexpected findings, one concern is to give them more visibility. This is the precondition for improving future research instead of repeating failures that others have secretly made before. A second concern is to avoid common reasons for failures. Perhaps the most important advice is to carefully consider what information we need to apply a particular theoretical framework or to address a particular research question before collecting the data. In the face of limited data availability or analytical potential, new and alternative data sources may offer opportunities to fill these gaps.
We invite researchers to participate in this discussion and to learn from each other in order to avoid disappointments.

Keywords: unexpected findings, unexpected results, publication bias

Papers

Prevalence and Actor-Dependent Risk Factors of Publication Bias in the Social Sciences: Evidence from Two Probabilistic Panel Surveys

Dr Désirée Nießen (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author
Ms Caroline Poppa (SHARE BERLIN Institute)
Dr Jessica Daikeler (GESIS – Leibniz Institute for the Social Sciences)
Professor Henning Silber (University of Michigan)
Dr Bernd Weiß (GESIS – Leibniz Institute for the Social Sciences)
Professor David Richter (SHARE BERLIN Institute, Freie Universität Berlin)

For over a decade, there has been debate about the (non-)replicability of empirical results. During this replication crisis, several causes of low replicability have been identified (lack of transparency, unclear data basis and methods, flawed experiments, p-hacking, etc.) and addressed by promoting, for example, preregistrations, open data, and open materials. Despite these efforts, some results remain non-replicable. Publication bias has been identified as a contributing factor, describing the (non-)publication of study results due to their direction or strength. This means that statistically significant results and larger effects are more likely to be published than statistically non-significant results and smaller effects, leading to an overestimation of the importance of findings. As a result, publication bias complicates the assessment of the true state of knowledge on a particular research question, influencing scientific debates and policy decisions based on incomplete evidence and increasing the risk of incorrect recommendations. In the present study, we focus on the decision-making process that leads researchers to selectively publish certain results while others remain in the “file drawer.” Specifically, we look at successful study submissions of external researchers to two German probabilistic panels (GESIS Panel, SOEP-IS) and examine discrepancies between, among other things, the research questions and hypotheses formulated in the study submissions and the presentation in the subsequent actual publications. Detailed information is collected on three elements (study submission, publication, author). To better understand the extent of publication bias, we analyze which study and author characteristics are associated with a higher probability of publication. Additionally, we conduct an author survey to gather information on further planned and prepared publications, as well as on personality traits, motives, attitudes, and values related to the research process. In this way, the entire research cycle can be observed.


The Big Five - Replicable or not? Acquiescence may blur the Big Five factor structure in heterogeneous samples

Professor Beatrice Rammstedt (GESIS – Leibniz Institute for the Social Sciences) - Presenting Author

After developing and thoroughly validating a short-scale personality questionnaire measuring the Big Five personality dimensions (the BFI-10; Rammstedt & John, 2007), we were faced with the fact that its Big Five factor structure is not guaranteed in representative samples. Analyses of subpopulations clearly showed that, especially in subsamples with lower levels of education, the Big Five factor structure was not replicable, whereas in subsamples with a high level of education the Big Five clearly emerged. What might have been the reason for this measurement non-invariance? Is the Big Five personality structure only valid for the higher educated? In search of potential biasing factors, we found strong differences in acquiescence between subsamples with different educational levels. After controlling for this response bias, the Big Five model held at all educational levels.
We showed the generalizability of this biasing effect of acquiescence across questionnaires, assessment modes, and cultures in numerous studies (Lechner et al., 2019; Rammstedt & Farmer, 2013; Rammstedt & Kemper, 2011; Rammstedt et al., 2013, 2017, 2023). Based on our findings, we were able to identify factors causing acquiescent response behavior and developed a model explaining acquiescent response styles.


The Effect of the Perception of the Interview Situation on Future Participation in Panels

Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories) - Presenting Author

Panel attrition is a relevant issue in longitudinal studies, and identifying the causes for dropout is crucial for improving participation rates. This work initially aimed to examine how the perception of the interview situation, from both the respondents' and interviewers' perspectives, may predict participation in future panel waves. Our hypothesis was that the perception of the interview situation in the current wave would predict future participation.
As expected, our analyses show that the respondents' perception of the interview situation is predictive of their future participation. Unexpectedly, the interviewers' assessments of the respondents’ comprehension, reliability, cooperation, and fatigue have no effect on participation in subsequent waves.
This finding challenges the initial assumption that interviewers' assessments would play a role in predicting future participation. It raises questions about the reasons for such unexpected results. One possible explanation is social desirability bias, in the sense that interviewers may be reluctant to report difficulties occurring during the interview in order to avoid conflicts with their fieldwork institutes. Additionally, limitations in data availability could have influenced the results. Such unobserved factors may include the interviewers’ perception of the respondents’ interest, or equivalent data from an earlier panel wave.
This work highlights the importance of critically reflecting on unexpected findings in research. It emphasizes the need to carefully consider the limitations regarding data availability and the risk of unobserved factors that may affect the results. It also advises carefully considering what information we need to address a particular research question before collecting the data.


Verifying Respondents’ Identities or Capturing Gender Transitions?

Dr Detlev Lück (Federal Institute for Population Research (BiB)) - Presenting Author

In surveys and panels, interviews are occasionally conducted with the wrong person for various reasons. For example, individuals invited to participate in self-administered surveys might lack the time or motivation to complete the questionnaire and may hand the invitation over to another household member. Additionally, misunderstandings regarding the intended recipient of an invitation letter can occur. To identify such errors and confirm respondents’ correct identities, panels often collect basic demographic information, such as birth year and gender, during each panel wave and check for consistency across waves. An inconsistency is interpreted as an indication of a mistaken identity.

However, in light of the growing recognition of queer gender identities, it is clear that a change in gender across panel waves is entirely possible. This suggests that such inconsistencies in the reported gender should, rather than being seen as an error, be empirically interpreted as evidence of a gender transition. But is this interpretation plausible? How can we distinguish between mistaken respondent identities and genuine gender transitions?

This presentation draws on data from the first three data collections of “FReDA - The German Family Demography Panel Study” (2021). It examines the prevalence of various patterns of gender inconsistencies in these data and presents the evidence supporting both the methodological and the empirical interpretation, discussing which should be prioritised. Very often, the inconsistency in gender comes with other inconsistencies, such as an inconsistency in the reported year of birth, which supports the assumption of a mistaken respondent identity. This suggests that the strategy of confirming respondents’ correct identities by asking redundant demographic information in panels still seems adequate. It also suggests that the prevalence of gender transitions may be too low to be meaningfully measured in panel data, since the “false gender transitions” clearly exceed the true gender transitions.