Effect of nonresponse on results of statistical models 1

Convenor: Professor Christof Wolf (GESIS)
Coordinator 1: Professor Dominique Joye (University Lausanne/FORS)
Research on nonresponse has made important progress in recent years by examining many different aspects of the survey process linked to nonresponse and by developing ways to estimate the potential bias in indicators. However, research on the effects of nonresponse has mostly focused on possible bias in point estimates and how to correct for it. Very little research has been done on the consequences nonresponse can have on estimates of covariance structures and multivariate models, even though such data structures are much more common in substantive social science research. The aim of this session is to explore how nonresponse can affect estimates of covariances and effect sizes, and ways to counteract these effects. All papers -- theoretical contributions, empirical analyses, results from simulations or experimental studies -- are welcome.
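The session theme, that nonresponse can distort covariances and regression coefficients and not only means, can be illustrated with a minimal simulation sketch. The numbers and the selection mechanism below are hypothetical and not taken from any of the abstracts: units are less likely to respond the higher their outcome value, which attenuates the estimated slope among respondents.

```python
import math
import random

random.seed(42)

# Simulate a population in which y depends linearly on x (true slope 0.5).
N = 100_000
population = [(x, 0.5 * x + random.gauss(0, 1))
              for x in (random.gauss(0, 1) for _ in range(N))]

def slope(data):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data) / n
    var = sum((x - mx) ** 2 for x, _ in data) / n
    return cov / var

# Selective nonresponse: the higher y, the less likely a unit responds.
respondents = [(x, y) for x, y in population
               if random.random() < 1 / (1 + math.exp(y))]

print(round(slope(population), 3))   # close to the true value 0.5
print(round(slope(respondents), 3))  # attenuated: selection on y biases the slope
```

A point estimate such as the mean of y would also be biased here, but the sketch shows that the damage extends to the covariance structure: the respondent-only slope differs systematically from the population slope, which is exactly the multivariate consequence the session asks about.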
Business cycle indicators based on balance statistics are a widely used method for monitoring the recent economic situation. As surveys in general can be affected by distortions arising from response behaviour, these indicators can also be biased. In addition, time-dependent nonresponse patterns can produce even more complex forms of biased results. This paper develops a framework for determining which kinds of nonresponse patterns lead to bias and to decreased performance. We perform an extensive simulation study to analyse their effects on the indicators. Our analyses show that these indicators are remarkably robust to selection bias.
Longitudinal studies not only suffer from unit nonresponse in their first wave of data collection, but also from attrition from one wave to the next. Both nonresponse and attrition can potentially cause bias in survey estimates, which can often only be partially corrected by the use of adjustment weights. While bias is generally viewed as an item-level attribute resulting from a correlation between variables influencing the decision to participate in a survey (or a given survey wave) and responses to key survey questions, relatively few studies of nonresponse error have examined whether and how bias at the item level affects the relation between variables.
In the present study, we investigate the extent to which attrition in the Swiss Household Panel Study affects the conclusions drawn from multivariate analyses. Using a selection of variables known to be affected by selective attrition, we replicate multivariate analyses developed by substantive specialists using data from 1999 to 2011, to investigate whether and how bias at the item level affects the interpretation of results and to what extent the use of sample weights affects the coefficients. Our aim is to raise awareness among analysts of the potential impact selection bias may have on the development of social scientific theory, but also to draw conclusions about whether bias 'matters' when item-level errors have little influence on the relation between variables.
In electoral research the propensity to vote (PTV) for a party has become widely used in the study of voting behavior (van der Eijk et al. 2006). To date, however, item nonresponse has not been addressed in connection with the PTV questions. This seems all the more important because respondents who cannot or choose not to answer the PTV questions are omitted from analyses of voting behavior: PTVs are meant to measure electoral utilities, and from a theoretical point of view one needs all utilities to model the electoral decision in a stacked data set. Usually, all available respondent-party combinations are used. If item nonresponse in the PTV questions is affected by individual characteristics that are commonly used to explain voting behavior, such as political knowledge or party identification, statistical models explaining vote choice may be biased. Moreover, parties that are more affected by item nonresponse may be underrepresented in such models.
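The stacking step described above can be sketched as follows. The respondents, parties, and PTV values are invented for illustration; the point is only that rows with a missing PTV drop out of the stacked data set, so a party with more item nonresponse contributes fewer respondent-party rows.

```python
from collections import Counter

# Hypothetical PTV answers (0-10 scale); None marks item nonresponse.
ptvs = {
    "r1": {"A": 7, "B": 3},
    "r2": {"A": 5, "B": None},   # no PTV given for party B
    "r3": {"A": None, "B": 8},
    "r4": {"A": 6, "B": None},
}

# Stacked data: one row per respondent-party combination,
# keeping only combinations with an observed PTV.
stacked = [(r, party, ptv)
           for r, answers in ptvs.items()
           for party, ptv in answers.items()
           if ptv is not None]

rows_per_party = Counter(party for _, party, _ in stacked)
print(rows_per_party)  # party B contributes fewer rows than party A
```

If nonresponse to party B's PTV correlates with, say, political knowledge, the remaining rows for B are a selective subset, which is the mechanism by which the vote choice model can be biased.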
Using the European Election Study 2009, we first analyze which characteristics of a respondent and which features of a political party increase the likelihood of item nonresponse in PTV questions. In a second step, we assess how model coefficients are affected. Our results will allow a better understanding of item nonresponse in PTV questions and thus shed light on the potential bias induced in such vote choice models.