Assessing the Quality of Survey Data 2

Convenor: Professor Jörg Blasius (University of Bonn)
This session will present a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, and all of them may strongly influence the "substantive" solutions. These sources include response sets and response styles, misunderstanding of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit non-response, and faked interviews. We consider data to be of high quality when methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.
Spatial models of vote choice assume that voters compare the policy positions of parties and choose the party that best represents their personal policy preferences. Yet this is only feasible if (a) respondents are able to locate themselves and all (major) parties on a policy scale and (b) these placements actually convey meaningful information to them. Item non-response to even one of the ideological placements thus excludes a respondent from spatial models of voting; moreover, if respondents provide answers of very poor quality, spatial models do not make sense.
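For concreteness, one standard proximity specification (a common textbook form, not necessarily the specification estimated in this paper) has respondent i evaluate party j by the squared distance between self-placement and perceived party position:

    U_{ij} = -(x_i - p_{ij})^2, \qquad v_i = \arg\max_j U_{ij}

where x_i is respondent i's self-placement on the policy scale and p_{ij} the position at which i places party j. The model is computable only if x_i and every p_{ij} are observed and meaningful, which is why non-response or guessing on even a single placement matters.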
In this paper, we argue that personality traits affect whether respondents can be included in a spatial model of voting. In particular, we argue that item non-response and guessing with regard to ideological preferences and perceptions are less likely among conscientious and emotionally stable respondents. Our empirical analysis, based on data from the German Longitudinal Election Study (GLES), supports most of our hypotheses. It shows that conscientiousness and emotional stability decrease the likelihood of item non-response and guessing, whereas openness to experience has a positive effect. Importantly, these results also hold after controlling for variation in respondents' cognitive abilities and motivation. These findings suggest that spatial voting is more prevalent among more sophisticated, more interested, more conscientious and more emotionally stable voters.
It is well known that self-report measures suffer from diverse sources of measurement bias. We discuss popular measurement models with regard to their assumptions and their capability to deal with the nonsystematic and systematic measurement error inherent in balanced personality scales. This paper presents a new model based on the notion of the 'common trait score' (Pohl & Steyer, 2010), which represents the overall trait value across different measurement methods (pro-trait/con-trait wording). The model, labeled the Relative Response Style (RRS) model, captures both acquiescence bias and question order (autoregressive) effects. Different models are evaluated using structural equation modeling. For this purpose, the short BFI-10 scale (Rammstedt & John, 2007) is examined in its German and English versions in large population samples (ALLBUS, ISSP). We conclude that a variant of the RRS model provides a theoretically sound basis for construct validation and offers an explanation for measurement bias inherent in survey measures. Finally, we discuss theoretical implications for personality measurement and personality theory, and suggest further applications of this model.
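As a schematic point of reference (the exact RRS specification is the paper's own; the equations below show only the generic common-trait-plus-acquiescence structure for a balanced scale), the two wording methods can be written as

    y_{pro} = \lambda_{pro}\,\theta + \alpha + \varepsilon_{pro}
    y_{con} = -\lambda_{con}\,\theta + \alpha + \varepsilon_{con}

Here \theta is the common trait score shared by pro-trait and con-trait items, \alpha is an acquiescence factor loading in the same direction on all items regardless of wording, and \varepsilon is item-specific error; per the abstract, the RRS model additionally incorporates autoregressive paths between items to capture question order effects.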
Efi Markou, Françoise Courtel, Bernard de Clédat, Fannie Plessis, Lamia Zamouri
Conducted by INSEE and INED in 2012 among users of services such as shelters and soup kitchens, using the Time Location Sampling method, the national Homeless survey aimed to estimate the number of people using these services and to describe their characteristics and living conditions.
Two types of paper questionnaires were administered, each linked to a specific protocol: the main questionnaire, a face-to-face interview of approximately one hour, and a four-page self-administered questionnaire (SAQ). The SAQ was translated into 14 languages; thus, for the first time, non-French-speaking homeless people were surveyed. Among the 5,800 respondents to the SAQ, 1,500 were non-French speakers.
This paper focuses on the SAQ, in particular on the method used to assess the quality of the responses. Observing that there is often a gap between the responses given to a SAQ and those expected (multiple responses where only one was asked, responses written in the margins, etc.), we documented this gap through a coding system applied while reviewing and capturing the data. This coding, enriched with more classical information (such as item non-response and non-compliance with instructions), allows us to assess the quality of questionnaire completion, isolating "errors" and other deviations. The assessment takes into account the language of the questionnaire, as well as respondents' characteristics and the nature of the fieldwork.
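Purely as an illustration of such a coding system (the survey's actual codes are not given in this abstract; every name and code below is a hypothetical assumption), each captured SAQ item could carry a list of deviation codes alongside its value, sketched here in Python:

    from dataclasses import dataclass, field
    from enum import Enum

    class Deviation(Enum):
        # Hypothetical deviation codes, modeled on the examples in the abstract
        ITEM_NONRESPONSE = "inr"       # item left blank
        MULTI_RESPONSE = "multi"       # several answers where one was asked
        MARGIN_NOTE = "margin"         # response written in the margins
        INSTRUCTION_IGNORED = "instr"  # filter/skip instruction not followed

    @dataclass
    class CapturedItem:
        item_id: str
        value: str | None = None       # captured response, None if blank
        deviations: list[Deviation] = field(default_factory=list)

    def completion_quality(items: list[CapturedItem]) -> float:
        """Share of items captured without any recorded deviation."""
        clean = sum(1 for it in items if not it.deviations)
        return clean / len(items) if items else 0.0

    # Example: one clean item, one multi-response, one blank item.
    items = [
        CapturedItem("Q1", "2"),
        CapturedItem("Q2", "1;3", [Deviation.MULTI_RESPONSE]),
        CapturedItem("Q3", None, [Deviation.ITEM_NONRESPONSE]),
    ]
    print(round(completion_quality(items), 2))  # 0.33

Per-item codes of this kind can then be cross-tabulated with the questionnaire language and respondent characteristics, as the paper proposes.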
This paper outlines methods and tools for survey quality assessment, focusing on self-assessment as a method and the checklist as an instrument for evaluating survey quality. It presents an example self-assessment model integrating a three-level survey quality framework: the ESS internal data collection standards, the Total Survey Error (TSE) components, and the external ISO 20252 standard. The instrument was developed to assess the quality of the European Social Survey (Round 4) in Bulgaria, and therefore uses the ESS (internal) data collection standards as the reference model against which the assessment is carried out. However, as it is primarily a survey quality assessment tool that could be applied not only in the ESS participating countries but also in other research areas (after relevant adjustment), irrespective of the subject area or the specific survey methodology, the checklist additionally integrates the TSE components and takes into consideration the ISO standard for market, social and opinion research. The checklist strives to provide a tool for systematic, process-oriented survey quality assessment, and guidance in considering improvement measures for potential quality problems. The questionnaire builds on the EUROSTAT DESAP Self-Assessment Checklist for Survey Managers; however, the new instrument strives to avoid the subjectivity of the DESAP checklist by integrating mainly quantitative statistical indicators that measure the quality of the survey output through the different sources of error, as well as the precision of the survey process, drawing on paradata.