Probability-based research panels 3

Chair: Mr Darren Pennay (Social Research Centre, Australian National University)
Probability-based online panels are regarded as state-of-the-art data collection tools in Europe and the USA (e.g., LISS in the Netherlands, the GIP and GESIS panels in Germany, ELIPSS in France, and GfK-Knowledge Networks in the USA) and are now being established in Australia and Canada. However, probability-based panels are also vulnerable to nonresponse during data collection, and attrition in particular is a constant concern for panel managers. Several theories of nonresponse have been developed over the years, and attitudes towards surveys are key concepts in these theories. Our research question is therefore: do survey attitudes predict wave nonresponse and attrition better than standard correlates of nonresponse such as age, education, income, and urbanization?
To measure survey attitudes, a brief nine-question scale was developed for use in official statistics and (methodological) survey research. Its key dimensions are survey value (the value ascribed to surveys, e.g., surveys are seen as important for society and as a source of much useful knowledge), survey enjoyment (reflecting the assumption that respondents can enjoy participating in surveys, e.g., surveys are seen as enjoyable and interesting), and survey burden (reflecting perceived demands, e.g., too many survey requests, too invasive, too long). Preliminary research in four online panels indicated that the scale is reliable and has predictive validity.
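As an illustration of how such a scale can be scored and its reliability checked, the sketch below computes sub-construct means and Cronbach's alpha per dimension. The nine item wordings are not reported in this abstract, so the column names and the three-items-per-dimension grouping are assumptions.

```python
# Hypothetical sketch of scoring the three Survey Attitude Scale sub-constructs
# and checking their internal consistency. Item names and groupings are assumed.
import pandas as pd

SUBSCALES = {
    "enjoyment": ["enjoy_1", "enjoy_2", "enjoy_3"],
    "value":     ["value_1", "value_2", "value_3"],
    "burden":    ["burden_1", "burden_2", "burden_3"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def score_subscales(df: pd.DataFrame) -> pd.DataFrame:
    """Mean score per respondent for each sub-construct (1-7 scale)."""
    return pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in SUBSCALES.items()})

# Usage (df holds one row per respondent with the nine item responses):
# scores = score_subscales(df)
# alphas = {name: cronbach_alpha(df[cols]) for name, cols in SUBSCALES.items()}
```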
The data come from the Dutch probability-based online LISS panel. The Survey Attitude Scale was part of the annually administered core questionnaire from 2008 to 2011. In addition, the number of completed questionnaires and the number of invitations were available for each panel member over the years, as were 34 demographic and psychographic variables. Drawing on expert opinions from 31 survey methodologists, the most important correlates of nonresponse were added as control variables to our model. To predict the number of completed interviews and determine the explanatory power of the Survey Attitude Scale, a longitudinal negative binomial regression is employed (see the sketch below).
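A minimal sketch of such a model, assuming hypothetical variable names and using the number of invitations as an exposure term, might look as follows; the authors' actual specification is longitudinal (repeated waves per panel member), which a full analysis would additionally account for, e.g., via GEE or random effects.

```python
# Simplified negative binomial regression for completed interviews.
# Variable names are hypothetical; the exposure term turns the outcome
# into a rate per invitation (log(invitations) enters as an offset).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_completion_model(df: pd.DataFrame):
    """df: one row per panel member with attitude scores, controls,
    completed interviews ('completed') and invitations ('invitations')."""
    model = smf.glm(
        "completed ~ enjoyment + value + burden + age + education + urbanization",
        data=df,
        family=sm.families.NegativeBinomial(),
        exposure=df["invitations"],
    )
    result = model.fit()
    # exp(coefficient) is the rate ratio: the multiplicative change in
    # completed interviews per one-unit change in the predictor.
    print(np.exp(result.params).round(3))
    return result
```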
The Survey Attitude Scale consists of three sub-constructs: enjoyment, value, and burden. Respondents who rated survey enjoyment one unit higher (on average across waves, on a scale from 1 to 7) are estimated to complete roughly 1.22 times as many interviews per year, i.e., 22% more. The same one-unit difference in perceived survey value corresponds to only 8% more interviews. Finally, a one-unit increase in perceived survey burden reduces the number of completed interviews by 12%. These results hold even when control variables (e.g., age, education, urbanization) are added to the model: the regression coefficients of the survey attitudes hardly change, although most controls are significant.
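To make the link between these percentages and the regression explicit: in a negative binomial model the exponentiated coefficient is a rate ratio, so the reported effects imply coefficients of roughly the following size (back-calculated from the percentages above, for illustration only).

```latex
% Rate-ratio reading of the reported effects (illustrative back-calculation):
\[
e^{\hat\beta_{\text{enjoyment}}} \approx 1.22 \;\Rightarrow\; \hat\beta_{\text{enjoyment}} \approx \ln 1.22 \approx 0.20,
\]
\[
e^{\hat\beta_{\text{value}}} \approx 1.08 \;\Rightarrow\; \hat\beta_{\text{value}} \approx 0.08, \qquad
e^{\hat\beta_{\text{burden}}} \approx 0.88 \;\Rightarrow\; \hat\beta_{\text{burden}} \approx -0.13.
\]
```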
Hence, survey attitude is a strong predictor of nonresponse over and above a person’s demographic and psychographic profile. This makes it possible to identify potential nonrespondents in an online panel early on and to use tailored designs to improve response and reduce attrition. Moreover, emphasizing to respondents the enjoyable side of surveys rather than their value, and actively reducing survey burden, seems promising.
The polling industry has come under considerable strain after the recent erroneous predictions of the US presidential election and the Brexit referendum. Election polls need to be fielded within a short timeframe, typically just a few days, so quick and lean survey modes such as online access panels are often preferred. However, these panels generally use nonprobability techniques to recruit panelists and to select survey participants. Some comparative studies show that the samples of such nonprobability online panels are not representative of the general population and yield less accurate data than traditional probability-based offline surveys. In this light, we assess the data accuracy of probability and nonprobability, as well as online and offline, surveys in Germany.
We compare data from one probability online survey split into two samples (one with and one without the offline population), eight nonprobability online surveys, and two probability face-to-face surveys. As a metric of accuracy, we use the average absolute relative bias (AARB). It measures the average of the absolute relative bias between the survey and the benchmark data, computed over the ordinal or nominal categories of the data (a formalisation is given below). As benchmarks, the German Mikrozensus as well as other official data sources are used.
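One common formalisation of this metric, consistent with the description above (the exact formula used by the authors is not given in the abstract), is:

```latex
% AARB over Q categories: \hat{p}_q is the survey estimate,
% p_q the benchmark proportion of category q.
\[
\mathrm{AARB} \;=\; \frac{1}{Q} \sum_{q=1}^{Q} \left| \frac{\hat{p}_q - p_q}{p_q} \right|
\]
```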
Our results indicate that the probability face-to-face (offline) surveys provide the most accurate survey data. Moreover, the probability online surveys are more accurate than the nonprobability online surveys. The quotas drawn and the weights provided by the nonprobability panels are insufficient to produce accurate samples, whereas our calibration weighting improves accuracy.
With our research, we contribute to an urgently needed discussion of the suitability of nonprobability online surveys for election polling and social research. Moreover, this is the first study to assess data accuracy across different survey modes and sampling techniques in Germany.
In Australia in 2014-15, 86 per cent of households had an internet connection (ABS Cat. 8146.0). Since 2010, online research has been the dominant mode of data collection in the Australian market and social research industry, supplanting Computer Assisted Telephone Interviewing (CATI). In 2015, online research accounted for 41 per cent of the revenue generated by the Australian industry, up from 31 per cent two years earlier (Research Industry Council of Australia, 2016), with much of this coming from non-probability internet panels. Unlike in the United States and Europe, there were no national probability-based online panels in Australia as of 2016.
The authors of this paper are concerned that the rapid increase in the use of non-probability online panels in Australia has not been accompanied by an informed debate regarding the advantages and disadvantages of probability and non-probability surveys.
Thus, the 2015 Australian Online Panels Benchmarking Study was undertaken to inform this debate and report on the findings from a single national questionnaire administered across three different probability samples and five different non-probability online panels.
This study enables us to investigate whether Australian surveys using probability sampling methods differ in accuracy, relative to independent population benchmarks, from Australian online surveys relying upon non-probability sampling methods. In doing so we build on similar international research in this area (e.g., Yeager et al., 2011; Chang & Krosnick, 2009; Walker, Pettit & Rubinson, 2009). We discuss our findings as they relate to coverage error, nonresponse error, adjustment error, and measurement error.
This study directly compares survey data on social attitudes collected from an opt-in sample of Voter Advice Application (VAA) users and from a randomly recruited, probability-based online panel of respondents. While much research to date has focused on the demographic representativeness of VAA data, less is known about their attitudinal representativeness. This study of Australian samples contributes to this emerging literature.
VAAs are proliferating as a source of ‘big data’ among public opinion and political science researchers, despite concerns over the representativeness of their opt-in samples. During July 2016, the VAA developer Election Compass collected email addresses for approximately 40,000 Australian users of its application in the weeks prior to the 2016 Australian federal election. In November 2016, this study will survey the sample of VAA users on their attitudes towards a range of Australian social issues. In December 2016, I will administer the same questionnaire to a probability-based sample, using an identical mode of administration and similar response maximisation techniques. The questionnaire contains a broad range of questions designed to identify dimensions (using factor analysis) of socio-political attitudes in Australian society. Comparing the composition of these dimensions and the relationships between variables across the two data sources will contribute to our understanding of incidental samples such as VAA users, and of the extent to which we can and should make inferences from VAA-generated data (a sketch of such a comparison is given below).
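A sketch of one way such a comparison could be run, assuming two respondent-by-item matrices for the VAA and probability samples; the data names, number of factors, and use of Tucker's congruence coefficient are illustrative assumptions, not the author's stated procedure.

```python
# Fit a factor model separately to each sample and compare the loading
# patterns with Tucker's congruence coefficient (1.0 = identical pattern).
import numpy as np
from sklearn.decomposition import FactorAnalysis

def loadings(X: np.ndarray, n_factors: int = 3) -> np.ndarray:
    """Factor loadings (factors x items) from a simple factor analysis."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(X)
    return fa.components_

def tucker_congruence(a: np.ndarray, b: np.ndarray) -> float:
    """Congruence between two loading vectors."""
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Usage (X_vaa, X_panel: respondents x items matrices from the same questionnaire):
# L_vaa, L_panel = loadings(X_vaa), loadings(X_panel)
# congruence = [tucker_congruence(L_vaa[k], L_panel[k]) for k in range(L_vaa.shape[0])]
```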