Survey methodology and online community research

Convenor: Dr Andraž Petrovčič (University of Ljubljana, Faculty of Social Sciences)
Coordinator 1: Dr Gregor Petrič (University of Ljubljana, Faculty of Social Sciences)
Coordinator 2: Dr Katja Lozar Manfreda (University of Ljubljana, Faculty of Social Sciences)
Online communities have attracted much attention from social scientists over the last two decades. From the early bulletin-board systems of the late 1980s, through the discussion boards and web forums of the 1990s, to the recent diffusion of social network sites, researchers have explored the social, psychological and cultural aspects of online phenomena. This variety of research interests has been accompanied by a variety of methods for data collection and analysis, including virtual ethnography, data mining with "big data", social network analysis, and surveys. Across this 20-year period, and especially since the proliferation of web survey tools at the end of the 1990s, survey methodology appears to be the most frequently used approach in online community research. However, little attention has been given to methodological advances in survey techniques for this specific purpose. This is surprising considering the impressive development of web survey methodology and the diversity of its applications in general population studies. The proposed session aims to close this gap by providing a forum to present and discuss some of the state-of-the-art activities in survey research on online communities. The session invites proposals that explore the methodological aspects of survey deployment in online community research. These include (but are not limited to) the following issues: design and implementation of survey recruitment techniques, unit and item nonresponse, non-coverage error, response reluctance, sampling of online community participants, use of incentives, and strategies and methods for surveying participants in special online domains such as online social support or health-related groups. We welcome original and innovative theoretical, empirical, and case studies that may involve various types of online communities, such as discussion boards, newsletters and mailing lists, web forums, virtual worlds, social network sites, wikis, and other online collaborative environments.
This session is co-organized with Dr. Katja Lozar Manfreda and Dr. Gregor Petrič.
Researchers interested in delineating the boundaries of online communities (whether to establish a sample for a survey or to conduct a content analysis) are confronted with methodological sampling problems. These methodological issues have so far received little attention.
Social media are rich in relational data, and these data are often used for snowball sampling and hence for discovering online communities. Social network analysis can therefore play a central role in sampling online communities. In this paper we investigate two general ways of taking connectivity into account in the survey design through hyperlink analysis.
The first, classical way is to collect all hyperlinks (or as many as possible) and then partition the graph to identify highly connected clusters, which can then be defined as online communities. The graph can be partitioned using different clustering analyses. A second way is to build sociological statements into the web crawler itself. The idea is then to use social assumptions (such as mutuality or transitivity) to include new actors in the sample step by step, only if their inclusion complies with the initial assumptions.
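The contrast between the two approaches can be made concrete with a short sketch. The following Python fragment is illustrative only: the toy hyperlink graph, the function names, and the choice of modularity clustering stand in for whatever crawler output and partitioning method a study would actually use.

```python
# Minimal sketch of the two approaches; graph data, function names, and the
# choice of clustering method are illustrative, not taken from the paper.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# --- Approach 1: crawl hyperlinks broadly, then partition the whole graph ---
def communities_by_partitioning(edges):
    """Treat densely connected clusters of the hyperlink graph as communities."""
    G = nx.Graph(edges)  # link direction is ignored for clustering purposes
    # Modularity clustering is one of several usable partitioning methods
    # (Louvain, spectral clustering, etc. would serve the same role).
    return list(greedy_modularity_communities(G))

# --- Approach 2: grow the sample step by step under a social assumption ---
def snowball_with_mutuality(link_graph, seeds):
    """Add an actor only if it has a reciprocated link with the current sample
    (a 'mutuality' assumption; transitivity could be encoded analogously)."""
    sample, frontier = set(seeds), list(seeds)
    while frontier:
        actor = frontier.pop()
        for candidate in link_graph.successors(actor):
            if candidate in sample:
                continue
            # mutuality: some sampled actor links to the candidate and back
            if any(link_graph.has_edge(candidate, s) for s in sample
                   if link_graph.has_edge(s, candidate)):
                sample.add(candidate)
                frontier.append(candidate)
    return sample

if __name__ == "__main__":
    edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "b"), ("c", "d")]
    print(communities_by_partitioning(edges))                 # approach 1
    print(snowball_with_mutuality(nx.DiGraph(edges), {"a"}))  # approach 2
```

Note how the second function never sees the full graph: each inclusion decision depends only on the links between the candidate and the actors already sampled, which is what makes the approach feasible inside a crawler.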
The advantages and disadvantages of these two approaches will be discussed in this paper, with particular focus on the practical implications of choosing one approach over the other.
The public opinion world has been largely upended by three seismic shifts in recent decades. Declining telephone response rates have called into question the validity of RDD sampling frames; the possibility of large, diverse samples derived from opt-in Internet panels has raised the specter that probability samples may be unnecessary for many social questions; and the preponderance of unsolicited, publicly available discussions scraped from sites like Twitter suggests that surveys themselves may sometimes prove an inefficient tool. These challenges to traditional survey research threaten the basis for almost a century of survey sampling. The threats they pose, however, depend on a variety of assumptions. This paper outlines the conditions under which each public-opinion-gathering method will produce results that accurately describe the public. The assumptions required for learning about the population from non-probability and web-scraping techniques are compared to those used in nonresponse and weighting adjustments. Further, various data collection methods may be inconsistently accurate when used for different types of inference: the conditions required to correctly estimate trends over time and relations between variables differ notably from those necessary to understand population marginals. The product of these considerations is a clear empirical agenda for exploring how we can best understand the dynamics of American society amid a changing public opinion landscape.
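As a point of reference for the weighting adjustments mentioned above, the following Python fragment sketches raking (iterative proportional fitting) to known population marginals, the workhorse behind many nonresponse corrections. The sample records and margin targets are invented for illustration; they are not data from the paper.

```python
# Minimal sketch of raking (iterative proportional fitting) to population
# marginals; the sample records and margin targets below are invented.
import numpy as np

def rake(sample, margins, n_iter=50):
    """Adjust unit weights so weighted sample margins match population margins."""
    w = np.ones(len(sample))
    for _ in range(n_iter):
        for var, targets in margins.items():
            for category, share in targets.items():
                mask = np.array([row[var] == category for row in sample])
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= share / current  # rescale toward the target share
    return w / w.mean()  # normalise to mean weight 1

sample = [
    {"age": "18-34", "sex": "f"}, {"age": "18-34", "sex": "m"},
    {"age": "35+", "sex": "f"}, {"age": "35+", "sex": "m"},
    {"age": "35+", "sex": "m"},
]
margins = {"age": {"18-34": 0.3, "35+": 0.7}, "sex": {"f": 0.5, "m": 0.5}}
print(np.round(rake(sample, margins), 3))
```

The point of the comparison in the abstract is that this adjustment only fixes inference when the weighting variables capture the selection mechanism, an assumption that non-probability and scraped data must also satisfy in some form.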
The paper is based on the experience of the European Union Fundamental Rights Agency, which in 2012 carried out a large European online survey of LGBT people covering 28 countries. The choice of the survey method was based mainly on the assumption that a large proportion of lesbian, gay, bisexual and transgender people, considered a hard-to-reach population, regularly use the internet.
The paper discusses the main methodological challenges of reaching LGBT people across all the EU Member States and Croatia, and the efforts made to monitor the quality of the survey sample and the data. In particular, differences between countries and target groups in levels of participation in community organisations and in access to the internet had to be taken into account in order to obtain strong heterogeneity in the socio-demographic characteristics of respondents. Specific strategies for reaching the target groups were developed, mainly by focusing on the online community. During the data collection phase, monitoring techniques were applied so that the outreach strategies could be adapted, if needed, to ensure the quality of the survey sample. The paper summarises the lessons learned, the factors of success, and the challenges for future online surveys.
This paper presents the results of a study aiming to evaluate the effect of survey invitation characteristics and response reluctance on unit nonresponse in list-based surveys of web forum participants. On one hand, it investigates how response rates are associated with requests for help, authority, and sense of community, three response-inducing techniques used in email invitations. The decision to focus on a combination of these three elements was theoretically informed by research on online communities showing that the exchange of social support, belonging, and norms have been central to their development and sustainability. Considering leverage-salience theory (Groves et al. 2000) and the fact that these elements do not appeal to all online community members to the same extent, it is hypothesized that combining the three elements in different ways may result in different response rates. In addition, the paper explores if and how the three response-inducing techniques are associated with unit nonresponse bias. On the other hand, the paper investigates the effect of response reluctance on nonresponse bias. In particular, the study examines whether early and late respondents have different characteristics in terms of participation in the online community (i.e., frequency of participation, number of posted messages, and year of registration), as well as whether the use of reminders can reduce the potential bias associated with unit nonresponse error.
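The early-versus-late comparison described above can be illustrated with a minimal sketch. The field names (posts, after_reminder) and the toy records below are hypothetical; the study's actual variables, data, and tests may differ.

```python
# Hypothetical illustration: classify respondents who answered only after a
# reminder as 'late', then compare a participation indicator across groups.
from statistics import mean

def split_by_reminder(respondents):
    """Partition respondents by whether they answered only after a reminder."""
    early = [r for r in respondents if not r["after_reminder"]]
    late = [r for r in respondents if r["after_reminder"]]
    return early, late

def compare_activity(early, late, key="posts"):
    """Compare mean activity (e.g., number of posted messages) between groups."""
    return mean(r[key] for r in early), mean(r[key] for r in late)

respondents = [
    {"posts": 120, "after_reminder": False},
    {"posts": 80, "after_reminder": False},
    {"posts": 15, "after_reminder": True},
    {"posts": 5, "after_reminder": True},
]
print(compare_activity(*split_by_reminder(respondents)))
```

Under a continuum-of-resistance view, a large gap between the two group means would suggest that nonrespondents resemble the late group, and hence that reminders reduce nonresponse bias.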