Using Surveys for Quality Assurance in Higher Education - Challenges and Solutions
Session Organisers |
Dr Heide Schmidtmann (University of Duisburg-Essen, Center for Higher Education Development and Quality Enhancement (CHEDQE))
Dr René Krempkow (Humboldt-Universität zu Berlin)
Time | Tuesday 16th July, 14:00 - 15:30 |
Room | D18 |
The increasing availability of accessible and affordable survey tools provides ample opportunities to gather data on individuals’ subjective perspectives. More and more, such data are used as evidence in practical fields of application, including quality assurance in higher education. In post-Bologna Europe, the newly revised European Standards and Guidelines for Quality Assurance in Higher Education (ESG) emphasize the importance of data, including survey data from students and staff, as such data help stakeholders at all levels to make informed decisions (ENQA et al. 2015, Standard 1.7).
Social scientists who collect and analyze data for quality assurance purposes at institutions of higher education face a triple challenge: to collect valid and reliable survey data (Ansmann & Seyfried 2018), to coordinate the questionnaires with evaluators within the quality assurance departments (Ganseuer & Pistor 2016), and to ensure the practical applicability of the results (Rousseau 2006). Striking a balance between scientific rigor and practical utility concerns the entire survey lifecycle (Survey Research Center 2016). Issues include, but are not limited to, the following:
• Questionnaire design: Which concepts should be measured? Who should be involved in the questionnaire design process and how?
• Data collection: How to ensure data validity despite limited financial and human resources as well as data protection requirements?
• Data analysis: Who is the target audience? How to analyze the data to be useful for quality assurance while adhering to principles of good scientific practice? How to strike a balance between data protection requirements and loss of information?
• Application: How to increase the acceptance and application of the results by decision-makers?
We welcome papers from researchers and evaluation practitioners working at the interface between data collection and application for quality assurance in higher education. Ideally, the papers describe examples of good practice that address the aforementioned issues.
References
Ansmann, M. & Seyfried, M. (2018). Qualitätsmanagement als Treiber einer evidenzbasierten Qualitätsentwicklung von Studium und Lehre? Zeitschrift für Hochschulentwicklung, 13(1), 233-252.
ENQA et al. (2015). Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG). Brussels, BE.
Ganseuer, C. & Pistor, P. (2016). Auf dem Weg zur Qualitätskultur? Interne Qualitätsmanagementsysteme: Internationale Tendenzen und Nationale Entwicklungsfelder. Wissenschaftsmanagement, 5, 37-41.
Rousseau, D. (2006). Is there such a thing as evidence-based management? Academy of Management Review, 31(2), 256-269.
Survey Research Center (2016). Guidelines for Best Practice in Cross-Cultural Surveys. Ann Arbor, MI.
Keywords: student survey, graduate student survey, tracer studies, evidence-based quality assurance, higher education, evaluation
Dr Maike Reimer (Bayerisches Staatsinstitut für Hochschulforschung & Hochschulplanung (IHF))
Mr Johannes Wieschke (IHF) - Presenting Author
Ms Esther Ostmeier (IHF, TUM)
Among the information that German universities are required to incorporate into their quality management is graduate feedback on study quality and labour market outcomes. While quantitative graduate surveys are an established instrument for collecting such data, it is a considerable challenge to ensure that they can truly unfold their evaluative potential and do not, as is sometimes suspected, “vanish in the drawer”.
To address this challenge, more than 15 universities and universities of applied sciences in the German Federal State of Bavaria have been coordinating their data collection efforts in the area of graduate surveys for more than five years. Within the project “Bayerische Absolventenstudien” (BAS, Bavarian Graduate Studies), they collaborate with two specialized institutions:
- the IHF, providing expertise in education and labour market research as well as in survey data collection,
- the Institute for University Software Bamberg, providing a data warehouse as a tool to flexibly archive, visualize and contextualize the data over time without the need for specialized statistical or other software.
Thus, a variety of challenges on the road from data to strategic action have been overcome, among them:
- the conundrum of sustainability and continuity over time vs. flexibility in addressing topics of short-term interest,
- the dilemma of complex and fine-grained data requirements vs. the need for simple and timely analyses,
- the difficulty of contextualizing graduate reports with data from other sources,
- the desire for benchmarking while avoiding simplistic rankings.
In our contribution, we describe the experiences and lessons learned in the process, identify the factors contributing to the success of the collaboration, and sketch future challenges that must be addressed in the next steps.
Professor Vasja Vehovar (University of Ljubljana) - Presenting Author
Mr Miha Matjašič (University of Ljubljana)
Student evaluations of higher education teaching have become a standard instrument across universities. An important question for these surveys is the timing of the data collection. There are clear advantages to conducting the evaluation after the exam: the student can then also evaluate the exam itself and the competencies obtained. However, the exam results might interfere with student attitudes, especially when they differ from the student’s expectations. The evaluation before the exam, on the other hand, is free from this influence. In addition, it has the advantage that all students answer the survey at approximately the same time, rather than in varying circumstances at different time points after the exam. At the University of Ljubljana, we currently use two surveys. Before the exam, students answer standard questions about the course and the teacher(s). After the exam, they evaluate the exam, the competencies obtained and the amount of time spent (relative to the course credits). This approach, of course, raises the question of whether the benefits justify the increased response burden. We therefore studied the response rates and response quality of these two surveys. We also addressed the predictability of the after-exam results, as it is well known that a student’s responses across the components of a given course are highly correlated. In addition, we compared this relation with experimental data in which the after-exam questions were already included in the pre-exam survey, and we compared response quality with experimental surveys in which only the after-exam survey was used. The results enable us to elaborate the arguments, advantages and disadvantages of the three data collection timings: before the exam, after the exam, and both before and after.
Dr Barbara Neža Brečko (University of Ljubljana) - Presenting Author
In student evaluation of teaching surveys, good response rates are crucial for data quality, as survey results can have significant implications. There are several approaches to attracting students to participate. The first option is a free decision on participation (with no forcing); however, there is a danger that response rates will be very low. The second option is forcing students into the survey by not allowing them to register for the corresponding exam before the survey has been answered. Alternatively, they might not be allowed to enrol in the next study year until they have evaluated all courses. Another approach to increasing survey participation is to slightly delay the exam results for students who did not complete the survey. An extreme version of forcing is not offering the option to reject the survey, or even not allowing any question of the questionnaire to be omitted.
The University of Ljubljana has been conducting online student evaluation surveys for several years, and experience shows that without some forcing of participation, response rates are very low, even when large resources are invested in promotion. On the other hand, experience shows that with extreme forcing the response quality is so low that the results are not useful. Namely, there is a danger of satisficing, which can be measured by the level of straight-lining (i.e., providing the same response category for an entire set of questions) and by the number of answers to open-ended questions. The paper outlines the current solution at the University of Ljubljana, which balances the danger of a low response rate, which occurs if there is no forcing of participation, against low response quality, which may appear when full forcing is present. The response rates and satisficing indicators are compared for various approaches, particularly the free approach, the fully forced approach and the current combined approach.
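To make the satisficing indicator mentioned above concrete, the sketch below computes a simple straight-lining rate for one rating battery. It is a minimal illustration with hypothetical column names and data, not the University of Ljubljana's actual implementation; it assumes the responses are available as a pandas DataFrame with one column per item.

```python
# Minimal sketch of a straight-lining indicator: the share of respondents who
# give the same response category to every item in a rating battery.
# Column names ("q1".."q5") and the example ratings are hypothetical.
import pandas as pd

def straightlining_rate(responses: pd.DataFrame, battery: list[str]) -> float:
    """Share of respondents who answered all items in `battery`
    with one and the same response category."""
    answered = responses[battery].dropna()        # complete cases only
    same = answered.nunique(axis=1) == 1          # one distinct value per row
    return same.mean() if len(answered) else float("nan")

# Hypothetical 1-5 ratings for a five-item course evaluation battery.
df = pd.DataFrame({
    "q1": [5, 3, 4, 2, 5],
    "q2": [5, 3, 5, 2, 4],
    "q3": [5, 4, 4, 2, 5],
    "q4": [5, 3, 4, 2, 5],
    "q5": [5, 3, 4, 2, 3],
})
print(f"Straight-lining rate: {straightlining_rate(df, list(df.columns)):.2f}")
```

In this toy example two of five respondents straight-line, giving a rate of 0.40; in practice such a rate would be compared across the free, fully forced and combined participation regimes.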
Ms Anna-Lena Hönig (University of Mannheim)
Mr Edgar Treischl (Friedrich-Alexander-University Erlangen-Nürnberg) - Presenting Author
Increasing teaching quality is an important aim for instructors, institutions, and academia as a whole when it comes to quality assurance. Teaching quality is a construct with several dimensions, which is why a key challenge in (higher) education is to identify the individual dimensions of teaching quality that can serve as leverage points for assessing quality. Student evaluation of teaching (SET) is the standard tool for measuring teaching quality, but survey-based SET is not well equipped to identify the impact of single dimensions on students’ overall course assessment. Moreover, survey research does not take into account the process by which students assess the different dimensions, which is why we use factorial survey experiments to measure the relative importance of individual dimensions in students’ course assessments. We randomly assign students to different course scenarios and ask them to simultaneously weigh the individual dimensions of teaching quality. Using survey experiments allows us to gain valuable insight into the decision-making process of students in course evaluations.
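As a rough illustration of this design, the following sketch generates course scenarios (vignettes) from a small set of quality dimensions and randomly assigns them to respondents. The dimensions, levels, and student identifiers are hypothetical and are not taken from the authors' instrument.

```python
# Sketch of a factorial survey (vignette) setup: build the full factorial of
# dimension levels and randomly assign one scenario to each student.
import itertools
import random

# Hypothetical teaching-quality dimensions and levels.
dimensions = {
    "structure":   ["poorly structured", "well structured"],
    "interaction": ["little discussion", "frequent discussion"],
    "feedback":    ["no feedback on assignments", "detailed feedback on assignments"],
    "workload":    ["low workload", "high workload"],
}

# Full factorial: every combination of dimension levels is one vignette.
vignettes = [dict(zip(dimensions, combo))
             for combo in itertools.product(*dimensions.values())]

random.seed(42)  # reproducible assignment
students = [f"student_{i:03d}" for i in range(1, 201)]
assignment = {s: random.choice(vignettes) for s in students}

print(f"{len(vignettes)} scenarios from {len(dimensions)} dimensions")
print("Example assignment:", assignment["student_001"])
```

Because the levels vary independently across scenarios, the effect of each dimension on the overall course rating can later be estimated without the confounding that plagues observational SET data.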
We have collected survey data at two universities covering a wide range of course types, programs, and student backgrounds. These original data allow us to estimate the causal effect of several dimensions of teaching quality on students’ course assessment.
Our results support instructors and institutions, as we evaluate the relevance of individual dimensions of teaching quality in their context. These practically applicable findings help instructors improve their teaching skills and support institutions in their mission for quality assurance. Results from this project inform lean questionnaire design for course evaluations, as they provide practical means of deciding which dimensions should be measured. We show how survey experiments provide a practical solution to inform this decision-making process despite limited resources. Our project demonstrates how this survey methodology, if successfully integrated with course evaluations, allows for the involvement of all key stakeholders, including evaluators.
Dr René Krempkow (Humboldt University of Berlin) - Presenting Author
We present an evaluation for quality assurance and quality development, based on a longitudinal survey design, of a qualification program for young professionals in higher education aimed at improving teaching and learning. The teacher-training program is based on standards and methods of higher education and on a theoretical model of teaching competency that was originally developed in pedagogical research on school education. The survey was conducted to investigate the effects of the program as perceived by its participants. The results show a high level of participant satisfaction with the contents of the workshops offered. Moreover, the participants viewed the workshop leaders as highly competent and expressed a high acceptance of the peer-to-peer relations established within the program, such as peer counselling and mutual visits to lectures or workshops. The empirical results also show that participants perceived an increase in knowledge in all three dimensions of teaching competency: (1) knowledge transfer and support of understanding, (2) students’ motivation, and (3) control of interaction. These results support the effectiveness of the teacher-training program and were consistent across participants from all faculties, independent of the participants’ amount of prior teaching experience. Furthermore, a quasi-experimental survey design will be presented, which aims to evaluate the development of teaching competency in a longitudinal approach and to compare teaching competencies between participants and non-participants of the program.