Management of academic integrity in Higher Education across the European Union
The EU-funded IPPHEAE project (Impact of Policies for Plagiarism in Higher Education Across Europe, 2010-2013) found some clear examples of good practice (in the UK, Slovakia, Sweden, Austria, Malta and the Republic of Ireland). These were greatly outweighed, however, by examples in every country studied where institutions had not begun to build any comprehensive framework, either for handling misconduct allegations consistently or for deterring misconduct across different forms of assessment.
The main research instrument was a purpose-built survey combining online questionnaires (for students, teachers and HE managers), semi-structured interviews (with managers, researchers, government officials, and quality assurance and accreditation agencies) and focus groups (with students). This was a mixed-methods study drawing on both quantitative and qualitative data; the online survey questions were translated into 14 European languages, and interviews were conducted mainly in English.
Over 5000 responses have been collected to date, which has allowed a meaningful comparative study to be conducted using responses from different countries and (anonymous) institutions. The comparison was achieved using a specially developed toolset called the Academic Integrity Maturity Model (AIMM). The AIMM national comparison extracted responses to 68 separate questions from across the four levels of the survey. Each selected question is assigned to one of nine characteristics, to provide evidence of maturity within that aspect:
- Transparency in academic integrity and quality assurance
- Fair, effective and consistent policies for handling plagiarism and academic dishonesty
- A standard range of sanctions for plagiarism and academic dishonesty
- Use of digital tools and language repositories
- Preventative strategies and measures
- Communication about policies and procedures
- Knowledge and understanding of academic integrity
- Training provision for students and teachers
- Research and innovation in academic integrity
The average (arithmetic mean) of selected quantitative responses from a specific dataset was calculated using a scale of 0-4 (indicating low to high maturity of the policy or indicator). Evidence drawn from sets of qualitative data was quantified and scored by subjective judgement. Averaging the scores of all questions assigned to one category produced a nine-point profile, plotted as spokes on a radar chart, for example:
Figure 1: AIMM profile for the Republic of Ireland
The Republic of Ireland profile in figure 1 shows relative strengths in knowledge, communication and software use, but weakness in strategies for prevention. Summing all nine metrics provides an overall AIMM maturity score out of a maximum possible 36. The results for the 27 countries, illustrated in figure 2, show that the UK achieved the highest overall score, reflecting the advances and investments made in the UK in this area since about the year 2000. Other countries scored well on particular characteristics, reflecting national foci such as effective use of software (Slovakia) and national policies (Sweden).
Figure 2: Comparison of results across 27 EU countries
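The scoring described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual tooling: the characteristic names are abbreviated, the response data is invented, and the mapping of questions to characteristics is assumed rather than taken from the IPPHEAE survey.

```python
from statistics import mean

# The nine AIMM characteristics (abbreviated labels, for illustration only).
CHARACTERISTICS = [
    "Transparency", "Policies", "Sanctions", "Software", "Prevention",
    "Communication", "Knowledge", "Training", "Research",
]

def aimm_profile(responses: dict[str, list[float]]) -> dict[str, float]:
    """Average the 0-4 question scores within each characteristic."""
    return {c: mean(scores) for c, scores in responses.items()}

def aimm_score(profile: dict[str, float]) -> float:
    """Sum the nine characteristic averages (maximum 9 x 4 = 36)."""
    return sum(profile.values())

# Invented example data: three question scores per characteristic.
responses = {c: [2.0, 3.0, 2.5] for c in CHARACTERISTICS}
profile = aimm_profile(responses)
print(round(aimm_score(profile), 1))  # 9 characteristics x 2.5 = 22.5
```

The nine values in `profile` correspond to the spokes of the radar chart in figure 1, and the summed score is what figure 2 compares across countries.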
Although these results are useful as the only survey of its kind, the relatively small dataset must be viewed as a snapshot of self-selecting EU institutions rather than a comprehensive, generalizable survey of EU HEIs. Many institutions that chose to respond had a positive message to share, and feedback indicates that several institutions that acknowledged their policies were weak refused to participate; there is therefore a positive bias in the data. It was very difficult to obtain survey responses in several countries, including Italy, Denmark, the Netherlands, Estonia and Spain, and the low numbers of respondents further reduce the reliability of the results for those countries.
Separate reports were created for each of the 27 EU countries studied in addition to the EU-wide comparison, all available here.
Perhaps the most worrying finding from the research was that in no EU country was any quality assurance or accreditation body found to undertake routine monitoring and oversight of institutional policies for academic integrity as part of its role. This appears to be a serious omission and a missed opportunity. In my view, there is a clear link between academic integrity and academic quality: the absence of an institutional culture of integrity undermines and negates systems and processes for assuring quality. However, the absence of oversight is potentially the easiest deficit to fix.
As the above article suggests, despite the acknowledged limitations, this study provides remarkable insights into the security and integrity of assessments that underpin qualifications awarded by European HEIs. It is crucial that the warnings raised and recommendations emanating from this research are taken seriously by governments and senior management within HE institutions, not just in EU countries, but across the world.