Reinventing Academic Integrity in the Age of Artificial Intelligence
In this third book by IRAFPA (International Institute for Research and Action on Academic Fraud and Plagiarism), published by EMS in the “Questions de société” collection, the authors invite us to consider technological transformations on an unprecedented scale, in which artificial intelligence (AI) has emerged as a phenomenon profoundly reshaping the very structures of our society. The academic world, the primary foundation for the production and dissemination of knowledge, is particularly affected. Chapter by chapter, this French-language book, “Reinventing Academic Integrity in the Age of Artificial Intelligence,” lays the groundwork for an in-depth reflection on how artificial intelligence, and more specifically generative AI, is disrupting the paradigms of research, teaching, and the management of academic institutions. This technology has made spectacular advances in a very short time. It now offers unprecedented opportunities to accelerate the production and dissemination of knowledge: it can process massive volumes of data, automate complex tasks, and optimize human resources management. However, these undeniable successes are accompanied by new challenges, particularly in terms of ethics, academic integrity and institutional organization.
The first part of this book is entitled “Training, Information and Artificial Intelligence”.
• Morgan Blangeois explores the queries and uses of generative AI in three research activities: bibliographic research, data processing, and text writing. Rethinking academic integrity in the age of AI requires understanding how the technology works. Generative AIs, such as ChatGPT and other advanced language models, make it possible to automate tasks that once required considerable intellectual work. This redraws both the scope and the limits of the criteria of originality, creativity, ethics and academic deontology that underpin scientifically valid and socially honest knowledge. It is therefore necessary to inform and train researchers, students, supervisors, publishers and… potential fraudsters.
• For their part, Marie-Frédérique Bacqué and Pedro Urbano analyze the close relationship between AI and new generations of researchers. From a sociological point of view, these generations are distinguished by their unique relationship with information and communication technologies. They approach AI not only as a tool for innovation, but also as a natural extension of their daily lives. For those now between the ages of 15 and 40, AI is an integral part of how they think, learn and produce content. This familiarity with technology, while an undeniable asset, can expose them to the temptation to cheat or to use these tools unethically.
• Then, with a resolutely pragmatic approach, Frédérick Bruneault and Andréane Sabourin Laflamme explain how their “educational toolkit” offers ten activities that teach students to use AI tools intelligently. These younger generations are shaped by unprecedented social phenomena: the culture of social networks, self-image, social validation and the speed of recognition play a crucial role in the construction of individual and collective identity.
• Because they themselves supervise allophone Master’s and doctoral students, Yves Frédéric Livian and Robert Laurini deliver a very timely analysis of the exchanges between supervisors and students. The aim is to train future researchers in the ethical and responsible use of AI, while preserving their critical thinking and personal intellectual contribution. In this process, which calls for great maturity, students cannot and must not be left to their own devices. One conclusion is inescapable: the social bond between teacher and student must be strengthened, and, even more, a culture of orality must return in force to the training process, counterbalancing decades of purely “academic” evaluation of written work.
The second part of this book is entitled “Publications and Artificial Intelligence”.
• Cinta Gallent-Torres and Rubén Comas-Forgas first analyze the overpublishing ecosystem and its impact on academia. The authors show how the current dynamics of knowledge production are increasingly governed by the imperatives of profitability and visibility: research is often perceived as a product to be sold in a competitive market. In this context, AI is becoming a boon for these merchants of publication. The automation of research encourages the proliferation of redundant and superficial articles, which dilutes the real value of the knowledge produced. The production chain is optimized to the detriment of quality and scientific integrity.
• In his chapter, Ignace Haaz analyzes literary practice from his vantage point as a publisher. Traditionally, scientific publishing has been the result of a rigorous process of research, verification and peer validation. However, with AI’s ability to autonomously generate coherent and detailed texts, it becomes more difficult to distinguish genuinely innovative work from mere replicas or automated compilations of existing work. How can we ensure that the use of AI will not lead to an erosion of linguistic rigor? For the author of this chapter, the automation of scientific production raises crucial questions about the authenticity and originality of a manuscript’s content.
• For their part, Delphine Szecel, Tom Melvin and Wouter Oosterlinck look at the link between two phenomena: the funding of published medical research… and conflicts of interest. By facilitating the mass production of content, AI potentially encourages dishonest publications based on plagiarism or data fabrication. The authors clarify what “open science” is before showing that these tools carry considerable risks, such as publishing in predatory journals or in journals that are lax in evaluating submissions. They argue that the integration of AI into the research process must be accompanied by careful ethical reflection to ensure that these technologies enrich scientific production rather than impoverish it.
The third part of this book is entitled “Organizations and Artificial Intelligence”.
• In their chapter, Jean Moscarola and Michel Kalika explain how an international doctoral school had to integrate the uses of artificial intelligence very quickly while fully respecting academic integrity. Generative AI, with its ability to synthesize large amounts of information in record time, sometimes seems to short-circuit the slow maturation process of research. For the authors, these challenges call for a transformation of pedagogical practices: the emphasis must be placed not only on technical learning, but also on developing the critical thinking needed to navigate an increasingly automated world.
• Ghislaine Alberton shows that the impact of artificial intelligence also extends to human resource management in academic institutions. The increasing introduction of AI systems into career management is profoundly transforming key processes such as the recruitment, promotion and performance evaluation of researchers and teachers. In theory, AI systems are supposed to be objective and unbiased. However, the algorithms on which they are based are often biased by the data on which they were trained, data that are themselves the product of subjective human choices. The use of AI in the management of academic careers therefore raises important questions regarding the fundamental rights of researchers, in particular with regard to the transparency of decisions, non-discrimination and fairness in the treatment of applications and promotions.
• Susana Magalhães proposes ways to adapt integrity guidelines in a context in which artificial intelligence creates considerable uncertainty. The real challenge facing research centers is not just the adoption of AI, but how this technology can be integrated ethically. Research evaluation systems need to be reformed in a way that promotes quality and originality rather than focusing solely on quantitative metrics. Experiential narratives and ethical deliberation are called upon to play a crucial role in managing this uncertainty. Finally, the aim is to integrate AI into research practices in a transparent and ethical way, recognizing its potential while avoiding its abuses.
The contributors to this book invite us to explore the long-term implications of AI from their different disciplinary perspectives: surgery, law, education, management, computer science, medicine, philology, philosophy, psychology, psychoanalysis, and sociology. A model of “assembly-line” academic production, in which human creativity and intellectual intuition would be partially entrusted to machines, would upset the very foundations of scientific authority and legitimacy. Ultimately, AI represents a challenge for academic institutions, not only in terms of technical adaptation, but also in terms of their social role. It is up to us to take up this challenge for knowledge today.
Michelle Bergadaà, Paulo Peixoto
michelle.bergadaa@unige.ch and pp@uc.pt