Generative AI and Academic Writing

Image generated by DeepAI. The texts were then added manually.

Generative artificial intelligence (Gen-AI) is sweeping like a wave through the habits and customs of higher education and research (ChatGPT, Copilot, Gemini, Mistral, Perplexity, DeepSeek, etc.). Between proven cheating and reasoned use, the balance is hard to strike.
A recent study in France shows that 86% of young people aged 18 to 25 use it regularly, including, of course, students and doctoral students. What can be done?

From an ethical point of view, asking an AI to write a complete text in a student's place is a fraud that must be sanctioned. AI-detection tools have already been created (Scribbr, Decopy, etc.), but they give ambiguous results, especially since "humanizers" (QuillBot, Humanizer, ZeroGPT, etc.) have been invented to evade detection.


Already, organizations such as UNESCO have published compendiums of good practice, and universities have begun to draft codes of conduct.

For a long time, it has been common practice to ask students to write assignments. But how can we judge whether the author of the content is the student himself or an external intelligence? Faced with this problem, some faculties have decided to rely solely on oral examinations. From then on, teachers will have to examine all students orally rather than only correct written work; as a result, teachers' working hours will have to be revised and evaluators will need to be more cautious. In addition, the effects on articles in scientific journals will undoubtedly be immeasurable!


Let us make it clear that asking for small tasks, such as reformulating a paragraph, looking up the definition of a concept, or translating a short passage from one language to another, seems acceptable; it is not the same as having a complete text written. The rules to be followed are, of course, those of the author's scientific integrity and those relating to copyright.

Organizations such as the APA have given guidelines for referencing AI assistance. It remains to be seen whether these are respected! Assuming that the use of Gen-AI is authorized and explicitly mentioned by your institution, you should write, for example, following the APA style (7th ed.):
Text
In the bibliography: OpenAI. (2024). ChatGPT (version 3.5). https://chat.openai.com
In a footnote: When the prompt “xxx” is submitted to Perplexity, the response generated is… (Perplexity AI, 2024).
Image
The image must be captioned, and the query used to generate it must be specified. Example: OpenAI. (2023). DALL-E (version 2).

By way of illustration, the image given above can be referenced as follows:
DeepAI. (2025). Image generated with the prompt “make an image with a student walking on a ridge line of editing, with the arms horizontal like a tightrope walker.” https://deepai.org/


Finally, let us mention the Thesify and Mystylus sites. Generally speaking, the use of generative artificial intelligence is a delicate subject, with unsuspected consequences, high hopes, and inevitable pitfalls.

“USF/Academics Without Borders” will return to this subject, because while technological developments are rapid, the same cannot be said of philosophical and ethical reflection on the use of these new tools.

Robert Laurini

Editor, Professor Emeritus in Information Technologies
