AI University
As the beginning of the academic year approaches, there is rising alarm among universities about the implications of AI. The focus of this concern is ChatGPT, a program that can, in response to a simple prompt, instantly generate a reasonably convincing approximation of a university-level essay.
Universities are on the defensive, pledging to increase the prevalence of supervised exams while augmenting their plagiarism detection systems to identify students who have recruited an AI assistant. We're told innovation (that empty buzzword) will be required, and some have suggested that students could be expected to incorporate AI into their work, the tool becoming to prose what the calculator is to maths.
Just as the internet changed the depth and complexity we expect of students’ writing and research, the increasing availability and sophistication of AI might similarly shift the goalposts. Yet the bulk of institutional energy is being directed towards preserving the system as it exists.
As an academic who has marked thousands of assessments in disciplines across the humanities, I can confidently tell you that the system is not worth preserving. Rather than an obstacle to overcome, the flourishing of AI should be seen as an opportunity to ask what universities and assessments are for in the first place.