Robot writing
AI is becoming more sophisticated, and some say it is now capable of writing academic essays. But at what point does the intrusion of AI constitute cheating?
“Waiting in front of the lecture hall for my next class to start, and beside me two students are discussing which AI program works best for writing their essays. Is this what I’m marking? AI essays?”
The tweet by historian Carla Ionescu late last month captures growing unease about what artificial intelligence portends for traditional university assessment. “No. No way,” she tweeted. “Tell me we’re not there yet.”
But AI has been banging on the university’s gate for some time now.
In 2012, computer theorist Ben Goertzel proposed what he called the “robot university student test”, arguing that an AI capable of obtaining a degree in the same way as a human should be considered conscious.
Goertzel’s idea – an alternative to the more famous “Turing test” – might have remained a thought experiment were it not for the successes of AIs employing natural language processing (NLP): most famously, GPT-3, the language model created by the OpenAI research laboratory.
Two years ago, computer scientist Nassim Dehouche published a piece demonstrating that GPT-3 could produce credible academic writing undetectable by the usual anti-plagiarism software.