A cryptographic tool inserts a detectable signature into the words produced by OpenAI's text-generating artificial intelligence models. It could help teachers stop students from using AI to do their homework
Artificial intelligence firm OpenAI is developing a way to prevent people from taking text that its AI models produce and passing it off as their own work.
The watermark-like security feature could help teachers and academics spot students who are using text generators such as OpenAI’s GPT to write essays for them, but cryptography experts say workarounds will inevitably be found.
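The article doesn't spell out how such a watermark works, and OpenAI's actual scheme is not public here. As a rough illustration only, one well-known approach is to use a secret key to pseudorandomly score tokens, nudge the generator toward high-scoring ones, and later check whether a text's average score is suspiciously high. The toy sketch below (all names, the key, and the scoring scheme are hypothetical, not OpenAI's implementation) shows the idea:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # hypothetical; a real deployment keeps this private

def token_score(prev_token: str, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a token given its predecessor."""
    digest = hmac.new(SECRET_KEY, f"{prev_token}|{token}".encode(),
                      hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermark_choice(prev_token: str, candidates: list[str]) -> str:
    """Generator side: among equally plausible next tokens, pick the one
    the secret key scores highest, biasing the output invisibly."""
    return max(candidates, key=lambda t: token_score(prev_token, t))

def detect(tokens: list[str]) -> float:
    """Detector side: average keyed score over adjacent token pairs.
    Unwatermarked text averages around 0.5; watermarked text scores higher."""
    scores = [token_score(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores)
```

Because only the key holder can compute the scores, only they can run the detector, which is also why the workarounds experts mention (paraphrasing, reordering words) degrade the signal: they replace high-scoring pairs with ordinary ones.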
Scott Aaronson at the University of Texas at Austin, who is spending a year working with …