https://www.ft.com/content/3e27cfd6-e287-4b6f-a588-29b5b962a534
What if the only thing you could truly trust was something or someone close enough to physically touch? That may be the world into which AI is taking us.

A group of Harvard academics and artificial intelligence experts has just launched a report aimed at putting ethical guardrails around the development of potentially dystopian technologies such as Microsoft-backed OpenAI’s seemingly sentient chatbot, which debuted in a new and “improved” (depending on your point of view) version, GPT-4, last week.

The group, which includes Glen Weyl, a Microsoft economist and researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and many other industry notables, is sounding alarm bells about “the plethora of experiments with decentralised social technologies”. These include the development of “highly persuasive machine-generated content (eg ChatGPT)” that threatens to disrupt the structure of our economy, politics and society. They believe we’ve reached a “constitutional moment” of change that requires an entirely new regulatory framework for such technologies.

Some of the risks of AI, such as a Terminator-style future in which the machines decide humans have had their day, are well-trodden territory in science fiction — which, it should be noted, has had a pretty good record of predicting where science itself will go in the past 100 years or so. But there are others that are less well understood.

If, for example, AI can now generate a perfectly undetectable fake ID, what good are the legal and governance frameworks that rely on such documents to allow us to drive, travel or pay taxes? One thing we already know is that AI could allow bad actors to pose as anyone, anywhere, anytime.

“You have to assume that deception will become far cheaper and more prevalent in this new era,” says Weyl, who has published an online book with Taiwan’s digital minister, Audrey Tang. It lays out the risks that AI and other advanced information technologies pose to democracy, most notably that they put the problem of disinformation on steroids.