AI chatbots may have a liability problem

During oral arguments last week for Gonzalez v. Google, a case about whether social networks are liable for recommending terrorist content, the Supreme Court stumbled on a separate cutting-edge legal debate: Who should be at fault when AI chatbots go awry?

While the court may not be, as Justice Elena Kagan quipped, “the nine greatest experts on the internet,” the answer to that question could have far-reaching implications for Silicon Valley, according to tech experts.

Justice Neil M. Gorsuch posited at the session that the legal protections that shield social networks from lawsuits over user content — which the court is directly taking up for the first time — might not apply to work generated by AI, such as the popular ChatGPT bot.

“Artificial intelligence generates poetry,” he said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected. Let’s assume that’s right.”

While Gorsuch’s suggestion was a hypothesis, not settled law, the exchange got tech policy experts debating: Is he right?

Entire business models, and perhaps the future of AI, could hinge on the answer.

The past year has brought a profusion of AI tools that can craft pictures and prose, and tech giants are racing to roll out their own versions of OpenAI’s ChatGPT.

Already, Google and Microsoft are embracing a near future in which search engines don’t just return a list of links to users’ queries, but generate direct answers and even converse with users. Facebook, Snapchat and Chinese giants Baidu and Tencent are hot on their heels. And some of those AI tools are already making mistakes.