Meta’s AI Chatbot Repeats Anti-Semitic Conspiracies

Only days after being launched to the public, Meta Platforms Inc.’s new AI chatbot has been claiming that Donald Trump won the 2020 US presidential election and repeating anti-Semitic conspiracy theories.

Chatbots — artificial intelligence software that learns from interactions with the public — have a history of taking reactionary turns. In 2016, Microsoft Corp.’s Tay was taken offline within 48 hours after it started praising Adolf Hitler, amid other racist and misogynist comments it apparently picked up while interacting with Twitter users.

Facebook parent company Meta released BlenderBot 3 on Friday to users in the US, who can provide feedback if they receive off-topic or unrealistic answers. The chatbot can also search the internet to discuss different topics. The company encourages adults to engage it in “natural conversations about topics of interest” so it can learn to hold naturalistic discussions on a wide range of subjects.

Conversations shared on various social media accounts ranged from the humorous to the offensive. BlenderBot 3 told one user its favorite musical was Andrew Lloyd Webber’s “Cats,” and, in a conversation with a reporter from Insider, described Meta CEO Mark Zuckerberg as “too creepy and manipulative.” Other conversations showed the chatbot repeating conspiracy theories.
