The hyper-racist bots posted 15,000 times in one day.

Microsoft inadvertently learned the risks of creating racist AI with its Tay chatbot, but what happens if you deliberately point a language model at a toxic forum? One person found out. As Motherboard and The Verge note, YouTuber Yannic Kilcher trained an AI language model on three years of content from 4chan's Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board, and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of the posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI's GPT-3), the model learned not only to mimic the words used in /pol/ posts, but also an overall tone that Kilcher said blended "offensiveness, nihilism, trolling and deep distrust." The video creator took care to dodge 4chan's defenses against proxies and VPNs, and even used a VPN to make it appear the bot posts originated from the Seychelles.