How AI is creating a safer online world

From cyberbullying on social media to assault in the metaverse, the internet can be a dangerous place. Online content moderation is one of the most important ways companies can make their platforms safer for users.

However, moderating content is no easy task. The volume of content online is staggering, and moderators must contend with everything from hate speech and terrorist propaganda to nudity and gore. This data overload is compounded by the fact that much of the content is user-generated, which makes it harder to identify and categorize.

AI to automatically detect hate speech

That’s where AI comes in. By using machine learning models to classify content, companies can flag unsafe material as soon as it is created, instead of waiting hours or days for human review, reducing the number of people exposed to it.
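To make the idea concrete, here is a minimal sketch of that flag-at-creation workflow using an off-the-shelf classifier. The model choice (unitary/toxic-bert, a publicly available toxicity model on the Hugging Face Hub), the 0.8 threshold, and the `moderate` helper are illustrative assumptions, not how any particular platform actually implements moderation.

```python
from transformers import pipeline

# Illustrative only: a public toxicity classifier standing in for the
# proprietary models large platforms run internally.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def moderate(post: str, threshold: float = 0.8) -> str:
    """Score a post at creation time, before any human sees it."""
    # The pipeline returns the top label with a confidence score,
    # e.g. {"label": "toxic", "score": 0.98} for this model.
    result = classifier(post)[0]
    if result["score"] >= threshold:
        # A real system would route the post to a review queue or hide
        # it pending appeal, rather than just returning a string.
        return f"flagged: {result['label']} (score {result['score']:.2f})"
    return "published"

print(moderate("Have a wonderful day, everyone!"))  # -> published
print(moderate("You are subhuman trash."))          # -> flagged: toxic (...)
```

In practice the threshold trades false positives against missed abuse, which is one reason platforms pair automated flagging with human review rather than relying on the model alone.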

For instance, Twitter uses AI to identify and remove terrorist propaganda from its platform. Its AI flags more than half of the tweets that violate its terms of service, and CEO Parag Agrawal has made using AI to identify hate speech and misinformation a priority. Even so, more needs to be done, as toxicity still runs rampant on the platform.

Similarly, Facebook’s AI detects nearly 90% of the hate speech the platform removes, and comparable systems flag nudity, violence, and other potentially offensive content. However, like Twitter, Facebook still has a long way to go.
