How AI Is Moderating Online Content

AI can help platforms flag harmful or offensive content faster and more effectively.

Whether it's posting a photo on Instagram or writing a blog post, we're all adding more information to the internet. With over 4.62 billion people using social media, there are bound to be some bad eggs creating harmful or deceitful content. To ensure that users are exposed to as little of this content as possible, websites practice content moderation.

Content moderation is the process of monitoring and regulating user-generated content against a set of pre-existing rules and guidelines. Since it is nearly impossible for human moderators to keep up with the enormous volume of content posted every second on social media, some websites now use artificial intelligence (AI) to moderate posts. Let's dive deeper into how AI content moderators work and whether they actually make the internet safer.

What is AI moderation?

AI moderators are trained on user-generated content so that they learn what is and isn't acceptable on a particular platform. The AI learns to pick up patterns and filter out illegal, dangerous or sexually explicit content. Automating the content moderation process with AI allows a large amount of user-generated content to be checked in real time. Platforms can thus be proactive in taking down any suspicious content that might be harmful to their users. Using AI moderators not only reduces the workload on human moderators but also reduces the chances of them being exposed to offensive content, which can adversely affect their mental health.
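To make the idea concrete, here is a minimal, purely illustrative sketch of how a text classifier can "learn patterns" from labelled posts. The training examples, the labels and the simple Naive Bayes approach are all assumptions for illustration; production moderators train far larger models on millions of human-labelled examples.

```python
from collections import Counter
import math

# Toy training data: (post, label). Every post and label here is invented.
TRAINING_DATA = [
    ("buy cheap followers now", "violating"),
    ("click this link for free money", "violating"),
    ("you are a worthless idiot", "violating"),
    ("lovely sunset at the beach today", "acceptable"),
    ("just published a new blog post", "acceptable"),
    ("congrats on the new job", "acceptable"),
]

def train(examples):
    """Count word frequencies per label (the 'patterns' the model learns)."""
    counts = {"violating": Counter(), "acceptable": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing: pick the more likely label."""
    scores = {}
    for label in counts:
        vocab = len(counts[label])
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + vocab + 1))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAINING_DATA)
print(classify("free money if you click now", counts, totals))   # violating
print(classify("new blog post about the beach", counts, totals)) # acceptable
```

Even this toy version shows the core trade-off: the classifier only "knows" the patterns present in its training data, a limitation discussed later in the article.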

Two layers of content moderation

There are two broad layers of moderating content: surface-level moderation and context-based moderation. In the first layer, text is checked using natural language processing (NLP), while images and videos are scanned with recognition models, to pick out damaging elements. If there is text inside an image, it can be extracted using optical character recognition (OCR), a technology that converts words inside images into a format that the AI can process. An example of an AI content moderator is Checkstep, which compares user-generated content against a company's terms and conditions to check for violations.
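As a sketch of what a surface-level check might look like once everything (including OCR output from images) has been reduced to plain text, here is a toy rule-based filter. The blocklist patterns are invented for illustration; real moderators such as Checkstep rely on learned models rather than a handful of regular expressions.

```python
import re

# Hypothetical blocklist distilled from a platform's terms of use.
BANNED_PATTERNS = [
    r"\bbuy\s+illegal\b",
    r"\bexplicit\b",
    r"\bhate\s*speech\b",
]

def surface_check(text: str) -> list[str]:
    """Return the rules a piece of text violates (empty list = clean).

    The same check runs on post bodies and on text that OCR has
    extracted from images, since both end up as plain strings.
    """
    text = text.lower()
    return [p for p in BANNED_PATTERNS if re.search(p, text)]

print(surface_check("where to buy illegal fireworks"))  # one rule matched
print(surface_check("family picnic photos"))            # []
```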

The second layer requires the AI to understand the nuance in which a particular statement is being made. While this form of AI moderation is still in its nascent stages, tech company Spectrum Labs has created the AI moderator Contextual AI, which gives us some clues on what context-based moderation could look like. Contextual AI takes into account current and historical responses to a user’s content to understand the context in which it is being created and understood. 

Is AI alone enough to monitor content?

Although AI moderation can help social media giants monitor content faster, it alone isn't entirely effective. Since AI understands malign content based on the data it has been exposed to, it wouldn't be able to flag harmful ideas that it doesn't recognize. Moreover, even though there are attempts to give AI an understanding of context, the technology still has a long way to go before achieving accurate comprehension. At present, even if made in a completely harmless context, the use of some words, like "stupid", might still end up giving the content a high toxicity score.

The issues with AI moderators have been well documented in the media. Recently, Daniel Motaung, a former content moderator at Facebook, shared that even though the social media giant claims its AI detects hate speech with 95% accuracy, there are times when it removes content that doesn't violate content policies. The same issue has been reported on other platforms, too. For instance, in 2017, YouTube took down videos of journalists reporting on extremist content because its AI moderator couldn't tell the difference between reportage and content inciting extremist behavior. This inability to detect and differentiate can lead platforms to wrongfully punish users, some of whom may be full-time content creators, and take down their legitimate content.

Finally, much like other kinds of AI, AI moderators are also susceptible to bias based on the data that they have been trained on. Much like the issues around Google’s allegedly sentient AI, LaMDA, the kind of content being fed to the AI moderator also raises serious ethical questions. Ultimately, even though AI can detect problematic text and imagery, human moderators, who are better equipped to read between the lines, are still essential in content moderation. 

Header image courtesy of Freepik
