Facebook’s AI Systems Identify and Remove Hate Speech

When there is more hate than love in the world, you know there is a serious problem with humanity.

Facebook CTO Mike Schroepfer said that in just one year, from the second quarter of 2019 to the second quarter of 2020, the amount of hate speech that Facebook’s Artificial Intelligence (AI) systems have identified and removed increased five-fold.

Learning that the world is so driven by hate and associated negative emotions makes you wonder where the world is heading. What does Facebook’s CTO think about the tech giant’s findings? 

As Facebook’s CTO, Mike Schroepfer leads the development of the technology and teams that enable Facebook to connect billions of people around the world. His teams work in developing fields such as Artificial Intelligence (AI) and Virtual Reality (VR).

Those connections create data, and that data is charged with emotional conversations, exchanges, and opinions that reveal who humans truly are and what the essence of being human is. Do you want to hear more?

Despite widespread skepticism about social media platforms and the often exasperating spread of disinformation, Facebook’s CTO said that he still sees Facebook as a force for good that, at its core, does everything it can to help people around the world connect more easily and affordably, something that was just a dream a few decades ago.

“95, 98, 99 percent of that experience is people just connecting with their friends and family,” Mike Schroepfer said. “Now, are there bad things that happen when you lower the friction for communication? Absolutely. And that’s what we’ve seen over the last many years, and why I’ve been so dedicated to allowing people to communicate freely, but also eradicating hate speech, violence, [and] speech that just is not allowed on the site.”

During Schroepfer’s interview with Jeremy Kahn, senior writer at Fortune Magazine, on day two of Web Summit, Facebook’s CTO went into great detail about the massive challenge, at both a technical and a policy level, that comes with eliminating disinformation and hate speech on Facebook. Facebook has over 2.7 billion monthly active users at the time of writing, which makes it the biggest social network worldwide. In other words, at present, roughly a third of the world’s population uses Facebook as a means of online social communication.

Jeremy Kahn asked Schroepfer what message he would have for critics who say that no amount of technology will fix the content moderation problem until Facebook is no longer optimized for attention, which, they posit, results in divisive content drowning out more positive, uniting content. Schroepfer responded that all communication mediums throughout history, from newspapers to radio, have faced this dilemma, and that it is not new to social media giants such as Facebook.

“This is just a reality when humans connect. There are good uses and bad uses,” Schroepfer said. “The answer is not to clamp down on platforms and make them more restrictive. It’s to decide as a democratic society what’s allowed and not, and have platforms do our very best to enforce those rules.”

Then there is the matter of simply being emotional and having a bad day, without hate speech being a constant intent. We have all been there: so exasperated and frustrated by something that we can barely contain ourselves during a heated argument with a friend or family member. This, too, is part of free speech. How do Facebook moderators and AI systems weigh it when deciding whether to remove hate speech?

“That’s the real trick: giving you the power to share and communicate with who you want, and using our technical prowess and scale to eliminate all the bad uses and bad actors where we can,” Schroepfer said.

Source: https://interestingengineering.com
