
AIs Help Facebook and Google Remove Terrorist Content

March 2018

Facebook says they want to find terrorist content as quickly as possible. At this point, they still depend heavily on humans to remove content, but they also use a fair amount of AI technology (algorithms that improve over time), and they're always working to advance it.

Mark Zuckerberg is a huge proponent of AI technology. He says, “I think you can build things and the world gets better. With AI especially, I’m really optimistic, and I think that people who are naysayers and try to drum up these doomsday scenarios… I don’t understand it. It’s really negative, and in some ways, I think it’s actually pretty irresponsible.”

He also says that when people argue that AIs will harm people in the future, he thinks, “Yeah, technology can generally always be used for good and bad, and you need to be careful about how you build it, and you need to be careful about what you build and how it’s going to be used.”

Facebook’s current efforts at using AI technology include:

  • Image matching

Matching photos and videos posted to Facebook against known terrorist images and videos (a simplified hashing sketch appears after this list)

  • Language understanding

Analyzing text and learning to recognize language that advocates for terrorism (see the text-classification sketch after this list)

  • Removing terrorist clusters

After manually identifying pages, groups, posts, or profiles that support terrorism, Facebook uses algorithms to identify related material. The algorithms "fan out" from signals, such as an account being friends with another account that was disabled for terrorist content, to find more material for review (see the graph-traversal sketch after this list).

  • Recidivism

Identifying fake accounts created by repeat offenders. This vastly decreases the amount of time terrorists spend on Facebook. Facebook has to update their systems frequently to keep up with people who create fake accounts to sneak terrorist content back onto the platform (see the fingerprint-matching sketch after this list).

  • Cross-platform collaboration

Identifying terrorist content across Instagram and WhatsApp as well as Facebook.
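
To make the image-matching idea concrete, here is a minimal Python sketch of hash-based matching. It assumes a small, made-up set of known-bad hashes and uses a simple "average hash"; Facebook's production systems rely on far more robust perceptual hashing and a shared industry hash database, neither of which is public.

```python
# Hedged sketch: match an uploaded image against hashes of previously
# removed images. The hash values below are placeholders, not real data.
from PIL import Image  # third-party: pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a tiny grayscale image and hash each pixel against the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")


KNOWN_BAD_HASHES = {0x81C3E7FF7E3C1800}  # hypothetical database of known hashes


def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose hash is close to any known-bad hash."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= max_distance for known in KNOWN_BAD_HASHES)
```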
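
The "language understanding" effort is essentially a text-classification problem. The toy classifier below only shows the shape of that approach: the two training examples and labels are placeholders, and real systems train much richer models on large, human-labelled corpora.

```python
# Hedged sketch of a policy-violation text classifier (TF-IDF + logistic regression).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "join our cause and take up arms",   # placeholder violating example
    "new recipe for weeknight dinners",  # placeholder benign example
    # ...in practice, many thousands of human-labelled examples
]
train_labels = [1, 0]  # 1 = advocates terrorism, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new post; high scores would be removed or sent to human review.
score = model.predict_proba(["take up arms for the cause"])[0][1]
print(f"policy-violation score: {score:.2f}")
```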
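
The "fan out" step can be pictured as a bounded walk over the friendship graph, starting from accounts already disabled for terrorist content. The graph, seed set, and hop limit below are invented for illustration; the real signals and weights Facebook uses are not public.

```python
# Hedged sketch: breadth-first "fan out" from disabled accounts to find
# nearby accounts worth a closer look by human reviewers.
from collections import deque

# Toy adjacency list: account id -> set of friend account ids.
friend_graph = {
    "a1": {"a2", "a3"},
    "a2": {"a1", "a4"},
    "a3": {"a1"},
    "a4": {"a2", "a5"},
    "a5": {"a4"},
}
disabled_for_terrorism = {"a1"}  # seed set: accounts already removed


def fan_out(graph, seeds, max_hops=2):
    """Return accounts within max_hops of any seed, mapped to their distance."""
    distance = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if distance[node] == max_hops:
            continue
        for neighbour in graph.get(node, ()):
            if neighbour not in distance:
                distance[neighbour] = distance[node] + 1
                queue.append(neighbour)
    return {n: d for n, d in distance.items() if n not in seeds}


# Closer accounts would get a higher review priority.
for account, hops in sorted(fan_out(friend_graph, disabled_for_terrorism).items()):
    print(f"{account} is {hops} hop(s) from a disabled account")
```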
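
For the recidivism problem, one simple way to picture it is comparing signals from a new account against "fingerprints" of previously banned accounts. The signal names and the similarity threshold below are made up; real systems use many more signals and learned models rather than a fixed cutoff.

```python
# Hedged sketch: flag a new account that looks like a banned repeat offender.


def jaccard(a: set, b: set) -> float:
    """Overlap between two signal sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical fingerprints of accounts previously disabled for terrorism.
banned_fingerprints = [
    {"device:abc123", "ip:203.0.113.7", "name:example_alias"},
]

new_account_signals = {"device:abc123", "ip:198.51.100.2", "name:example_alias"}

if any(jaccard(new_account_signals, fp) >= 0.5 for fp in banned_fingerprints):
    print("likely repeat offender: hold the account for review")
else:
    print("no match against banned-account fingerprints")
```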

Facebook is always trying to get better at learning what terrorist content looks like and what needs to be taken down. While Facebook already has algorithms that improve over time, they still rely heavily on human expertise to cover their bases, and they want to keep advancing those algorithms so they can find terrorist content faster and more effectively.

Facebook has also started giving free advertising space to counter-terrorism campaign groups as part of their Online Civil Courage Initiative in the U.K.

Google is also looking to improve their ability to find terrorist content, specifically on YouTube. They want to:

  • Get better at using technology to identify extremist and terrorism-related videos
  • Take a tougher stance on videos that fall into a gray area and don't clearly violate their policies
  • Expand their role in counter-radicalisation efforts by building on their Creators for Change program, which promotes YouTube voices that fight radicalisation

Google is not only looking to improve their technology, but also their ability to fight terrorism, and radicalisation in general, as much as possible. While Facebook's focus is on finding terrorist content as quickly and effectively as possible, Google is looking more to identify and push back against terrorist content using both technology and human expertise. Kent Walker, Google's general counsel, has spoken publicly about their methods, saying in June, "We have used video analysis models to find and assess more than 50 percent of the terrorism-related content we have removed over the past six months."

There are pros and cons to using AI technology to take down bad content. On one hand, Facebook's algorithms learn and improve every time they find new terrorist content, which makes them increasingly efficient, and because Facebook is always improving their algorithms, the technology will become more accurate as it advances. On the other hand, one of the biggest flaws in AI technology is that it can't differentiate between content that's necessary for news purposes and content that isn't. In general, AIs don't understand how the context of content affects how it needs to be handled. That's where humans step in.

Facebook in particular relies heavily on its users to report terrorist content when they see it. Facebook has Community Operations teams around the world who work 24 hours a day in a dozen different languages to review those reports and determine the context. These teams are growing immensely each year and are important for far more than terrorism. Facebook also has specialists whose primary job is to fight terrorism, including academic experts, former prosecutors, former law enforcement agents, analysts, and engineers, as well as a global team that responds to emergency requests from law enforcement.

Big companies such as Google, Facebook, Microsoft, and Twitter are also working together to establish an international forum to share and develop technology and to support smaller companies in a global effort to fight terrorism online.

Artificial intelligence is really useful for making the internet a safer space and improving people's experience online. Algorithms that improve over time help us by combing through vast amounts of data and revealing content that we need to take a closer look at.
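
The division of labor described above, where automation handles clear-cut matches and people judge context, can be sketched as a simple routing rule. The thresholds and queue names here are purely illustrative, not Facebook's actual pipeline.

```python
# Hedged sketch of human-in-the-loop routing for a flagged post.


def route_post(hash_match: bool, model_score: float, user_reports: int) -> str:
    """Decide what happens to a post based on automated and human signals."""
    if hash_match:
        return "remove automatically"          # exact match to known bad content
    if model_score >= 0.9:
        return "remove and log for reviewers"  # high-confidence model hit
    if model_score >= 0.5 or user_reports > 0:
        return "queue for human review"        # let people judge the context
    return "no action"


print(route_post(hash_match=False, model_score=0.6, user_reports=2))
```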

As AI technology advances, however, we’ll face larger questions, such as:

  • When does removing content become a way for people to avoid confronting real global issues, to the point that it stops being a good thing?
  • How exactly do we differentiate between news and bad content?
  • Should there be rules that supersede individual companies in determining what is and is not okay?
  • If there were rules like that, would they violate people's freedom of speech? If so, how do we avoid that?

These are questions we need to think about as AI technology advances and social media companies gain more control over what can and cannot be seen publicly.
