Three myths about fighting harmful and illegal content online

Tomorrow is Safer Internet Day, but it’s just like every other day for the Trust & Safety team at our Google Safety Engineering Center in Dublin.

They are part of a global, cross-disciplinary group committed to helping people access information and content they can trust. By enforcing our policies and moderating content at scale, they work to make the internet a safer and better place for all.

As we continue to develop and improve the tools, processes, and teams that help us provide access to helpful information and moderate content, this moment is also an opportunity to examine some popular misconceptions about how illegal and harmful content spreads online.

Myth: AI can detect all illegal and harmful content

It’s true that content moderation uses automated systems to detect illegal and policy-violating content quickly and consistently at scale. But relying too heavily on machines could lead to problems such as over-removal of content or inadvertent restriction of free speech and access to information. Humans’ capacity to understand context and nuance remains crucial to identifying and making decisions about harmful content.

That’s why we rely on a mixture of automated and human efforts to identify problematic content and enforce our policies. One of the ways we do this is by partnering with trusted organizations, and enabling them to flag content that may be problematic. And we turn to trained human reviewers, who can use their judgment to make complicated decisions.

Combined with the speed and scale of automated systems, this process means problematic content is often removed before it is widely viewed, or viewed at all.
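To make that division of labor concrete, here is a minimal sketch of how such a pipeline might route content. This is an illustrative assumption, not a description of Google's actual systems: the thresholds, field names, and the classifier score itself are hypothetical.

```python
# A minimal sketch of a hybrid moderation triage step (illustrative only,
# not Google's actual system). All names and thresholds are assumptions.

from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    REMOVE = auto()        # clear policy violation, removed automatically
    HUMAN_REVIEW = auto()  # ambiguous case: a trained reviewer decides
    KEEP = auto()          # no signal of a violation


@dataclass
class Item:
    content_id: str
    violation_score: float    # hypothetical classifier output in [0.0, 1.0]
    flagged_by_partner: bool  # raised by a trusted-flagger organization


def triage(item: Item,
           remove_threshold: float = 0.98,
           review_threshold: float = 0.60) -> Decision:
    """Route an item based on classifier confidence and partner flags."""
    # Content flagged by a trusted partner goes to a human regardless of
    # score, so context and nuance are judged by a person, not a model.
    if item.flagged_by_partner:
        return Decision.HUMAN_REVIEW
    # Only very high-confidence scores are removed automatically, to limit
    # over-removal and avoid restricting legitimate speech.
    if item.violation_score >= remove_threshold:
        return Decision.REMOVE
    # Mid-confidence scores are escalated for human judgment.
    if item.violation_score >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.KEEP


print(triage(Item("vid-123", violation_score=0.99, flagged_by_partner=False)))
# Decision.REMOVE
```

The key design choice in a pipeline like this is asymmetry: automation handles the unambiguous, high-volume cases quickly, while anything uncertain or partner-flagged is deferred to human reviewers.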

Myth: It’s easier to just take down all the “bad” stuff

At Google, we work hard to strike the right balance between empowering people to share their ideas and services with the world, and ensuring that people aren’t harmed:

  • We set responsible rules for each of our products and services and remove content that violates our policies or local law.
  • We elevate authoritative content and expertise from trusted sources.
  • We counter the spread of borderline content that could misinform or harm users by not amplifying it through recommendations.
  • We set a high standard of quality and reliability for publishers and content creators who want to monetize or advertise their content.

Striking this balance is essential to empowering users with choice and to providing access to high-quality information that helps people in times of need.

Myth: Fighting misinformation online is a straightforward challenge

The fight against online misinformation is constantly evolving: content online changes every day, societal expectations and language use evolve, and bad actors are always innovating to circumvent detection. But we are committed to improving the ways we enforce our policies at scale and to partnering with policymakers, other companies, and civil society.

One aspect of our approach is to do more than just remove harmful content: we work to elevate information from trusted sources and support media literacy efforts through partnerships and initiatives across Europe.

For example, in the early days following the invasion of Ukraine, the teams working on Google Search and Google Maps provided support when it was needed most to help refugees find verifiable resources. To do this, we worked with a range of partners including the United Nations High Commissioner for Refugees, the Red Cross, Ukraine’s Ministry of Foreign Affairs and Reuters.

And on Play, we promoted an air raid alert app developed by the Ukrainian government to help warn civilians about impending air strikes.

These actions and partnerships gave people confidence that the information was accurate, trustworthy and up to date.

Google was built on the premise that information can be a powerful thing for people around the world. We’re determined to keep doing our part to help people everywhere find what they’re looking for and give them the context they need to make informed decisions about what they see online.

You can discover more about how we are taking on content responsibility at the Google Safety Engineering Center in Dublin, and how we are building a safer, more trusted internet at safety.google.
