Stopping Hate From Going Viral: Content Moderation in the Wake of the New Zealand Mosque Shooting

Are social media companies responsible for what is published on their platforms? In the wake of the mosque shooting in Christchurch, New Zealand, questions about social media firms’ responsibilities are coming to the fore once again. Facebook, YouTube, Twitter and other social media firms already use technologies, originally designed to remove copyrighted materials, to take down child pornography and ISIS-related propaganda. Should these firms now be required to proactively remove all extremist and violent content? Or should they be allowed to continue to self-regulate?

In a recent appearance on TVO’s The Agenda, Dr. Anatoliy Gruzd (@gruzd), Director of Research at the Ryerson Social Media Lab at the Ted Rogers School of Management, joined The Agenda’s host Steve Paikin (@spaikin) and co-panelists Dr. Sarah T. Roberts (@ubiquity75) from UCLA and Stephanie MacLellan (@smaclellan) from the Centre for International Governance Innovation to discuss some of the many thorny issues around content moderation online.

You can watch an encore presentation of their discussion on this very timely topic in the video below.

Social Media and the Problem of Hateful Content

The Agenda discusses whether and how social media companies should govern hateful content on their platforms.