Trust & Safety
The field of “Trust and Safety” has developed hand in hand with the Internet, as a group of professionals has emerged to help online platforms and communities create and enforce the principles, policies, and practices that define acceptable online behavior and norms. The formation of the Trust & Safety Professional Association in 2020 was a milestone in the professionalization of the field and the industry.
Trust and Safety professionals are tasked with ensuring that users of a platform, tool or online community feel welcome, safe and secure. They develop community guidelines and define the parameters of content moderation. They address a wide range of issues including misinformation and disinformation, offensive or extremist content, harassment, fraud, phishing, online child safety, copyright infringement and spam.
For the Trust and Safety Knowledge Hub, the All Tech Is Human team has curated relevant resources from our various publications and from our community. This is a “living document” and will continue to grow with your input.
Please add any suggestions using the button below.
RESPONSIBLE TECH | NYC
All Tech Is Human will host The Future of Trust & Safety on Tuesday, May 14, from 6:00pm to 8:30pm in New York City. This curated convening is designed to bring together 200 emerging and established Trust and Safety professionals from across industry, civil society, academia, and government to share how to break into the space and consider what the future of Trust and Safety will look like.
The convening will include two panels. In the first, Cultivating the Next Generation of Trust & Safety Leaders, speakers will share career advice, what makes their roles exciting, the challenges they face, how policy influences Trust and Safety operations, and reflections on the future of the field. The second panel is Safety by Design for Generative AI: Preventing Child Sexual Abuse.
We are now accepting applications to attend The Future of Trust & Safety! Click below to begin the process. We are looking for people new to Trust & Safety, emerging talent, established professionals, and anyone interested in the field. Attendance at this gathering is free for those invited after filling out our application. Please note: we receive far more requests than spots available.
-
ActiveFence
Wellness Breaks & Blur Tools for Content Moderators
The Guide to Transparency Reports
Global Law on Online Disinformation Content
EU Code Against Disinformation: What You Need To Know
The Trust & Safety Industry: A Primer
Increasing Content Moderation ROI in 2023
Protecting Children in Online Platforms
Protecting Election Integrity Throughout the U.S. Midterms
Navigating the EU’s New Digital Safety Regulations
New Approaches to Dealing with Content Moderation Challenges
Getting Ahead of Global Election Misinformation Webinar
Anti-Defamation League (ADL)
Online Hate and Harassment: The American Experience 2021 (adl.org)
Hate in Social VR (adl.org)
The Online Hate Index (adl.org)
Online Hate Ecosystem Primer (adl.org)
ADL Social Pattern Library
Berkman Klein Center for Internet & Society (Harvard)
It Could Happen (T)here: Transnational Advocacy Strategies Around Social Media | Berkman Klein Center (harvard.edu)
Harmful Speech Online | Berkman Klein Center (harvard.edu)
Content and Conduct | Berkman Klein Center (harvard.edu)
Center for an Informed Public (University of Washington)
Moderation on Social Media Platforms
Tracking & Unpacking Rumor Permutations to Understand Collective Sensemaking Online
Detecting Misinformation Flows in Social Media Spaces During Crisis Events
Center for Critical Internet Inquiry (UCLA)
Critical Internet Speaker Series: C2i2 Scholars’ Council Speaker Series – UCLA Center for Critical Internet Inquiry
Augmenting Social Media Content Moderation (NSF Project) – UCLA Center for Critical Internet Inquiry
Center for Humane Technology
The Social Dilemma (documentary)
Youth Toolkit https://www.humanetech.com/youth
Center for Information, Technology, and Public Life (CITAP)(University of North Carolina)
Critical Disinformation Studies syllabus – Center for Information, Technology, and Public Life (CITAP) (unc.edu)
Podcast: Does Not Compute – Center for Information, Technology, and Public Life (CITAP) (unc.edu)
Change the Terms
CNN: The Dangers of Participatory Disinformation
Consentful Tech
Cyber Civil Rights Initiative (Florida International University)
2017 Nationwide Online Study of Nonconsensual Porn Victimization and Perpetration: A Summary Report
Data & Society — Media Manipulation & Disinformation
Source Hacking: Media Manipulation in Practice: “Source Hacking details the techniques used by media manipulators to target journalists and other influential public figures to pick up falsehoods and unknowingly amplify them to the public.”
Disinformation Action Lab
Data Craft Library
Data Voids Library
Media Manipulation and Disinformation Online (datasociety.net) [by Becca Lewis and Alice Marwick]
Data Detox Kit
Electronic Frontier Foundation
Who’s Got Your Back?: 2019 report on content moderation practices
EU Disinfo Lab
Tools to monitor disinformation
Family Online Safety Institute
Policy & Research for Professionals
Tools for Today’s Digital Parents https://www.fosi.org/good-digital-parenting
Georgetown Law Technology Review (georgetownlawtechreview.org) [article by Alice Marwick]
Global Disinformation Index
Global Network of Internet and Society Research Centers
Institute for Strategic Dialogue
Integrity Institute
Metrics and Transparency: understand the scale and cause of harms occurring on social media platforms
Internet Commission
Evaluation Framework for Digital Responsibility: “Our Evaluation Framework for Digital Responsibility enables procedural accountability. It looks at how organisational cultures, systems and processes align to support corporate digital responsibility. It has a particular focus on internet safety, freedom of speech and the ways in which decisions are made in relation to content, contact, and conduct online. The revised second edition (January 2022) identifies 124 qualitative and quantitative indicators of how commercial, safety, and freedom of expression issues are balanced.”
Internet Policy Research Initiative (MIT)
Democracy, Disinformation, AI, and Platform Regulation – Internet Policy Research Initiative at MIT
The Ways Anti-Vaccine Activists and Others Try to Avoid Content Moderation
Misinformation Escape Room: Loki’s Loop (lokisloop.org)
Markkula Center for Applied Ethics at Santa Clara University - Internet Ethics
The Santa Clara Principles - On Transparency and Accountability in Content Moderation
Content Moderation Remedies (by Eric Goldman, Co-Director, High Tech Law Institute, Santa Clara University): “if a user’s online content or actions violate the rules, what should happen next?”
Round-Up of Materials from High Tech Law Institute conference: “Content Moderation and Removal at Scale” (by Eric Goldman, Co-Director, High Tech Law Institute, Santa Clara University)
Misinformation and Violence (by Rohit Chopra): The link between misinformation and violence needs to be addressed by legacy media, new media platforms, civil society, and regulatory bodies.
Markkula Center for Applied Ethics: Internet Ethics case studies
Media Manipulation Casebook
Mozilla Foundation
Internet Health Report Hub
Everything in Moderation: The Limitations of Automated Tools in Content Moderation (newamerica.org)
New Public
New York Times:
Those Cute Cats Online? They Help Spread Misinformation
Oasis Consortium
User Safety Standards
Making Safety Universal in Online Dating | Equal, Inclusive, Safe: On Grindr’s best practices for gender inclusive content moderation
Use of AI in online content moderation – Cambridge Consultants (ofcom.org.uk)
Online Hate Prevention Institute
Basic Online Safety Expectations: https://ohpi.org.au/basic-online-safety-expectations/
How-to Guides (ohpi.org.au)
Measuring the Hate: The State of Antisemitism in Social Media
Oxford Internet Institute
Industrialized Disinformation 2020 Global Inventory of Organized Social Media Manipulation
We Need a Global Panel on Fake News
What’s Stunning about the Misinformation Trend and How to Fix It
Pew Research Center
Internet & Technology Datasets
Science.org
How Do You Solve a Problem Like Misinformation
Shorenstein Center on Media, Politics, & Public Policy (Harvard)
Media Manipulation Casebook
Tracking Social Media Takedowns and Content Moderation During the 2022 Russian Invasion of Ukraine | Media Manipulation Casebook
Disinformation | Shorenstein Center
HKS Misinformation Review: Content Moderation topic (harvard.edu)
Technology and Social Change Project: https://shorensteincenter.org/programs/technology-social-change/
HKS Misinformation Review (harvard.edu)
Content moderation, AI, and the question of scale - Tarleton Gillespie, 2020 (sagepub.com)
The Role Of AI In Content Moderation
Towards Data Science
Machine Learning for Content Moderation — Introduction (by Devin Soni)
Trust.org
Online content moderation: Can AI help clean up social media? (trust.org) (Reuters Foundation)
Trust & Safety Professional Association (TSPA.org)
Trust & Safety curriculum: “To effectively safeguard their users from harmful content, these companies must grapple with incredibly complex issues, from child sexual exploitation to misinformation, while also responding to new and rapidly evolving threats. Policies and decisions by companies have real-world consequences, whether it is the safety of an individual or political repercussions for a nation; it is no wonder that civil society groups, governments, and regulatory bodies are increasingly scrutinizing the measures technology companies take to preserve users’ safety and privacy.”
Careers in Trust & Safety (FAQ)
Trust & Safety Resource Library
Related - Trust and Safety Foundation
[Selected resource: Metaphors in Moderation]
Trust Lab
TS Collective/Spectrum Labs
The Net Safety Collaborative
The Trust Project
TransTech Social Enterprises
Trust and Safety Forum
Web Foundation
Tackling Online Gender Based Violence and Abuse Against Women
-
Books
Ghost Work - Mary Gray
Behind The Screen - Sarah T. Roberts
Articles
Why Hasn’t Social Media Done More?
Data & Society — Reorienting Platform Power
Counterspeech: A Literature Review
Podcasts
-
Business Case for AI Ethics
Co-Creating a Better Tech Future
HX Report - Aligning our Tech Future with our Human Experience