Click to Trust

TrustLab

Online platforms are plagued by harmful content, from hate speech to illegal activity, and it seems there’s a news headline every week about someone whose well-being has been impacted. No wonder we’ve got trust issues. Click to Trust delves into the intricate world of safeguarding online spaces. Guided by the wisdom of Tom Siegel, CEO and Co-Founder of TrustLab and previously VP of Trust & Safety at Google, the show covers the challenges of monitoring harmful content, combating digital threats, and empowering you to navigate the web with trust. Click to Trust is your go-to resource for promoting a safer and healthier digital environment for all. For more, visit trustlab.com
Technology

Episodes

Panicking Responsibly with Katie Harbath, Chief Global Affairs Officer at Duco Experts
Apr 3 2024
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades, so perhaps more than ever it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.

In this episode of Click to Trust, we’ll hear from Katie Harbath, an expert in misinformation and election interference and the Chief Global Affairs Officer at Duco. In her interview with Tom Siegel, Katie shares her thoughts on the balance between freedom of speech and user safety and explains why the approaches taken by social media companies so far are not enough to safeguard their users. Katie’s insights highlight the role that individual responsibility plays in combating misinformation: consumers simply cannot wait for organizations to tackle the problem for them.

Highlights:
Takeaway One: Katie Harbath illuminates the growing necessity for government regulations to level the playing field in the tech industry and help fund trust and safety initiatives.
Takeaway Two: There is a pressing need for ethical guidelines and responsible AI use, especially in critical areas like election integrity.
Takeaway Three: The battle against misinformation is relentless as new tactics continue to surface, and it’s imperative that tech companies adapt with innovative solutions.

Jump Into the Conversation:
[09:38] Misinformation spread by websites pivoting to claim 5G causes COVID.
[19:22] Algorithms designed to maximize consumption of inflammatory content.
[24:45] What happened when Hertz Rent-a-Car inadvertently placed an ad on a misinformation website.
[31:26] The constant cat-and-mouse game with the accelerating pace of AI development.

Resources:
Anchor Change with Katie Harbath (newsletter)
Impossible Tradeoffs with Katie Harbath (podcast)
Misinformation Goes Social: Karishma Shah on Ensuring Election Integrity in the Era of Social Media
Apr 10 2024
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades, so perhaps more than ever it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.

In this episode of Click to Trust, we’ll hear from Karishma Shah, Program Manager of News Integrity Partnerships at Meta. In her interview with Carmo Braga da Costa, Karishma provides a behind-the-scenes look at how organizations like Meta are partnering with independent fact-checking organizations to fight misinformation. Karishma explains the full scope of the challenge presented by AI-generated content and expresses the need for collaboration among tech companies to address misinformation and protect users.

Highlights:
Takeaway One: Fact-checking involves much more than algorithms: it's small newsrooms, or divisions within larger ones, meticulously analyzing claims and content authenticity.
Takeaway Two: Artificial intelligence is reshaping the landscape of misinformation, making it easier to create and spread false narratives.
Takeaway Three: It’s crucial to engage with resources and educational materials that enhance media literacy and enable users to recognize and report potential misinformation.

Jump Into the Conversation:
[05:45] Facebook's fact-checking program partners with organizations around the globe.
[14:51] How to support fact-checkers facing online threats.
[17:50] AI-generated content and the potential risk for elections.
[21:44] User-generated content platforms must address the misinformation challenge.
Populism in the Age of Social Media and AI With Tom Davidson
Apr 24 2024
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades, so perhaps more than ever it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.

In this episode of Click to Trust, we’ll hear from computational sociologist Tom Davidson about how misinformation and social media have changed citizen engagement online. One result is the growth of populist political parties fueled by online spaces where misinformation can run rampant. Tom explains the differences in engagement between populist and non-populist parties on platforms like X and Facebook.

Highlights:
Takeaway One: The shift from chronological to curated news feeds can radically reshape political landscapes by tipping the scales of engagement.
Takeaway Two: Populist rhetoric finds fertile ground on platforms like Facebook, where it earns twice the engagement it does on Twitter, and it's pivotal for online safety experts to examine why.
Takeaway Three: For those shaping online safety, a holistic, international approach to platform scrutiny can help mitigate the risks of misinformation and aggressive political tactics.

Jump Into the Conversation:
[06:12] Populists use social media for direct connection.
[08:25] Hate speech and misinformation fuel populist engagement.
[17:28] Facebook is widely used across Europe, while Twitter use is more selective.
[29:17] Populists are targeting big tech and AI tools.
Measuring Misinformation with Xiaolin Zhuo
May 8 2024
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades, so perhaps more than ever it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.

In this episode of Click to Trust, we hear from TrustLab’s own Xiaolin Zhuo about the European Union’s Code of Practice on Disinformation, which underpins a first-of-its-kind study measuring the prevalence and sources of misinformation across major social platforms. In our conversation, Xiaolin shares her team’s findings on online misinformation and the impact the Code of Practice can have on slowing the spread of misinformation online.

Highlights:
Takeaway One: Combating misinformation requires a joint effort between governments, social platforms, and civil groups under frameworks like the EU’s Code of Practice.
Takeaway Two: Platforms with lower discoverability rates for misinformation may still see high engagement on the misinformation that does surface.
Takeaway Three: Misinformation and the ways it is disseminated will continue to change with technological advancements.

Jump Into the Conversation:
[05:27] TrustLab’s study surfaces multiple metrics with unexpected platform patterns.
[11:50] Data helps combat misinformation by giving users an understanding of where information comes from.
[16:47] Addressing misinformation in the EU requires education and collaboration between platforms and users.
Election Misinformation: Platform Pains and Solutions with Tom Siegel
2d ago
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades, so perhaps more than ever it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.

In this episode of Click to Trust, we draw our series on election misinformation to a close with an examination from TrustLab CEO Tom Siegel. Several key themes have emerged over the last six episodes: the need for collaboration between platforms to combat misinformation, the impact of algorithmic amplification on the spread of misinformation, and the role of AI in both exacerbating and addressing the problem. As several key elections draw closer, Tom reflects on these themes and shows us how to stay safe (and vigilant) in our online lives.

Highlights:
Takeaway One: What algorithmic amplification means and how misinformation spreads so easily online.
Takeaway Two: The psychology of people online when it comes to heated arguments and inflammatory content.
Takeaway Three: The important role that advertisers and governments play in upholding industry standards for information dissemination.

Jump Into the Conversation:
[05:37] How echo chambers amplify misinformation.
[09:11] How AI impacts misinformation.
[12:26] What role do advertisers play in curbing misinformation online?
[21:47] Why media literacy is a crucial safeguard.

Media Literacy Resources:
Media Literacy Now: A national organization focused on ensuring media literacy education is included in school curricula.
The News Literacy Project: A nonpartisan national education nonprofit offering programs that help people of all ages learn how to identify credible news and information.
CyberWise: A resource dedicated to helping parents and educators teach digital citizenship and online safety.

Understanding AI and Deepfakes:
Deepfake Detection Guide by MIT Media Lab: An educational guide to understanding and detecting deepfakes.
“Deepfakes and the Infocalypse” by Nina Schick: A book that delves into the world of deepfakes and the broader implications of AI-driven misinformation.
AI For Good (ITU): An initiative by the International Telecommunication Union exploring how AI can be harnessed to advance global sustainability.

General Resources on Staying Informed:
Reuters Fact Check: Provides accurate, unbiased fact-checking to help users understand what is true and what is not.
AllSides: Shows news coverage from different perspectives to help readers understand bias.
Pew Research Center: Offers in-depth research on media, technology, and many other topics to help users stay informed.