TechLaw Chat

Matthew Lavy & Iain Munro

A series of short podcasts exploring emerging and topical issues in technology law.

Child safety on a video-sharing platform
Jan 11 2022
As of now, the UK has not enacted online harms legislation, and social media platforms in general are under no statutory duty to protect children from harmful content. However, providers of video-sharing platforms do have statutory obligations in that regard, set out in Part 4B of the Communications Act 2003 (added to the Act by amendment in 2020). Amongst other things, section 368Z1 of the Act requires providers of such platforms to take appropriate measures to protect under-18s from videos and audio-visual commercial communications containing "restricted material". Regardless of the statutory obligations (or the lack thereof in the case of non-video social media platforms), many platforms expend considerable effort seeking to protect children from harm.

In this episode, we consider how a video-sharing start-up might focus its resources in order to comply with its statutory obligations and to maximise the prospects that it offers a safe environment for children. We are joined in this endeavour by Dr Elena Martellozzo, an Associate Professor in Criminology at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University.

Elena has extensive experience of applied research within the criminal justice arena. Her research includes children and young people's online behaviour, the analysis of sexual grooming and online harm, and police practice in the area of child sexual abuse. She has emerged as a leading researcher and global voice in the fields of child protection, victimology, policing and cybercrime. She is a prolific writer and has participated in highly sensitive research with the police, the IWF, the NSPCC, the OCC, the Home Office and other government departments. Elena has also acted as an advisor on child online protection to governments and practitioners in Italy (since 2004) and Bahrain (2016), helping to develop national child internet safety policy frameworks.

Further reading:
- Part 4B of the Communications Act 2003.
- A description of the Internet Watch Foundation technology suite.
- A series of recommendations for various stakeholders (including tech companies) in relation to the protection of children online in the age of COVID, made in the Glitch report.
- An article by Dr Martellozzo and her team on the effect of harmful content on children, published in SAGE Open.
- Dr Martellozzo explains the grooming process in Chapter 4 of Bryce, Robinson and Petherick, Child Abuse and Neglect: Forensic Issues in Evidence, Impact and Management (Academic Press, 2019).
- In the LSE-hosted blogpost Speaking Up: Contributing to the fight against gender-based online violence, Dr Martellozzo, Paula Bradbury and Emma Short provide commentary and references on this issue.
The Black Box problem
Feb 5 2021
AI can improve how businesses make decisions. But how does a business explain the rationale behind AI decisions to its customers? In this episode, we explore this issue through the scenario of a bank that uses AI to evaluate loan applications and needs to be able to explain to customers why an application may have been rejected. We do so with the help of Andrew Burgess, founder of Greenhouse Intelligence (andrew@thegreenhouse.ai).

About Andrew: he has worked as an advisor to C-level executives in technology and sourcing for the past 25 years. He is considered a thought leader and practitioner in AI and Robotic Process Automation, and is regularly invited to speak at conferences on the subject. He is a strategic advisor to a number of ambitious companies in the field of disruptive technologies. Andrew has written two books: The Executive Guide to Artificial Intelligence (Palgrave Macmillan, 2018) and, with the London School of Economics, The Rise of Legal Services Outsourcing (Bloomsbury, 2014). He is Visiting Senior Fellow in AI and RPA at Loughborough University and Expert-in-Residence for AI at Imperial College's Enterprise Lab. He is a prolific writer on the 'future of work', both in his popular weekly newsletter and in industry magazines and blogs.

Further reading:
- ICO and The Alan Turing Institute, 'Explaining decisions made with AI' (2020)
- ICO, 'Guide to the General Data Protection Regulation (GDPR)' (2021)
- The Data Protection & Privacy chapter in The Law of Artificial Intelligence (Sweet & Maxwell, 2020)
- An explanation of the SHAP and LIME tools mentioned by Andrew, and a deeper explanation for the more mathematically minded (a brief illustrative sketch follows below).
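For a flavour of the kind of per-decision output tools like SHAP produce, here is a minimal Python sketch using the open-source shap library against an entirely synthetic loan model. The feature names, the random-forest model and the data are illustrative assumptions, not anything described in the episode.

```python
# Minimal, illustrative sketch: explaining a single loan decision with SHAP.
# All feature names, data and the model are synthetic assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years", "missed_payments"]

# Synthetic stand-in for historical loan outcomes (1 = approved).
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# Shapley-value contributions relative to a baseline.
explainer = shap.TreeExplainer(model)
applicant = X[:1]                       # one applicant to explain
values = explainer(applicant).values[0]

# Recent shap versions return one column of values per class for
# classifiers; if so, take the "approved" class.
if values.ndim == 2:
    values = values[:, 1]

# Negative contributions pushed this applicant towards rejection.
for name, contribution in sorted(zip(features, values), key=lambda p: p[1]):
    print(f"{name}: {contribution:+.3f}")
```

The most negative contributions identify the features that drove the model towards rejection, which is the kind of individual, customer-facing rationale the episode is concerned with.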
Is your bot talking $£!? about me again?
Oct 7 2020
This podcast is intended as an introduction to issues that arise when an AI bot creates defamatory content. For detailed commentary on this specialist area of law, see Gatley on Libel and Slander (12th Ed, 2017) and Duncan and Neill on Defamation (4th Ed, 2015, with a new edition forthcoming). For an overview, see our chapter on 'Liability for Economic Harm' in The Law of Artificial Intelligence (2020, forthcoming).

Cases relevant to auto-generated content include:
- Bunt v Tilley [2006] EWHC 407 (QB)
- Metropolitan International Schools Ltd (trading as Skillstrain and/or Train2Game) v Designtechnica Corpn (trading as Digital Trends) and others [2009] EWHC 1765 (QB)
- Tamiz v Google Inc. [2013] EWCA Civ 68 (CA)

For other jurisdictions, see e.g. Defteros v Google LLC [2020] VSC 219 at [40], in which Richards J summarised the Australian position as follows: "The Google search engine … is not a passive tool. It is designed by humans who work for Google to operate in the way that it does, and in such a way that identified objectionable content can be removed, by human intervention, from the search results that Google displays to a user." For Hong Kong, see e.g. Yeung v Google Inc. [2014] HKCFI 1404 and Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47 (especially [76] for a test endorsed by the authors of Gatley).

On the contradictory positions taken by search engines worldwide, see, e.g., Sookman, "Is Google a publisher according to Google? The Google v Equustek and Duffy cases", C.T.L.R. 2018, 24(1).