AI lab by information labs


AI lab podcast: ‘decrypting’ expert analysis to understand Artificial Intelligence from a policy-making point of view.
Technology
1:1 with Andres Guadamuz
Nov 23 2023
In this podcast Andres Guadamuz (University of Sussex) & the AI lab ‘decrypt’ Artificial Intelligence from a policy-making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [01:24] The TL;DR Perspective
⏲️ [10:34] Q1 - The Deepdive: AI Decrypted | You look at the inputs and outputs of AI. For the inputs, the key question is: does mining data infringe copyright? For the outputs, the main question is: can derivative works infringe copyright, and what role do exceptions play?
⏲️ [20:28] Q2 - The Deepdive: AI Decrypted | In your blog post “Will We Ever Be Able to Detect AI Usage?”, you wonder whether that is really the right question to ask and suggest alternatives. What are your key thoughts?
⏲️ [23:53] Outro

🗣️ To think of copyright like any granular, tiny speck of information that went into the training of an input means that you own that [AI] output. That's ridiculous to me. That means there are billions of authors for every single ChatGPT or entry.
🗣️ What [AI providers are] doing is a temporary copy or transient copy. (...) They don't need them after the model is trained. (...) What's happening is they make a copy and then extract information.
🗣️ Some of these actions [by AI providers] could fall under existing exceptions and limitations. (...) They make a copy (...) that allows the generativity to work.
🗣️ AI is actually making it easier for small-time creators to create quality content. (...) What we're starting to see: it’s enabling more creators to do stuff.

📌 About Our Guest
🎙️ Andres Guadamuz | Reader in Intellectual Property Law, University of Sussex
𝕏 https://twitter.com/technollama
🌐 Openness, AI, and the Changing Creative Landscape (TechnoLlama blog)
🌐 Corridor Crew’s Anime Rock, Paper, Scissors
🌐 A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs (SSRN)
🌐 Will We Ever Be Able to Detect AI Usage? (TechnoLlama blog)
🌐 Asking Whether AI Outputs Are Art Is Asking the Wrong Question (TechnoLlama blog)
🌐 TechnoLlama blog
🌐 Dr Andres Guadamuz

Dr Andres Guadamuz (aka technollama) is a Reader in Intellectual Property Law at the University of Sussex and the Editor in Chief of the Journal of World Intellectual Property. His main research areas are artificial intelligence and copyright, open licensing, cryptocurrencies, and smart contracts. He has written two books and over 40 articles and book chapters, and blogs regularly about technology regulation topics, notably on his TechnoLlama blog.
AI lab hot item | Michiel Van Lerbeirghe (ML6) - Copyright Transparency: An AI Firm’s Perspective
Nov 14 2023
🔥 In this 'Hot Item', Michiel Van Lerbeirghe (ML6) & the AI lab explore how the push for copyright transparency in the EU AI Act could impact smaller European AI providers, and how we can move towards a practical solution.

📌 Hot Item Highlights
⏲️ [00:00] Intro
⏲️ [00:45] Michiel Van Lerbeirghe (ML6)
⏲️ [08:28] Wrap-up & Outro

🗣️ Copyright protection is subjective: it is definitely not up to providers of foundation models to rule whether the criteria are met. However, under the current version of the AI Act, they would be required to make that assessment.
🗣️ The current obligation regarding copyright [transparency] is almost impossible to comply with. (...) The obligation is still under review, and we hope that we can evolve to a mechanism that makes more sense.
🗣️ While transparency is definitely a good thing that should be supported, (...) the upcoming [copyright transparency] obligation could prove to be very difficult, and not to say impossible, to comply with.
🗣️ Copyright can actually go very far and a lot of different content can potentially be protected by copyright. (...) From a practical point of view: where would the [transparency] obligation start and where would it end?

📌 About Our Guest
🎙️ Michiel Van Lerbeirghe | Legal Counsel, ML6
🌐 Assessing the impact of the EU AI Act proposal (ML6 Blog Post)
🌐 ML6
🌐 Michiel Van Lerbeirghe

Michiel is an IP lawyer focusing on artificial intelligence. After working at law firms for several years, he recently became the in-house legal counsel for ML6, a leading European service provider building and implementing AI systems for several multinationals.
AI lab hot item | Brian Williamson (Communications Chambers) - Latest AI Policy Developments
Nov 9 2023
🔥 In this 'Hot Item', Brian Williamson (Communications Chambers) & the AI lab discuss the latest AI policy developments, from the U.K. AI Summit to the U.S. White House Executive Order on AI safety.

📌 Hot Item Highlights
⏲️ [00:00] Intro
⏲️ [00:33] Brian Williamson (Communications Chambers)
⏲️ [12:25] Wrap-up & Outro

🗣️ We didn't seek to regulate computing or have a law of computing. We did focus on particular problems that arose over time, and computing led to a focus on data protection, but that's different to having a law of computing.
🗣️ What should we do? We should not seek a law of AI (...), not now, possibly not ever.
🗣️ The EU is working to agree [on] a law for AI, but (...) the perceived challenges continue to evolve, as does the technology. So, that's a difficult thing to do, but I actually think it's the wrong thing to do at this point in time, if ever.
🗣️ We should remain technology agnostic and focus on delivering a solution.
🗣️ We need to do the hard work of thinking about whether existing regulation and market adaptation is going to be sufficient (...) but just trying to fix the problems now with a law in advance won't work.

📌 About Our Guest
🎙️ Brian Williamson | Partner, Communications Chambers
𝕏 https://twitter.com/MarethBrian
🌐 Communications Chambers
🌐 Brian Williamson

Brian Williamson is a London-based partner at the consultancy Communications Chambers. His clients include governments, regulators, telcos, and tech companies. He has a background in economics and physics.
AI lab hot item | Kai Zenner (European Parliament) - EU AI Act Trilogue: The Focus Points
Oct 19 2023
🔥 In this 'Hot Item', Kai Zenner (Head of Office & Digital Policy Adviser for MEP Axel Voss) & the AI lab discuss the state of play of the EU AI Act trilogue negotiations.

📌 Hot Item Highlights
⏲️ [00:00] Intro
⏲️ [00:49] Kai Zenner (European Parliament)
⏲️ [12:00] Wrap-up & Outro

🗣️ We do not have a lot of time left (...). There's a 50-50 chance (...) [We] really want this deal, because we don't believe that it would be a wise move to delay the adoption of the AI Act after the European election. We will really give our best to close this file at the end of this year.
🗣️ [Remote biometric identification (RBI):] to find a middle ground here is almost impossible, because the European Parliament really wants to ban it completely and to not allow any loopholes.
🗣️ [Prohibited AI practices:] we really need to go into details, again always trying to find this difficult compromise between making the ban not too broad, that there are loopholes, but also not too narrow.
🗣️ [High-risk AI systems:] we need to be extremely careful that no activities or deployment cases for the use of AI are listed that are not really risky. A lot of rather technical work needs to be invested there.
🗣️ The European Parliament of course had much more time to come to a position (...) compared to the Council (...) Therefore, parliamentarians of course saw the rise of ChatGPT.
🗣️ We tried to make this AI value chain a little bit more transparent, to accelerate the information sharing from upstream to downstream and also to make sure that even though foundation models are not the focus of the AI Act, that they need to fulfil certain minimum criteria.
🗣️ Where we can probably meet both Council and Parliament is (...) making sure that all the actors in the AI value chain are at least somehow covered by the AI Act and (...) that we allow the downstream actors to become compliant (...) by having all information necessary.
🗣️ [AI governance:] the European Parliament pushed for the creation of an AI office (...) and we have a Parliament that really wants to learn from the mistakes with the General Data Protection Regulation (GDPR).

📌 About Our Guest
🎙️ Kai Zenner | Head of Office & Digital Policy Adviser for MEP Axel Voss, European Parliament
𝕏 https://twitter.com/ZennerBXL
🌐 MEP Axel Voss
🌐 Kai Zenner

Kai Zenner is the Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament. He is heavily involved in the political negotiations on the AI Act and the AI Liability Directive. Kai has been a member of the OECD.AI Network of Experts since 2021, was awarded Best MEP Assistant in 2023, and was ranked #13 in POLITICO's Power 40 class of 2023.
1:1 with Brigitte Vézina
Oct 17 2023
In this podcast Brigitte Vézina (Creative Commons) & the AI lab ‘decrypt’ Artificial Intelligence from a policy-making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [00:58] The TL;DR Perspective
⏲️ [09:45] Q1 - The Deepdive: AI Decrypted | Creative Commons pointed out the link between AI and free and open source software (FOSS), highlighting opportunities and threats. Can you explain this?
⏲️ [15:50] Q2 - The Deepdive: AI Decrypted | Contrary to the original proposal, copyright rules related to transparency and possibly content moderation have been proposed in the AI Act. Is this necessary?
⏲️ [21:58] Q3 - The Deepdive: AI Decrypted | Creative Commons states that using copyright to govern AI is unwise, as it contradicts copyright’s primordial function of allowing human creativity to flourish. What do you mean by that?
⏲️ [29:07] Outro

🗣️ [Article] 28b 4 (c) (...) is ambiguous (...). We need to find a way to achieve the EU AI Act's aim to really increase transparency, but without placing an undue and unreasonable burden on AI developers.
🗣️ Balance is key: there needs to be appropriate limits on copyright protection, if we want the copyright system to fulfil its function of both incentivising creativity and providing access to knowledge. That is the current framework in the EU with the DSM Directive.
🗣️ What we've heard time and again through our consultations: copyright is really just one lens through which we can consider AI, and often copyright is not the right tool to regulate [AI].
🗣️ Copyright is a rather blunt tool that often leads to either black and white or all or nothing solutions. That is dangerous.

📌 About Our Guest
🎙️ Brigitte Vézina | Director of Policy & Open Culture, Creative Commons
𝕏 https://x.com/Brigitte_Vezina
🌐 Supporting Open Source and Open Science in the EU AI Act | Creative Commons
🌐 European Parliament Gives Green Light to AI Act, Moving EU Towards Finalizing the World’s Leading Regulation of AI | Creative Commons
🌐 Exploring Preference Signals for AI Training | Creative Commons
🌐 Update and Next Steps on CC’s AI Community Consultation | Creative Commons
🌐 Better Sharing for Generative AI | Creative Commons
🌐 AI Blog Posts | Creative Commons
🌐 Open Culture Voices | Creative Commons
🌐 Spawning AI
🌐 Brigitte Vézina

Brigitte Vézina is Director of Policy and Open Culture at Creative Commons (CC). She is passionate about all things spanning culture, arts, handicraft, traditions, fashion and, of course, copyright law and policy. Brigitte gets a kick out of tackling the fuzzy legal and policy issues that stand in the way of access, use, re-use and remix of culture, information and knowledge. Before joining CC, she worked for a decade as a legal officer at WIPO and then ran her own consultancy, advising Europeana, SPARC Europe and others on copyright matters. Brigitte is a fellow at the Canadian think tank Centre for International Governance Innovation (CIGI).
1:1 with Teresa Nobre
Sep 27 2023
In this podcast Teresa Nobre (COMMUNIA) & the AI lab ‘decrypt’ Artificial Intelligence from a policy-making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [01:10] The TL;DR Perspective
⏲️ [08:11] Q1 - The Deepdive: AI Decrypted | COMMUNIA’s Policy Paper #15 states that: “The use of copyrighted works as part of the training data is exactly the type of use that was foreseen when the TDM exception was drafted and this has recently been confirmed by the EC in response to a parliamentary question”. Can you clarify that?
⏲️ [11:04] Q2 - The Deepdive: AI Decrypted | COMMUNIA clearly favours transparency when it comes to AI models, but also points out that when it comes to copyrighted material: “Policy makers should not forget that the copyright ecosystem itself suffers from a lack of transparency”. What do you mean by that?
⏲️ [16:41] Q3 - The Deepdive: AI Decrypted | COMMUNIA sees a need to operationalise the TDM opt-out mechanism. You recommend that the EC should play an active role in encouraging a fair and balanced approach to opt-out and transparency through a broad stakeholder dialogue. What could that entail?
⏲️ [21:39] Outro

🗣️ Everything would be easier if there was more transparency across the copyright ecosystem itself. (...) There's no place that you can consult that will tell you who are the owners, who are the creators, the title of the work.
🗣️ Machine learning developers (...) will not be able to provide this [copyright] information, because this information is simply not publicly available for the vast majority of works.
🗣️ To demonstrate compliance with copyright law, machine learning developers only need to show that they have respected machine-readable rights reservations.
🗣️ Our recommendation: European Commission, do something that's more towards involving everyone in the solution to the problem.

📌 About Our Guest
🎙️ Teresa Nobre | Legal Director, COMMUNIA
𝕏 https://twitter.com/tenobre
🌐 The AI Act and the quest for transparency (COMMUNIA blog post)
🌐 Policy Paper #15 on Using Copyrighted Works for Teaching the Machine (COMMUNIA)
🌐 Answer given by Commissioner Thierry Breton on behalf of the European Commission to the Parliamentary Question by MEP Emmanuel Maurel (The Left, France)
🌐 Teresa Nobre

Teresa Nobre is the Legal Director of COMMUNIA, an international association that advocates for policies that expand the Public Domain and increase access to and reuse of culture and knowledge. She is an attorney-at-law and is involved in policy work at both the EU and international level, representing COMMUNIA at the World Intellectual Property Organization.
1:1 with João Pedro Quintais
Sep 6 2023
In this podcast João Pedro Quintais (Institute for Information Law, IViR) & the AI lab ‘decrypt’ Artificial Intelligence from a policy-making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [00:00] The TL;DR Perspective
⏲️ [00:00] Q1 - The Deepdive: AI Decrypted | You consider that it is impossible to comply with the transparency obligation to “document and make publicly available a summary of the use of training data protected under copyright law”. Can you explain why?
⏲️ [00:00] Q2 - The Deepdive: AI Decrypted | Can you explain: 1. how you connect the TDM exceptions in the Directive on Copyright in the Digital Single Market to the EU AI Act; and, 2. your views on the different schools of thought on this?
⏲️ [00:00] Q3 - The Deepdive: AI Decrypted | You refer to the user safeguards in Article 17 of the Directive on Copyright in the Digital Single Market, e.g. exceptions and freedom of speech. Where do you make the link with the EU AI Act?
⏲️ [00:00] Outro

📌 About Our Guest
🎙️ Dr João Pedro Quintais | Assistant Professor, Institute for Information Law (IViR)
🐦 https://twitter.com/JPQuintais
🌐 Kluwer Copyright Blog | Generative AI, Copyright and the AI Act
🌐 Institute for Information Law (IViR)
🌐 Dr João Pedro Quintais

Dr João Pedro Quintais is Assistant Professor at the University of Amsterdam’s Law School, in the Institute for Information Law (IViR). João notably studies how intellectual property law applies to new technologies, and the implications of copyright law and its enforcement by algorithms for the rights and freedoms of Internet users, the remuneration of creators, and technological development. João is also Co-Managing Editor of the widely read Kluwer Copyright Blog and has published extensively in the area of information law.
1:1 with Alina Trapova
Jun 15 2023
In this podcast Alina Trapova (UCL Faculty of Laws) & the AI lab ‘decrypt’ Artificial Intelligence from a policy-making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [01:07] The TL;DR Perspective
⏲️ [10:05] Q1 - The Deepdive: AI Decrypted | You consider that the way copyright-relevant legislation is currently approached by EU legislators benefits only a limited number of cultural industries. Can you expand on that?
⏲️ [17:34] Q2 - The Deepdive: AI Decrypted | In the AI Act, the EP slid in Article 28b(4)(c), relating to transparency. Knowing that it is not that obvious to identify what is copyrighted and what isn’t, do you think this can even be done?
⏲️ [22:31] Q3 - The Deepdive: AI Decrypted | You encourage legislators to be cautious when looking at regulating an emerging digital technology. Where do you see a risk of using an elephant gun to kill a fly?
⏲️ [26:47] Outro

📌 About Our Guest
🎙️ Dr Alina Trapova | Lecturer in Intellectual Property Law, UCL Faculty of Laws
🐦 https://twitter.com/alinatrapova
🌐 European Parliament AI Act Position Put to Vote on 14 June 2023
🌐 European Law Review [(2023) 48] | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Kluwer Copyright Blog | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Institute of Brand and Innovation Law (UCL, University College London)
🌐 Dr Alina Trapova

Dr Alina Trapova is a Lecturer in Intellectual Property Law at University College London (UCL) and a Co-Director of the Institute of Brand and Innovation Law. Alina is one of the Co-Managing Editors of the Kluwer Copyright Blog. Prior to UCL, she worked as an Assistant Professor in Autonomous Systems and Law at the University of Nottingham (UK) and Bocconi University (Italy). Before joining academia, she worked in private practice as well as at the EU Intellectual Property Office (EUIPO) and the International Federation of the Phonographic Industry (IFPI).