Navigating AI in the UK
The legal framework on AI in the UK is in a state of flux, with the UK government’s desire to be a “pro-innovation” hub for AI businesses yet to be underpinned by a clear legal framework.
Join AI and Technology partner Charlotte Walker-Osborn and IP partner Steven James to discover some of the areas you need to be thinking about when developing, deploying or using AI systems.
[00:00:00] Welcome to MoFo Perspectives, a podcast by Morrison Foerster where we share the perspectives of our clients, colleagues, subject matter experts, and lawyers.
[00:00:25] Steven: Thank you for joining us for this Morrison Foerster Perspectives video cast where we’re looking at AI and some of the key legislative developments and issues you need to consider. My name is Steven James. I’m a partner in the intellectual property team in London at Morrison Foerster. I’m delighted to be joined by Charlotte Walker-Osborn, who is a partner in our technology transaction team and co-heads our AI and tech practice in Europe. Charlotte, great to have you here.
[00:00:51] Charlotte: Good to be here, Steven.
Steven: Excellent. So, talking about AI, it seems to me that there's an absolute swathe of legislation, guidance, and codes of conduct emerging almost on a daily basis at the moment. And if you are operating in this space, you'd be forgiven for despairing slightly at all these different developments. So, broadly speaking, what is the UK position on AI, some of the risks around training, some of the principles? Is there legislation? What is the position? What do businesses in this space need to be thinking about?
[00:01:27] Charlotte: Well, that’s a big question.
Steven: It is.
[00:01:29] Charlotte: Well, I think, suffice to say, those that are grappling with this probably see something new every day on AI around the world, and even in the UK. There are hundreds of AI laws and pieces of guidance, as you say, and lots of sectoral guidance coming through depending on what sector you're in, financial services or healthcare. So, I think you have to be quite specific as a company about how you're going to approach all of this. And a lot of the work you and I are doing with companies, and a lot of the work companies are doing themselves, is asking, "what is my risk appetite for full compliance versus compliance with the very key things?" So, we'll pivot to the UK legislation in a moment, but I would be remiss not to mention the EU AI Act, which is an omnibus regulation focused specifically on AI.
It's about 300 pages long and is about to enter into force. Its obligations depend on which category of AI is involved: there's high-risk AI, covering things like certain financial sector uses such as mortgage applications, AI in critical infrastructure, and a whole list of other areas; there's other, lower-risk AI; and there's also a section on generative AI. It's going to have extraterritorial reach. If you're in the UK and you're providing goods and services into the EU bloc, a really important bloc for people to sell or license into, you're going to have to comply with it either in the next six months for certain high-risk uses that are going to be set out, or within the next 24 months.
[00:03:19] And if you get that wrong, there's a big, scary potential penalty in there, set at 35 million euros or 7% of annual turnover, whichever is the higher. Just to put that in perspective, though, fines at that level are only likely for the more serious breaches, but, a bit like the GDPR, this is a serious piece of legislation to comply with. But pivoting back to the UK, I mean, you and I have been talking about this. I really do think the UK is a little bit confused about how to regulate AI. On the one hand, it's saying it's pro-innovation, and its approach is to ask existing regulators to look at the existing regulations in the UK and ask, "is there something missing, now that AI is here, that needs adding in?" We've already seen in the UK that the Information Commissioner, who essentially looks after the regulation of personal data, has been bringing out a lot of guidance on that already, and others will follow. And only recently, in early February, there was a response, as you know, to the consultation that had gone out on AI.
[00:04:49] And the response says, we still want to take that approach. That's still what we're asking regulators to do, and we want them to look at it by a certain point in 2024. So, I think there will be changes to regulations, and there are certain areas, like personal data, that we already need to be looking at and factoring in.
[00:05:11] But then again, slipped in just before the Christmas holiday break, there was a House of Lords bill, which you and I have been discussing with some of our U.S. clients recently, which says, actually, we do want an AI-specific law. And the response to the consultation says that is something that may be needed in the future, but not at the moment.
[00:05:35] But this is still going through the House of Lords, and so we do need to keep an eye on whether there will be an AI-specific law. If it comes in, I'm not sure it will come in in its currently suggested form, but if it does, it has really, really strong transparency obligations, which are potentially stronger than the EU's.
[00:06:00] I'm not going to go into detail, but I do want to pivot back to IP, obviously a favourite subject of yours and an important subject for all of us working in tech. I think this is where the UK is potentially confused on pro-innovation, and with some sympathy, because it's got two forces pulling here. It's got a very strong AI tech community that wants to be able to train models on data in the UK without fear of infringing IP. But, a big wheelhouse of yours, it's also got a very strong creative industry, which has said, well, this needs thinking about more carefully. So, I just wonder if you could explain the IP position at the moment and maybe talk about where you think that might go for the UK?
[00:06:52] Steven: Sure. So, you're right, and it speaks exactly to these tensions. This is really about whether, if you're an AI provider, you can use copyright-protected material to train up and develop your system. In the EU, there's quite a broad text and data mining exemption coming into effect, which permits this kind of training using copyright-protected material for commercial purposes, although rights holders have an opt-out right if they wish to exercise it. That's designed to try and balance some of the commercial dynamics and opportunities in connection with these systems.
[00:07:32] In the UK, however, because it left the EU before member states had to apply these exemptions or start considering doing so, we're left in the rather curious position that it has a very narrow text and data mining exemption, which only permits that kind of mining for non-commercial purposes. That isn't going to be very helpful for the vast majority of AI providers looking to train their models on copyright-protected material. Now, interestingly, and pursuant to the point you were making about the pro-innovation approach, there was a consultation a couple of years ago about potentially introducing a much broader text and data mining exemption in the UK, which would apply to any purpose and, as a differentiator from the EU, there would be no opt-out.
[00:08:20] So, potentially super broad. But the creative industries in the UK, the content creators, are very strong in all sorts of different fields, and there was a huge amount of pushback to that proposal. As a consequence, in early 2023, the plans to introduce this exemption were scrapped.
[00:08:43] And so we're now left in this quite curious position where, if you want to train your AI model in the UK, you're relying on a very, very narrow exemption, which isn't going to apply in most circumstances, or on very narrow fair dealing exemptions, which again aren't going to help many AI providers.
[00:08:59] It presents real challenges if you want to be, as the UK wants to be, a pro‑innovation destination. So, there is likely to be some degree of reform in this area. The UK Intellectual Property Office is looking into, again, establishing some principles that will try and strike this balance between the content creators and the AI providers.
[00:09:23] And we’re expecting that guidance to come out fairly soon, possibly looking at model licensing regimes, et cetera. But there is going to need to be reform in this area, whether it’s through principles, through guidance or, ultimately legislation, as you were mentioning. So, it’s going to be an evolving area, an evolving situation, as I think you were touching upon.
[00:09:45] Charlotte: Yeah, exactly. And it's such an important thing for companies to get to grips with. If they're using AI in their back-office systems, it's likely been trained on some data. Are they comfortable that there is either something in the contract with that provider and/or that they've done their diligence that it has been trained in a way that's non-infringing?
[00:10:13] And then, back to the big question you initially posed: regardless of all these changing laws coming in, that diligence process, whether for back-office systems or when building their own AI products and services, is really about thinking through their risk appetite, compliance, et cetera. But fundamentally, for governance of the organization and for compliance, it is a question right now in the UK of asking, "Well, what are the current laws saying?"
A lot of those laws companies will already have been applying to tech, but at the same time they should very much be looking at the guidance coming out for personal data, which is very strong in this space, and watching out for other guidance, which will inform how they need to change their back-office governance and how they deal with procurement and/or their licensing or selling of products and services.
[00:11:18] I suppose the other thing is holding onto those core OECD-based principles that most legislation around the world has been looking at as a good way to think about, at a high level, how to govern what I'd call Responsible AI. So, making sure that the AI system has the right data in it, tagged in the right way, and that it's been trained in a way that won't bring in bias, wrong results, or discrimination. There's transparency that AI is being used, and I've already seen some social media companies tagging very, very clearly when AI has been used in some of the images generated.
[00:12:05] So we’re seeing some changes coming through, reacting to some of these transparency requirements, even before they come into force. But crucially, I think, whether in the back office for your own employees, for your own systems, or externally, products and services, there’s that transparency of how it’s being used.
[00:12:26] And if it does reach a decision, how was that decision reached? And, particularly where consumers or individuals are involved, if that decision is challenged, whether by individuals, employees, or other businesses, how are you going to deal with that? So, there are a lot of processes and procedures. And I know you and I, and others at Morrison Foerster, have been grappling with this quite a lot through contractual terms, either on the buy side or the licensing side, and also by helping change governance processes: what people are saying out to the market about what they're doing with AI in their business, and many, many more things. There's a lot we've been thinking about with clients.
[00:13:13] Steven: Yeah, so it sounds like from what you’re saying that you need to be really careful on the diligence side. You need to be careful of the contractual side. You also need to be super careful from a governance perspective. Is there anything else from a practical perspective that these AI businesses should be thinking about?
[00:13:28] Charlotte: Yeah. Again, it goes back to what we mean by an AI business. As a tech lawyer of 25 years, I've always said most businesses are tech businesses, and that's truer now than ever. I think the same will be true of the use of AI in a lot of businesses. So, yes, fundamentally it is a number of those things. But I think redress and contestability is a key area that feels over and above for many companies, unless they've already been operating in that space. And then the other thing, and the thing that your team, my team, and Morrison Foerster are helping with a lot, is changing M&A activity.
[00:14:15] It's really changed the diligence when companies are buying or investing in companies that are either AI suppliers or AI-heavy; there's a lot more diligence in this area. So, whether you are looking for investment, preparing for sale, or, as you and I would say, you're a founder, you need to get things in the right place.
[00:14:39] You have to think about that quite early. And then on the buy or investment side, some companies are already very sophisticated about what they need to look at, but a lot more are only now grappling with, well, what do I need to think about? And I think that's quite interesting, because this is an area where the monetary value could change quite quickly if that diligence is wrong.
[00:15:04] I think the key thing, going back to IP, bias, or other harms, or indeed security, which is another big tenet we haven't even discussed, is really, really understanding: if those problems are found to be in the model, how easily is that model unpicked? And that's not necessarily going to be a big undertaking.
[00:15:33] It just absolutely depends on the model, but it is a huge thing that needs thinking about, particularly if you’ve chosen to use that AI as a pivotal part of your business, for example, smart manufacturing, and it’s suddenly controlling a lot of your systems. So, many things to think about, and I think we’ve only just touched the surface here.
[00:15:58] Steven: Yeah, and that's it, I think: it's a constantly evolving area, with huge challenges, huge risks, and huge opportunities in this space. We'll do our very best here to keep you updated on developments as they come out, including big changes in legislation; it seems to be changing almost on a daily basis at the moment. So please do look out for further updates, videocasts, and podcasts from us on this topic.
[00:16:25] Charlotte: And finally, we do have some AI-specific pages on mofo.com, which are regularly updated with the latest regulations and guidance. So, you can absorb the information in a number of ways.
Steven: Charlotte, thank you very much.
[00:16:42] Charlotte: Thanks, Steven.
[00:16:50] Please make sure to subscribe to the MoFo Perspectives podcast so you don’t miss an episode. If you have any questions about what you’ve heard today or would like more information on this topic, please visit mofo.com slash podcasts. Again, that’s mofo, M O F O dot com slash podcasts.