Lewis Silkin – AI 101: The Regulatory Framework

This is the third article in our “AI 101” series, where the team at Lewis Silkin unravel the legal issues involved in the development and use of AI text and image generation tools. In the previous article of the series, we looked at the infringement risks of using AI-generated works. In this article, we consider the regulatory framework for AI being proposed by the European Commission and how the UK might follow suit.

The Draft European AI Regulation

Back in April 2021, the European Commission published its proposal for the Artificial Intelligence Regulation (“AI Regulation”), which is currently making its way through the European legislative process. The draft AI Regulation seeks to harmonise rules on artificial intelligence by ensuring that AI products are sufficiently safe and robust before they enter the EU market.

The AI Regulation is intended to apply to what the EU terms “AI systems”. The most recent iteration of this concept is defined (in summary) as covering all systems developed through machine learning approaches, as well as logic- and knowledge-based approaches. This is a wide definition, designed to accommodate future developments in AI technology, but it also extends to much of modern AI software.

The broad scope of this definition is narrowed by the way the draft legislation operates in practice, as the AI Regulation takes a ‘risk-based approach’ to governing AI systems. Not all AI systems will be subject to obligations under the AI Regulation. The AI Regulation divides AI systems into different tiers of risk, based on the intended use of the system:

  • Prohibited Practices: AI systems that use social scoring (i.e. creating a social score for a person that leads to unfavourable treatment), facial recognition, manipulation (exploiting the vulnerabilities of specific groups of people, e.g. due to their age, to distort their behaviour) and ‘dark pattern’ AI.
  • High-Risk AI Systems: AI systems with use cases in education, employment, justice and immigration, among other areas.
  • Limited Risk AI Systems: this includes, at the time of writing, chatbots, emotion recognition and biometric categorisation systems, and systems generating ‘deep fake’ or synthetic content.
  • Minimal Risk AI Systems: this includes spam filters and AI-enabled video games.

Providers of an AI system will be under an obligation to ensure that the system complies with the requirements corresponding to its risk tier. For example, a provider of a “High-Risk AI System” will become subject to a whole host of requirements relating to risk management; the quality of the data sets used to train the AI; performance testing; record keeping; cybersecurity; and effective human oversight of the AI.

Equally, users of “High-Risk AI Systems” will be required to:

  • use the AI system in accordance with the provider’s instructions (including with regard to the implementation of human oversight measures);
  • ensure that the input data is relevant for the intended purpose;
  • monitor the operation of the system for incidents or risks;
  • “interrupt” the system in the case of serious incidents (or suspend its use if they consider that use may result in such a risk); and
  • keep the logs generated by the AI system.

Users will also be required to carry out a data protection impact assessment (DPIA) under the GDPR before using a high-risk AI system (although it feels the horse may have bolted on this front given the widespread public use of ChatGPT and other “GPAIS” tools already – see below).

The AI Regulation provides for substantial fines for non-compliance, as well as other remedies. In the most serious cases, fines can reach the higher of EUR 30 million and 6% of total worldwide annual turnover.
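By way of illustration (using hypothetical figures), a company with a worldwide annual turnover of EUR 1 billion could face a maximum fine of EUR 60 million for the most serious breaches, since 6% of its turnover exceeds the EUR 30 million floor; for a smaller company with a turnover of EUR 100 million, the EUR 30 million figure would be the operative cap.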

The draft AI Regulation is intended to have broad territorial scope, reaching far beyond the borders of the EU – it is envisaged to apply to:

  • providers that place an AI system on the market or put it into service in the EU, regardless of whether those providers are located inside or outside the EU;
  • users of AI located within the EU; and
  • providers and users located outside the EU, if the output produced by the system is used within the EU.

Will the draft AI Regulation impact “generative AI” tools (like ChatGPT)?

Particularly relevant to the AI tools we have discussed so far in this blog (i.e. text- and image-generating AI) are the amendments made to the AI Regulation in 2022, which introduced the concept of a General Purpose AI System (“GPAIS”) – in essence, any AI system that can be used for many different purposes and tasks.

Again, this wide definition captures a variety of AI tools, including AI models for image and speech recognition, pattern detection, translation and, importantly, text- and image-generating AI (like OpenAI’s ChatGPT and DALL-E). It is difficult to predict the potential applications of a GPAIS because, compared with ‘narrow’ AI systems that have specific intended use cases, these systems are versatile and can complete a wide variety of tasks. For example, a text-generating AI tool might be used to draft patient letters for medical professionals, utilising sensitive patient data, even if this was not its originally intended use. Whilst a GPAIS might be considered a great technological development by AI enthusiasts, from the EU law-making perspective such unpredictable applications are considered “high-risk”.

Earlier drafts of the AI Regulation designated an AI system as “high-risk” only where its intended purpose was high-risk. Bringing GPAIS within scope of the “high-risk” classification because of the (however unlikely) chance of a high-risk application means such systems are likely to become subject to tough compliance requirements and the associated cost consequences.

The concern with this amendment is that providers will face impractical requirements, such as having to list all possible applications of a tool and to develop mitigation strategies to deal with each of them. Some commentators have suggested that the full force of the high-risk provisions of the AI Regulation should apply only if a GPAIS is actually used for high-risk purposes, rather than merely capable of being so used.

What about in the UK?

As mentioned above, the EU’s draft AI Regulation will likely extend beyond the borders of the EU and may apply to providers and users based within the UK. Therefore, Brexit won’t allow UK-based developers to avoid its effect completely.

Domestically, by way of its National AI Strategy, the UK government set out an ambitious ten-year plan for the UK to remain a global AI superpower – seeking to harness the enormous economic and societal benefits of AI while also addressing the complex challenges it presents.

Even though the UK has not yet outlined its regulatory framework for AI, the Government’s AI policy paper published last year (“Establishing a pro-innovation approach to regulating AI”) does provide cause for optimism (if you are a developer), as it sets out a new pro-innovation approach that is “context-specific, risk-based, coherent, proportionate and adaptable” – all buzzwords that imply a different approach to regulation compared to the staunch regulatory rhetoric of the EU.

Content moderation

Content moderation has been a hot topic given the recently adopted EU Digital Services Act (“DSA”), which has redesigned the rules for offering online content, services and products to consumers in the EU, and the UK’s parallel but contrasting domestic proposal, the Online Safety Bill (“OSB”), which touches on many of the same aspects as the DSA.

The DSA is intended to set a new standard for greater accountability of online platforms in relation to illegal or potentially harmful online content, and is due to take effect in 2024.

However, the DSA primarily applies to intermediary services (internet access providers, caching services and hosting services) that store information provided by, and at the request of, a user. Generative AI tools themselves generate the content in question: the software is not hosting content created by users, and the DSA’s provisions on intermediary liability are ill-suited to deal with the resulting harms. This may leave room for the EU to legislate (or adapt existing legislation) to capture the generation of harmful content using AI tools.

Data Protection

The draft EU AI Regulation will also overlap with the protections offered by the General Data Protection Regulation (“GDPR”). Our next blog will delve further into the applicability of the GDPR and privacy regulation to AI tools.

Author(s)/Speaker(s): JJ Shaw, Jordan Quartey