22 January 2024

Preparing for the EU AI Act: compliance and insurance guidance for AI providers and deployers

Written By Bob Williams in Insurance

The EU’s AI Act is set to introduce new responsibilities for businesses that develop and deploy artificial intelligence (AI) systems. Below, we have outlined some of the potential implications for organisations impacted by the Act, including key questions that insurers are likely to ask when assessing AI-related risks.

Background

The EU AI Act was proposed in April 2021 by the European Commission, and is intended to promote better conditions for the development and use of AI within the EU. Specifically, the Act seeks to ensure that AI systems are “overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.”

On 14 June 2023, members of the European Parliament (MEPs) adopted their draft negotiating mandate on the AI Act. Talks are currently underway with the EU Council and European Commission on the final form of the law, with the aim of reaching agreement by the end of 2023; the Act would then enter into force, with a two-year implementation period before its provisions apply. Much like the EU's General Data Protection Regulation (GDPR) in 2018, the AI Act could become a global standard.

How does the EU define AI?

Under the European Parliament’s draft mandate, the Act defines an ‘artificial intelligence system’ as:

...a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.

Crucially, the Act seeks to base its definition on the key characteristics of AI, including learning, reasoning, or modelling capabilities, to distinguish it from simpler software systems and programming approaches.

By aligning this definition with those of international organisations working on AI, including the Organisation for Economic Co-operation and Development (OECD), the hope is that the Act will establish legal certainty and encourage wide acceptance, while retaining the flexibility to accommodate the rapid technological developments in the field.

How will the risk posed by AI be classified?

There is concern that AI systems could threaten fundamental rights and end-user safety. To address this, the Act categorises AI practices according to four levels of risk, each carrying a corresponding level of legal intervention:

Unacceptable-risk AI systems that represent a clear threat to people's safety or rights, which would be banned. This category includes practices with the potential to manipulate end-users, either through subliminal techniques or by exploiting the vulnerabilities of specific groups (e.g., children, persons with disabilities), and practices which subject individuals to profiling or social scoring. Also included is the use of "real-time" remote biometric identification in public spaces for the purpose of law enforcement. Such systems will be prohibited, and non-compliance is punishable by a fine of up to EUR 30 million or, for companies, up to 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.

High-risk AI systems that create adverse impacts on people's safety or rights, which would face stringent regulations. Classification as high risk depends both on how the AI functions and on the purpose for which it is used. This includes systems used in critical infrastructure, educational or vocational training, safety components, employment, essential services, law enforcement, migration, and justice, or systems intended to influence voting behaviour.

Limited-risk AI systems that interact with humans, which would face specific transparency requirements. This category includes chatbots and deepfakes.

Low- or minimal-risk AI systems, which would face no regulations. This category accounts for most AI systems in current use, including AI-enabled video games or spam filters.

Crucially, the Commission may expand its list of high-risk AI systems, to ensure that the regulation can be adjusted to emerging uses and applications of AI.
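
To make the tiering concrete, the sketch below encodes the four tiers in Python. It is purely illustrative: the names, obligation summaries, and example systems are shorthand drawn from the categories above, not terms from the legal text.

    from enum import Enum

    class RiskTier(Enum):
        """Hypothetical shorthand for the AI Act's four risk tiers."""
        UNACCEPTABLE = "banned outright"
        HIGH = "stringent obligations: conformity assessment, logging, monitoring"
        LIMITED = "transparency requirements only"
        MINIMAL = "no new obligations"

    # Illustrative examples drawn from the article's own categories.
    EXAMPLES = {
        RiskTier.UNACCEPTABLE: ["social scoring", "real-time remote biometric ID"],
        RiskTier.HIGH: ["recruitment screening", "critical-infrastructure control"],
        RiskTier.LIMITED: ["customer-service chatbot", "deepfake generator"],
        RiskTier.MINIMAL: ["spam filter", "AI-enabled video game"],
    }

    for tier, systems in EXAMPLES.items():
        print(f"{tier.name}: {tier.value} (e.g. {', '.join(systems)})")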

Who is affected by the Act?

The AI Act sets out specific responsibilities for ‘providers’ and ‘deployers’ of AI systems. ‘Providers’ are any natural or legal person, public authority, agency, or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. ‘Deployers’, meanwhile, refers to any of the above using an AI system under its authority (except in the course of personal non-professional activities). The term ‘deployer’ has replaced ‘user’ from the EU Commission’s original proposal.

Parties affected by the AI Act fall into three categories:

  • Providers of AI systems established within the EU or a third country
  • Deployers of AI systems located in the EU
  • Providers and deployers of AI systems located in a third country, where the output produced by the systems is intended to be used in the EU

There are exceptions to the above, however. The draft regulation does not apply to AI systems developed or used exclusively for military purposes, or to public authorities in a third country, international organisations, or authorities using AI systems in the framework of international agreements for law enforcement and judicial cooperation.
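
As a rough illustration of that territorial scope, the following Python sketch encodes the in-scope categories and the military exclusion. The function name, parameters, and simplifications (it ignores, for instance, the carve-out for third-country authorities acting under international law-enforcement agreements) are assumptions for illustration, not legal advice.

    def likely_in_scope(role: str,
                        established_in_eu: bool,
                        output_used_in_eu: bool,
                        exclusively_military: bool = False) -> bool:
        """First-pass scope check based on the draft Act's categories."""
        # Systems developed or used exclusively for military purposes are excluded.
        if exclusively_military:
            return False
        # Providers are caught whether established in the EU or a third country;
        # deployers are caught if located in the EU, or in a third country where
        # the system's output is intended to be used in the EU.
        if role in ("provider", "deployer"):
            return established_in_eu or output_used_in_eu
        return False

    # Example: a US-based provider whose system's output is used in the EU.
    assert likely_in_scope("provider", established_in_eu=False, output_used_in_eu=True)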

Compliance checklist

For providers and deployers of AI systems, the 'high-risk' designation carries the greatest regulatory burden. The Act prescribes specific responsibilities for each party, which must be fulfilled to ensure compliance.

Providers of high-risk AI systems must:

  • Have in place a sound quality management system
  • Establish and document a robust post-market monitoring system
  • Draw up and keep relevant technical documentation
  • Ensure the system undergoes a conformity assessment procedure, prior to its placing on the market
  • When under their control, keep the logs automatically generated by their AI system that are required for ensuring compliance with the Act, for a period of at least 6 months
  • Where they have reason to consider that a system does not conform with the Act, take the necessary corrective actions to bring it into conformity, or withdraw, disable, or recall it as appropriate
  • Inform the relevant national competent authorities (NCAs), distributors, importers, and deployers of any risks relating to the AI system of which they are aware, as well as any corrective actions taken

Meanwhile, deployers of high-risk AI systems must:

  • Take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use
  • To the extent that they exercise control over the high-risk AI system, implement human oversight of the high-risk AI system
  • To the extent that they exercise control over input data, ensure that input data is relevant to the intended purpose of the AI system
  • Inform the provider or distributor of any risks, serious incidents or malfunctioning involved with the use of AI systems, and suspend use of the system if needed
  • Keep the logs automatically generated by the AI system, to the extent such logs are under their control and required for ensuring compliance with the Act, for a period of at least 6 months (see the retention sketch after this list)
  • Where applicable, carry out a data protection impact assessment based on the information provided by the provider of the AI system
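
The six-month log-retention duty appears in both checklists above, so any purge job needs a floor it cannot drop below. A minimal Python sketch follows; the directory layout, the *.log naming, and the use of file modification time as the log timestamp are assumptions for illustration.

    import datetime as dt
    from pathlib import Path

    RETENTION_FLOOR = dt.timedelta(days=183)  # at least 6 months

    def purge_expired_logs(log_dir: Path,
                           retention: dt.timedelta = RETENTION_FLOOR) -> list:
        """Delete log files older than the retention window; return what was removed."""
        if retention < RETENTION_FLOOR:
            raise ValueError("retention below the draft Act's six-month minimum")
        cutoff = dt.datetime.now(dt.timezone.utc) - retention
        removed = []
        for f in sorted(log_dir.glob("*.log")):
            mtime = dt.datetime.fromtimestamp(f.stat().st_mtime, dt.timezone.utc)
            if mtime < cutoff:
                f.unlink()
                removed.append(f)
        return removed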

AI systems which are designed to interact with individuals must also fulfil specific transparency obligations:

  • Flag to end-users that they are interacting with an AI system
  • Where appropriate, disclose which functions are AI-enabled, whether there is human oversight of the system, who is responsible for the decision-making process, as well as end-users' right to object to the system being applied to them
  • For any emotion recognition or biometric categorisation system, obtain consent prior to the processing of biometric or personal data
  • Label any artificially generated or manipulated content as inauthentic and, where possible, disclose the name of the person who generated or manipulated it
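
One way to satisfy the first and last of these obligations is to attach disclosure metadata to everything an interactive system emits. The envelope below is a hypothetical sketch: the LabelledOutput class and its field names are inventions for illustration, not terminology from the Act.

    import datetime as dt
    from dataclasses import dataclass, field

    @dataclass
    class LabelledOutput:
        """Hypothetical envelope marking content as AI-generated."""
        text: str
        ai_generated: bool = True
        generator: str = "example-chatbot-v1"  # who or what produced the content
        human_oversight: bool = True
        produced_at: str = field(
            default_factory=lambda: dt.datetime.now(dt.timezone.utc).isoformat())

    def respond(user_message: str) -> LabelledOutput:
        # Flag up front that the end-user is talking to an AI system.
        disclosure = "You are interacting with an automated AI assistant."
        reply = f"{disclosure}\n\nEcho: {user_message}"  # placeholder for a model call
        return LabelledOutput(text=reply)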

What about generative AI?

MEPs also included regulations for foundation models, defined as an AI model that is "trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks." This category includes generative foundation models, such as ChatGPT.

As with AI systems, the Act specifies a set of obligations for providers of foundation models. Specifically, providers must:

  • Design models to mitigate against foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law
  • Only process datasets that are subject to appropriate data governance measures
  • Design and develop the model using standards to reduce energy use, resource use, waste, and to increase efficiency
  • Draw up technical documentation and instructions for use
  • Establish a sound quality management system to ensure compliance with the Act

Generative foundation models also face additional transparency requirements. Specifically, providers of such models must:

  • Disclose that content was generated by AI
  • Design models to prevent the generation of illegal content
  • Publish summaries of copyrighted data used for training
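
The last requirement implies that training-corpus records carry provenance metadata that can be rolled up for publication. A minimal sketch, assuming each record is a dict with hypothetical 'source' and 'copyrighted' keys:

    from collections import Counter

    def copyright_summary(records: list) -> dict:
        """Aggregate copyrighted items per source for a publishable summary."""
        by_source = Counter(r["source"] for r in records if r.get("copyrighted"))
        return {
            "total_copyrighted_items": sum(by_source.values()),
            "by_source": dict(by_source.most_common()),
        }

    # Fabricated corpus metadata, for illustration only.
    corpus = [
        {"source": "news-archive", "copyrighted": True},
        {"source": "news-archive", "copyrighted": True},
        {"source": "public-domain-books", "copyrighted": False},
    ]
    print(copyright_summary(corpus))
    # {'total_copyrighted_items': 2, 'by_source': {'news-archive': 2}}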

How will insurers view AI-related risks?

Once the Act enters into force, businesses will have a transitional period of two years to comply. In advance of that date, insurers may begin refining their assessment of AI-related risks. Providers and deployers of AI systems should be prepared to answer questions from underwriters, such as:

  • What is the intent of the AI (e.g. streamlining services, advisory, increasing efficiency)?
  • What type of AI is it (e.g. generative, or background data and reporting)?
  • Where does it source its information (e.g. within a closed network, or from open-source data)?
  • How does it validate the data it has collected?
  • Where does the code come from/what is the intent of the code (e.g. is it generic or tailored to specific solutions, what are its limitations and oversights)?

For deployers of AI systems specifically, insurers may also seek clarity around the selection and onboarding of AI technologies. Deployers may be asked to demonstrate:

  • Compliance of chosen AI systems with the AI Act
  • That protocols are in place, prior to onboarding, to assess the compliance of those systems with the AI Act
  • Protocols to govern data inputs, suspension of the AI system, incident or malfunction reporting, and retention of automatic logs
  • Evidence of having carried out a data protection impact assessment

The AI Act is not the only attempt to regulate AI, however. Other jurisdictions are already pursuing differing approaches, which are likely to place different obligations upon AI providers and deployers. The implications of each of these regulations will need to be considered as part of businesses' risk mitigation strategies.

Disclaimer: The above text was correct at the time of writing. Discussions continue between the EU Commission, EU Parliament, and EU Council over the final form of the legislation, with further changes expected.
