The new EU regulation on artificial intelligence

‘AI system’ = any software?

28 September 2021

The recently proposed EU regulation on artificial intelligence (the AI Act) will prohibit the deployment of and regulate the use, provision, import and distribution of certain classes of ‘artificial intelligence system (AI system)’. However, the proposed definition of AI system could capture almost any software used by businesses, even if it does not involve any recognisable form of artificial intelligence.

In addition, the AI Act defines some of the regulated classes of 'AI system' in very broad terms. As a result, the AI Act will impose new compliance burdens on a wide range of EU businesses and, because of its extraterritorial application, many businesses outside the EU.

The AI Act defines an AI system as any:

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The annexed list of techniques and approaches covers machine learning approaches, logic- and knowledge-based approaches, statistical approaches, Bayesian estimation and search and optimisation methods. The Commission will be given the power to amend the annexed list of techniques and approaches to reflect market and technological developments.

This definition could capture almost any business software, even if it does not involve any recognisable form of artificial intelligence. For example, almost any spreadsheet or database tool could be regarded as using a logic-based approach or search method to generate outputs which meet human-defined objectives and influence decisions taken by its users.
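To illustrate the breadth of the definition, consider a trivial rule-based script (a hypothetical sketch; the function name, fields and thresholds are invented for illustration). It involves no machine learning at all, only a hard-coded logical rule, yet it arguably uses a 'logic-based approach' to generate recommendations influencing decisions for a human-defined objective, and so could fall within the literal wording:

```python
# Hypothetical illustration only: a hard-coded logical rule, with no
# recognisable artificial intelligence, that generates a recommendation
# influencing a hiring decision -- arguably an "output ... influencing
# the environments they interact with" under the proposed definition.

def screen_candidate(years_experience: int, has_degree: bool) -> str:
    """Return a shortlisting recommendation based on a fixed logical rule."""
    if years_experience >= 3 and has_degree:
        return "shortlist"
    return "reject"

candidates = [
    {"name": "A", "years_experience": 5, "has_degree": True},
    {"name": "B", "years_experience": 1, "has_degree": False},
]

for c in candidates:
    print(c["name"], screen_candidate(c["years_experience"], c["has_degree"]))
```

If a script like this were used for recruitment, it could even fall within one of the 'high-risk' classes discussed below, despite involving nothing a computer scientist would call artificial intelligence.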

Even if the listed techniques and approaches are computer-science ‘terms of art’ which limit the scope of the definition, it may be difficult to determine their meaning with legal certainty in a rapidly evolving field: the Commission’s own impact assessment acknowledges that definitions of artificial intelligence are highly contested.

Also, users, importers and distributors of third-party software – and firms treated as providers of software because they commission or modify third-party software – may not have sufficient information to determine whether the software was developed using the listed techniques or approaches. In any event, it seems likely that new software will increasingly use techniques and approaches that are associated with recognised artificial intelligence technologies.

Therefore, firms may have to assume that a very wide range of software potentially falls within the definition and look to other provisions of the AI Act to determine whether software is prohibited or subject to regulation.

The AI Act will prohibit the deployment of limited classes of AI system and will regulate the use, provision, import and distribution of other classes of 'AI system' deemed to be 'high-risk'. However, the classes of 'high-risk AI system' are also defined in very broad terms. For example, the AI Act would treat as 'high-risk' all 'AI systems' that are intended to be used for:

  • recruitment or selection of natural persons (including for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests);
  • making decisions on promotion and termination of work-related contractual relationships;
  • task allocation; and
  • monitoring and evaluating performance and behaviour of persons in work-related contractual relationships.

This broad definition, combined with the broad definition of 'AI system', means that a wide range of EU businesses will have to comply with at least some obligations under the AI Act simply because they use software tools to manage their human resources. In many cases, those businesses may even be treated as providers of a 'high-risk AI system', subject to more onerous obligations under the AI Act, because, for example, they create specific tools for these purposes using generic products (such as database or spreadsheet software). Software providers will be required to:

  • establish prescribed systems for risk management, quality management and post-market monitoring;
  • ensure that the design of the software meets prescribed standards (eg, to enable effective human oversight);
  • prepare prescribed technical documentation;
  • assess the software’s conformity with the AI Act before use or distribution;
  • make a declaration of EU conformity; and
  • register the software on a publicly available Commission database.

In addition, non-EU businesses will have to comply with obligations under the AI Act if the output of their 'high-risk' software is used in the EU or if they provide that software to others for use in the EU. This will create challenges for international businesses with complex group structures where EU and non-EU group companies may be subject to obligations under the AI Act in different capacities because they participate in the procurement, development, maintenance and use of 'high-risk' software and its outputs.

Other classes of business software potentially treated as 'high-risk AI systems' include software used for biometric identification and categorisation, and software used for the credit evaluation of individuals in 'essential' private services (as well as 'AI systems' intended to be used as safety components of products that are subject to third-party conformity assessments under EU law).

The AI Act is at an early stage of the legislative process and the European Parliament and the Council may amend the proposal before it is finally adopted, but there have already been calls for changes that would extend the obligations of businesses under the regulation. The legislation could become law towards the end of 2022, and businesses would then have to comply with the new requirements two years later.

However, businesses will need to develop the procedures, systems and controls needed to ensure compliance well in advance of that date. When doing so, they will need to take into account how the new rules will interact with other existing and planned sectoral and cross-sectoral regulatory requirements, including their obligations under the EU General Data Protection Regulation. Businesses will face fines of up to 6% of global turnover for contraventions of the AI Act, so implementing governance and compliance arrangements for artificial intelligence should be a priority for boards.

For more detailed discussion of the AI Act and other international developments on the regulation of artificial intelligence, see our briefings on the future of AI regulation in Europe and its global impact (May 2021) and the impact of the new EU AI regulation on financial sector firms (September 2021).