The Chartered Governance Institute - Irish Region

Major Developments in the EU's Regulatory Landscape

On 11 May 2023, the latest compromise text of the AI Act was voted on by the leading parliamentary committees of the European Parliament.

They agreed to give the go-ahead to the Parliament's Compromise Text of the AI Act (the Compromise Text), allowing for plenary adoption within a matter of weeks.

Some of the more noteworthy elements deal with the newly added application of the proposed AI Act to foundation models (such as ChatGPT, GPT-4 and Llama).

Foundation Models

Foundation models are included in the Compromise Text and are defined as AI models that are trained on broad data at scale, designed for generality of output, and capable of being adapted to a wide range of distinctive tasks.

Copyright Implications

A hugely important development for copyright owners is that providers of generative AI foundation models (Providers), such as Midjourney, DALL-E and ChatGPT, must also:

  • comply with transparency obligations;
  • ensure safeguards against content generation in breach of EU law; and
  • document and make publicly available a detailed summary of the use of copyright-protected training data.

Foundation Model Obligations

Before releasing a foundation model, Providers must ensure compliance with the requirements of the Act and therefore must:

  • demonstrate risk identification, reduction, and mitigation in areas like health, safety, fundamental rights, environment, democracy, and rule of law throughout development using methods such as involving an independent expert;
  • use datasets with proper data governance measures, ensuring the examination of data sources for suitability, possible biases, and appropriate mitigation;
  • design and develop foundation models with appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity, assessed through methods such as independent expert involvement, documented analysis, and extensive testing during conceptualisation, design, and development;
  • create extensive technical documentation and instructions for use to enable downstream providers to comply with their obligations;
  • establish a quality management system to ensure and document compliance with these requirements, and register the foundation model in the EU database; and
  • keep technical documentation available for national competent authorities for ten years after their foundation models are released.

Regulatory obligations

Under the Compromise Text, providers of high-risk AI systems must ensure that their high-risk AI systems are compliant with the requirements set out in the AI Act before placing them on the EU market or putting them into service.

Providers must also ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware of the risk of automation bias and confirmation bias.

Providers will also be required to provide specifications for the input data, or any other relevant information about the datasets used, including their limitations and assumptions, taking into account the intended purpose and the reasonably foreseeable misuse of the AI system.

Contract Law

From a contract law perspective, the Compromise Text stipulates that a contractual term which has been unilaterally imposed by an organisation on an SME or start-up, and which concerns the supply of tools, services, components or processes used or integrated in a high-risk AI system, or the remedies for the breach or termination of related obligations, will not be binding on that SME or start-up if it is unfair. This could be an important point for commercial contracts lawyers drafting AI-related licensing contracts.

AI Impact Assessments

AI Impact Assessments are also covered in the Compromise Text. Such an assessment should include a clear outline of the intended purpose, the geographic and temporal scope of the system's use, and the categories of natural persons and groups likely to be affected.

AI Principles

Finally, like the GDPR, the Compromise Text introduces a set of "AI Principles". All operators will be required to make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles:

  • human agency and oversight
  • technical robustness and safety
  • privacy and data governance
  • transparency
  • diversity, non-discrimination and fairness
  • social and environmental well-being

Conclusion

While the AI Act is far from complete, with the Parliament's plenary vote due to take place shortly and the EU's trilogue process to commence thereafter, we are beginning to see the clear direction it is taking, particularly in relation to data governance and human oversight. MEPs are optimistically aiming to complete the trilogue process before the end of 2023, with the AI Act potentially being passed in December, after which an expected two-year transition period will follow.

Originally published by William Fry on 11 May 2023
