
EU Leading the Way With Passing of the Artificial Intelligence Act 

As if implementing rules to make USB-C ports mandatory for electronic devices was not enough, yesterday the European Union approved the world’s first major piece of legislation regulating the use of AI (the EU AI Act). With the US federal bill still uncertain and the UK’s Artificial Intelligence (Regulation) Bill only at its second reading, the EU has set the global standard for AI regulation.

As reported by CNBC, the AI Act has been in the making since 2021, long before the launch of ChatGPT in late 2022. The legislation is expected to come into force in May 2024, with compliance required (depending on the type of AI system) as soon as six months later and, for the majority of general-purpose AI systems, 12 months after entry into force.

The EU AI Act classifies AI systems into four risk-based categories: 1) unacceptable risk, 2) high risk, 3) low/limited risk, and 4) minimal risk. The level of regulation ranges from an outright prohibition of AI systems posing unacceptable risks to a voluntary code of conduct that stakeholders with minimal-risk AI systems can sign up to.

The scope of the EU AI Act covers not just those developing AI platforms but a wide range of stakeholders up and down the chain, from developers all the way through to distributors.

The EU AI Act sets out clear-cut requirements for risk management systems, technical documentation and human oversight, as well as a certification standard of compliance for high-risk AI systems, amongst other things. It also requires quality management systems covering compliance, quality control and quality assurance to ensure the robustness of AI systems deployed in the EU. This is a welcome set of rules for ensuring the reliability and accuracy of AI systems within the EU.

Just like GDPR, the sanctions that come with EU regulations are serious: breaches of prohibited AI practices can attract fines of up to €35m or 7 per cent of annual worldwide turnover, while breaches of other EU AI Act obligations can attract fines of up to €15m or 3 per cent. Even false or incomplete statements to authorities could land you a fine of up to €7.5m or 1 per cent of your annual worldwide turnover.

If you are a US business deploying an AI system, you may think you are off the hook, given this is European legislation. Not necessarily! If your AI-based products or services are available to anyone located in the EU, you will still have to comply with the EU AI Act.

With a rapid ramp-up to the implementation deadlines, there is no doubt that many businesses will need to start reviewing the AI systems they use to ensure compliance, and to start putting in place policy and procedural frameworks to avoid the serious and high-profile fines that come with breaches of the EU AI Act.

This post was co-written by Sarah Simpson and Larry Wong. For more information on this topic, or if you need help understanding your obligations under the EU AI Act, please contact us.

Tags

artificial intelligence, intellectual property, privacy data and cybersecurity, regulatory