
ChatGPT Violates EU Privacy Rules According to Italian Watchdog

OpenAI was thrust into the limelight last year thanks to the popularity of its generative AI chatbot, ChatGPT. While fame is expected to bring both praise and scrutiny, for OpenAI the balance has tipped towards the latter, potentially at a cost of either €20m or 4 per cent of its global annual turnover (whichever is higher).

When Italy’s data protection authority (the Garante) imposed a temporary ban on ChatGPT in March 2023, it was clear a problem was brewing, though the extent of it was unclear until now. The Garante has revealed, though not yet in full, that OpenAI has violated the EU General Data Protection Regulation (GDPR), likely because ChatGPT was trained by ingesting masses of data scraped from the internet. More information on the investigation is outlined here.

So while you can ask ChatGPT for details of an individual who is not in the public eye, and it will tell you it “cannot reveal or provide access to personal data about individuals,” the issue for the Garante is that the internet stores individuals’ personal data, which ChatGPT can use as training data. To use this personal data to train the platform, OpenAI needs a GDPR-compliant legal basis for doing so, which the Garante disputes it has. OpenAI has 30 days to respond to the Garante’s notice with its defence. If it cannot provide a satisfactory defence, it could face fines at the top end of the scale: €20 million, or up to 4 per cent of global annual turnover.

While we await the full details, there are key takeaways here that apply to all organisations, not just those in the artificial intelligence space. Namely, it is crucial that all organisations understand how, why and for what purposes they collect and use personal data. From that, it is crucial to have a robust legal basis for processing the data: it is not enough to loosely assume a contractual or legitimate-interest basis; the basis needs to be founded in fact.

If you are a company that uses or develops AI models, there is uncertainty, but also a level of comfort. While we come to grips with the fast-changing world of AI, grapple with its many unknowns, and anticipate new AI-related legislation such as the EU AI Act, we already know the EU GDPR, how it applies and how it protects data subjects. It is difficult to ascertain whether the hard-line approach towards OpenAI is intended to make an example of it, or whether it is indicative of the wider approach towards AI companies. In any event, the warning signal is clear: reviewing processes and practices thoroughly through an EU GDPR (and UK GDPR) lens is paramount.

This post was co-written with London IP and Data Privacy Trainee, Hayley Rabet.


artificial intelligence, privacy data and cybersecurity