Italy’s data protection authority, Garante, has given OpenAI a “to-do list” of measures it must complete by 30 April to be compliant with GDPR in the country. These include publishing information on the logic of the data processing required to make ChatGPT work, ensuring users are over the age of 18, and creating tools that allow non-users to object to the processing of their personal data. It comes the same week the Biden administration in the US announced plans for general AI regulations.
OpenAI was forced to block ChatGPT in Italy after the data watchdog issued a GDPR breach notice. (Photo by Ascannio / Shutterstock.com)
Garante first took action to block ChatGPT in Italy at the end of last month, issuing an order against OpenAI on suspicion that the chatbot breaches the EU’s GDPR legislation and barring the company from processing local data. The move was seen by some experts as a “test case” that could be followed by other data regulators across the EU.
OpenAI CEO Sam Altman tweeted at the time that the company would “of course defer to the Italian government” and cease offering the service in Italy, adding that “we think we are following all privacy laws”. OpenAI placed a geoblocking exclusion on Italian IP addresses and stopped offering its Plus subscription in the country. It has since published details of the “safety measures” it applies when training the tool.
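In outline, geoblocking of this kind is a simple check: resolve each incoming IP address to a country and refuse service to blocked regions. OpenAI has not published its implementation, so the sketch below is purely illustrative, with a stubbed-out lookup standing in for a real GeoIP database such as MaxMind’s GeoLite2.

```python
# Illustrative sketch of IP-based geoblocking; OpenAI's actual
# implementation has not been published.

BLOCKED_COUNTRIES = {"IT"}  # ISO 3166-1 alpha-2 codes to refuse

def country_for_ip(ip_address: str) -> str:
    """Hypothetical GeoIP lookup. In practice this would query a
    database such as MaxMind GeoLite2 to map an IP to a country."""
    return "IT" if ip_address.startswith("151.") else "US"  # stub only

def handle_request(ip_address: str) -> tuple[int, str]:
    """Return an HTTP status and body, refusing geoblocked regions."""
    if country_for_ip(ip_address) in BLOCKED_COUNTRIES:
        # 451 Unavailable For Legal Reasons suits a regulatory block
        return 451, "Service unavailable in your region."
    return 200, "Welcome to the service."

print(handle_request("151.11.22.33"))  # -> (451, "Service unavailable...")
```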
To comply with the order, which could be mirrored across the EU and, if ignored, lead to fines of up to 4% of OpenAI‘s global turnover, the company will have to become far more open about its data collection and processing practices.
“OpenAI will have to draft and make available, on its website, an information notice describing the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects (users and non-users),” the regulator declared. “The information notice will have to be easily accessible and placed in such a way as to be read before signing up for the service.”
As well as being shown this notice when signing up for the first time, existing users will have to be presented with it when accessing the service once it is reactivated.
OpenAI will also need to deploy age-gating technology to ensure no Italian users are under the age of 13, and that those aged 13 to 18 have parental consent. Initially this will take the form of a notice asking users to declare their age before accessing the service, but the system must also filter out existing users who are under 13, or who are between 13 and 18 without parental authorisation. The company has until 31 May to submit a plan for implementing such a system and must have it in place by the end of September.
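Garante’s order does not prescribe how the age gate should work, but the logic it describes reduces to a check on a self-declared birth date plus a parental-consent flag. A minimal sketch, with hypothetical field names, might look like this:

```python
from datetime import date

# Illustrative age gate following the logic Garante describes:
# under-13s are refused outright, 13-17s need parental consent.
# Field names are hypothetical; the order does not prescribe them.

def age_on(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def may_access(birth_date: date, has_parental_consent: bool,
               today: date | None = None) -> bool:
    age = age_on(birth_date, today or date.today())
    if age < 13:
        return False            # excluded entirely
    if age < 18:
        return has_parental_consent  # 13-17: consent required
    return True                 # adults pass

print(may_access(date(2012, 6, 1), False))  # under 13 -> False
print(may_access(date(2008, 6, 1), True))   # teenager with consent -> True
```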
The wider legal basis
The bigger issue is the legal basis OpenAI relies on for processing user data when training the models that run ChatGPT, namely GPT-3.5 and GPT-4. These are trained on trillions of pieces of data drawn from sources such as Wikipedia and book repositories, as well as from scraping the wider internet. This last source could see personal information caught up in the dataset and potentially exposed during a “conversation” with the AI.
“Regarding the legal basis of the processing of users’ data for training algorithms, the Italian SA ordered OpenAI to remove all references to contractual performance and to rely – in line with the accountability principle – on either consent or legitimate interest as the applicable legal basis,” the regulator said. “This will be without prejudice to the exercise of the SA’s investigatory and enforcement powers in this respect.”
OpenAI will also have to provide a mechanism through which users and non-users whose data is held in the system, whether in the training data or stored by OpenAI, can have that information corrected if it is shown to be wrong, or erased completely. “OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithm,” the watchdog declared.
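Here too, the watchdog specifies the rights to be offered rather than the tooling, so any implementation detail is speculative. A minimal sketch of an intake mechanism covering the three rights mentioned (objection, rectification and erasure), with all names hypothetical, might look like the following:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a data-subject request intake covering the
# rights the order mentions: objection, rectification and erasure.
# Nothing here reflects OpenAI's actual tooling.

VALID_TYPES = {"object", "rectify", "erase"}

@dataclass
class SubjectRequest:
    email: str              # contact point for the data subject
    request_type: str       # "object", "rectify" or "erase"
    details: str            # e.g. the incorrect output to rectify
    received_at: datetime = field(default_factory=datetime.utcnow)

def submit_request(queue: list, email: str,
                   request_type: str, details: str) -> SubjectRequest:
    """Validate and enqueue a request for downstream human review."""
    if request_type not in VALID_TYPES:
        raise ValueError(f"unknown request type: {request_type}")
    req = SubjectRequest(email, request_type, details)
    queue.append(req)
    return req

queue: list[SubjectRequest] = []
submit_request(queue, "user@example.it", "erase", "Remove my personal data")
print(len(queue))  # -> 1
```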
Even if OpenAI complies with the entire checklist, the company is not “off the hook”. The measures are simply what is required for it to be allowed to process Italian user data and reactivate the service. Garante says it is continuing its investigation “to establish possible infringements of the legislation”, which could lead to fines or further measures.
Claire Tratchet, cybersecurity expert and CFO of bug bounty program YesWeHack, told Tech Monitor that there is a global debate around the regulation of generative AI, which swings between stimulating innovation and mitigating privacy concerns. “Because of this, we’re seeing various countries approach the situation differently,” she said. “For instance, Italy has prioritised the latter; they are so focused on safeguarding that they have implemented an entire ban on ChatGPT.
“Whereas the US are introducing various forms of legislation and regulation. And then the UK has introduced a framework which has been regarded as ‘light touch’ compared to other countries, with the hope that this increases investment in the sector at a time when the economy needs it. While all these views are reasonable in their own way, realistic regulation cannot be made overnight.”
She said CEOs and governments need to ensure safeguarding principles are in place before deploying the technology. “The biggest risk of generative AI is that it is so fast-paced that it becomes so difficult to keep up with the innovation. This is why implementing some form of risk management and security in a coordinated manner is so important. Both the government and the CEOs of AI companies need to track the safeguarding of AI to ensure that users do not run into problems with the software.”