UK data watchdog the Information Commissioner’s Office (ICO) has warned businesses deploying and developing generative AI systems like ChatGPT to ensure that protecting customer information is central to their plans. The advice comes as more European countries consider whether to ban ChatGPT while its publisher OpenAI answers questions about how it collects and processes data.
In a blog post published on Monday, Stephen Almond, the ICO’s director of technology and innovation, set out a list of eight questions businesses should ask themselves before incorporating AI into workflows where customer data is involved.
“It is important to take a step back and reflect on how personal data is being used by a technology that has made its own CEO ‘a bit scared’,” Almond wrote, referring to comments from OpenAI CEO Sam Altman about his own company’s systems.
He continued: “It doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers through poor use of generative AI. But while the technology is novel, the principles of data protection law remain the same – and there is a clear roadmap for organisations to innovate in a way that respects people’s privacy.”
Generative AI has enjoyed a boom in popularity since the launch of ChatGPT, OpenAI’s powerful natural-language chatbot, which now runs on its recently released GPT-4 large language model (LLM). Microsoft has been incorporating the technology, which can answer questions with detailed and normally accurate prose, into its Office 365 suite, while other companies such as Google and Salesforce have been queuing up to launch their own AI-powered productivity tools based on LLMs.
How should businesses approach generative AI to safeguard data?
However, a backlash has already started against ChatGPT. On Friday Tech Monitor reported that Italy had blocked the chatbot from being used until OpenAI can guarantee that the way data on Italian citizens is collected and stored is compatible with the EU’s GDPR.
Italy’s data authority, Garante Privacy (GPDP), said OpenAI offers a “lack of information to users and all interested parties” over what data is collected, and that it lacks a legal basis to justify the collection and storage of the personal data used to train the algorithm and models that power ChatGPT.
Almond said that “organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach”.
The “data protection by design and default” approach is part of UK GDPR, and mandates businesses to “integrate or ‘bake in’ data protection into your processing activities and business practices, from the design stage right through the lifecycle”.
Almond added: “This isn’t optional – if you’re processing personal data, it’s the law. Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources.”
The blog goes on to list eight points organisations must consider should they wish to use generative AI or build their own model, covering transparency, unnecessary data processing and the impact of using AI in automated decision-making.
It also encourages tech leaders using generative AI to consider their role as a data controller. “If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor,” Almond says.
European countries consider ChatGPT bans
The ICO advice is in line with the UK’s general approach to regulating ChatGPT and other AI systems. Last week the government published a white paper setting out a light-touch, pro-innovation approach to AI, and said it had no plans to launch a dedicated regulator. But other European countries are considering whether to follow Italy’s lead and ban the chatbot.
France and Ireland’s privacy regulators have contacted GPDP to find out more about the basis for Italy’s ban, Reuters reported on Monday. “We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” a spokesperson for Ireland’s Data Protection Commissioner said.
Meanwhile, Germany’s data commissioner, Ulrich Kelber, told the Handelsblatt newspaper that his country could instigate a ban similar to Italy’s.
Potential privacy violations by generative AI are “just a tip of the iceberg of rapidly unfolding legal troubles,” according to Dr Ilia Kolochenko, founder of penetration testing platform ImmuniWeb and a member of the Europol Data Protection Experts Network.
“After the pompous launch of ChatGPT last year, companies of all sizes, online libraries and even individuals – whose online content could, or had been, used without permission for training of generative AI – started updating terms of use of their websites to expressly prohibit collecting or using their online content for AI training,” Kolochenko said.
“Even individual software developers are now incorporating similar provisions to their software licenses when distributing their open-sourced tools, restricting tech giants from stealthily using their source code for generative AI training, without paying the authors a dime.”
He added: “Contrasted to contemporary privacy legislation that currently has no clear answer whether and to what extent generative AI infringes privacy laws, website terms of service and software licenses fall under the well-established body of contract law, having an abundance of case law in most countries.
“In jurisdictions where liquidated damages in contract are permitted and enforceable, violations of website’s terms of use may trigger harsh financial consequences in addition to injunctions and other legal remedies for breach of contract, which may eventually paralyse AI vendors.”
Read more: ChatGPT is giving the rest of the world AI FOMO