European politicians want to expand the EU AI Act to regulate general-purpose AI such as ChatGPT and image generators like Midjourney. These tools weren’t included in the original version of the act, which is currently being debated by lawmakers, but the technology has progressed faster than expected. The move follows efforts in the US and China to regulate AI at the developer, rather than user, level.
ChatGPT launched in November 2022 and its success led companies like Google and Salesforce to change direction. (Photo: Ascannio/Shutterstock)
Since its launch in November 2022, OpenAI’s chatbot ChatGPT has become one of the fastest-growing apps in history, prompting companies like Google, Microsoft and Salesforce to update their software and business plans to incorporate generative AI.
A report by Goldman Sachs suggests this technology could affect 300 million jobs across every sector of the economy, but could also drive a global productivity boom. This, combined with the risk of misinformation from generated content, has led to calls for stricter regulation.
A group of 12 European Parliament politicians are working on updating the EU AI Act, details of which are currently being finalised, to include a “set of preliminary rules for the development and deployment of powerful General Purpose AI systems” that can be adapted easily for a range of purposes.
Companies like Microsoft, which has a $10bn investment in OpenAI and utilises its technology across its product suite through the Copilot brand, say general AI tools should be regulated based on end-use rather than at the development level. In contrast, China recently unveiled a set of generative AI regulations that place the burden of accuracy and trust on the developer of the AI model.
The EU politicians have written an open letter to European Commission president Ursula von der Leyen and US president Joe Biden calling for a global summit on the governance of general AI, suggesting a need for a joined-up, human-centric and safe approach.
EU AI approach: need to ‘pause’ and reflect on development
“The recent advent of and widespread public access to powerful AI, alongside the exponential performance improvements over the last year of AI trained to generate complex content, has prompted us to pause and reflect on our work” in regulating AI, the letter states. The current act focuses on high-risk use cases such as medicine and law, but doesn’t consider the implications of tools with more than one purpose, such as a large language model.
It comes as a number of EU countries have opened investigations into OpenAI’s ChatGPT over data protection concerns. Italy recently banned the tool, barring OpenAI from processing Italian user data until it complies with GDPR, while in the US the Federal Trade Commission (FTC) has warned it will pursue any company using AI to violate laws against discrimination.
Lina Khan, FTC chair, warned against using AI tools in this way during a congressional hearing yesterday, alongside commissioners Rebecca Slaughter and Alvaro Bedoya. “It’s not okay to say that your algorithm is a black box and you can’t explain it,” declared Bedoya, explaining that companies couldn’t hide behind an algorithm or AI to violate laws.
The UK is taking a more open, innovation-friendly approach to AI regulation, focusing on end-use and putting guidelines in the hands of individual industry regulators, rather than having a single, overarching AI act. This runs counter to the stricter rules being considered elsewhere.
The open letter from EU lawmakers follows a similar open letter from the Future of Life Institute published in March, and signed by the likes of Elon Musk and Steve Wozniak, which called for a pause on the development of large language models until ethical standards could be established.
Not all MEPs agreed with the Future of Life Institute letter, with some describing it as “alarmist” while conceding it made good points; the signatories of the EU lawmakers’ letter said they agreed with “the letter’s core message” if not its entirety. In their own open letter, which demands that democratic and “non-democratic” countries alike reflect on the potential risks posed by powerful AI, they added that “we see the need for significant political action”.