
Could UK build a national large language AI model to power tools like ChatGPT?

2023-02-25

The UK urgently needs to develop its own artificial intelligence large language model (LLM) to allow its start-ups, scale-ups and enterprise companies to compete with rivals in China and the US on AI and data, a senior BT executive told MPs earlier today.

Large Language AI models are used to power services such as Microsoft’s new Bing search portal. (Photo by Rokas Tenys/Shutterstock)

Large language and other foundation AI models power tools like the ChatGPT chatbot from OpenAI, as well as image generators and other generative AI applications, including in the health and material sciences field. The largest models today are owned by big tech companies such as Google, Meta and Microsoft through its partnership with OpenAI, but they are also being developed at a national level in China and the US.

To better understand how the UK should regulate, promote and invest in artificial intelligence technology, the House of Commons Science and Technology select committee is holding an inquiry entitled Governance of artificial intelligence, and today heard from Microsoft, Google and BT executives, with BT's chief data and AI officer Adrian Joseph declaring that the UK is in an "AI arms race". He said the country could be left behind without the right investment and government direction.

"We have been in an arms race for a long time," he told MPs. "Big tech companies have been buying start-ups and investing in their own expertise for ten if not 20 years now." He added that the UK is competing not just with US companies but also with Chinese companies like Baidu, Tencent and Alibaba, which have the scale to roll out large models quickly.

Joseph, who also sits on the UK AI Council, said: "I strongly suggest the UK should have its own national investment in large language models and we think that is really important. There is a risk that we in the UK will lose out to the large tech companies and possibly China. There is a real risk that unless we leverage and invest, and encourage our start-up companies, we in the UK will be left behind with risks in cybersecurity, healthcare and everywhere else."

The UK is currently ranked third in the Global AI Index and the government has announced plans to capitalise on this momentum, turning the country into an AI global superpower within the decade. But Joseph warned that without proper investment that plan would be put at risk.

Need for home-grown UK large language model

A number of UK AI companies have been sold to US and European rivals in recent years, including DeepMind, the neural network research company founded out of University College London in 2010. It was acquired by Google in 2014 and has developed one of the largest large language models to date, called Gopher.

Dame Wendy Hall, regius professor of computer science at the University of Southampton, also appeared before the panel of MPs and seconded the urgent need for better sovereignty over large language models and AI technology, particularly when used on NHS data. "We are at the beginning of the beginning with AI. Even after all of the years it has been around, we need to keep our foot on the accelerator or risk falling behind," Dame Wendy said.


She urged the government to get behind proposals for a sovereign large language model and the competitive capacity necessary to make it a reality and accessible to academia and start-ups alike. “It needs the UK government to get behind it, not in terms of the money as the money is out there but as a facilitator,” she explained, warning that without government support the UK would fall behind and lose out like it has with the cloud, ceding control to large US companies.


"What we see with ChatGPT, and the biases and things it gets wrong, is based on searches across the internet," Dame Wendy continued. "When you think about taking generative AI and applying it to NHS data – data that we can trust – that is going to be incredibly powerful. Do we want to be reliant on technology from outside the UK for something that can be so powerful?"

The UK wouldn’t be the first country to consider the need for a sovereign large language model, particularly with regard to nationally sensitive or valuable data. Scale AI is a large language model designed for US national security, intelligence and operations used by the army, air force and contractors. China also has Wu Dao, a massive language model built by the government-backed Beijing Academy of AI.

“We have to be more ambitious rather than less. There is a sense of feeling we’ve done AI. This is just the very beginning of the beginning. We need to build on what we’ve done, make it better, world-leading and focus on sovereign capability,” Dame Wendy declared at the end of the session.

UK AI needs ‘light touch’ regulation

As well as a heavy focus on the need for a national large language model, the MPs also questioned the experts on the need for regulation of AI and how it should be approached, with all saying it needs to be focused on end use rather than development.

Hugh Milward, general manager for corporate, external and legal affairs at Microsoft UK, said AI is a general-purpose technology and that, from a regulatory perspective, it is best to focus on the final use case rather than how a model is built, trained and developed.

“If we regulate its use then that AI, in its use in the UK, has to abide by a set of principles,” he said. “If we have no way to regulate the development of the AI and its use in China, we can regulate how it is used in the UK. It allows us to worry less about where it is developed and worry more about how it is used irrespective of where it is developed.”

He gave the example of a dual-use technology like facial recognition. It could be used to recognise a cancer in a scan, find a missing child, identify a military target or by a despotic regime to find unlicensed citizens. “Those are the same technology used for very different purposes and if we restrict the technology itself we wouldn’t get the benefits from the cancer scans in order to solve for the problem with its use in a surveillance society,” he said.

James Gill, partner and co-head of law firm Lewis Silkin’s digital, commerce and creative team watched the session for Tech Monitor and said regulation in this space is “very much a movable feast”. He explained: “AI has huge potential as a problem-solving tool, including to address major socio-economic and environmental challenges. As ever with disruptive technology, the challenge will be to develop a regulatory position which provides sufficient protections for safe usage while not stymying progress and innovation.

“As the committee heard, the general regulatory environment for digital technology is much more developed now than at, say, the outset of the Web 2.0 revolution – and the law needs to keep abreast of technological advancements. Savvy businesses wishing to develop or deploy AI will now be planning ahead to understand the implications of the emerging regulatory framework.”

Read more: Microsoft takes ‘multi-billion dollar’ stake in ChatGPT maker OpenAI

Topics in this article: AI, Regulation
