
Qualcomm moves AI image generation to the edge

2023-02-26

Chip giant Qualcomm has successfully ported and run the Stable Diffusion foundation model on a mobile phone for the first time, producing pictures of cats in jackets. The company says this is a major breakthrough in edge AI that will let companies cut cloud fees and improve security by bringing generative AI to edge computing.

Qualcomm says by running Stable Diffusion on an edge device such as a mobile phone, companies could save on cloud fees. (Photo: Qualcomm)

The model runs completely on the device, which significantly reduces runtime latency and power consumption, according to the Qualcomm AI research team. The neural network was trained at scale on a vast quantity of data, allowing users to generate photorealistic images from a single line or word of text – which also makes it power-hungry to run.

To get the model working on a Snapdragon 8 Gen 2 mobile platform, the company had to optimise the full stack: the application, every layer of the Stable Diffusion model, the algorithms, the software and the hardware. It did this through re-training and post-training quantisation, which significantly reduced the power, memory and energy requirements.

The team started from the FP32 version of Stable Diffusion on Hugging Face – the 32-bit single-precision floating-point model, a numeric format widely used in AI and in scientific computing where full double precision is not required – and converted it to the smaller, more manageable INT8 format, which represents values as 8-bit integers rather than floating-point numbers.
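
To make that conversion concrete, here is a minimal sketch of post-training INT8 quantisation applied to a single toy weight tensor. It is not Qualcomm's actual toolchain, which quantises the full model end to end; the symmetric per-tensor scale and the helper names here are illustrative assumptions.

```python
import numpy as np

def quantize_to_int8(weights_fp32: np.ndarray):
    """Map FP32 values to INT8 using a symmetric per-tensor scale (illustrative only)."""
    scale = np.abs(weights_fp32).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(weights_fp32 / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q_int8: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values so the rounding error can be measured."""
    return q_int8.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)  # stand-in for one layer's weights
q, scale = quantize_to_int8(weights)
print("storage: %d bytes -> %d bytes" % (weights.nbytes, q.nbytes))  # 4x smaller
print("max rounding error:", float(np.abs(weights - dequantize(q, scale)).max()))
```

Production pipelines typically calibrate scales per channel on representative data, and re-training (quantisation-aware training) is what recovers the accuracy lost to rounding.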

The re-training techniques had to be run across every component model that makes Stable Diffusion work, including the text encoder and the UNet. The port was also made possible in part by optimisations to the Qualcomm AI Engine and by the co-design and integration of hardware and software on the Hexagon processor. Snapdragon 8 Gen 2 also comes with micro tile inferencing, which lets large models run efficiently and suggests we could see more AI models running on edge devices in future.
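
For readers unfamiliar with how Stable Diffusion is put together, the sketch below uses the open-source Hugging Face diffusers library to load a pipeline and list those component networks; each is a separate model that has to be quantised. The checkpoint name is an illustrative assumption (any Stable Diffusion v1.x repository exposes the same components), and this is the ordinary desktop pipeline, not Qualcomm's on-device port.

```python
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint name is illustrative; any Stable Diffusion v1.x checkpoint is structured the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32
)

# "Stable Diffusion" is really several networks working together:
print(type(pipe.text_encoder).__name__)  # CLIP text encoder: turns the prompt into embeddings
print(type(pipe.unet).__name__)          # UNet: iteratively denoises the latent image
print(type(pipe.vae).__name__)           # VAE: decodes the final latent into pixels
```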

Reducing latency and cloud fees

“The result of this full-stack optimisation is running Stable Diffusion on a smartphone in under 15 seconds for 20 inference steps to generate a 512×512 pixel image — this is the fastest inference on a smartphone and comparable to cloud latency. User text input is completely unconstrained,” Qualcomm engineers wrote.
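
For comparison with those figures, the snippet below shows roughly the equivalent call on a desktop GPU using the open-source diffusers library, with the same parameters Qualcomm quotes: 20 inference steps and a 512×512 output. It is a reference sketch rather than the on-device stack, and the prompt and checkpoint are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

# Reference desktop run with the parameters quoted for the phone:
# 20 denoising steps, 512x512 output, free-form text prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a cat wearing a jacket",   # placeholder prompt; input is unconstrained
    num_inference_steps=20,
    height=512,
    width=512,
).images[0]
image.save("cat_in_jacket.png")
```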

This, says Qualcomm, is the start of the “edge AI era”, with large cloud AI models gravitating towards edge devices, making them faster and more secure. “Although the Stable Diffusion model seems quite large, it encodes a huge amount of knowledge about speech and visuals for generating practically any imaginable picture,” the engineers wrote.

Its potential goes beyond making pretty pictures: developers could now integrate the technology into image editing, inpainting and style-transfer applications that run completely on the device, even without an internet connection.


Qualcomm says it will now focus on scaling edge AI, including further optimisation of Stable Diffusion so that it runs efficiently on phones and on other platforms such as laptops, XR headsets and any other device with a Snapdragon processor.


This, the company says, will allow end users to reduce cloud computing costs by running processes at the edge, and to preserve privacy, as the input and output never leave the device. “The new AI stack optimisation also means that the time-to-market for the next foundation model that we want to run on the edge will also decrease. This is how we scale across devices and foundation models to make edge AI truly ubiquitous.”

Read more: Nvidia reveals cloud AI tools as it cashes in on ChatGPT success

Topics in this article: AI, Qualcomm
