
Beyond Quantum Supremacy: The Hunt for Useful Quantum Computers

2022-08-10

Occasionally Alán Aspuru-Guzik has a movie-star moment, when fans half his age will stop him in the street. “They say, ‘Hey, we know who you are,’” he laughs. “Then they tell me that they also have a quantum start-up and would love to talk to me about it.” He doesn’t mind a bit. “I don’t usually have time to talk, but I’m always happy to give them some tips.” That affable approach is not uncommon in the quantum-computing community, says Aspuru-Guzik, who is a computer scientist at the University of Toronto and co-founder of Zapata Computing in Cambridge, Mass. Although grand claims have been made about a looming revolution in computing, and private investment has been flowing into quantum technology, it is still early days, and no one is sure whether it is even possible to build a useful quantum computer.

Today’s quantum machines have at best a few dozen quantum bits, or qubits, and they are often beset by computation-destroying noise. Researchers are still decades—and many thousands of qubits—away from general-purpose quantum computers, ones that could do long-heralded calculations such as factoring large numbers. A team at Google has reportedly demonstrated a quantum computer that can outperform conventional machines, but such “quantum supremacy” is expected to be extremely limited. For general applications, 30 years is “not an unrealistic timescale,” says physicist John Preskill of the California Institute of Technology. Some researchers have raised the possibility that, if quantum computers fail to deliver anything of use soon, a quantum winter will descend: enthusiasm will wane and funding will dry up before researchers get anywhere close to building full-scale machines. “Quantum winter is a real concern,” Preskill says. Yet he remains upbeat because the slow progress has forced researchers to adjust their focus and see whether the devices they have already built might be able to do something interesting in the near future.

Judging from a flurry of papers published during the past few years, it’s a definite possibility. This is the era of the small, error-prone, or “noisy intermediate-scale quantum” (NISQ), machine, as Preskill has put it. And so far it has turned out to be a much more interesting time than anyone had anticipated. Although the results are still quite preliminary, algorithm designers are finding work for NISQ machines that could have an immediate impact in chemistry, machine learning, materials science and cryptography—offering insights into the creation of chemical catalysts, for example. These innovations are also provoking unexpected progress in conventional computing. All this activity is running alongside efforts to build bigger, more robust quantum systems. Aspuru-Guzik advises people to expect the unexpected. “We’re here for the long run,” he says. “But there might be some surprises tomorrow.”

Fresh Prospects

Quantum computing might feel like a 21st-century idea, but it came to life the same year that IBM released its first personal computer. In a 1981 lecture, physicist Richard Feynman pointed out that the best way to simulate real-world phenomena that have a quantum-mechanical basis, such as chemical reactions or the properties of semiconductors, is with a machine that follows quantum-mechanical rules. Such a computer would make use of entanglement, a phenomenon unique to quantum systems. With entanglement, a particle’s properties are affected by what happens to other particles with which it shares intimate quantum connections. These links give chemistry and many branches of materials science a complexity that defies simulation on classical computers. Algorithms designed to run on quantum computers aim to make a virtue of these correlations, performing computational tasks that are impossible on conventional machines.

Yet the same property that gives quantum computers such promise also makes them difficult to operate. Noise in the environment, whether from temperature fluctuations, mechanical vibrations or stray electromagnetic fields, weakens the correlations among qubits, the computational units that encode and process information in the computer. That degrades the reliability of the machines, limits their size and compromises the kinds of computation that they can perform. One potential way to address the issue is to run error-correction routines. Such algorithms, however, require their own qubits—the theoretical minimum is five error-correcting qubits for every qubit devoted to computation—adding a lot of overhead costs and further limiting the size of quantum systems.

Some researchers are focusing on hardware. Microsoft Quantum’s multinational team is attempting to use exotic “topological particles” in extremely thin semiconductors to construct qubits that are much more robust than today’s quantum systems. But these workarounds are longer-term projects, and many researchers are focusing on what can be done with the noisy small-scale machines that are available now—or will be in the next five to 10 years. Instead of aiming for a universal, error-corrected quantum computer, for example, physicist Jian-Wei Pan and his team at the University of Science and Technology of China in Hefei are pursuing short- and mid-term targets. That includes quantum supremacy and developing quantum-based simulators that can solve meaningful problems in areas such as materials science. “I usually refer to it as ‘laying eggs along the way,’” he says.

Researchers at Zapata Computing, including co-founder Alán Aspuru-Guzik (fourth from left), are building quantum algorithms for today’s systems. Credit: Doug Levy

Bert de Jong of Lawrence Berkeley National Laboratory has his eye on applications in chemistry, such as finding alternatives to the Haber process for the manufacture of ammonia. At the moment, researchers must make approximations to run their simulations on classical machines, but that approach has its limits. “To enable large scientific advances in battery research or any scientific area relying on strong electron correlation,” he says, “we cannot use the approximate methods.” NISQ systems won’t be able to perform full-scale chemistry simulations. But when combined with conventional computers, they might demonstrate an advantage over existing classical simulations. “The classically hard part of the simulation is solved on a quantum processor, while the rest of the work is done on a classical computer,” de Jong says.

This kind of hybrid approach is where Aspuru-Guzik earned his fame. In 2014 he and his colleagues devised an algorithm called the variational quantum eigensolver (VQE), which uses conventional machines to optimize guesses. Those guesses might be about the shortest path for a traveling salesperson, the best shape for an aircraft wing or the arrangement of atoms that constitutes the lowest energy state of a particular molecule. Once that best guess has been identified, the quantum machine searches through the nearby options. Its results are fed back to the classical machine, and the process continues until the optimum solution is found. As one of the first ways to use NISQ machines, VQE had an immediate impact, and teams have used it on several quantum computers to find molecular ground states and explore the magnetic properties of materials.
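The shape of that hybrid loop can be sketched in a few lines of Python. This is a deliberately tiny illustration, not the team's actual setup: the "quantum" subroutine below is a closed-form stand-in for a single-qubit energy measurement, and the classical optimizer is bare-bones gradient descent.

```python
import math

# Toy "quantum" subroutine: for the ansatz |psi(theta)> = Ry(theta)|0>,
# the expectations are <Z> = cos(theta) and <X> = sin(theta), so the
# energy of H = a*Z + b*X has a closed form.  On real hardware this
# number would come from repeated measurements of the circuit.
def energy(theta, a=1.0, b=0.5):
    return a * math.cos(theta) + b * math.sin(theta)

# Classical outer loop: propose a guess, measure its energy, refine.
# In a real VQE this role is played by an off-the-shelf optimizer.
def vqe(a=1.0, b=0.5, steps=2000, lr=0.1, eps=1e-6):
    theta = 0.3  # initial guess
    for _ in range(steps):
        grad = (energy(theta + eps, a, b) - energy(theta - eps, a, b)) / (2 * eps)
        theta -= lr * grad
    return theta, energy(theta, a, b)

theta, e_min = vqe()
exact = -math.hypot(1.0, 0.5)  # exact ground energy of a*Z + b*X
print(e_min, exact)
```

The converged energy matches the exact ground state of this toy Hamiltonian; the point of VQE is that for real molecules only the quantum device can evaluate the energy efficiently, while the cheap classical loop steers the search.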

That same year Edward Farhi, then at the Massachusetts Institute of Technology, proposed another heuristic, or best-guess, approach called the quantum approximate optimization algorithm (QAOA). The QAOA, another quantum-classical hybrid, performs what is effectively a game of quantum educated guessing. The only application so far has been fairly obscure—optimizing a process for dividing up graphs—but the approach has already generated some promising spin-offs, says Eric Anschuetz, a graduate student at M.I.T., who has worked at Zapata. One of those, devised by Anschuetz and his colleagues, is an algorithm called variational quantum factoring (VQF), which aims to bring the encryption-breaking, large-number-factoring capabilities of quantum processing to NISQ-era machines.
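The graph-dividing problem in question is MaxCut: split a graph's vertices into two groups so that as many edges as possible cross the divide. The sketch below (an illustrative five-vertex example, not any benchmark from the research) shows the classical cost function that QAOA's cost Hamiltonian encodes, checked by brute force—which is feasible only for tiny graphs, which is exactly why heuristics like QAOA attract interest.

```python
from itertools import product

# Edges of a small example graph (a 5-cycle); vertices 0..4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

def cut_value(assignment, edges):
    """Number of edges whose endpoints land in different groups.
    This is the objective QAOA encodes in its cost Hamiltonian."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Brute force over all 2^5 bipartitions.
best = max(product([0, 1], repeat=5), key=lambda a: cut_value(a, edges))
print(best, cut_value(best, edges))  # best cut of a 5-cycle crosses 4 edges
```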

Until VQF, the only known quantum algorithm for such work was one called Shor’s algorithm. That approach offers a fast route to factoring large numbers but is likely to require hundreds of thousands of qubits to go beyond what is possible on classical machines. In a paper published in 2019, Zapata researchers suggest that VQF might be able to outperform Shor’s algorithm on smaller systems within a decade. Even so, no one expects VQF to beat a classical machine in that time frame. Others are looking for more general ways to make the most of NISQ hardware. Instead of diverting qubits to correct noise-induced errors, for example, some scientists have devised a way to work with the noise. With “error mitigation,” the same routine is run on a noisy processor multiple times. By comparing the results of runs of different lengths, researchers can learn the systematic effect of noise on the computation and estimate what the result would be without noise.

The approach looks particularly promising for chemistry. In March 2019 a team led by physicist Jay Gambetta of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y., showed that error mitigation can improve chemistry computations performed on a four-qubit computer. The team used the approach to calculate basic properties of the molecules hydrogen and lithium hydride, such as how their energy states vary with interatomic distance. Although single, noisy runs did not map onto the known solution, the error-mitigated result matched it almost exactly.
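The arithmetic behind error mitigation is simple to sketch. In the toy model below, noise is assumed (purely for illustration) to damp the true expectation value exponentially with noise strength; on hardware, researchers amplify the noise deliberately, for example by stretching gate pulses, rather than dialing a parameter. Running at two noise scales and extrapolating linearly back to zero recovers most of the lost signal.

```python
import math

# The noiseless result we are trying to recover (a stand-in value).
E_TRUE = -1.0

def noisy_run(noise_scale, decay=0.1):
    """Simulated measurement: noise damps the true value exponentially.
    This decay model is an assumption of the toy example."""
    return E_TRUE * math.exp(-decay * noise_scale)

# Run the same circuit at noise scales 1 and 2, then extrapolate
# linearly back to zero noise (first-order Richardson extrapolation).
e1 = noisy_run(1.0)
e2 = noisy_run(2.0)
mitigated = 2 * e1 - e2

print(e1, mitigated)  # the mitigated value sits much closer to E_TRUE
```

No extra qubits are spent on correction; the cost is paid in extra runs of the circuit instead.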

Errors might not even be a problem for some applications. Vedran Dunjko, a computer scientist and physicist at the University of Leiden in the Netherlands, notes that the kinds of tasks performed in machine learning, such as labeling images, can cope with noise and approximations. “If you’re classifying an image to say whether it is a human face, or a cat, or a dog, there is no clean mathematical description of what these things look like—and nor do we look for one,” he says.

Fuzzy Future

Gambetta’s team at IBM has also been pursuing quantum machine learning for NISQ systems. In early 2019, while working with researchers at the University of Oxford and at M.I.T., the group reported two quantum machine-learning algorithms that are designed to pick out features in large data sets. It is thought that as quantum systems get bigger, their data-handling capabilities should grow exponentially, ultimately allowing them to handle many more data points than classical systems can. The algorithms provide “a possible path to quantum advantage,” the team wrote.

But as with other examples in the machine-learning field, no one has yet managed to demonstrate a quantum advantage. In the era of NISQ computing, there is always a “but.” Zapata’s factoring algorithm, for instance, might never factor numbers faster than classical machines. No experiments have been done on real hardware yet, and there is no way to definitively, mathematically prove superiority.

Other doubts are arising. Gian Giacomo Guerreschi and Anne Matsuura of Intel Labs in Santa Clara, Calif., performed simulations of Farhi’s QAOA algorithms and found that real-world problems with realistically modeled noise do not fare well on machines the size of today’s NISQ systems. “Our work adds a word of caution,” Guerreschi says. “If order-of-magnitude improvements to the QAOA protocols are not introduced, it will take many hundreds of qubits to outperform what can be done on classical machines.” One general problem for NISQ computing, Dunjko points out, comes down to time.

False-color image of a seven-qubit system that has been used for quantum chemistry computations. Credit: "Error Mitigation Extends the Computational Reach of a Noisy Quantum Processor," by Abhinav Kandala et al., in Nature, Vol. 567; March 28, 2019

Conventional computers can effectively operate indefinitely. A quantum system can lose its correlations, and thus its computing power, in fractions of a second. As a result, a classical computer does not have to run for very long before it can outstrip the capabilities of today’s quantum machines. NISQ research has also created a challenge for itself by focusing attention on the shortcomings of classical algorithms. It turns out that many of those, when investigated, can be improved to the point at which quantum algorithms cannot compete.

In 2016, for instance, researchers developed a quantum algorithm that could draw inferences from large data sets. It is known as a type of recommendation algorithm because of its similarity to the “you might also like” algorithms used online. Theoretical analysis suggested that this scheme was exponentially faster than any known classical algorithm. But in July 2018 computer scientist Ewin Tang, then an undergraduate student at the University of Texas at Austin, formulated a classical algorithm that worked even faster. Tang has since generalized her tactic, taking processes that make quantum algorithms fast and reconfiguring them so that they work on classical computers. This has allowed her to strip the advantage from a few other quantum algorithms, too.

Despite the thrust and parry, researchers say it is a friendly field and one that is improving both classical computing and quantum approaches. “My results have been met with a lot of enthusiasm,” says Tang, who is now a Ph.D. student at the University of Washington. For now, however, researchers must contend with the fact that there is still no proof that today’s quantum machines will yield anything of use. NISQ could simply turn out to be the name for the broad, possibly featureless landscape researchers must traverse before they can build quantum computers capable of outclassing conventional ones in helpful ways.

“Although there were a lot of ideas about what we could do with these near-term devices,” Preskill says, “nobody really knows what they are going to be good for.” De Jong, for one, is okay with the uncertainty. He sees the short-term quantum processor as more of a lab bench—a controlled experimental environment. The noise component of NISQ might even be seen as a benefit because real-world systems, such as potential molecules for use in solar cells, are also affected by their surroundings. “Exploring how a quantum system responds to its environment is crucial to obtain the understanding needed to drive new scientific discovery,” he says.

For his part, Aspuru-Guzik is confident that something significant will happen soon. As a teenager in Mexico, he used to hack phone systems to get free international calls. He says he sees the same adventurous spirit in some of the young quantum researchers he meets—especially now that they can effectively “dial in” and try things out on the small-scale quantum computers and simulators made available by companies such as Google and IBM. This ease of access, he thinks, will be key to working out the practicalities.

“You have to hack the quantum computer,” Aspuru-Guzik says. “There is a role for formalism, but there is also a role for imagination, intuition and adventure. Maybe it’s not about how many qubits we have; maybe it’s about how many hackers we have.”
