ORCA Computing has recently launched its PT-2 photonic quantum computer. [ORCA Computing]
For decades, photonic systems have provided physicists with a rich playground for exploring fundamental quantum phenomena, from understanding the ambiguous nature of wave–particle duality to pioneering experiments in the late 20th century that proved that quantum information could be shared between particles through entanglement. That foundational work has enabled scientists to harness such esoteric behavior in practical systems that have delivered unprecedented capabilities in sensing, imaging and ultrasecure communications. Now, a growing number of startup companies are convinced that photonic systems will become the star player in the emerging world of quantum computing.
Some of these early-stage enterprises have already launched commercial photonic processors that can boost the performance of classical computers for specific computational tasks, particularly those that depend on machine learning and artificial intelligence. But there is a growing consensus within the photonics community that the long-held dream of demonstrating a universal quantum computer that can solve real-world problems more effectively than classical machines is finally within reach.
One of the companies striving toward that goal is PsiQuantum, which since its founding in 2016 in Palo Alto, CA, USA, has raised investor funding of more than US$700 million, along with significant government support in the United States and Australia, to build a photonic quantum computer with a million qubits. It has been working in partnership with top-tier chip manufacturer Global Foundries to implement an architecture based on photonic integrated circuits and is confident that it will deliver a utility-scale quantum computer by the end of 2027. Canadian startup Xanadu has also demonstrated a prototype that integrates all the key elements needed for a universal photonic quantum computer, and it has charted a clear path for building a large-scale machine before the end of the decade.
A researcher at work in Xanadu’s lab. [Xanadu]
If those predicted timescales prove true, photonic systems have the potential to leapfrog other qubit modalities that have demonstrated early successes but must now tackle significant scaling challenges. One advantage of photons is that they do not easily lose their quantum state due to heat or electromagnetic noise, which avoids the need to isolate the quantum processor within a cryogenic system or a vacuum chamber. Optical fiber also provides a ready-made solution for sending quantum information from one location to another, supplying easy connectivity between components and allowing multiple modules and processors to be networked together.
“The only way to create a large-scale quantum computer is through a distributed approach where multiple units are connected together, just as you would in a conventional data center,” says PsiQuantum cofounder and chief technology officer Mark Thompson. “Optical fiber is the most efficient way to transmit information between chips and between modules, and that’s why photonic quantum computing is so compelling.”
Photonics fundamentals
In many ways, photons offer the ideal physical resource for a quantum processor. The qubits that provide the fundamental unit of quantum information can be encoded in two different optical modes, such as two orthogonal polarizations of a single photon, or the photon’s spatial location in one of two distinct waveguides. These “flying qubits” retain their quantum information as they are routed around a processor, providing the long-lived quantum states that are needed to reach the end of a lengthy computation. What’s more, hardware developers have access to the wealth of technologies that have been perfected by the global photonics industry, as well as plenty of expertise in optical engineering to fashion new and innovative solutions.
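To make the idea concrete, the short Python sketch below (an illustration only, not drawn from any company’s software stack) represents a path-encoded qubit as a single photon shared between two waveguides, with an idealized 50:50 coupler acting as a single-qubit operation:

```python
import numpy as np

# Dual-rail (path) encoding: one photon, two waveguides.
# |0> = photon in waveguide A, |1> = photon in waveguide B.
ket0 = np.array([1.0, 0.0])              # photon in waveguide A
ket1 = np.array([0.0, 1.0])              # photon in waveguide B

# An idealized 50:50 coupler (one common phase convention) mixes the two
# rails, acting as a single-qubit operation on the path-encoded qubit.
bs_5050 = np.array([[1, 1j],
                    [1j, 1]]) / np.sqrt(2)

superposition = bs_5050 @ ket0           # equal superposition of the two rails
print(np.abs(superposition) ** 2)        # detection probabilities -> [0.5 0.5]
```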
Not everything is easy, however. Perhaps the single biggest challenge when using photons for information processing is that almost all of their interactions are governed by probability. This nondeterministic behavior affects all the critical operations needed in a quantum computer, from generating single photons at the input to creating the entangled states needed to perform a quantum calculation. The same weak coupling to the environment that makes photons so resilient also means they barely interact with one another, making it difficult to implement two-qubit operations in which the state of one qubit depends on that of another.
PsiQuantum has worked with Global Foundries to create a manufacturable platform for large-scale quantum computing called Omega. [PsiQuantum]
In the early 2000s, physicists realized that the way to solve this problem was to rethink the computing paradigm. Instead of building quantum circuits step-by-step using entangled pairs of qubits, Hans Briegel and Robert Raussendorf at the Ludwig Maximilian University of Munich, Germany, proposed a new approach called measurement-based quantum computing. Their idea was to create a large network of pre-entangled qubits that would provide the entire resource needed for any computation, allowing the desired calculation to be carried out by simply measuring the qubits in a specific order. “It’s like doing the computation beforehand, and then selecting which logic gates you want to use,” explains Josh Nunn, cofounder and chief science officer of ORCA Computing, UK.
This measurement-based paradigm is particularly suited to photonic systems, since it allows any computation to be achieved with single-qubit operations that can be performed using interferometers or waveplates. Creating the initial entangled state remains a significant challenge, however, which prompted theorists to devise modified schemes that relax the requirements for useful quantum computing using optical systems. In these simplified protocols, only a small part of the entanglement is generated, often called the resource state, and then the next layer is produced as the first part is being measured. This rolling process means that resource states are continually being created and then consumed through measurement, which in the absence of errors would enable the computation to carry on forever.
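A toy example helps to show how a single measurement-based step works. In the Python sketch below (the states, angles and sign conventions are illustrative choices, not any company’s actual gate set), an input qubit is entangled with an ancilla via a controlled-Z gate; measuring the input qubit in a rotated basis then leaves the ancilla carrying the rotated state, up to a known Pauli correction that depends on the measurement outcome:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
Rz = lambda phi: np.diag([1, np.exp(1j * phi)])

psi = np.array([0.6, 0.8])                  # arbitrary input qubit
plus = np.array([1, 1]) / np.sqrt(2)        # ancilla in a |+> superposition
theta = 0.7                                 # measurement-basis angle

state = CZ @ np.kron(psi, plus)             # two-qubit cluster-type state

for s in (0, 1):                            # the two possible measurement outcomes
    m = np.array([1, (-1) ** s * np.exp(1j * theta)]) / np.sqrt(2)  # basis ket
    out = np.kron(m.conj(), np.eye(2)) @ state   # project qubit 1 onto <m|
    out = np.linalg.matrix_power(X, s) @ out     # Pauli byproduct correction
    out = out / np.linalg.norm(out)
    expected = H @ Rz(-theta) @ psi              # rotated-and-transferred state
    print(f"outcome {s}: overlap = {abs(np.vdot(expected, out)):.6f}")  # -> 1.0
```

Chaining many such measurement steps across a large pre-entangled cluster state is, in essence, what a measurement-based photonic processor does.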
The problem of probability
These breakthroughs inspired optical physicists to engineer simple entangling operations that deliver a single logical function, at first using discrete optical components and by the 2010s creating waveguide structures on photonic chips. But the probabilistic nature of photonic processes still presents a problem for practical quantum computing, since it requires the same operation to be repeated until a measurement reveals that the desired outcome has been achieved. This inherent uncertainty at each stage of the computation means that the chance of successfully executing a complex algorithm becomes vanishingly small.
One way to address this problem is to exploit some form of multiplexing, allowing each process to be attempted many times in the knowledge that occasionally it will succeed. The founders of PsiQuantum, who at the time were academics at Imperial College London and the University of Bristol in the UK, believed that the most scalable solution would be to build a massively parallel architecture based on photonic chips. “Our vision was to leverage state-of-the-art photonic engineering to generate and manipulate single photons,” explains Thompson. “As well as enabling miniaturization, this engineering approach provides the performance and manufacturability needed to scale the hardware and produce millions of high-quality qubits.”
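Some rough arithmetic shows why this matters. In the sketch below, the success probability, step count and number of attempts are purely illustrative assumptions, not measured figures from any hardware:

```python
# Illustrative numbers only: why cascaded probabilistic operations fail,
# and how multiplexing rescues them.

p_success = 0.25        # assumed success probability of one heralded operation
n_steps = 50            # operations that must all succeed in one run

# Without multiplexing, every step must succeed in sequence.
p_sequential = p_success ** n_steps
print(f"all {n_steps} steps succeed directly: {p_sequential:.2e}")   # ~1e-30

# With N parallel (or repeated) attempts per step, one success per step is
# enough, and the heralding measurement tells us which attempt worked.
for n_attempts in (8, 16, 32):
    p_step = 1 - (1 - p_success) ** n_attempts
    print(f"{n_attempts:2d} attempts/step -> whole circuit: {p_step ** n_steps:.3f}")
```

Even a modest per-operation success rate becomes workable once enough heralded attempts are made for every step, which is why multiplexing, whether in space or in time, features so heavily in photonic architectures.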
In PsiQuantum’s approach, often called the dual-rail system, each qubit is a single photon that can occupy either of two parallel waveguides. In the company’s implementation of this architecture, all the qubit operations are probabilistic, with measurements made at every step to identify successful operations and route them forward to the next step of the computation. “We are trading off the probability against the quality of the qubits we can produce, along with the manufacturability and scalability of the system,” says Thompson. “All our technology decisions have been driven by the need to create a large-scale system as rapidly as possible.”
Other companies have adopted alternative technologies that could reduce the number of components needed to build a fault-tolerant system. ORCA, for example, achieves the multiplexing in time rather than space, using rubidium atoms as a quantum memory that stores the qubits generated by successful operations. Qubits held in memory are then released when they are needed for another part of the computation, allowing them to be synchronized with other successful events. “Catching successful attempts and routing them to the next stage of the processor is a core capability to make these nondeterministic events more scalable,” says Nunn.
Another approach is to eliminate some of the inherent uncertainty in light-based systems. Quandela in France and Aegiq in the UK have engineered single-photon sources that exploit quantum dots to generate identical photons in a deterministic way, and they have combined them with quantum processors to create a modular architecture for quantum computing. Xanadu, meanwhile, has chosen to replace single photons with squeezed states of light, which reduce uncertainty by providing access to the full optical mode rather than just a two-level qubit. “Squeezed states are generated deterministically every time, and the qubits synthesized using squeezed states can also be entangled in a deterministic way,” says Zachary Vernon, Xanadu’s chief technology officer for hardware. “That reduces the amount of multiplexing that is needed to compensate for the intrinsic failure rate of single-photon systems.”
Steps along the way
Several companies have already demonstrated small-scale photonic processors, most of which have been designed to tackle a specific computational task called boson sampling. Originally conceived by theorists as a problem that would enable early quantum computers to deliver a demonstrable advantage over a classical machine, the challenge is to predict and characterize the probability distribution produced when identical bosons are scattered by an interferometer. While a physical photonic system can deliver a result without even needing to entangle the qubits, this type of sampling problem presents an almost insurmountable hurdle for a classical computer.
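A toy calculation illustrates where the classical difficulty comes from. In the Python sketch below (the mode counts, photon numbers and interferometer are arbitrary choices made for illustration), the probability of one detection pattern is governed by the permanent of a submatrix of the interferometer’s unitary, a quantity whose exact evaluation cost grows factorially with the number of photons:

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Brute-force permanent (cost grows as n!); fine for tiny n, hopeless at scale."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

rng = np.random.default_rng(7)
m, n = 8, 3                                   # 8 optical modes, 3 single photons

# Random interferometer: a unitary from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))

inputs = [0, 1, 2]                            # photons injected into modes 0, 1, 2
outputs = [4, 5, 7]                           # one possible detection pattern

A = Q[np.ix_(inputs, outputs)]                # n x n submatrix linking input and output modes
print(f"P(outcome) = {abs(permanent(A)) ** 2:.3e}")
```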
In a landmark experiment in 2020, a team led by Jian-Wei Pan of the University of Science and Technology of China created the first large-scale physical implementation of Gaussian boson sampling, a specific form of the problem that uses squeezed light rather than single photons. They configured hundreds of optical components to form a huge optical network, generating and sending squeezed states through many different paths and then measuring their quantum state at the output. The sampling rate measured in the experiment was around 100 trillion times faster than can be achieved with a conventional supercomputer, arguably providing the first convincing demonstration of quantum advantage.
Then, in 2022, Xanadu achieved the same feat with an integrated photonics platform called Borealis. The company had previously fabricated Gaussian boson samplers with up to 12 squeezed-state qubits, with its X-series of devices still available for programming through the cloud. Borealis extended that capability to 216 qubits, creating a fully programmable photonic processor that can complete the Gaussian boson sampling calculation within just 36 µs, compared with around 9,000 years for the most powerful supercomputers.
Ben McElfresh, head of product development at ORCA Computing, with the company’s PT-1 quantum processor. [ORCA Computing]
Meanwhile, ORCA has combined a more general form of boson sampling with its time-multiplexing technology to create commercial photonic processors designed to deliver a computational boost to classical systems. In 2022, the company launched the PT-1, a rack-mounted system that operates in a normal computing environment, with nine of these units now installed in customer premises around the world. In 2024, the firm unveiled a more powerful version, the PT-2, which it believes can deliver a tangible improvement in performance when combined with existing high-performance computers. “We took an early decision to add value as soon as we could, to start with easily deployable products that offer niche capabilities,” says Nunn. “In doing so, we have found that there is a huge opportunity to create hybrid systems that provide an advantage for specific applications.”
Prime among those are optimization problems in machine learning and artificial intelligence, which benefit from the improved sampling capabilities provided by photonic processors. “The models used for generative AI require the computer to produce a candidate solution from a programmable distribution of possibilities and then iterate that solution to refine the result,” explains Nunn. “These sampling tasks are really difficult for classical computers, but they are something that quantum does really well.”
ORCA has worked with industry partners to show that hybrid quantum-classical systems for generative AI can outperform their classical counterparts, in particular for identifying candidate molecules that could yield more effective chemicals and therapeutics. In a project involving NVIDIA and the Poznan Supercomputing and Networking Center in Poland, the company also showed that two PT-1 machines can be networked with a pair of NVIDIA processors and used the combined system to train a hybrid neural network to solve a classification task on a biological dataset.
ORCA is now working to extend the atom-based technology used for its quantum memories to realize deterministic gates, exploiting the predictable interactions between photons and atoms to produce controlled entangling operations. It also plans to augment its time-based multiplexing with fast spatial routing, using a low-loss switching platform being developed by a research team in Austin, TX, USA, that offers deep expertise in integrated photonics. “Our technical pathway is to combine multiplexing with deterministic gates to actively select successful events and weave them together into the resource states we need for fault tolerance,” says Nunn. “That approach will reduce the need for redundancy, enabling us to build a universal quantum computer on a scale that we think will be more viable as a commercial proposition.”
The quantum processing unit (QPU) in Xanadu’s Aurora system entangles qubits to form a cluster state, and also performs quantum gate operations and real-time error correction. [Xanadu]
Designs for fault tolerance
Xanadu has also focused its attention on demonstrating fault tolerance, and in January the company unveiled a prototype system that networks together the major elements for error-corrected quantum computing. The Aurora system comprises four rack-mounted modules that contain 35 photonic chips for generating the initial qubits, creating entangled states, implementing logic gates, and performing real-time operations for error correction. Experiments show that the model system can complete all of these functions within a single clock cycle, enabling continuous and end-to-end quantum computation with a resource state containing 12 qubits.
“The system is not yet fault tolerant because we don’t have many qubits and the optical losses are too high to deliver error-corrected performance,” says Vernon. “But we believe that this is the first demonstration of a complete photonic architecture that could be scaled up to enable fault-tolerant quantum computing. More generally, we think it makes a powerful statement about the scalability and modularity of our approach, with dozens of photonic chips networked together using optical fiber.”
The Aurora system exploits many of the optical technologies that Xanadu has previously demonstrated. For example, Gaussian boson sampling is used as a tool to convert the initial squeezed states into more complex qubits based on so-called GKP (Gottesman–Kitaev–Preskill) states. The GKP code is a quantum error correction scheme that uses continuous-variable systems, such as light, to encode information in a way that allows errors to be detected and corrected efficiently.
Synthesizing these GKP states is a probabilistic operation, with the measurements made by multiple Gaussian boson samplers used to select the best states for routing forward to the next stage of the computation. “A GKP state has an innate redundancy that makes it more tolerant to photon loss,” explains Vernon. “Once the GKP states have been created, they can be entangled deterministically using linear optics, and logical gates can be implemented using simpler detectors than single-photon counters.”
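For readers who want the underlying picture, the ideal GKP codewords proposed by Gottesman, Kitaev and Preskill can be written as combs of position eigenstates (real devices necessarily work with finitely squeezed approximations to these idealized states):

```latex
% Ideal GKP codewords (textbook form, shown for illustration): the logical 0
% and 1 states are combs of position eigenstates spaced 2*sqrt(pi) apart.
\[
  |\bar{0}\rangle \propto \sum_{n \in \mathbb{Z}} | q = 2n\sqrt{\pi} \rangle ,
  \qquad
  |\bar{1}\rangle \propto \sum_{n \in \mathbb{Z}} | q = (2n+1)\sqrt{\pi} \rangle .
\]
% Shift errors smaller than sqrt(pi)/2 in position or momentum (to which
% photon loss can be mapped) displace these combs without scrambling them;
% measuring q and p modulo sqrt(pi) therefore reveals the error, which can
% be undone with a corrective displacement.
```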
Another key enabling technology, previously demonstrated in the Borealis processor, is the use of fiber delay lines to provide a buffer memory for the qubits. In Aurora, these fiber delays are used to store the output from the Gaussian boson samplers and then to synchronize the release of GKP states to enable entanglement into the resource state. “These time delays make it possible to knit together the quantum states in three dimensions,” says Vernon. “That temporal entanglement allows us to do the measurement on one resource state while also propagating the quantum information into the next time step of the computation.”
Xanadu is now focused on driving down the optical losses, which are the main source of errors that affect computational performance. Aurora has provided a useful tool to analyze the loss pathways and the system’s tolerance for loss, and the company has also invested in bespoke fabrication processes to reduce scattering and absorption in the chips. “We have already made progress in lowering the losses and reconfiguring some of the circuits to tolerate more loss,” says Vernon. “Closing that gap will be our task for the next couple of years, and by the end of the decade we believe we will have a large-scale system that provides the performance needed for error-corrected quantum computing.”
PsiQuantum is hoping to accelerate the pathway to fault tolerance by working with Global Foundries to integrate its single-photon architecture into the chipmaker’s gold-standard process for silicon photonics. One important addition to the baseline process has been the capability for single-photon detection, which is achieved using a thin film of superconducting niobium nitride that loses its superconductivity when a single photon is absorbed. Tests show that modules produced using this fully integrated process technology can generate and measure single-photon qubits with a fidelity close to 100%, and can perform two-qubit logical operations with an accuracy better than 99.2%, close to the best-available 99.9% reported for some other qubit platforms. “We have shown that we can take qubits from one chip to another with almost no degradation of the quantum information,” says Thompson. “The quality of the photonic states, and the losses we can achieve, are the best in the world.”
In the latest iteration of its platform, called Omega, PsiQuantum has reduced optical losses by replacing standard silicon waveguides with silicon nitride and improving the chip-to-fiber coupling through the use of edge-coupled devices. Another crucial innovation has been the design and implementation of high-speed optical switches that route qubits around the processor without losing photons along the way. These best-in-class devices are fabricated from barium titanate, an electro-optic material with a performance some 30 times better than lithium niobate. “One key difficulty with integrating these switches into the fabrication process was developing the tools to grow high-quality films,” says Thompson. “We now have a custom-built reactor that can deposit this material on 300-mm wafers, which are then shipped to Global Foundries for processing.”
[Enlarge image]The Omega chipset incorporates several next-generation optical technologies, including waveguides made from silicon nitride and a fast but low-loss optical switch. [PsiQuantum]
Now that these next-generation technologies have been integrated into the process flow, PsiQuantum is focusing on connecting the modules and subsystems to create a quantum computer that can run billions or trillions of computational steps. A central part of that process has been building large-scale cryogenics cabinets to keep the superconducting detectors needed throughout the architecture at their operating temperature of 4 K. Such temperatures can be achieved using the liquid-helium cooling systems that are typically deployed in particle accelerators, so the company is now installing the latest version of its cryogenic cabinet at the Stanford Linear Accelerator, USA. It has also set up a research facility at Daresbury Laboratory, UK, which is home to one of Europe’s largest cryogenic plants. “We can just plug our quantum cabinets into the existing cryogenic facility and benefit from that established expertise,” says Thompson.
PsiQuantum recognizes that its photonics architecture will require large-scale infrastructure and has announced plans to build two hyperscale Quantum Compute Centers in Brisbane, Australia, and Chicago, USA. Other companies believe that their technology choices will enable them to achieve fault-tolerant computing using fewer resources, for example by limiting the need for superconducting detectors. It’s not yet clear which route will prove more successful—or indeed whether multiple solutions might co-exist to meet the needs of different end users—but it’s clear that the photonics sector is now fully primed to play a transformative role in the rapid rise of quantum computing.
Susan Curtis is a freelance science and technology writer based in Bristol, UK.
For references and resources, visit: optica-opn.org/link/0625-computing.