On Tuesday, D-Wave announced the details of its next-generation computation hardware, which it’s calling “Advantage,” and released a set of white papers that describe some of the machine’s performance characteristics. While some of the details of the upcoming system had been revealed earlier, Ars had the chance to sit in on a D-Wave users’ group meeting, which included talks by company VP of Product Design Mark Johnson and Senior Scientist Cathy McGeoch. We also sat down to discuss the hardware with Alan Baratz, D-Wave’s chief product officer. They gave us a sense of what to expect when the machine comes online next year.
Part of the landscape
D-Wave’s hardware performs a form of computation that’s distinct from the one being pursued by companies like Google, Intel, and IBM. Those companies are attempting to build a gate-based quantum computer that’s able to perform general computation, but they’ve run into known issues with scaling up the number of qubits and limiting the appearance of noise in their computations. D-Wave’s quantum annealer is more limited in the types of problems it can solve, but its design allows the number of qubits to scale up more easily and limits the impact of noise.
It’s easiest to think of a D-Wave machine as exploring an energy landscape filled with hills and valleys. It specializes in finding the lowest valley in one of these landscapes and avoids getting stuck in a local valley by using quantum effects to “tunnel” through intervening hillsides. That can be used to perform calculations, but only if the calculation can be structured so that it looks like an energy minimization problem.
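To make the “energy minimization” framing concrete, here’s a minimal sketch of the kind of problem an annealer targets: an Ising model, where each variable is a spin of +1 or -1 and the total energy depends on pairwise couplings. The coupling values below are arbitrary toy numbers, and we simply brute-force the lowest-energy state; the annealer’s whole value is finding this minimum for problems far too large to enumerate.

```python
from itertools import product

# couplings[(i, j)] = J_ij; a negative J favors aligned spins,
# a positive J favors opposite spins (toy values, not a real problem)
couplings = {(0, 1): -1.0, (1, 2): 1.0, (2, 3): -1.0, (0, 3): 0.5}

def energy(spins):
    """Ising energy: sum of J_ij * s_i * s_j over all coupled pairs."""
    return sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

# Exhaustively check all 2^4 spin assignments to find the global minimum
best = min(product([-1, 1], repeat=4), key=energy)
print(best, energy(best))
```

Any calculation a user wants to run has to be translated into a set of couplings like this, so that the lowest-energy spin configuration encodes the answer.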
In this analogy, the amount of landscape you can explore is roughly equivalent to the complexity of the problem you are able to tackle, and both of these go up with the addition of more qubits. And that’s one of the big changes the new system brings to the table: while the current generation tops out at about 2,000 qubits, the next one will have 5,000, allowing it to handle more complex calculations. Johnson put a concrete number on that by discussing how it can model a physics system called a spin glass lattice. The prior version could handle an 8x8x8 lattice; the new one can do 15x15x12.
The other big boost to computational complexity is in the connections among the qubits, which are necessary to get the system to behave as a single unit. The current generation of chips has 6,000 connections among its 2,000 qubits, but the next system will have 40,000 for its 5,000. Connections between specific qubits are critical for calculations; if two qubits aren’t connected directly, the system would have to identify other qubits that would bridge the gap between the two critical ones, forming what’s called a chain. Not only does this leave fewer qubits for calculations, but chains also create a potential point of failure.
“If there are no chains, you’re going to get the answer, very high probability,” Baratz told Ars. “If there are lots of chains that are relatively short, you’re going to do pretty well. If there are lots of chains in there [that are] long, there’s where the probability starts to decline.” Increasing the number of connections dramatically reduces the need for chains, making the outcomes of calculations more likely to represent a global minimum rather than a local one.
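The chain idea above can be illustrated with a toy graph search: if two qubits that need to interact aren’t directly coupled, intermediate qubits have to bridge them. The adjacency list below is an invented example, not D-Wave’s actual topology, and real embedding tools do far more than a shortest-path search, but the sketch shows why sparse connectivity consumes extra qubits.

```python
from collections import deque

# Hypothetical qubit connectivity graph (a simple line of 5 qubits);
# real annealer topologies are much denser and more structured.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def chain(src, dst):
    """Shortest sequence of qubits linking src to dst, via BFS."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        q = queue.popleft()
        if q == dst:
            path = []
            while q is not None:  # walk the predecessor links back to src
                path.append(q)
                q = prev[q]
            return path[::-1]
        for nbr in adjacency[q]:
            if nbr not in prev:
                prev[nbr] = q
                queue.append(nbr)
    return None

# Qubits 0 and 4 aren't directly coupled, so three qubits bridge them
print(chain(0, 4))
```

Every qubit tied up in a bridge like this is unavailable for the calculation itself, and each link is one more place a chain can break, which is why denser hardware connectivity matters.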
The last item on D-Wave’s agenda for the new chip is to lower the noise of individual qubits—Baratz said the reduction was by about three- to four-fold. Obviously, lower noise makes a qubit more likely to be in the right state when it’s time to measure it. But it also has a significant impact on the tunneling needed to escape a local minimum. “It’s translating to about a 7x improvement in tunneling rates,” Baratz said.
This also makes a difference in how many times you have to repeat a calculation to have a strong sense of what the best answer is. “Our system is a probabilistic system, in the sense that you get the correct solution with some probability,” he continued. “You get a good solution always, but the correct solution, the optimal solution [you get] with some probability. And so you run multiple times to get to the correct solution. With a low noise technology for [a] particular problem, the probability of getting it correct was 25 times higher, so we could run it 25 times faster.” (McGeoch separately said the speedup could be anywhere from five- to 100-fold, depending on the calculation.)
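A back-of-the-envelope calculation shows why the per-run success probability matters so much. If each anneal returns the true optimum with probability p, the chance of seeing it at least once in n runs is 1 - (1 - p)^n; for small p, a 25x higher per-run probability translates into roughly 25x fewer runs for the same confidence. The probabilities below are made up for illustration, not D-Wave’s measured numbers.

```python
import math

def runs_needed(p, confidence=0.99):
    """Runs required to see the optimum at least once with given confidence.

    Solves 1 - (1 - p)**n >= confidence for the smallest integer n.
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

low = runs_needed(0.001)    # noisy chip: optimum found on 0.1% of runs
high = runs_needed(0.025)   # 25x higher per-run success probability
print(low, high, low / high)
```

The ratio of the two run counts comes out close to 25, matching Baratz’s “25 times faster” framing for the low-noise technology.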
All of that makes for some pretty impressive stats, which Johnson described in his talk: a single chip with over a million Josephson junctions and over 100 meters of wiring. Not only did D-Wave have to design these chips, but the improvements also required careful control over the entire fabrication process. When asked how lowering the on-chip noise came about, Baratz answered, “We have changed the materials on the processor to materials that have fewer impurities and as a result are less susceptible to environmental impact. And what this allows us to do is maintain coherence for a longer period of time and increase tunneling rates.”
To do that, D-Wave has put together a system in which a company builds the base of the chip, including some of its wiring, before sending it to a D-Wave facility in Palo Alto. There, D-Wave adds the qubits and some support hardware before sending it back to the original fab for the addition of further circuitry. This loop has allowed the company to add some of the features, like enhanced connectivity and low noise, to the existing generation of chips, which is where some of the company’s performance claims come from.
The other thing the company has complete control over is the chip’s interface with the outside world. Aside from choosing hardware that can function at temperatures near absolute zero, the key determinant of performance is the system that queues up calculations, configures the processor to run them, and then extracts the answer (or answers, when sampling). Baratz said that, while this system is made from standard processors, they’re chosen for their ability to handle matrix math and digital-to-analog conversions, both of which are needed for managing the quantum annealer.
With the next generation of hardware, D-Wave is attempting to cut the latency to the system down considerably. That’s in part to meet user needs; as we’ll go over in a follow-up article, many users are finding that they need to perform multiple annealing steps as part of the flow of a traditional computer program. Lowering the latency means that the regular portion of the program spends less time waiting around for the results from the D-Wave hardware.
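The hybrid flow described above can be sketched as a simple loop: a classical program repeatedly hands a subproblem to the annealer and uses each result to set up the next step. Here `anneal` is a hypothetical stand-in for a real sampler call (this is not D-Wave’s API); the point is that the round-trip latency is paid on every iteration, so cutting it speeds up the whole program.

```python
def anneal(problem):
    # Hypothetical placeholder: a real call would submit the problem to
    # the quantum annealer over the network and return a low-energy sample.
    return min(problem)

# Toy "problem": repeatedly find the smallest value, then use it to
# rebuild a reduced problem classically before the next anneal.
problem = [5, 3, 8, 1]
for step in range(3):
    best = anneal(problem)                              # annealer round trip
    problem = [x - best for x in problem if x != best]  # classical refinement
print(problem)
```

With three annealer calls in the loop, the program waits on the hardware three times; in real workloads with hundreds of such steps, per-call latency dominates.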
While we’re still nearly a year away from the next-generation chip showing up on D-Wave’s cloud service, the company is already looking optimistically to the generation beyond that. “We’re continuing to work with even newer dielectrics that can reduce noise even further—we’re looking at dielectric tricks that can give us at least a 10x reduction in noise for the next system. We’re looking at qubit structures that allow us to maybe double again the connectivity, and we’re fabricating some of those.”
But in the meantime, people are starting to extract some interesting results from the existing hardware. Over the next few days, we’ll take a look at some of those.