Looking under the quantum hood: Why hardware-oriented quantum software engineering matters
Whether quantum technology will fulfill its speedup promises hinges on how well “textbook” quantum algorithms are adapted to and optimized for imperfect physical quantum devices, with the help of hardware-specific models and simulations.
Will high-performance computing systems ever get quantum? The stakes behind this question are high: in theory, quantum computers could yield considerable speedups in fields ranging from chemistry and materials science to cryptography and data science. Yet, physical quantum devices have intrinsic imperfections that may degrade or even destroy those speedups. These imperfections need to be factored in when assessing the performance of quantum computers. More importantly, they can be exploited to optimize that performance.
Quantum computers were first advocated for solving quantum problems, such as predicting how quantum particles like electrons behave within a molecule used to synthesize a new drug. Surprisingly, researchers soon found them potentially useful for computational problems with not the faintest link to the quantum world. This is the case, for instance, of the problem of factoring large numbers, which is central to encryption systems. Mathematician Peter Shor showed that, if programmed on a hypothetical quantum computer, this problem could be solved exponentially faster than by the best known classical algorithm.
Quantum properties: exponential gains… with caveats
To understand how such gains are possible, let us consider one of the key ingredients of Shor’s algorithm, the Fourier transform. Shor reduced the factoring problem to a period-finding problem that can itself be solved using a Fourier transform. His key insight was to use a quantum version of the Fourier transform instead of the so-called “Fast Fourier Transform”, the fastest known Fourier algorithm on a classical machine. Indeed, while the Fast Fourier Transform requires about “P x log P” operations for a signal of size P (“log P” denotes the logarithm of P), its quantum counterpart requires only about “log P x log P” operations, resulting in an exponential speedup.
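To make the comparison concrete, here is a purely illustrative Python sketch (not from the article) tabulating the two operation counts quoted above for a few signal sizes:

```python
import math

def fft_ops(P):
    """Classical Fast Fourier Transform: about P * log2(P) operations."""
    return P * math.log2(P)

def qft_ops(P):
    """Quantum Fourier transform: about log2(P) * log2(P) operations."""
    return math.log2(P) ** 2

for P in (2**10, 2**20, 2**30):
    n = int(math.log2(P))
    print(f"P = 2^{n}: FFT ~ {fft_ops(P):.0f} ops, QFT ~ {qft_ops(P):.0f} ops")
```

The gap widens dramatically with P: for a billion-point signal, the quantum count stays below a thousand operations while the classical count exceeds tens of billions.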
Essentially, this speedup stems from two quantum properties known as superposition and entanglement. While a classical bit can be either in state 0 or 1, a quantum bit (or “qubit”) can be somewhere in between, i.e. in a linear combination or “superposition” of 0 and 1, with two (complex) coefficients quantifying their relative importance. Similarly, N qubits can be in a superposition of all the possible N-bit “classical” states, e.g., for three qubits, a superposition of the eight (=2^3) states 000, 001, 010, 011, 100, 101, 110 and 111. One quantum state thus holds the information corresponding to 2^N complex numbers. Contrary to the classical case, knowing the total state of N quantum bits does not boil down to knowing the individual state of each bit, a quantum property dubbed “entanglement”. This exotic property can be used in the reverse way: a signal containing P numbers can be described by exponentially fewer quantum bits, namely log P qubits. This gives an intuition for why exponential speedups can be obtained: if the classical algorithm works on data of a given size, its quantum counterpart manipulates data of an exponentially smaller size.
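As a rough numerical illustration (the NumPy representation below is an assumption for pedagogy, not something the article prescribes), the state of N qubits can be stored as a vector of 2^N complex amplitudes:

```python
import numpy as np

# Three qubits: a uniform superposition of all 2^3 = 8 classical
# states 000 ... 111, one complex amplitude per classical state.
N = 3
state = np.ones(2**N, dtype=complex) / np.sqrt(2**N)
print(len(state))  # 8 amplitudes for 3 qubits

# An entangled two-qubit "Bell" state, (|00> + |11>) / sqrt(2):
# it cannot be written as a product of two single-qubit states,
# so neither qubit has a well-defined individual state.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
```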
Yet, reaping this exponential benefit is not straightforward: quantum states cannot be measured as easily as classical ones. A quantum measurement is probabilistic and destructive: it returns only one of the 2^N classical states, with a probability proportional to the squared norm of its coefficient, and projects (and thus destroys) the output signal onto this classical state. Thus, retrieving all the coefficients of the output would necessitate running the quantum Fourier algorithm many times to get a histogram of the state probabilities. This would in turn greatly reduce, if not cancel, the aforementioned gain. Therefore, to take advantage of the exponential speedup, quantum computer scientists must devise smart ways to exploit the quantum information contained in the register without actually measuring it, or engineer an output that can be read out with few measurements.
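A toy Python sketch of this probabilistic, destructive readout (the `measure` helper below is hypothetical, written only to illustrate the rule) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state):
    """Simulate one projective measurement of an N-qubit state.

    Returns a classical outcome k with probability |state[k]|^2, together
    with the post-measurement state, collapsed onto that outcome.
    """
    probs = np.abs(state)**2
    k = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[k] = 1.0
    return k, collapsed

# Equal superposition of the two-qubit states |00> and |11>.
state = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
outcome, state = measure(state)
print(outcome)  # 0 (for 00) or 3 (for 11), each roughly half the time
```

After the measurement, the superposition is gone: measuring the collapsed state again always returns the same outcome, so the other amplitude is irretrievably lost.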
These programming constraints (and numerous other counterintuitive rules) radically depart from classical programming paradigms. Quantum software engineering thus requires dedicated tools to harness this strangeness. This point is valid at the “assembly” level with a quantum circuit description: quantum bits are acted upon by quantum “gates” that modify the state of the quantum register. These gates differ from classical logic gates: for instance, they need to be unitary, which prevents a quantum bit from being copied. Together with quantum measurements, they form a sequence called a quantum circuit (Fig. 1).
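For instance, one can check numerically that standard gates such as the Hadamard and CNOT are unitary. This is a minimal NumPy illustration, not part of any real quantum toolchain:

```python
import numpy as np

# Quantum gates must be unitary: U† U = I, so they are reversible
# and preserve total probability.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # controlled-NOT gate

for name, U in (("H", H), ("CNOT", CNOT)):
    is_unitary = np.allclose(U.conj().T @ U, np.eye(len(U)))
    print(name, "is unitary:", is_unitary)
```

Unitarity is precisely what forbids copying: a linear map that duplicated an arbitrary qubit state would not be unitary, which is the content of the no-cloning theorem.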
Figure 1. Quantum circuit for the quantum Fourier transform on five qubits. Each line represents a qubit, each box is a quantum gate. Gates are applied sequentially from left to right to the qubit register.
This point is also valid at higher levels: most quantum algorithms, like Shor’s factoring algorithm, are made up of distinct subroutines (like the quantum Fourier transform). As in classical software engineering, quantum computer scientists not only need libraries that provide ready-made implementations of common quantum routines, but also higher-level languages to describe the combination, and possibly repetition, of various subroutines into a full algorithm.
Finally, since quantum developers don’t have fault-free quantum computers yet, they need to be able to simulate, on classical computers, the behavior of new quantum algorithms to make sure they produce the intended result. Simple as it may seem, this task is sharply limited by the exponential (2^N) memory requirement of storing a quantum state on a classical machine.
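A back-of-the-envelope sketch (assuming 16 bytes per complex amplitude, i.e. double-precision real and imaginary parts) makes this limit concrete:

```python
# Memory needed to store the full state vector of n qubits on a
# classical machine: 2^n complex amplitudes at 16 bytes each.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:.3g} GiB")
```

Thirty qubits already require 16 GiB; fifty qubits would require millions of GiB, far beyond any existing machine.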
Two challenges: quantum compilation and hardware imperfections
Experimental quantum devices come with numerous, hardware-specific constraints and imperfections; the tools we just mentioned are therefore important, but insufficient to assess (and possibly enhance) the actual performance of a quantum algorithm.
Hardware constraints can be handled by a process called compilation. Physical implementations of quantum processors consist of arrays of quantum bits laid out in one or two dimensions, each qubit having a limited number of neighbors (Fig. 2a). In most implementations, two-qubit gates can only be applied between neighboring qubits. On the other hand, textbook algorithms like the aforementioned Fourier transform generically overlook these topological constraints, with gates between “disconnected” qubits. A quantum compiler must therefore rewrite the textbook quantum circuit as an equivalent quantum circuit that respects the device’s topological constraints. This rewriting comes at the expense of inserting so-called “swap gates” that swap the internal state of two qubits (Fig. 2b). Similarly, while textbook algorithms are written in terms of a large number of different types of quantum gates, each quantum device comes with its own limited set of gates due to physical constraints. The textbook quantum circuit must be translated to this hardware-specific gate set (Fig. 2c).
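A deliberately simplified sketch of swap insertion on a one-dimensional qubit layout (the `route_gate` helper below is hypothetical; real compilers solve a much harder mapping and optimization problem) could look like:

```python
# Toy routing model: qubits sit on a line 0 - 1 - 2 - 3 - 4, and
# two-qubit gates are only allowed between nearest neighbors.
def route_gate(q1, q2):
    """Return a gate sequence implementing a gate between q1 and q2,
    inserting SWAPs to bring the qubits next to each other and then
    undoing them to restore the original qubit placement."""
    ops = []
    a, b = sorted((q1, q2))
    # Swap qubit a rightward until it is adjacent to qubit b.
    while b - a > 1:
        ops.append(("SWAP", a, a + 1))
        a += 1
    ops.append(("GATE", a, b))
    # Undo the swaps to restore the logical-to-physical mapping.
    while a > min(q1, q2):
        a -= 1
        ops.append(("SWAP", a, a + 1))
    return ops

print(route_gate(0, 3))
```

In this toy model, a single gate between qubits 0 and 3 costs four extra SWAP gates, which is exactly the kind of overhead the compiler must minimize.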
This rewriting comes at a cost: it usually results in much longer quantum circuits and hence longer execution times, leading to an increased sensitivity to hardware errors. Compilation must thus be optimized under constraints like minimizing the gate count or “depth” of the circuit. This constrained minimization problem is in itself a computationally hard problem for classical computers. There are therefore currently substantial theoretical and numerical efforts to devise efficient hardware-specific optimization procedures.
Figure 2. Hardware constraints. a) Example of layout of five qubits (orange circles). Red arrows denote pairs of “connected” qubits. b) Insertion of swap gates to take topological constraints into account. c) Replacement of hardware-incompatible controlled-phase gate by hardware-compatible gates (phase and CNOT gates).
Quantum states are very sensitive objects: any unwanted interaction with the outside world may disrupt a quantum computation in various ways: a quantum state gradually loses its “quantumness” through a phenomenon called “quantum decoherence”, or gates may be applied in a faulty way due to improper control. These imperfections, also called “quantum noise”, degrade the accuracy of quantum computers (Fig. 3).
Optimizing against these errors purely experimentally is costly and time-consuming; resorting to numerical simulation allows the exploration of many more possibilities. To this end, one must first establish a useful mathematical description of the hardware, then simulate it on a classical computer, and finally design suitable optimization or error-mitigation strategies. All three steps are difficult research problems. The first step consists in identifying the relevant parameters that characterize a given type of quantum hardware as precisely, and yet as simply, as possible. The second step consists in simulating, on a classical computer, the quantum mechanical equations corresponding to this model. This usually requires advanced numerical techniques, such as Monte-Carlo simulation, as well as large classical computing resources.
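As a toy example of such a simulation (the error model below, in which each gate fails independently with probability p, is a drastic simplification assumed purely for illustration), a Monte-Carlo estimate of a circuit’s success probability might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

def success_probability(n_gates, p_error, n_shots=100_000):
    """Monte-Carlo estimate of the probability that a circuit of
    n_gates gates runs without a single gate error, assuming each
    gate fails independently with probability p_error."""
    n_failures = rng.binomial(n_gates, p_error, size=n_shots)
    return np.mean(n_failures == 0)

# Longer compiled circuits are exponentially more error-prone.
for n_gates in (10, 100, 1000):
    est = success_probability(n_gates, p_error=0.001)
    exact = (1 - 0.001) ** n_gates
    print(f"{n_gates:5d} gates: ~{est:.3f} (exact: {exact:.3f})")
```

This already captures why compilation overhead matters: success probability decays exponentially with gate count, so every inserted SWAP gate costs accuracy.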
Through a proper back-and-forth between the numerical results thus obtained and the experimental facts, one can adjust and eventually validate a realistic model of the hardware. This model can then be used to implement the third step, namely optimizing the circuit: for instance, if the respective “quality” of each gate of a given piece of hardware can be characterized precisely, a quantum circuit can be compiled so that the number of high-quality gates is maximized. Similarly, if the dominant error source is decoherence growing with circuit duration, the circuit can be compiled so that its total duration is minimized.
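A minimal sketch of such quality-aware comparison (the gate names and fidelity numbers below are purely illustrative, and independent gate errors are assumed) could be:

```python
import math

# Hypothetical per-gate fidelities, as might be obtained by
# characterizing a given piece of hardware.
GATE_FIDELITY = {"CNOT": 0.99, "SWAP": 0.97, "PHASE": 0.999}

def circuit_fidelity(circuit):
    """Estimated overall fidelity: product of gate fidelities,
    assuming errors on different gates are independent."""
    return math.prod(GATE_FIDELITY[g] for g in circuit)

# Two (hypothetically equivalent) compilations of the same circuit.
compilation_a = ["SWAP", "CNOT", "SWAP", "PHASE"]   # routes via SWAPs
compilation_b = ["CNOT", "CNOT", "PHASE", "PHASE"]  # avoids SWAPs

print(circuit_fidelity(compilation_a))
print(circuit_fidelity(compilation_b))
```

A hardware-aware compiler would pick the second compilation, whose estimated fidelity is higher because it avoids the low-quality SWAP gates.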
Figure 3. Example of a Fourier transform on five qubits (32 points). Top: input signal (squared amplitude). Bottom: output signal (squared amplitude) for an ideal processor (blue) and for a noisy processor with topological and gate set constraints, and quantum decoherence (orange). Quantum noise induces spurious Fourier peaks.
In short, future quantum programmers must not only be able to verify the correctness of quantum algorithms on an ideal, fault-free quantum processor simulator, but also to simulate and optimize the corresponding quantum program for specific, generally faulty hardware. Only through this “debugging” process, together with fundamental and technological advances on the physical devices themselves, can real practical progress be made towards useful quantum technology.