The Biggest Challenge Of Quantum Computing Is Between 0 And 1

In October 2019, a Google study on quantum computing appeared on the cover of Nature. Google claimed that its 53-qubit quantum computer, Sycamore, had achieved quantum supremacy, which drew widespread attention in the academic community. The paper reported that the quantum computer completed a sampling task in 3 minutes and 20 seconds, whereas the supercomputer Summit would need an estimated 10,000 years to do the same. Sycamore is a fully programmable quantum computer that can run general-purpose quantum algorithms, and many industry experts hailed Google’s result as a milestone in quantum computing.

But alongside the praise, doubts soon surfaced.

In Google’s experiment, Sycamore was used to analyze the pattern of quantum states output by random quantum circuits. A random quantum circuit keeps producing bit strings, and because of quantum interference, some bit strings are more likely to appear than others when the experiment is repeated many times. As the numbers of qubits and quantum gates grow, the difficulty of analyzing these bit-string patterns on a classical computer increases exponentially, whereas a quantum computer can perform the task in polynomial time thanks to its inherent parallelism.

Sycamore sampled the random quantum circuit about one million times in roughly 200 seconds, while the supercomputer Summit was estimated to need about 10,000 years for the same task.

Also in October 2019, IBM researchers posted a paper on arXiv titled “Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits,” arguing that secondary storage can be used to expand the range of quantum circuits that classical computers can actually simulate. By this estimate, simulating Sycamore on Summit would take not 10,000 years but only two and a half days.

Although IBM did not run the simulation, its conclusion was also accepted by industry experts.

Scott Aaronson, a theoretical computer scientist and professor at the University of Texas at Austin, said: “IBM claims it can store the quantum state vector on a supercomputer’s hard disks and thereby simulate Sycamore within two and a half days… IBM has not carried out the experiment to prove this, but I see almost no reason to doubt that their analysis is basically correct.”

Greg Kuperberg, a mathematician at the University of California, Davis, pointed out that the bit-string pattern produced by Google’s random quantum circuits is almost indistinguishable from the random qubit flips caused by noise, and he therefore questioned the value of the experiment.

Qubits are inherently unstable, so maintaining an accurate logical qubit requires quantum error correction. Unlike classical error correction, quantum error correction must rely on auxiliary qubits, because the no-cloning theorem forbids copying a qubit and measurement collapses its superposition (the wave function collapses). Qubits also suffer phase errors, which classical bits do not, adding further complexity. Typically thousands of physical qubits are needed to realize a single high-fidelity logical qubit, and that overhead itself introduces a great deal of noise into the system. In other words, the error correction machinery inevitably adds more noise, and Google’s quantum computer Sycamore faces the same difficulty.

In addition, because error correction requires measuring qubits over and over, the process is far more demanding than the quantum supremacy experiment, in which every qubit is measured only once.

Many industry experts say that the main challenge facing quantum computers in the years ahead is still quantum error correction. In other words, we are far from scalable quantum computing, and we should focus on correcting the errors of individual qubits, on the space between 0 and 1, rather than beyond it.

Science contributor Adrian Cho recently published an article that explains the concept of quantum error correction and surveys the current state of research in the field.

1. Overrated quantum supremacy

In October 2019, Google announced that its quantum computer had solved a problem no supercomputer could handle, achieving quantum supremacy. Some said this marked the arrival of the quantum computing era. Greg Kuperberg disagreed; he had hoped Google would pursue less flashy but more important research.

A classical computer works by manipulating long strings of bits, each either 0 or 1. The qubits used in quantum computers, by contrast, can be in a state where 0 and 1 exist at the same time, called a quantum superposition. Researchers have realized such states in ions, photons, and superconducting circuits, but these states are extremely fragile: the slightest interaction with the surrounding environment collapses the superposition into a plain 0 or 1. Scientists must solve this problem, and Kuperberg had hoped Google would take a crucial step toward that goal.

Questions about Google’s quantum supremacy experiment have underscored the importance of quantum error correction. Chad Rigetti, a physicist and co-founder of Rigetti Computing, said: “A quantum computer with 10,000 qubits is either a random noise generator or the most powerful computer in the world; the difference in value between the two is 100 million US dollars.” Everyone agrees on the first step Kuperberg wants to see realized: spread the information encoded in a single qubit across many qubits, so that the information is maintained even in the presence of quantum noise. Scott Aaronson, a computer scientist at the University of Texas at Austin, explains: “It is like building a ship that holds together even as every plank in it rots and must be replaced.”

Error correction also requires using qubits over and over. Google physicist Marissa Giustina said this makes the process far more demanding than the quantum supremacy experiment, which measured every qubit only once. Error correction, she said, requires measuring in cycle after cycle, and it must be done quickly and reliably.

Google, Rigetti, and IBM are all working toward this goal. Hartmut Neven, who leads Google’s Quantum Artificial Intelligence Laboratory, said: “This is obviously the next important milestone.” Jay Gambetta, who leads IBM’s quantum computing effort, likewise said that IBM’s work on quantum error correction will yield visible results over the next few years.

To demonstrate quantum supremacy, Google scientists entangled 53 qubits. To encode data in a single logical qubit with sufficient fidelity, they may need to manipulate on the order of 1,000 physical qubits.

2. Fragile Qubits

Peter Shor, a mathematician now at MIT, proved in 1994 that quantum computers could quickly solve the problem of factoring large numbers. Shor’s algorithm requires only polynomial time, whereas no known classical algorithm can factor in polynomial time. A machine running Shor’s algorithm could crack RSA, the encryption scheme that currently protects much of Internet communication.

However, Shor’s algorithm assumes stable qubits, and real qubits are far from stable. The qubits used by Google, IBM, and Rigetti are made from tiny resonant circuits of superconducting metal etched into microchips. So far, these have proven easier to control and to integrate into circuits than other types of qubits.

By driving the superconducting circuits with microwaves, researchers can put a qubit into any combination of 0 and 1, for example 30% 0 and 70% 1. But such states cannot be maintained for even a second, and noise typically disturbs and changes the state well before that, ruining the calculation.

A bit in an ordinary circuit must be either 0 or 1, whereas a qubit can be any combination of the two. The state of a qubit can therefore be represented by a point on a sphere: the latitude gives the relative amplitudes of 0 and 1, and the longitude gives the phase. Noise can perturb a qubit in two basic ways, by disturbing its amplitude or its phase.
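
As a concrete illustration (a minimal numpy sketch, not anything from Google’s setup), the 30%/70% state mentioned above can be written down directly; the phase does not affect the probabilities of reading 0 or 1, but it does change how the amplitudes interfere under later operations:

```python
import numpy as np

# A single-qubit state |psi> = a|0> + b|1> with |a|^2 = 0.3 and |b|^2 = 0.7.
# The relative phase phi (the "longitude" on the sphere) does not change the
# probabilities of measuring 0 or 1, but it changes how the amplitudes
# interfere under later operations.
phi = np.deg2rad(60)                        # arbitrary phase, for illustration
psi = np.array([np.sqrt(0.3),
                np.sqrt(0.7) * np.exp(1j * phi)])

print(np.abs(psi) ** 2)                     # -> [0.3, 0.7], independent of phi

# A Hadamard gate mixes the amplitudes, so the outcome now depends on phi.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.abs(H @ psi) ** 2)                 # probabilities shift with the phase
```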

Noise very nearly drowned out the signal in Google’s quantum supremacy experiment. The researchers set up 53 qubits, able to encode all 2⁵³ possible outputs. They applied a set of randomly chosen interactions among the qubits and, over many repeated runs, certain outputs turned out to be more likely to appear than others.

The researchers said that, given the complexity of the interactions, a supercomputer would need 10,000 years to calculate the pattern of outputs. But that pattern is almost indistinguishable from the random qubit flips caused by noise. “Their demonstration is 99% noise and only 1% signal,” Kuperberg said.

3. Classical error correction and quantum error correction

The method of spreading one qubit’s information across many physical qubits traces back to the early days of ordinary computers in the 1950s. Early computers were built from vacuum tubes and mechanical relays, which would occasionally flip by accident. To overcome the problem, the famous mathematician John von Neumann pioneered the field of error correction.

Von Neumann’s method relies on redundancy. Suppose the computer keeps three copies of each bit; even if one of the three flips, the majority will still be correct. In a so-called parity check, the computer can find and repair flipped bits by comparing pairs of bits: if the first and third bits match, but the first and second and the second and third differ, then most likely the second bit flipped, and the computer can flip it back. Greater redundancy means greater ability to correct errors. Ironically, the transistors etched into modern microchips are so reliable that error correction is rarely needed.
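
Here is a small Python sketch of that repetition-plus-parity idea (an illustrative toy, not von Neumann’s original scheme): three copies of a bit are compared pairwise, and a single flipped copy is identified and repaired.

```python
def parity_repair(bits):
    """Repair a 3-bit repetition code by comparing pairs of bits.

    bits: three copies of one logical bit, e.g. [0, 1, 0].
    Returns the corrected triple, assuming at most one copy flipped.
    """
    b1, b2, b3 = bits
    if b1 == b2 == b3:
        return bits                  # no error detected
    if b1 == b3:                     # 1 and 3 agree -> bit 2 flipped
        return [b1, b1, b3]
    if b1 == b2:                     # 1 and 2 agree -> bit 3 flipped
        return [b1, b2, b1]
    return [b2, b2, b3]              # 2 and 3 agree -> bit 1 flipped

print(parity_repair([0, 1, 0]))      # -> [0, 0, 0]
```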

But quantum computers will depend on error correction, at least those made of superconducting qubits. (Qubits made from single ions are less affected by noise but harder to integrate.) Unfortunately, quantum mechanics itself makes it impossible to copy a qubit: the no-cloning theorem says the state of one qubit cannot be copied onto another without altering the original. Joschka Roffe, a theorist at the University of Sheffield, said: “This means that classical error correction codes cannot be directly translated into quantum error correction codes.”

To make matters worse, experimenters cannot simply measure a quantum superposition: the moment it is measured, it collapses to either 0 or 1. “The simplest classical error correction is to check what has happened by measuring,” Kuperberg said. “But with a qubit, you must check for errors without observing it, which seems impossible.”

A qubit may flip right under your nose, and yet it cannot be inspected directly.

These obstacles seem insurmountable, but quantum mechanics points to a potential solution. Researchers cannot copy the state of a qubit, but they can use entanglement to spread it across other qubits.

Using microwaves, the original qubit is made to interact with a second qubit, which must start in the 0 state, through a controlled-NOT (CNOT) operation. If the first qubit’s state is 1, the CNOT flips the state of the second qubit; if the first qubit’s state is 0, it leaves the second qubit unchanged. Crucially, this operation does not measure the first qubit, so it does not collapse its state. Instead, it preserves the first qubit’s superposition while both flipping and not flipping the second qubit, leaving the pair of qubits in a joint superposition of 0 and 1.
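
A minimal numpy sketch of this step (illustrative only): applying the CNOT matrix to a superposed qubit and a fresh |0⟩ qubit spreads the amplitudes onto the pair without ever measuring the first qubit.

```python
import numpy as np

# CNOT acting on (a|0> + b|1>) (x) |0>.  Basis order: |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

a, b = np.sqrt(0.3), np.sqrt(0.7)
qubit = np.array([a, b])                    # the original qubit
ancilla = np.array([1, 0])                  # second qubit starts in |0>

state = CNOT @ np.kron(qubit, ancilla)
print(np.round(state, 3))                   # -> a on |00>, b on |11>: the pair is
                                            #    entangled, nothing was measured
```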

In a conventional computer, a bit is a switch set to either 0 or 1. To protect a bit, the computer can copy it; if noise later flips one of the copies, the machine finds the error with parity measurements, comparing pairs of bits to see whether they are the same or different.

For example, if the original qubit is in the state 30% 0 and 70% 1, researchers can entangle it with two other qubits to form a three-qubit entangled state in which each qubit, measured on its own, would read 0 with 30% probability and 1 with 70% probability. This state is not the same as three copies of the original qubit. Instead, the three qubits are perfectly correlated: if the first qubit is measured and collapses to 1, the other two must instantly collapse to 1 as well; if the first collapses to 0, the others must collapse to 0. This correlation is the essence of entanglement, and the operation amounts to using three physical qubits to represent one logical qubit.
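
A minimal numpy sketch of this encoding (illustrative, not Google’s circuit): two CNOT gates spread a single qubit’s amplitudes over three qubits, producing a state with weight only on |000⟩ and |111⟩ rather than three independent copies.

```python
import numpy as np

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

a, b = np.sqrt(0.3), np.sqrt(0.7)

# Start with (a|0> + b|1>) on the first qubit and |0> on the other two.
state = np.kron(np.array([a, b]), np.kron([1, 0], [1, 0]))

# Two CNOTs spread the amplitudes: qubit 1 controls qubit 2, then qubit 2 controls qubit 3.
state = np.kron(CNOT, I2) @ state
state = np.kron(I2, CNOT) @ state

print(np.round(state, 3))   # amplitude a on |000>, b on |111>, zero elsewhere:
                            # one logical qubit spread over three physical qubits,
                            # not three independent copies
```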

In a larger entangled state, additional “auxiliary” qubits are entangled with the original three: one with the first and second encoded qubits, another with the second and third. By measuring these auxiliary qubits, the researchers then realize a quantum-mechanical version of the parity check. Noise can flip any one of the three encoded qubits without destroying the entanglement, swapping its 0 and 1 components and changing the underlying correlations among the three. “Stabilizer” measurements on the auxiliary qubits let researchers probe those correlations.

Although measuring an auxiliary qubit collapses its own state, it leaves the encoded qubits undisturbed. “It is a specially designed parity measurement that does not collapse the information encoded in the logical state,” Roffe said. For example, if the first auxiliary qubit reads 0, that only shows that the first and second encoded qubits are in the same state, without revealing which state that is; if it reads 1, the first and second encoded qubits must be in opposite states. Researchers can then use microwaves to flip the errant qubit back to its original state, restoring the encoded state before further errors accumulate.
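
To make the parity logic concrete, here is a toy Python sketch that tracks only where bit flips occur (a classical shorthand for the quantum circuit, not a full simulation): the two parity checks pin down which single qubit flipped while saying nothing about the encoded amplitudes.

```python
# Syndrome decoding for the three-qubit bit-flip code, tracking errors classically.
# Auxiliary qubit 1 reports the parity of encoded qubits 1 and 2; auxiliary qubit 2
# reports qubits 2 and 3.  The pair of parity bits identifies which single qubit
# flipped, without ever revealing the encoded amplitudes themselves.

def syndrome(flips):
    """flips[i] = 1 if physical qubit i suffered a bit flip, else 0."""
    return (flips[0] ^ flips[1], flips[1] ^ flips[2])

CORRECTION = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # qubits 1,2 disagree, 2,3 agree -> qubit 1 flipped
    (1, 1): 1,      # both pairs disagree            -> qubit 2 flipped
    (0, 1): 2,      # qubits 2,3 disagree, 1,2 agree -> qubit 3 flipped
}

for error in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(error, "-> flip qubit", CORRECTION[syndrome(error)])
```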

If noise flips one of the encoded qubits, researchers can detect the change without measuring the encoded state itself: they entangle pairs of the main qubits with auxiliary qubits that can be measured. If the correlation within a pair is intact, the auxiliary qubit reads 0; if the correlation has flipped, it reads 1. A microwave pulse can then flip the errant qubit back and restore the original entangled state.

This is only the basic idea of quantum error correction; the state of a qubit is more complicated than a mix of 0 and 1. A qubit’s state also depends on its phase, which ranges from 0° to 360° and is the key to the wave-like interference effects that give quantum computers their power. In quantum-mechanical terms, any error in a qubit’s state can be regarded as a combination of a bit-flip error, which swaps 0 and 1, and a phase flip, which shifts the phase by 180°.
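
That claim can be checked directly: any single-qubit error operator is a linear combination of the identity, a bit flip X, a phase flip Z, and their product. A short numpy sketch (illustrative only):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])      # bit flip: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])     # phase flip: shifts the phase by 180 degrees

# Any single-qubit error operator E can be written as
#   E = c_I*I + c_X*X + c_Z*Z + c_XZ*(X @ Z),
# so correcting bit flips and phase flips handles arbitrary errors.
rng = np.random.default_rng(0)
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # arbitrary "error"

basis = {"I": I, "X": X, "Z": Z, "XZ": X @ Z}
coeffs = {name: np.trace(P.conj().T @ E) / 2 for name, P in basis.items()}

reconstructed = sum(c * basis[name] for name, c in coeffs.items())
print(np.allclose(reconstructed, E))   # -> True
```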

To correct both types of error, researchers can, quite literally, expand into another dimension. Three entangled data qubits and two auxiliary qubits form the smallest array that can detect and correct bit-flip errors. The simplest grid, three by three data qubits plus eight auxiliary qubits, can detect and correct both bit-flip and phase-flip errors; the logical qubit then lives in the entangled state of the nine data qubits. Stabilizer measurements along one dimension of the grid check for bit-flip errors, and stabilizer measurements along the other dimension check for phase-flip errors.

Experimental schemes for such two-dimensional arrays differ in the geometric arrangement of the qubits and in the details of the stabilizer measurements. Nevertheless, the rule of error correction is the same: encode a single logical qubit in a grid of physical qubits, and the fidelity of the logical qubit improves as the grid grows.

4. Long way ahead

Researchers have already begun exploring this experimentally. In a Nature Physics study published on June 8, Andreas Wallraff of ETH Zurich and his colleagues showed that they could detect, though not yet correct, errors in a logical qubit built from four data qubits and three auxiliary qubits.

But daunting challenges remain. IBM physicist Maika Takita notes that manipulating each qubit introduces errors, and unless the error rate falls below a certain level, entangling more qubits with the original only adds more noise to the system. “To demonstrate anything, the error rate must be below a certain threshold,” she said. The auxiliary qubits and other error correction machinery add noise of their own, and once those effects are included the effective threshold drops further. For the scheme to work, researchers must push error rates below roughly 1%.

In principle, as long as the physical qubits meet certain threshold conditions, the logical error rate can be suppressed to an arbitrarily low level. But there is a trade-off: quantum error correction protocols need a large number of qubits to operate effectively, which greatly increases the computational overhead.

A small number of qubits is enough to demonstrate the principle of quantum error correction, but in practice researchers must control very large numbers of them. Running Shor’s algorithm well requires extremely low error rates; for example, a 1,000-logical-qubit machine may need to keep the error rate of each logical qubit below one in a billion, and each logical qubit may in turn require a grid of roughly 1,000 entangled physical qubits to protect it. Reaching that point will take several generations of larger and better quantum computing chips.
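
As a rough illustration of where such numbers come from, here is a back-of-the-envelope Python sketch. It assumes the commonly quoted surface-code heuristic that the logical error rate falls roughly as (p/p_threshold) raised to about (d+1)/2 for code distance d; the physical error rate and threshold below are illustrative assumptions, not Google’s or IBM’s measured figures.

```python
# Back-of-the-envelope: how large a surface-code patch would be needed to reach
# a target logical error rate?  Uses the commonly quoted heuristic
#     p_logical ~ (p_physical / p_threshold) ** ((d + 1) // 2)
# for code distance d.  The numbers below are illustrative assumptions only.
p_physical = 1e-3    # assumed error rate per physical operation
p_threshold = 1e-2   # assumed surface-code threshold
target = 1e-9        # one-in-a-billion logical error rate, as quoted above

d = 3
while (p_physical / p_threshold) ** ((d + 1) // 2) > target:
    d += 2           # surface-code distances are odd

print("required code distance:", d)             # -> 17 with these assumptions
print("data qubits per logical qubit (~d*d):", d * d)
# With physical error rates closer to the threshold, the count quickly grows
# toward the ~1,000 physical qubits per logical qubit quoted above.
```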

Ironically, to get there, developers will have to revisit work from 20 years ago, when researchers first made pairs of physical qubits interact to perform the logic operations, or “gates,” needed for computation. Once researchers master error correction, they will have to reproduce every current development in quantum computing using the far more complex logical qubits. Giustina quipped: “People think error correction is the next step in quantum computing; in fact, it is the next 25 steps.”

That re-running will not be easy, and not only because any logic gate acting on two logical qubits currently implies thousands of physical qubits: another theorem of quantum mechanics says that not every logic gate can be translated straightforwardly from single physical qubits to the spread-out logical qubits.

Some researchers believe the problem can be sidestepped by preparing certain qubits in special states known as magic states, which in effect do half the work of the problematic gates in advance.

Unfortunately, producing these magic states may itself consume many more qubits. As Roffe put it: “If you want to implement an algorithm like Shor’s, roughly 90% of the qubits would have to be devoted to preparing magic states.”

As a result, a mature quantum computer with 1,000 logical qubits could ultimately contain many millions of physical qubits.

Google reportedly plans to build such a machine within 10 years, which at first sounds far-fetched. Superconducting qubits must be cooled to near absolute zero inside a device called a cryostat, which is large enough to fill a small room, and a million-qubit machine might seem to require a thousand cryostats packed into a large factory. Google researchers say they can keep the device compact, and Neven says they believe they have solved this problem.

Part of the reason Google’s scheme needs about 1,000 physical qubits per logical qubit is that its chips only allow neighboring qubits to interact. If more distant qubits could also interact, the required number of physical qubits might be much smaller, and IBM researchers are therefore working on schemes that connect qubits over longer distances.

Developing quantum codes is not easy. The no-cloning theorem, wave-function collapse, and the need to handle multiple types of error all complicate the problem. The stabilizer formalism provides a framework for constructing quantum error correction codes: quantum redundancy is achieved by entangling the quantum information of the initial register across an expanded space of qubits. Errors can then be detected by performing a sequence of projective stabilizer measurements and interpreting the results to determine the recovery operation that best restores the quantum information to its intended state.

The surface code is the quantum error correction scheme most widely used in current experiments, because it has a high threshold and requires only nearest-neighbor interactions. But it also has drawbacks, the most obvious being its encoding density. The distance of a surface code can be increased simply by scaling up the qubit lattice, but this leads to a vanishing code rate, where the code rate is defined as the ratio of encoded (logical) qubits to physical qubits. Another disadvantage is that obtaining a universal set of encoded gates with the surface code requires resource-intensive methods.
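
To see why the code rate vanishes, consider the common rotated-surface-code convention, in which a distance-d patch uses about d² data qubits to encode a single logical qubit (a simple sketch under that assumption):

```python
# Code rate k/n of a distance-d surface-code patch, assuming the rotated-layout
# convention of one logical qubit (k = 1) encoded in d*d data qubits.
for d in (3, 5, 7, 11, 21):
    n = d * d
    print(f"d = {d:2d}   physical qubits n = {n:3d}   code rate k/n = {1/n:.4f}")
```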

Alternatives have been proposed based on different tilings of the qubit lattice and on extending the surface code to higher dimensions. These constructions usually have lower thresholds but offer other advantages, such as (potentially) easier access to a universal set of encoded gates. Drawing on the principles behind high-performance classical codes, researchers are also working on code constructions whose code rate does not vanish; such codes, however, usually require arbitrary long-range interactions between the code qubits.

An intriguing development came in 2014, when physicists found evidence of a deep connection between quantum error correction and the nature of space, time, and gravity. In Einstein’s general theory of relativity, gravity is the curvature of spacetime around massive objects. John Preskill, a theoretical physicist at the California Institute of Technology, says quantum error correction explains how spacetime achieves its “internal robustness” despite being woven from fragile quantum matter. This connection may also point the way toward scalable quantum computing.

No one wants to predict how long it will take to master error correction, but it is time to take the problem seriously. So far, nearly everyone who calls themselves an error correction researcher is a theorist; quantum error correction must become an experimental field, with real experimental data feeding back into the theory. In quantum computing, error correction is clearly the next frontier.
