# Why is it harder to build quantum computers than classical computers?


Is it because we don't know exactly how to build quantum computers (and how they would have to work), or because we know how to build them in theory but lack the tools to actually do so in practice? Is it a mix of the two? Are there other reasons?

Why is it harder to build a GPU than to build a CPU? Same difference. A Quantum computer is not a stand-alone computer. It's a co-processor to a host computer, just like what your GPU is inside your current PC. The two videos starting at youtu.be/PN7mPYcWFKg are very insightful for beginners like us.
Mark Jeronimus

@MarkJeronimus it's not the same difference. A GPU is basically a whole lot of very simple CPUs running in parallel. It does have tight restrictions on how memory access can be performed, etc., but that just makes it more difficult to program, not to build.

Classical computers don't break if you look at them.
Mark

@leftaroundabout It's not the same difference now, but I'd argue it was with the very first 3D accelerators (and to some extent, even 3D software rendering). A huge part of the problem is simply exploring new technology, having to build up all new tools and approaches. Once someone found a good way of making 3D accelerators, it became a lot more "mundane" (though do keep in mind that most makers of 3D accelerators are now out of business). Granted, the quantum computer is an even bigger challenge (requiring a lot more entirely new tools and approaches), but it's not fundamentally different.
Luaan

1
The two are so different they can't be compared. It's harder to build because it's a heck of a lot newer and a heck of a lot more complicated. Both of them being called 'computer' doesn't mean they're comparable in nature.
Mast

Answers:


We know exactly, in theory, how to construct a quantum computer. But doing so is intrinsically more difficult than constructing a classical computer.

In a classical computer, you do not have to use a single particle to encode bits. Instead, you might say that anything less than a billion electrons is a 0 and anything more than that is a 1, and aim for, say, two billion electrons to encode a 1 normally. That makes you inherently fault-tolerant: even if there are hundreds of millions of electrons more or less than expected, you will still get the correct classification as a digital 0 or a 1.
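The threshold idea above can be sketched in a few lines. The numbers here are illustrative stand-ins, not real device parameters:

```python
# Illustrative sketch: classical bits tolerate huge charge fluctuations.
# Encode a 1 as ~2 billion electrons and classify with a 1-billion threshold.
import random

THRESHOLD = 1_000_000_000    # below -> read as 0, above -> read as 1
NOMINAL_ONE = 2_000_000_000  # nominal electron count for a 1

def classify(electron_count):
    return 1 if electron_count >= THRESHOLD else 0

# Even with noise of a few hundred million electrons per read,
# every read still classifies correctly as a 1.
random.seed(0)
noisy_reads = [NOMINAL_ONE + random.randint(-300_000_000, 300_000_000)
               for _ in range(1000)]
assert all(classify(n) == 1 for n in noisy_reads)
```

The margin between the two encodings is what absorbs the noise; a single-particle encoding has no such margin.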

In a quantum computer, this trick is not possible due to the no-cloning theorem: you cannot trivially employ more than one particle to encode a qubit (quantum bit). Instead, you must make all your gates operate so well that they are not just accurate at the single-particle level but even to a tiny fraction of how much they act on a single particle (the so-called quantum error correction threshold). This is much more challenging than getting gates accurate merely to within hundreds of millions of electrons.

Meanwhile, we do have the tools to, just barely, make quantum computers with the required level of accuracy. But nobody has, as of yet, managed to make a big one: one that can accurately operate on the perhaps hundreds of thousands of physical qubits needed to implement a hundred or so logical qubits, which would put it undeniably in the realm where a quantum computer beats classical computers at select problems (quantum supremacy).

Well... there is D-Wave. The 2000Q system has 2000 qubits and is definitely outperforming classical systems on algorithms with efficient quantum implementations. They've been growing capability pretty rapidly - I'd expect a next-gen 4000 qubit system from them within 12 months.
J...

Are replicated circuits still cloning? What stops you from having parallel circuits with copied inputs? Can't you use voting to increase the robustness of such systems?
whn

@snb It doesn't scale. The problem is that as you go "deeper" with the gates, you need more and more replicated circuits to get the same accuracy. But do keep in mind that calculations on quantum computers nowadays are usually run many times over anyway. Overall, there's a reason why we're so interested in problems that are hard to solve but easy to verify - you can use a quantum computer to give the problem a try, and verify the result with a classical computer. Keep repeating until they agree :)
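The try-then-verify loop described in this comment can be sketched as follows. The "quantum" step is mocked by a random guess here; the structure of the loop, not the subroutine, is the point:

```python
# Sketch of the "try on a quantum computer, verify classically" loop,
# using factoring as the hard-to-solve, easy-to-verify problem.
import random

def quantum_try(n):
    # Stand-in for an unreliable quantum subroutine (e.g. Shor's
    # algorithm): returns a candidate non-trivial factor, possibly wrong.
    return random.randint(2, n - 1)

def verify(n, candidate):
    # Classical verification is cheap: a single division.
    return n % candidate == 0

def factor_with_retries(n, max_tries=10_000):
    for _ in range(max_tries):
        c = quantum_try(n)
        if verify(n, c):
            return c
    return None

random.seed(1)
print(factor_with_retries(15))  # a non-trivial factor of 15 (3 or 5)
```

As long as the quantum step succeeds with reasonable probability and verification is cheap, a few repetitions suffice even without per-run guarantees.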
Luaan


There are many reasons, both in theory and in implementation, that make quantum computers much harder to build.

The simplest might be this: while it is easy to build machines that exhibit classical behaviour, demonstrations of quantum behaviour require really cold and really precisely controlled machines. The thermodynamic conditions of the quantum regime are just hard to access. When we finally do achieve a quantum system, it's hard to keep it isolated from the environment which seeks to decohere it and make it classical again.

Scalability is a big issue. The bigger our computer, the harder it is to keep it quantum. The phenomena that promise to make quantum computers really powerful, like entanglement, require that the qubits interact with each other in a controlled way. Architectures that allow this control are hard to engineer, and hard to scale. Nobody's agreed on a design!

As @pyramids points out, the strategies we use to correct errors in classical machines usually involve cloning information, which is forbidden by quantum information theory. While we have some strategies to mitigate errors in clever quantum ways, they require that our qubits are already pretty noise-free and that we have lots of them. If we can't improve our engineering past some threshold, we can't employ these strategies - they make things worse!

Also notable: the reason we use digital systems is that small variations in inputs and outputs of individual elements usually don't propagate, so you can keep adding more "layers" of computation without significantly decreasing the reliability. This kind of isolation seems to be impossible for quantum computers, at least for now - and no-cloning simply adds more salt to the wound :)
Luaan


Simpler answer: All quantum computers are classical computers too, if you limit their gate set to only classical gates such as $X$, which is the NOT gate. Every time you build a quantum computer, you're also building a classical computer, so you can prove mathematically that building a quantum computer must be at least as hard as building a classical computer.


One important point is that quantum computers contain classical computers. So it must be at least as hard to build a quantum computer as it is a classical computer.

For a concrete illustration, it's worth thinking about universal gate sets. In classical computation, you can create any circuit you want via the combination of just a single type of gate. Often people talk about the NAND gate, but for the sake of this argument, it's easier to talk about the Toffoli gate (also known as the controlled-controlled-not gate). Every classical (reversible) circuit can be written in terms of a whole bunch of Toffolis. An arbitrary quantum computation can be written as a combination of two different types of gate: the Toffoli and the Hadamard.
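The two gates named above can be written down explicitly. A small sketch with numpy, where the basis-state ordering (|000⟩ … |111⟩) is the usual big-endian convention:

```python
# Sketch: the Toffoli (controlled-controlled-NOT) and Hadamard gates
# as explicit matrices. Toffoli alone is universal for reversible
# classical circuits; adding the Hadamard makes the set universal
# for quantum computation.
import numpy as np

# Toffoli: flips the target (third) qubit iff both controls are 1,
# i.e. it permutes the basis states |110> <-> |111>.
TOFFOLI = np.eye(8)
TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]

# Hadamard: maps |0> and |1> to equal superpositions - the step that
# leaves classical (permutation-matrix) logic behind.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Both are unitary, as every quantum gate must be.
assert np.allclose(TOFFOLI @ TOFFOLI.T, np.eye(8))
assert np.allclose(H @ H.conj().T, np.eye(2))

# Toffoli acts classically: basis state in, basis state out.
state_110 = np.zeros(8); state_110[6] = 1               # |110>
assert np.allclose(TOFFOLI @ state_110, np.eye(8)[7])   # -> |111>
```

Note the structural difference: the Toffoli is a permutation matrix (it just shuffles basis states), while the Hadamard has off-diagonal weights that create superpositions.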

This has immediate consequences. Obviously, if you're asking for two different things, one of which does not exist in classical physics, that must be harder than just making the one thing that does exist in classical physics. Moreover, making use of the Hadamard means that the sets of possible states you have to consider are no longer orthogonal, so you cannot simply look at the state and determine how to proceed. This is particularly relevant to the Toffoli, because it becomes harder to implement as a result: before, you could safely measure the different inputs and, dependent upon their values, do something to the output. But if the inputs are not orthogonal (or even if they are, but in an unknown basis!), you cannot risk measuring them, because you will destroy the states; specifically, you destroy the superpositions that are the whole thing making quantum computation different from classical computation.

“Because quantum computers contain classical computers” is a questionable argument. It's a bit like saying that, due to Turing completeness, it's at least as difficult to build a Zuse-style mechanical calculator as it is to build a modern high-performance cluster. That's clearly not true.

@leftaroundabout that's not what I'm saying at all. There you're comparing two different implementations of computers that implement P-complete problems. I'm comparing the generic thing that implements BQP-complete computations to the generic thing that implements P-complete computations. Even if you find the absolute best architecture for implementing quantum computation, that provides a way of implementing classical, which must be the same or worse than the best way. What I'm really saying is that P is contained within BQP, but we believe that there's much more in BQP.
DaftWullie


In 1996, David DiVincenzo listed five key criteria to build a quantum computer:

1. A quantum computer must be scalable,
2. It must be possible to initialise the qubits,
3. Good qubits are needed, the quantum state cannot be lost,
4. We need to have a universal set of quantum gates,
5. We need to be able to measure all qubits.

Two additional criteria apply to quantum communication:

6. The ability to interconvert stationary and flying qubits,
7. The ability to transmit flying qubits between distant locations.

Long Explanation


I have to disagree with the idea that the no-cloning theorem makes error correction with repetition codes difficult. Given that your inputs are provided in the computational basis (i.e. your inputs are not arbitrary superpositions, which is almost always the case, especially when you're solving a classical problem, e.g. Shor's algorithm), you can clone them with controlled-NOT gates, run your computation in parallel on all the copies, and then correct errors. The only trick is to make sure you don't do a measurement during error correction (except possibly of the syndrome), and to do this all you have to do is continue to use quantum gates.
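The claim that CNOT copies computational-basis states (but not superpositions) is easy to check numerically. A small sketch:

```python
# Sketch: a CNOT copies *computational-basis* states (|x>|0> -> |x>|x>)
# without violating no-cloning, which only forbids copying unknown
# superpositions.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

zero = np.array([1, 0])
one  = np.array([0, 1])

# Basis states are copied exactly.
for x in (zero, one):
    copied = CNOT @ np.kron(x, zero)        # |x>|0> -> |x>|x>
    assert np.allclose(copied, np.kron(x, x))

# But a superposition input yields an *entangled* state,
# not two independent copies:
plus = (zero + one) / np.sqrt(2)
entangled = CNOT @ np.kron(plus, zero)      # (|00> + |11>)/sqrt(2)
assert not np.allclose(entangled, np.kron(plus, plus))
```

This is exactly the distinction the answer relies on: for classical-basis inputs, repetition-style redundancy is available; for unknown superpositions, it is not.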

Error correction for quantum computers is not much more difficult than for classical computers. Linearity takes care of most of the perceived difficulties.

I'd also like to mention that there are much more efficient schemes for quantum error correction than repetition codes. And note that you need two Pauli matrices to generate the rest, so you need two types of repetition codes if you're going to go the inefficient but conceptually simple repetition-code route (one for bit flips and one for phase flips).

Quantum error correction shows that linear increase in the number of physical qubits per logical qubit improves the error rate exponentially, just as it does in the classical case.

Still, we're nowhere near 100 physical qubits. This is the real problem. We need to be able to glue a lot more semi-accurate qubits together before any of this starts to matter.

I think you are forgetting that, for any sizable computation, it is insufficient to just do error correction by repeating the calculation as you suggest: the fidelity after $N$ gates scales as $F^N$ if $F$ is the single-gate fidelity. This becomes exponentially small if you only use this scheme. But during the computation, in general, you cannot use the repetition code you suggest.
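The exponential decay this comment invokes is worth seeing numerically. The fidelity value below is illustrative, not taken from any particular device:

```python
# Numeric illustration of the F^N scaling: even very good gates give
# an exponentially vanishing circuit fidelity without error correction.
F = 0.999  # assumed single-gate fidelity of 99.9%

for N in (100, 10_000, 100_000):
    print(f"N = {N:>7} gates: circuit fidelity ~ {F**N:.3e}")
```

At a hundred gates the circuit still works about nine times in ten; at a hundred thousand gates the success probability is astronomically small, which is why active error correction, not mere repetition, is required.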
pyramids

Can't you replace every gate $G$ with the composite decode-$G$-encode, for at worst a constant increase in circuit depth, even if you can't compile this expression down in your gate set?
Reid Hayes


# Ultimate Black Box

A quantum computer is by definition the ultimate black box. You feed in an input, a process runs, and you get an output.

Any attempt to open up the black box will result in the process not happening.

Any engineer would tell you that this hinders any design process. Even the smallest design flaw would take months of trial and error to trace down.
