Quantum computing
Everyone seems to be talking about quantum computing, so this semester I might try to learn a little about it, which means you might be subjected to it too. (Mathematics overcompensates those who follow trends, so why not try to benefit.)
Let me try to describe the basic mathematical ideas of how quantum computing works, why it can be good, and why it can be difficult. I'm going to focus purely on the mathematics - I won't touch on any physics or engineering aspects, which add their own layers of complexity. But there's a whole world of theoretical quantum computing out there, and it connects with braids, TQFT, and lots of other parts of mathematics!
Basic idea: In classical computers, we might study a bit: it takes the value 0 or 1. In quantum computers, we consider qubits (quantum bits). These can kind of be 0 and 1 at the same time. More precisely, we think of a qubit as $z \mathbf{0} + w \mathbf{1}$ where $z, w \in \mathbb{C}$ and $|z|^2 + |w|^2 = 1$. If both $z, w \neq 0$, then the qubit is called a superposition of the two states 0 and 1. The way I think of this is that we don't know the value of the qubit currently, but it is 0 with probability $|z|^2$ and 1 with probability $|w|^2$. If we measure the qubit, it changes to purely a 0 or 1 and will no longer be in superposition.
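To make this concrete, here's a minimal sketch in Python with NumPy (the helper names `make_qubit` and `measure` are mine, not from any quantum library) that stores the amplitudes $(z, w)$ as a vector and implements the measurement rule just described:

```python
import numpy as np

# A qubit z*0 + w*1 is stored as the unit vector (z, w) in C^2.
def make_qubit(z, w):
    v = np.array([z, w], dtype=complex)
    assert np.isclose(np.linalg.norm(v), 1.0), "need |z|^2 + |w|^2 = 1"
    return v

def measure(qubit):
    """Return 0 with probability |z|^2 and 1 with probability |w|^2,
    collapsing the qubit to the observed basis vector."""
    probs = np.abs(qubit) ** 2
    outcome = np.random.choice([0, 1], p=probs)
    qubit[:] = 0.0
    qubit[outcome] = 1.0  # no longer in superposition
    return outcome

q = make_qubit(1 / np.sqrt(2), 1 / np.sqrt(2))  # 0 and 1 each with probability 1/2
print(measure(q))  # 0 or 1, each about half the time
print(q)           # now exactly e_0 or e_1
```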
In other words, a single qubit lives in $\mathbb{C}^2$, with standard basis vectors $e_0 = \mathbf{0}$ and $e_1 = \mathbf{1}$. Operations on the qubit are then unitary transformations of $\mathbb{C}^2$. For example, applying $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ switches $z$ and $w$. (Already, this seems a lot more flexible than what we can do with purely binary strings.)
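Here is that matrix (usually called the Pauli $X$, or NOT, gate) acting on a qubit in code. I've also included the Hadamard gate $H$, a standard gate that sends $e_0$ to an equal superposition; we'll want it for the Bell state later:

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]], dtype=complex)                # swaps z and w
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard: makes superpositions

q = np.array([1, 0], dtype=complex)  # the state e_0
print(X @ q)  # [0, 1]: the state e_1
print(H @ q)  # [0.707, 0.707]: an equal superposition

# Unitarity check: conjugate transpose times the matrix is the identity.
print(np.allclose(X.conj().T @ X, np.eye(2)))  # True
```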
Two qubits can be described using tensor products. We think of the possible states for two qubits as being a linear combination of $e_0 \otimes e_0$, $e_0 \otimes e_1$, $e_1 \otimes e_0$, and $e_1 \otimes e_1$, since the two qubits could take on any of $00$, $01$, $10$, or $11$. Well, really it's taking on all of the possible values at once with some associated probabilities. So, a pair of qubits gives us a vector in $\mathbb{C}^2 \otimes \mathbb{C}^2$, and more generally, we can study $n$ qubits through vectors in $\mathbb{C}^{2^n} = (\mathbb{C}^2)^{\otimes n}$.
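If you want to experiment, the tensor product of state vectors corresponds to the Kronecker product of their coefficient vectors, which NumPy provides as `np.kron`. A short sketch:

```python
import numpy as np
from functools import reduce

e0 = np.array([1, 0], dtype=complex)
e1 = np.array([0, 1], dtype=complex)

# The tensor product of vectors is the Kronecker product.
state_10 = np.kron(e1, e0)   # the basis state e_1 (x) e_0, i.e. "10"
print(state_10)              # [0, 0, 1, 0]; slots are ordered 00, 01, 10, 11

# n qubits give a vector of length 2^n: three qubits already live in C^8.
ground = reduce(np.kron, [e0, e0, e0])
print(ground.shape)          # (8,)
```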
Advantages: I'll just point out two interesting ones.
1) (Space) Suppose we have three qubits. (For comparison, IBM's latest quantum computer has 433...) Our three qubits could know about $000, 001, \ldots, 111$ all at the same time if the associated probabilities are all non-zero. That means three qubits carry eight amplitudes, one for each binary string, all at once, while three classical bits only tell us three pieces of information! So, quantum computers might be good for tasks that require handling a lot of data at once.
2) (Entanglement) Let's say we run a program that takes in two qubits and outputs $\frac{1}{\sqrt{2}} e_0 \otimes e_0 + \frac{1}{\sqrt{2}} e_1 \otimes e_1$. (Note that this is not a simple tensor! It is called the Bell state.) This might happen, for instance, by applying a single unitary transformation to the "ground" state $e_0 \otimes e_0$. Maybe one of the qubits is at NC State and the other is at Duke. Because of the form of the Bell state, if we measure the qubit at NCSU and get 1, we know the qubit at Duke would also give 1 if we measured it. But that means we don't even need to measure the one at Duke - it is entangled with the qubit at NCSU. In general, if we have entangled qubits, a single measurement can tell us a lot about the whole system, cutting down the number of measurements we need. A couple of years ago, a research group created 27 qubits that were entangled like the Bell state.
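One standard choice of such a single unitary transformation (the usual textbook circuit, a Hadamard gate on the first qubit followed by a controlled-NOT; I'm sketching the general construction, not the specific experiment mentioned above) looks like this in the NumPy sketch:

```python
import numpy as np

e0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],     # flips the second qubit
                 [0, 1, 0, 0],     # exactly when the first is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

U = CNOT @ np.kron(H, I2)          # one unitary on C^2 (x) C^2
bell = U @ np.kron(e0, e0)         # applied to the ground state
print(bell)                        # [0.707, 0, 0, 0.707]

# Measuring both qubits: 00 and 11 each occur with probability 1/2,
# while 01 and 10 never occur, so the two outcomes always agree.
probs = np.abs(bell) ** 2
print(np.random.choice(["00", "01", "10", "11"], p=probs))
```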
Disadvantages: One of the biggest difficulties of quantum computing is the "No-Cloning Theorem". It basically says that if you have some quantum data, you can't make a copy of it without changing the data you started with. This is actually expressed purely in terms of linear algebra: there is no unitary transformation $\mathbb{C}^{2^n} \otimes \mathbb{C}^{2^n} \to \mathbb{C}^{2^n} \otimes \mathbb{C}^{2^n}$ that sends $x \otimes e_0^{\otimes n}$ to $x \otimes x$ for every unit vector $x \in \mathbb{C}^{2^n}$. In other words, you can't start with your data ($x$), take a blank disk ($e_0^{\otimes n}$), and record your data to the blank disk, without changing your starting data. This can make implementing algorithms much more difficult.
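In fact, the linear algebra proof is short enough to sketch. Unitary transformations preserve inner products, so if a single unitary $U$ cloned two states $x$ and $y$, meaning $U(x \otimes e_0^{\otimes n}) = x \otimes x$ and $U(y \otimes e_0^{\otimes n}) = y \otimes y$, then comparing inner products before and after applying $U$ gives $$\langle x, y \rangle = \langle x \otimes e_0^{\otimes n}, y \otimes e_0^{\otimes n} \rangle = \langle x \otimes x, y \otimes y \rangle = \langle x, y \rangle^2.$$ So $\langle x, y \rangle$ must be $0$ or $1$: a cloning map could only handle states that are identical or orthogonal, never all states at once.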
Here's an example of the kind of trouble this causes. Suppose you have a (classical) computer program and it spits out 0 or 1. You want to send the output to your collaborator, but there's some chance that in the sending process, there's a small error and the bit might flip value. Error-correcting codes are a long-studied means of altering the message we send so that even if there's a bit of a mishap, we can recover the message. For example, we could just send $000$ if we want to communicate 0 or $111$ if we want to communicate 1. If someone receives $010$, then they might reasonably assume we meant to send $0$ and one bit got flipped. In other words, we make several copies of the output and send all of them, assuming most of them will arrive intact. The more copies we send, the more confident we are in deducing what the intended output was. The no-cloning theorem tells us we can't do this! So, to do error correcting in the quantum world, new ideas are needed. (It's also worth pointing out that physical and engineering aspects of quantum computing are more likely to distort the data than in the classical setting.)
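Here's the three-copies scheme as a quick classical sketch (the function names are mine):

```python
import random
from collections import Counter

def encode(bit):
    """Repetition code: send three copies of the bit."""
    return [bit] * 3

def noisy_channel(bits, flip_prob=0.1):
    """Each bit independently flips with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote: trust whatever value most copies agree on."""
    return Counter(bits).most_common(1)[0][0]

received = noisy_channel(encode(0))
print(received, "->", decode(received))  # e.g. [0, 1, 0] -> 0
```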
To be honest, I still don't really understand what is going to come out of quantum computing, but I'm hoping to learn a little more about its various aspects this term. Hopefully my interest lasts more than a week and I can write additional posts on quantum computing in the future.