
Deconstructing Hadamard Gates

Hadamard gates provide a nice introduction to the use of superposition in quantum computing. Here, I’ll try to show that figuring out how they work isn’t all that hard. Though we will touch on many different subfields of mathematics, even a fairly basic understanding of each should hopefully be enough.


Hadamard gate diagram

In this diagram, the square with an H inside represents a Hadamard gate. The little lines sticking out of each side of it can be thought of as wires in our circuit. We typically think of values flowing along these wires from left to right, matching the flow of English text. Except, in quantum circuits, all gates are reversible, so in theory we can compute from either end. But let’s just imagine we want the sequence that goes left to right.

Why are all quantum logic gates reversible? In fact, there’s a slightly stronger condition that applies to them: they’re unitary.

A Hilbert space is a multidimensional vector space (think of those arrow diagrams you had to use in physics class to add up vectors, but with as many or as few dimensions as you like) that comes with an operation called the inner product. You can think of inner product as a function from any two vectors in the Hilbert space to some scalar. The type of this function might look like innerProduct : Vector -> Vector -> Scalar, that is, it takes a vector and another vector and gives us a scalar.
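To make that type a little more concrete, here’s a minimal sketch in Python with numpy (the function name and the sample vectors are my own, purely for illustration):

```python
import numpy as np

# A stand-in for the innerProduct : Vector -> Vector -> Scalar idea.
def inner_product(a, b):
    """Takes two vectors and gives back a scalar (np.vdot conjugates its first argument)."""
    return np.vdot(a, b)

a = np.array([1, 0], dtype=complex)
b = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])

print(inner_product(a, b))  # a single scalar: (0.707...+0j)
```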

The thing that’s special about unitary operators (like Hadamard gates) acting on Hilbert spaces is that they preserve this inner product. If we denote the inner product of \(a\) and \(b\) in some Hilbert space \(H\) as \(\langle a, b \rangle_H\), then applying a unitary operator (call it \(\circledast\)) to both \(a\) and \(b\) obeys the equation \(\langle a, b \rangle_H = \langle \circledast a, \circledast b \rangle_H\).
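To see the preservation property numerically, here’s a small sketch (numpy again), using a 2D rotation as a stand-in example of a unitary operator:

```python
import numpy as np

# Any 2D rotation matrix is unitary, so it should preserve inner products.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

a = np.array([1, 0], dtype=complex)
b = np.array([3, 2], dtype=complex)

print(np.vdot(a, b))          # inner product before applying U
print(np.vdot(U @ a, U @ b))  # the same value after applying U to both vectors
```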

Loosely, this kind of preservation property is what lets us have reversibility. Even more loosely, because the underlying physics is reversible, so are the computations we’re running on it.

What we want to figure out here is the purpose of that box with an H on it, why these gates are useful in circuits, and even a bit of what’s going on inside the box itself.

Linear algebra and change of basis

One important idea, not just in quantum computing but in all of physics, is the ability to change your basis, or roughly to change the coordinate system you’re working under.

We can think of this in terms of standard 2d xy coordinates along with another coordinate system superimposed. Let’s take a sample point \(P\) which has different coordinates in each, but itself hasn’t moved.

In the blue coordinate system, point \(P\) is at something like \((3, 2)\). But when we change to the red coordinate system, it “moves” closer to \((3.6, 0.7)\). This isn’t a problem though, because every other point in the system has had the same transformation applied. We just need to remember whether we’re using red or blue.

The generalized version of changing between these cartesian coordinate systems is performing a change of basis in some vector space. We can get rid of some assumptions (like the coordinate system’s axes being perpendicular) when generalizing. As long as the basis provides us with a way to form any vector in the system through linear combinations, we’re good to go.
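Here’s a small numerical sketch of a change of basis (numpy again; the particular “red” basis, a 45-degree rotation of the “blue” one, is something I made up for illustration):

```python
import numpy as np

# Columns of B are the new ("red") basis vectors written in the old ("blue") basis.
# Here the red basis is just the blue one rotated by 45 degrees.
theta = np.pi / 4
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

P_blue = np.array([3, 2])           # P's coordinates in the blue system
P_red = np.linalg.solve(B, P_blue)  # the same point's coordinates in the red system

print(P_red)      # different numbers for the same point
print(B @ P_red)  # converting back recovers [3, 2]
```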

Simplifying matrix math with ket notation

A typical way of representing qubits is as a vector with two elements. We usually write them vertically, like

\[ \begin{bmatrix} 1 \\ 0 \end{bmatrix} \]

The above vector represents certainty that a qubit has state “0” (as a rough analogy, think of each row as holding an amplitude whose squared magnitude is the chance of measuring the qubit in that state). If we’re certain the qubit has state “1”, we’d write that as:

\[ \begin{bmatrix} 0 \\ 1 \end{bmatrix} \]

Because these kinds of states are so commonly used, there’s a simpler notation for them, called ket notation.

\[ \begin{bmatrix} 1 \\ 0 \end{bmatrix} = |0\rangle \]

\[ \begin{bmatrix} 0 \\ 1 \end{bmatrix} = |1\rangle \]

Ket notation extends fairly naturally if you think of it as a binary index into each entry of the vector. The first entry corresponds with \(|00\rangle\), the second with \(|01\rangle\), then \(|10\rangle\) and \(|11\rangle\)…. For example:

\[ \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} = |10\rangle \]
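One way to see the binary-index idea concretely: the multi-qubit vector is the Kronecker (tensor) product of the single-qubit ones. A quick numpy sketch:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# The two-qubit state |10> is the tensor (Kronecker) product of |1> and |0>.
ket10 = np.kron(ket1, ket0)
print(ket10)  # [0 0 1 0] -- the lone 1 sits at index 2, which is 10 in binary
```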

We can represent more complex superpositions in terms of these basic “certain” qubits, which saves us from writing out so many matrices explicitly:

\[ \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \frac{|0\rangle - |1\rangle}{\sqrt{2}} \]
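Numerically, that superposition is just a linear combination of the two “certain” kets. A short sketch (with the squared magnitudes as the measurement chances):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# (|0> - |1>) / sqrt(2)
psi = (ket0 - ket1) / np.sqrt(2)

print(psi)                       # [ 0.707... -0.707...]
print(np.abs(psi) ** 2)          # [0.5 0.5] -- equal chance of measuring 0 or 1
print(np.sum(np.abs(psi) ** 2))  # 1.0 -- the probabilities sum to one
```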

Trigonometry and the unit circle

Rotation and matrix multiplication are intimately linked. If we have some point \((x, y)\) we can find the point we’ll get after a rotation of \(\theta\) by doing the matrix multiplication

\[ \begin{bmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \]

For example, we can find what the point \((2, 1)\) maps to if we rotate it through \(\frac{\pi}{3}\) with some fairly simple matrix multiplication.

\[ \begin{bmatrix} \cos \frac{\pi}{3} & - \sin \frac{\pi}{3} \\ \sin \frac{\pi}{3} & \cos \frac{\pi}{3} \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & - \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 - \frac{\sqrt{3}}{2} \\ \frac{1}{2} + \sqrt{3} \end{bmatrix} \]
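If you want to double-check that arithmetic, here’s a quick numpy sketch of the same rotation:

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(R @ np.array([2, 1]))                      # [0.1339... 2.2320...]
print([1 - np.sqrt(3) / 2, 1 / 2 + np.sqrt(3)])  # matches the hand-computed answer
```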

These kinds of rotations provide a very useful change-of-basis-style transformation for the states of qubits. To put a qubit with a known state (either \(|0\rangle\) or \(|1\rangle\)) into a superposition with an equal chance of either, we need to go from the blue basis to the red basis in the diagram here.

This has to do with uncertainty. Because the new axes sit exactly halfway between the old ones (a 45-degree rotation), we’re introducing the most uncertainty possible. In one sense, Hadamard gates just map from a known state to a probabilistic state with a 50/50 chance of showing up either way once measured. This is a fundamentally important way to introduce quantum randomness, and it’s also part of what gives quantum computers their theorized speedups: a register of \(n\) qubits in superposition involves \(2^n\) basis states at once, where a classical register holds just one. We’ve introduced an “option” for each bit (in the form of a superposition), where the bits of classical computers have to be either 0 or 1.

If we take each axis direction as a vector, and put those vectors into ket notation, we can come up with this diagram.

This circle is a bit of a simplification of something called a Bloch sphere. Bloch spheres are a way to visualize the state of a qubit as a superposition, with different positions on the surface of the sphere corresponding to different “mixtures” of states, just like different points on our unit circle represent different “mixtures” of \(|0\rangle\) and \(|1\rangle\).
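To make the “different mixtures” idea concrete, here’s a small sketch mapping angles on the unit circle to (real-amplitude) qubit states and their measurement probabilities:

```python
import numpy as np

def state_at_angle(theta):
    """Read a point on the unit circle as (real) amplitudes for |0> and |1>."""
    return np.array([np.cos(theta), np.sin(theta)])

for theta in [0, np.pi / 4, np.pi / 2]:
    psi = state_at_angle(theta)
    print(theta, psi, np.abs(psi) ** 2)

# theta = 0    -> exactly |0>
# theta = pi/4 -> the equal 50/50 superposition
# theta = pi/2 -> exactly |1>
```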

Doing the algebra

Let’s now set out to find the Hadamard matrix \(H\) that has all the properties we want. Namely, \(H |0\rangle\) and \(H |1\rangle\) should have the values we expect them to have (the values that rotate us around our unit circle/Bloch sphere the correct amount).

First of all, we happen to know that \(H\) is \(2 \times 2\) because it’s unitary, all unitary matrices are square, and the non-ket notations for \(|0\rangle\) and \(|1\rangle\) have 2 rows. We can fill in variables for each cell.

\[ H = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \]

We know that we want the following two equations to hold, because \(H\) has to send \(|0\rangle\) to \(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\) and \(|1\rangle\) to \(\frac{|0\rangle - |1\rangle}{\sqrt{2}}\)

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \]

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]

Let’s distribute the \(\frac{1}{\sqrt{2}}\) into the vectors. It’s a bit messier, but gets us closer to solving these equations.

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \]

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} \]

After doing matrix multiplication, we end up with the trivial equations

\(1 \times a + 0 \times b = \frac{1}{\sqrt{2}}\)

\(1 \times c + 0 \times d = \frac{1}{\sqrt{2}}\)

\(0 \times a + 1 \times b = \frac{1}{\sqrt{2}}\)

\(0 \times c + 1 \times d = -\frac{1}{\sqrt{2}}\)

Plug these values back in for \(a\), \(b\), \(c\), and \(d\).

\[ \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix} \]

Finally, we can factor out the \(\frac{1}{\sqrt{2}}\) again to get the usual simplified form you’ll see for a Hadamard matrix

\[ H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \]
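We can sanity-check the matrix we just derived with a short numpy sketch: applying it to \(|0\rangle\) and \(|1\rangle\) should give the two target superpositions, and applying it twice should bring us right back, which is the reversibility from earlier.

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(H @ ket0)  # [0.707...  0.707...] = (|0> + |1>) / sqrt(2)
print(H @ ket1)  # [0.707... -0.707...] = (|0> - |1>) / sqrt(2)
print(H @ H)     # the 2x2 identity (up to floating point): H undoes itself
```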

Putting it together

Hadamard gates are just a kind of quantum logic gate, useful for introducing superposition to our circuits (which in turn lets us do things like use randomness or test many more states than a classical computer). The gate itself takes some qubit and changes its basis. And the math it uses to do all this is multiplication with the matrix

\[ \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \]
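As one last sketch, here’s the “introducing randomness” idea as a toy simulation: apply \(H\) to \(|0\rangle\), then sample measurement outcomes from the resulting probabilities. We should see roughly half 0s and half 1s.

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

psi = H @ np.array([1, 0])  # apply H to |0>
probs = np.abs(psi) ** 2    # [0.5, 0.5]

rng = np.random.default_rng()
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(samples))  # roughly [500 500]
```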

There’s certainly a lot of math involved (trigonometry, linear algebra, probability…). But I hope you’ll agree that none of the pieces is beyond someone with a decent high school math education. I think it’s great that even people without intense math backgrounds can understand some of the fundamentals of quantum computing.