Quantum Computing Overview

Quantum physics has caught the attention of many in recent years. It has practical applications, notably in computing, so it is no wonder that it has found its way into popular science coverage across various media.

The possibility of leveraging the principles of quantum theory in computers is usually credited to Richard Feynman, who proposed it in the early 1980s. Since then, a substantial body of publications has furthered the subject. Today, independent researchers and large corporations such as IBM alike are investing resources in this field.

Before talking further about quantum computing, let us consider its classical counterpart.

In classical computing, we have individual bits that can be in one of two states, 0 or 1 (off or on). This state corresponds to the absence or presence of an electrical current. The current flows through transistors, and its path is directed by physical logic gates such as AND, OR and NOT. We can model these operations logically with Boolean algebra.

Each particular bit can only be either 1 or 0. These are discrete states.
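
As a small illustration (a sketch in plain Python; the function names are mine, not from any library), the gates above can be modelled directly and combined into a half-adder that adds two bits:

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # XOR built from the basic gates: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Adds two bits, returning (sum, carry).
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))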

In the quantum computational model, instead of ordinary bits there are quantum bits (qubits), and each individual qubit can be in the state 0, the state 1, or a combination of both (a superposition). This is because at the quantum level, where everything is minuscule, particles can behave as waves. Consider, for example, the wave-particle duality of photons, or an electron bound to an atomic nucleus: the electron does not sit at a single definite point, and we can calculate the probability of finding it in a region of space but not its exact location. To obtain a definite state from a quantum particle, we have to measure it. The measurement "breaks", or collapses, the superposition and gives us a definite answer.
Indeed, quantum mechanics is rooted in this wave-like behaviour of particles and in reasoning about the probabilities of their states.
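
Written out, and using the standard Dirac (bra-ket) notation that this post does not otherwise introduce, a qubit's state is a weighted combination of the two basis states:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.
\]

Measuring the qubit gives 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\), and the state then collapses to whichever basis state was observed.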

A quantum computer has its own set of logic gates, such as the Hadamard gate and the Pauli-X, Y and Z gates. Let us look at the Hadamard gate.

The Hadamard gate acts on a single qubit and places it in an equal superposition of 0 and 1: we will not know which outcome we will get until we measure it.
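
Concretely, the Hadamard gate corresponds to the matrix

\[
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H\,|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}},
\]

so a qubit prepared as 0 and passed through the gate is measured as 0 or 1 with probability \(\frac{1}{2}\) each.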

Imagine that you have a coin representing your qubit. There are two states, Heads and Tails.
You flip the coin. At this point, ignoring other external factors, there is an equal chance of the coin landing on Heads or Tails; this is the superposition you would establish by applying the Hadamard gate. When the coin lands on your palm and you look at it, you, as the observer, have made a measurement and established the state of the coin, collapsing the superposition.

As one may guess, the Hadamard gate can be used for random number generation.
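
As a minimal sketch of this idea (a classical simulation in Python with NumPy rather than code for real quantum hardware, with names of my own choosing), we can apply the Hadamard matrix to the state \(|0\rangle\) and sample the measurement outcome from the resulting amplitudes:

import numpy as np

# Basis state |0> as a vector of amplitudes for |0> and |1>.
ZERO = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate as a 2x2 matrix.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng()

def random_bit():
    # Apply H to |0>, giving an equal superposition, then "measure":
    # the Born rule says each outcome's probability is |amplitude|^2.
    state = H @ ZERO
    probabilities = np.abs(state) ** 2
    return int(rng.choice([0, 1], p=probabilities))

print([random_bit() for _ in range(10)])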

It is worth noting that a quantum system can be simulated on a classical one, but this software approach is, as is often the case, less efficient than the equivalent hardware solution. To represent \(n\) qubits, a classical computer must track \(2^n\) amplitudes, so the resources required grow exponentially with the number of qubits.
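
As a rough back-of-the-envelope illustration (assuming 16 bytes per amplitude, as for a double-precision complex number), the memory needed for a full state vector grows as follows:

# Assumption: each amplitude is stored as a 16-byte complex number.
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits: {amplitudes:,} amplitudes, about {gib:,.3f} GiB")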

Still, both approaches have their own advantages and disadvantages. The software approach makes quantum computation more accessible and allows for more flexible and portable experimentation, while the hardware approach carries a great deal of engineering complexity and is not economically feasible for most at this time.

For an example of a physical quantum computing system, have a look at D-Wave Systems.

Even in its early stages, this field of research is showing great potential in augmenting the way we reason about and approach problems, as well as the way we apply algorithms to solve them.
