Quantum computing could cease being a dream and become a reality within the next ten years. Quantum computers promise to drastically reduce processing time for certain computational tasks because quantum algorithms can solve them more efficiently than any known classical algorithm.
Scientists have long believed that the correlations between two or more systems are the key to quantum computing. An example is quantum entanglement, which occurs when pairs or groups of particles are created or interact in such a way that each particle’s quantum state cannot be described independently, because it depends on the state of the system as a whole (more at http://agencia.fapesp.br/20649).
However, a recent study shows that even an isolated quantum system (i.e., a system with no correlations with other systems) is sufficient to run a quantum algorithm that outperforms its classical counterpart. A paper describing the study, “Computational speed-up with a single qudit,” was published in early October by Scientific Reports, which belongs to Nature Publishing Group.
The study, which was both theoretical and experimental, started with an idea put forward by physicist Mehmet Zafer Gedik from Sabanci University in Istanbul, Turkey, and was performed collaboratively by Turkish and Brazilian researchers. Felipe Fernandes Fanchini, a professor at São Paulo State University’s Bauru School of Sciences (FCB-UNESP), is one of the authors. His participation in the study was linked to the project “Quantum control in dissipative systems”, supported by FAPESP.
“The study makes an important contribution to the search for answers to the question ‘what is the resource responsible for the quantum computer’s superior processing power?’,” Fanchini told Agência FAPESP.
“Starting with Gedik’s idea, here in Brazil, we performed an experiment using the University of São Paulo’s nuclear magnetic resonance system in São Carlos. It was a collaboration among researchers at Sabanci University, the University of São Paulo and UNESP. We showed that a quantum circuit with a single physical system and three or more energy levels can determine the parity of a numerical permutation by evaluating the function only once. This would be inconceivable with a classical protocol.”
According to Fanchini, what Gedik proposed was a very simple quantum algorithm that basically determines a permutation’s parity, i.e., whether the sequence is odd or even.
For example, if we say that for the numbers 1, 2 and 3, the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, obtained by cyclic permutations of the same numbers, are also considered in order.
This is easy to picture by imagining the numbers arranged on a sphere. Given the first sequence, rotating the sphere by one position produces the second sequence, and rotating it once more produces the third.
However, the sequences 1-3-2, 3-2-1 and 2-1-3 cannot be obtained by cyclic permutations; producing them requires transpositions, i.e., swaps of two numbers. Based on the parity of the transpositions involved, the first three permutations are termed “even,” and the other three are termed “odd.”
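The classical notion of parity described above can be checked with a few lines of code. The sketch below (an illustration for this article, not code from the study) counts inversions, which has the same parity as the number of transpositions needed to restore the reference order:

```python
from itertools import permutations

def parity(seq, reference=(1, 2, 3)):
    """Return 'even' or 'odd' by counting inversions relative to the reference order."""
    # Map each value to its position in the reference sequence.
    pos = {v: i for i, v in enumerate(reference)}
    perm = [pos[v] for v in seq]
    # Count inversions: pairs of entries that appear out of order.
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return "even" if inversions % 2 == 0 else "odd"

for p in permutations((1, 2, 3)):
    print(p, parity(p))
```

Running it confirms the grouping in the text: 1-2-3, 2-3-1 and 3-1-2 come out even, while 1-3-2, 3-2-1 and 2-1-3 come out odd. Note that the classical procedure must inspect more than one position of the sequence to decide.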
“In classical terms, observing a single number or measurement is not sufficient to say whether a sequence is even or odd. At least two observations are required. What Gedik demonstrated was that in quantum terms, a single measurement is sufficient to determine parity. So, the quantum algorithm is faster than any classical equivalent and can be run experimentally using a single particle, which means that its efficiency doesn’t depend on any type of quantum correlation,” Fanchini said.
The algorithm in question does not specify a sequence but merely whether it is odd or even. This is possible only when there are three or more levels because with only two levels (e.g., 1-2 or 2-1), it is not possible to define a sequence as odd or even. “In recent times, the quantum computing community has explored contextuality, a key concept in quantum theory. Because contextuality also operates only when there are three or more levels, we suspect that it may explain the effectiveness of our algorithm,” Fanchini said.
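The single-query idea can be simulated numerically. The sketch below is a simplified illustration of the qudit scheme, not the paper’s NMR implementation: it labels the three levels 0–2, prepares the qutrit in a Fourier-transformed basis state, applies the permutation operator exactly once, undoes the transform, and reads the parity from a single measurement outcome:

```python
import cmath

W = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

def qft(state):
    """Three-level discrete Fourier transform of a qutrit state vector."""
    return [sum(W ** (j * k) * state[k] for k in range(3)) / 3 ** 0.5
            for j in range(3)]

def inverse_qft(state):
    return [sum(W ** (-j * k) * state[k] for k in range(3)) / 3 ** 0.5
            for j in range(3)]

def parity_single_query(perm):
    """perm[k] is the level that level k is sent to; one application decides parity."""
    # Prepare the qutrit in basis state |1> and Fourier-transform it.
    state = qft([0, 1, 0])
    # Apply the permutation operator U|k> = |perm[k]> -- the single "query".
    permuted = [0j, 0j, 0j]
    for k in range(3):
        permuted[perm[k]] = state[k]
    # Undo the Fourier transform; all probability lands on level 1 or level 2.
    out = inverse_qft(permuted)
    probs = [abs(a) ** 2 for a in out]
    return "even" if probs[1] > 0.99 else "odd"
```

In this simulation, every cyclic permutation (e.g., `(1, 2, 0)`, meaning each level advances by one) leaves the measurement certain to find level 1, while the three non-cyclic permutations send it to level 2, so a single measurement distinguishes even from odd. With only two levels there is no analogous rotation to exploit, which is why three or more levels are needed.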
“The concept of quantum contextuality can best be understood by comparing the ideas of measurement in classical physics and quantum physics,” Fanchini explained. “In classical physics, the assumption is that measurement merely reveals characteristics already intrinsic to the system being measured, such as a given length or mass. In quantum physics, the result of any measurement depends not only on the characteristics measured but also on how the measurement is organized and on all previous measurements. In other words, the results depend on the context of the experiment, and contextuality is the magnitude that describes this context.”
The history of physics recounts that contextuality was first recognized as a necessary part of quantum theory by Northern Irish physicist John Stewart Bell (1928-90). According to Bell’s Theorem (1964), no physical theory based on local hidden variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms because they express a totality.
“It’s important to note that another article [“Contextuality supplies the ‘magic’ for quantum computation”], published in Nature in June 2014, points to contextuality as the possible source of quantum computing’s power. Our study does something similar by presenting a concrete algorithm that’s more efficient than any algorithm conceivable in the classical framework,” Fanchini said.