While more and more quantum computers are emerging onto the computing scene, scientists have had limited options for actually evaluating their capabilities. To address this gap, scientists have now designed a quantum computing benchmark test to see whether these new machines really pass muster.
For a refresher, quantum computers differ from so-called classical computers in the pieces of information they use. A classical bit can only take on one value at a time, either a 0 or a 1 but never both; a quantum computer instead uses pieces of information called qubits, which, because of their quantum mechanical properties, can exist in a superposition of both 0 and 1. This flexibility is what gives quantum computers so much more computational potential than our classical machines.
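As a rough illustration (not part of the study itself), a qubit's state can be written as a normalized pair of complex amplitudes, one for each classical value. An equal superposition puts the same weight on 0 and 1, so a measurement reads out either value with 50% probability:

```python
import numpy as np

# A qubit state is a normalized complex 2-vector: amplitudes for |0> and |1>.
# An equal superposition has amplitude 1/sqrt(2) on each basis state.
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes,
# and they always sum to 1.
probs = np.abs(state) ** 2   # 50% chance of reading 0, 50% of reading 1
```

Until it is measured, the qubit genuinely carries both amplitudes at once, which is what a classical bit cannot do.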
While still far from mainstream, a growing number of quantum computers have made their debut in recent years, including the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen, the two machines Oak Ridge National Laboratory (ORNL) researchers focused on when designing this new benchmark.
The study, published this November in the journal npj Quantum Information, made use of relatively simple quantum chemistry principles (which sounds like an oxymoron) to test how well these two quantum machines would fare in scientific scenarios. The quantum chemistry problem in question was to calculate the well-understood bound energy state (i.e. when an atom's electrons are held steady within an orbital instead of roaming as "free" electrons) of alkali metal hydride molecules.
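According to the paper's abstract, the method used for this calculation is the variational quantum eigensolver: prepare a parameterized trial state on the quantum hardware, measure its energy, and let a classical optimizer push the parameters toward the minimum, which approximates the ground-state energy. A purely classical toy of that variational loop, with a made-up 2x2 Hamiltonian standing in for the molecular one and a grid scan standing in for the optimizer, might look like:

```python
import numpy as np

# Illustrative 2x2 Hamiltonian (a stand-in, not the paper's molecular one).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> for the one-parameter
    trial state |psi(theta)> = (cos theta, sin theta)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical "optimizer": a coarse grid scan over the single parameter.
thetas = np.linspace(0.0, np.pi, 2001)
best = min(energy(t) for t in thetas)
# best approaches the exact ground-state energy (the lowest eigenvalue of H)
```

On real hardware the energy is estimated from repeated noisy measurements rather than computed exactly, which is where the noise issues discussed below come in.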
One of the study’s co-authors and the principal investigator of the ORNL Quantum Testbed Pathfinder project, Raphael Pooser, said in a statement that testing the capabilities of these machines on “fairly simple” problems like these will be a stepping stone to testing more complex problems down the road.
“We are currently running fairly simple scientific problems that represent the sort of problems we believe these systems will help us to solve in the future,” said Pooser. “These benchmarks give us an idea of how future quantum systems will perform when tackling similar, though exponentially more complex, simulations.”
In addition to simply calculating the energy states of these alkali hydride molecules, the benchmark was also designed to look for opportunities to mitigate errors caused by "noise" in the quantum machines. Rather than something auditory like TV static, quantum noise develops when qubits are at odds with their environment, such as variations in temperature or vibration. Such environmental changes can disrupt the state of a qubit and in turn throw off its accuracy.
You could call this need for delicate balance the qubit’s Achilles heel.
When quantum systems experience too much noise, classical computers are still required to clean up the data and make it usable. The authors write that this will become a problem for quantum computers in the future when it comes to scaling algorithms and achieving an advantage over classical computers.
“If a quantum computer performs well at this task as the size of the algorithm scales up, then it will be able to achieve a quantum advantage. However, error mitigation as it exists today is not yet scalable. This means that the more error mitigation required to reach a good accuracy, the less scalable the algorithm is on that machine. Not surprisingly, none of the publicly available hardware is capable of a quantum advantage….”
Going forward, having dynamic benchmarks like this will be crucial to determining how well these computers perform in different experimental scenarios, ORNL quantum chemist and study co-author Jacek Jakowski said in a statement.
“The current benchmark is a first step towards a comprehensive suite of benchmarks and metrics that govern the performance of quantum processors for different science domains,” said Jakowski. “We expect it to evolve with time as the quantum computing hardware improves. ORNL’s vast expertise in domain sciences, computer science and high-performance computing make it the perfect venue for the creation of this benchmark suite.”
Such scientific domains would include everything from traditional computer science to computational and experimental physics and chemistry used to design new technology and medicine.
We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active-space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark using 4 of the available qubits on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with accuracy of the computed ground state energy serving as the primary benchmark metric. We further parameterize this benchmark suite on the trial circuit type, the level of symmetry reduction, and error mitigation strategies. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post processing with specific error mitigation techniques. Particularly, the adaptation of McWeeny purification of noisy density matrices dramatically improves accuracy of quantum computations, which, along with adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings and a selected range of problems, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
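The McWeeny purification named in the abstract has a compact form: repeatedly replace a noisy density matrix P with 3P² − 2P³. This iteration pushes the matrix's eigenvalues toward 0 or 1, driving a nearly pure (nearly idempotent) state back to an exactly pure one, which is why it serves as an error-mitigation step. A minimal NumPy sketch (variable names are illustrative, not from the paper's code):

```python
import numpy as np

def mcweeny_purify(rho, iterations=20):
    # Iteratively apply rho -> 3 rho^2 - 2 rho^3. Eigenvalues above 1/2
    # flow toward 1 and eigenvalues below 1/2 flow toward 0, so an
    # approximately idempotent matrix converges to an idempotent one.
    for _ in range(iterations):
        rho = 3 * rho @ rho - 2 * rho @ rho @ rho
    return rho

# A pure |0> state contaminated with a little depolarizing noise:
pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
noisy = 0.9 * pure + 0.1 * np.eye(2) / 2   # eigenvalues 0.95 and 0.05

clean = mcweeny_purify(noisy)               # converges back to the pure state
```

The purification assumes the true state is pure, so it works as mitigation only when noise is a modest perturbation; heavily mixed states would be snapped to the wrong fixed point.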