qiskit-community/qiskit-hackathon-singapore-19

Direct Randomized Benchmarking


Abstract

Implement the characterization protocol presented in [1] and use it to characterise an IBMQ device.
[1] T. J. Proctor, A. Carignan-Dugas, K. Rudinger, E. Nielsen, R. Blume-Kohout, and K. Young, "Direct randomized benchmarking for multi-qubit devices," Phys. Rev. Lett. 123, 030503 (2019), arXiv:1807.07975.

Original Description

Paper abstract: Benchmarking methods that can be adapted to multi-qubit systems are essential for assessing the overall or "holistic" performance of nascent quantum processors. The current industry standard is Clifford randomized benchmarking (RB), which measures a single error rate that quantifies overall performance. But scaling Clifford RB to many qubits is surprisingly hard. It has only been performed on 1, 2, and 3 qubits as of this writing. This reflects a fundamental inefficiency in Clifford RB: the n-qubit Clifford gates at its core have to be compiled into large circuits over the 1- and 2-qubit gates native to a device. As n grows, the quality of these Clifford gates quickly degrades, making Clifford RB impractical at relatively low n. In this Letter, we propose a direct RB protocol that mostly avoids compiling. Instead, it uses random circuits over the native gates in a device, seeded by an initial layer of Clifford-like randomization. We demonstrate this protocol experimentally on 2 -- 5 qubits, using the publicly available IBMQX5. We believe this to be the greatest number of qubits holistically benchmarked, and this was achieved on a freely available device without any special tuning up. Our protocol retains the simplicity and convenient properties of Clifford RB: it estimates an error rate from an exponential decay. But it can be extended to processors with more qubits -- we present simulations on 10+ qubits -- and it reports a more directly informative and flexible error rate than the one reported by Clifford RB. We show how to use this flexibility to measure separate error rates for distinct sets of gates, which includes tasks such as measuring an average CNOT error rate.
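
As a quick illustration of the fitting step mentioned at the end of the abstract, here is a minimal sketch (our own illustrative code, not part of the paper or the project repository) of estimating the decay constant p by fitting P(m) = A + B·p^m to success probabilities; the data values below are made up, and the DRB error rate is then obtained from p via the dimension-dependent rescaling defined in [1].

```python
# Hypothetical sketch: fit the DRB decay P(m) = A + B * p**m to estimated
# success probabilities. The data points are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    return A + B * p ** m

# benchmark depths and the fraction of "successful" outcomes at each depth
depths = np.array([0, 2, 4, 8, 16, 32])
success_probs = np.array([0.98, 0.93, 0.88, 0.79, 0.64, 0.45])

(A, B, p), _ = curve_fit(decay, depths, success_probs, p0=(0.25, 0.75, 0.95))
print(f"fitted decay constant p = {p:.4f}")
# The DRB error rate follows from p via the rescaling defined in [1];
# p itself already quantifies the per-layer decay.
```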

Final Description

Hey guys, this is the link to our project: https://github.com/Supanut-Thanasilp/Direct-Randomzied-Benchmarking-using-Qiskit

Project summary:
Given a quantum device, how do we know whether it behaves quantum mechanically the way we would like it to? Randomized benchmarking is a procedure for verifying the performance of quantum hardware and estimating the error rates of quantum gates. The traditional approach (also known as Clifford randomized benchmarking, CRB) is inefficient for evaluating the performance of gates on more than a couple of qubits, because the compiled Clifford gates become large as the number of qubits grows. The error rates of individual one- and two-qubit gates alone also cannot properly characterize multi-qubit performance, because of correlated errors that arise when all qubits are operated at the same time. Near-term quantum hardware will contain tens to a few hundred qubits, so proper multi-qubit benchmarking is crucial for checking the performance of these quantum computers.

In this project, we follow the new procedure proposed in the paper above, which can benchmark multi-qubit gate sets, and try to reproduce its results using Qiskit. The new procedure, called Direct Randomized Benchmarking (DRB), consists of three parts (a minimal Qiskit sketch follows the list):

1. Create a random n-qubit stabilizer state. (This part is done using pyGSTi to produce OpenQASM for a random circuit, which is then converted into Qiskit.)
2. Randomly pick native gates for m layers. (This is done entirely in Qiskit.)
3. Map the stabilizer state back onto a computational basis state, the so-called stabilizer measurement. (Done partially in pyGSTi.)
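
Below is a minimal end-to-end sketch of these three steps written purely in Qiskit, i.e. without the pyGSTi compilation used in steps 1 and 3. It is our own illustrative code, not the project's implementation: it assumes a native gate set restricted to Clifford gates (h, s, x, cx) so that the final stabilizer-measurement layer can simply be taken as the inverse of the net Clifford, returning the ideal outcome to |0...0>, and the function name drb_circuit, the layer-sampling rule, and the coupling argument are all our own choices.

```python
# Minimal DRB-style circuit sketch (assumption: Clifford-only native gates).
import random
from qiskit import QuantumCircuit
from qiskit.quantum_info import Clifford, random_clifford

def drb_circuit(n_qubits, m_layers, coupling, seed=None):
    rng = random.Random(seed)

    # 1) Random stabilizer-state preparation (here: a uniformly random Clifford).
    prep = random_clifford(n_qubits, seed=seed).to_circuit()

    # 2) m layers of randomly chosen native gates.
    core = QuantumCircuit(n_qubits)
    one_qubit_gates = ["h", "s", "x"]
    for _ in range(m_layers):
        for q in range(n_qubits):
            getattr(core, rng.choice(one_qubit_gates))(q)
        # one random two-qubit gate per layer on an allowed pair
        ctrl, tgt = rng.choice(coupling)
        core.cx(ctrl, tgt)

    # 3) "Stabilizer measurement": undo the net Clifford so the ideal outcome
    #    is a single computational basis state (here |0...0>).
    body = prep.compose(core)
    inv = Clifford(body).adjoint().to_circuit()

    circ = body.compose(inv)
    circ.measure_all()
    return circ

# Example: a 3-qubit DRB-style circuit with 5 core layers on a linear coupling map.
qc = drb_circuit(3, 5, coupling=[(0, 1), (1, 2)], seed=42)
print(qc)
```

In the actual DRB protocol the final layer is compiled to map the state onto a (generally random) computational basis state rather than literally inverting the whole circuit, which is why the project relies on pyGSTi for that step.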

What's next?
We will try to fix the bugs, reproduce the complete results, and port the code entirely to Qiskit.

Members

Deliverable

A qiskit-ignis module

GitHub repo

https://github.com/Supanut-Thanasilp/Direct-Randomzied-Benchmarking-using-Qiskit
