The realization of a computer that exploits quantum, rather than classical, principles represents a formidable scientific and technological challenge. Today, superconducting quantum processors are achieving significant results in simulation and computation. However, the realization of a fault-tolerant quantum device still poses many technical difficulties. First, it requires the ability to generate high-fidelity gates by exploiting both hardware and software improvements. Second, it requires the ability to perform quantum error correction. Finally, a high-fidelity qubit readout is of primary importance to actually extract information from the device. This thesis focuses on the first and last requirements, proposing advances in quantum optimal control protocols for high-fidelity gates and in machine-learning-based qubit readout. The methods used for these improvements exploit general mathematical machinery that can be specialized to the specific quantum device, yielding (reconfigurable) machine-aware protocols; this is achieved simply by accessing the properties and parameters of the machine. In this dissertation, these techniques are tested on superconducting qubits. Optimal control protocols make it possible to tailor control signals (in the form of electromagnetic or optical fields) that implement arbitrary unitary transformations in quantum computers. This helps to reduce the depth, and hence the noise, of quantum circuits. These protocols replace long gate circuits, obtained by decomposing a unitary operator into a sequence of elementary transformations, with a single application of a customized gate implemented by an appropriately optimized microwave pulse. However, optimization algorithms can be computationally demanding, especially in contexts that require many controls corresponding to parametric variations of the unitary. This can negate the benefits of the optimal control approach.
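The pulse-optimization idea above can be illustrated with a minimal sketch: a piecewise-constant drive amplitude is tuned by gradient ascent so that the resulting propagator matches a target single-qubit X gate. The segment count, durations, and the finite-difference optimizer are illustrative assumptions, not the specific protocol developed in the thesis.

```python
# Hedged sketch: gradient-ascent optimization of a piecewise-constant pulse
# implementing a single-qubit X gate. All parameters are illustrative.
import math

N = 8      # number of piecewise-constant pulse segments (assumption)
DT = 0.2   # duration of each segment (assumption)

def evolve(amps):
    """Total propagator for H(t) = a_k * sigma_x on segment k.
    Uses exp(-i*a*dt*sigma_x) = cos(a*dt)*I - i*sin(a*dt)*sigma_x."""
    U = [[1 + 0j, 0j], [0j, 1 + 0j]]  # start from the identity
    for a in amps:
        c, s = math.cos(a * DT), math.sin(a * DT)
        seg = [[c + 0j, -1j * s], [-1j * s, c + 0j]]
        U = [[seg[i][0] * U[0][j] + seg[i][1] * U[1][j]
              for j in range(2)] for i in range(2)]
    return U

def fidelity(amps):
    """Gate fidelity |Tr(X^dagger U)| / 2 against the target X = sigma_x."""
    U = evolve(amps)
    tr = U[1][0] + U[0][1]  # Tr(sigma_x * U)
    return abs(tr) / 2

def optimize(steps=200, lr=0.5, eps=1e-4):
    """Maximize fidelity by finite-difference gradient ascent on the amplitudes."""
    amps = [0.1] * N
    for _ in range(steps):
        for k in range(N):
            bumped = amps[:]
            bumped[k] += eps
            g = (fidelity(bumped) - fidelity(amps)) / eps
            amps[k] += lr * g
    return amps

pulse = optimize()
print(round(fidelity(pulse), 4))  # approaches 1.0 as the pulse converges
```

In a realistic setting the Hamiltonian includes device-specific terms and the optimizer uses analytic gradients (e.g. GRAPE-style updates), but the structure, parametrize the pulse, evaluate a fidelity, ascend its gradient, is the same.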
The most common qubit readout technique today is dispersive readout (in the circuit QED architecture), in which the qubit is coupled to a readout resonator. In this approach, the state of the qubit is determined by measuring the quadrature amplitudes of an electromagnetic field transmitted through the resonator. Random thermal noise in the hardware, gate errors, or qubit decay processes occurring during the measurement can reduce readout fidelity. Machine learning techniques and classification schemes can help restore good fidelity by improving the classification accuracy of the measurement outputs. The Gaussian Mixture Model is the most commonly used classification method due to its ease of use: it models the probability distribution of the averaged readout data as a sum of Gaussians and classifies new measurements accordingly. However, more advanced techniques can be applied. Some authors have proposed and realized classification methods based on neural networks trained on the entire output measurement signals instead of their averages, with good results. Another approach relies on the unsupervised Hidden Markov Model, which allows a detailed classification of the measurement results and the detection of decay processes that the qubit might undergo during the measurement. These schemes help to improve the classification accuracy of qubit readout measurements. The present dissertation follows these two tracks with the common goal of improving the performance of quantum computers. In the case of quantum optimal control, fitting procedures applied to a previously computed set of controls could help reduce computation time: a new control need not be optimized with slow algorithms, but can simply be interpolated from the set of controls already available. In addition, more advanced mathematical techniques can be used. Quantum computers can only perform unitary transformations.
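The Gaussian-model classification described above can be sketched as follows: fit a Gaussian to each state's calibration cloud in the (I, Q) quadrature plane, then assign a new averaged readout point to the state with the higher likelihood. The data is synthetic and the centers, noise level, and shared isotropic variance are assumptions made for brevity.

```python
# Hedged sketch: two-state classification of averaged (I, Q) readout points
# with per-state Gaussian models, in the spirit of the Gaussian Mixture Model
# approach. Synthetic data; all parameters are illustrative assumptions.
import math
import random

random.seed(0)

def shots(center, n=500, sigma=0.4):
    """Synthetic calibration shots: Gaussian cloud around a state's center."""
    return [(random.gauss(center[0], sigma), random.gauss(center[1], sigma))
            for _ in range(n)]

# Ground state centered at (-1, 0), excited state at (+1, 0) (assumptions).
cal0, cal1 = shots((-1.0, 0.0)), shots((1.0, 0.0))

def fit_gaussian(points):
    """Estimate the mean and an isotropic variance from calibration data."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    var = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in points) / (2 * n)
    return (mx, my), var

g0, g1 = fit_gaussian(cal0), fit_gaussian(cal1)

def loglike(p, g):
    """Log-likelihood of point p under an isotropic 2D Gaussian."""
    (mx, my), var = g
    return -((p[0] - mx) ** 2 + (p[1] - my) ** 2) / (2 * var) \
           - math.log(2 * math.pi * var)

def classify(p):
    """Assign the readout point to the state with the higher likelihood."""
    return 0 if loglike(p, g0) > loglike(p, g1) else 1

# Fraction of fresh ground-state shots classified correctly.
acc = sum(classify(p) == 0 for p in shots((-1.0, 0.0))) / 500
print(acc)
```

The full Gaussian Mixture Model additionally fits component weights and covariances via expectation-maximization; the neural-network and Hidden Markov Model approaches mentioned above instead consume the entire time-resolved signal rather than its average.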
Therefore, by precomputing the controls corresponding to a set of unitary matrices (belonging to $SU(2)$ and $SU(4)$) constructed by sampling their generators, machine learning techniques can be used to interpolate between them and reconstruct the control for any unitary matrix of these dimensions. These approaches are tested on the simulation of quantum (nuclear) systems, one of the most interesting and promising applications of quantum computers. Regarding the qubit readout part, we expect that the measurement output signal of the qubit system can be exploited to improve the readout procedure. In particular, by exploiting the information contained therein, one can improve the classification of states, making the procedure more parameter-independent and noise-resistant. Depending on the number of qubits or qubit levels, the readout signals are divided into different classes. These signals are noisy, and the classes often overlap due to thermal fluctuations, instrument noise, and quantum state decay processes. In this dissertation, we investigate how these data can be used to infer the state of the qubit more precisely and to improve the measurements. This can be achieved by applying advanced machine learning protocols, both with a supervised approach, using different realizations of neural networks, and with an unsupervised approach (e.g. autoencoders). Machine learning algorithms, taking advantage of their generalization and universal fitting capabilities, should allow better handling of hidden correlations in the data and provide better classification of the measurements, as already demonstrated by some preliminary work. Furthermore, the use of unsupervised models paves the way for further, more speculative studies of the behavior of the qubit.
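The control-interpolation idea can be illustrated in its simplest form: precompute pulse amplitudes for a coarse grid of single-qubit rotations $R_x(\theta)$ (sampling the generator angle), then obtain the control for a new angle by interpolating between the nearest precomputed entries rather than re-running a slow optimizer. The single-amplitude drive, the stand-in "optimizer", and the grid spacing are all assumptions made for brevity; the thesis applies the idea with machine-learned interpolators over $SU(2)$ and $SU(4)$.

```python
# Hedged sketch of control interpolation: a library of precomputed controls
# on a grid of rotation angles, queried by linear interpolation. Illustrative
# single-parameter drive; in practice the library entries come from a slow
# optimal-control run and the interpolator is a learned model.
import math

T = 1.0  # fixed pulse duration (assumption)

def control_for(theta):
    """Stand-in for an expensive optimizer: for H = (a/2)*sigma_x over
    time T the rotation angle is a*T, so the amplitude is theta/T."""
    return theta / T

# Precomputed library of controls on a coarse grid of target angles 0..pi.
grid = [i * math.pi / 8 for i in range(9)]
library = [(th, control_for(th)) for th in grid]

def interpolate(theta):
    """Linear interpolation between the two nearest precomputed controls."""
    for (t0, a0), (t1, a1) in zip(library, library[1:]):
        if t0 <= theta <= t1:
            w = (theta - t0) / (t1 - t0)
            return (1 - w) * a0 + w * a1
    raise ValueError("angle outside the precomputed range")

def rotation_angle(a):
    """Rotation angle actually implemented by amplitude a (a*T here)."""
    return a * T

theta = 0.7
err = abs(rotation_angle(interpolate(theta)) - theta)
print(err)
```

For this linear toy drive the interpolation is exact; for realistic, nonlinearly parametrized controls the interpolation error depends on the grid density and on the capacity of the fitting model, which is precisely where the machine-learning machinery enters.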
Machine-Aware Enhancing of Quantum Computers / Luchi, Piero. (2023 Jul 27), pp. 1. [10.15168/11572_384589]

File: PhD_Tesi_Piero_Luchi_final.pdf
Description: PhD Thesis
Type: Doctoral Thesis (Tesi di dottorato)
Access: open access (accesso aperto)
License: All rights reserved (Tutti i diritti riservati)
Size: 13.51 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.