Frontiers in Physics (ISSN 2296-424X), Frontiers Media S.A. DOI: 10.3389/fphy.2022.1069985. Original Research.

Multiclass classification using quantum convolutional neural networks with hybrid quantum-classical learning

Denis Bokhan^{1,2,*}, Alena S. Mastiukova^{2,3}, Aleksey S. Boev^{2}, Dmitrii N. Trubnikov^{1}, Aleksey K. Fedorov^{2,3,*}

^{1}Laboratory of Molecular Beams, Physical Chemistry Division, Department of Chemistry, Lomonosov Moscow State University, Moscow, Russia; ^{2}Russian Quantum Center, Moscow, Russia; ^{3}National University of Science and Technology “MISIS”, Moscow, Russia
Edited by:Xiao Yuan, Peking University, China
Reviewed by:Yukun Zhang, Peking University, China
Yiming Huang, Peking University, China
*Correspondence: Denis Bokhan, denisbokhan@mail.ru; Aleksey K. Fedorov, akf@rqc.ru
This article was submitted to Quantum Engineering and Technology, a section of the journal Frontiers in Physics
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Multiclass classification is of great interest for various applications; for example, it is a common task in computer vision, where one needs to categorize an image into three or more classes. Here we propose a quantum machine learning approach based on quantum convolutional neural networks for solving the multiclass classification problem. The corresponding learning procedure is implemented via TensorFlow Quantum as a hybrid quantum-classical (variational) model, where the quantum output results are fed to the softmax activation function, and the cross-entropy loss is subsequently minimized by optimizing the parameters of the quantum circuit. Our conceptual improvements include a new model of a quantum perceptron and an optimized structure of the quantum circuit. We use the proposed approach to solve a 4-class classification problem on the MNIST dataset using eight qubits for data encoding and four ancilla qubits; previous results had been obtained only for 3-class classification problems. Our results show that the accuracies of our solution are similar to those of classical convolutional neural networks with comparable numbers of trainable parameters. We expect that our findings provide a new step towards the use of quantum neural networks for solving relevant problems in the NISQ era and beyond.
Quantum computing is now widely considered a new paradigm for solving computational problems that are believed to be intractable for classical computing devices [1–5]. The idea behind quantum computing is to use quantum physical phenomena [2], such as superposition and entanglement. Specifically, in the quantum gate-based model, quantum algorithms are implemented as sequences of logical operations on qubits (quantum analogs of classical bits), which compose the corresponding quantum circuits, terminated by qubit-selective measurements [3]. Examples of problems for which quantum speedups are expected to be exponential are prime factorization [4] and simulating quantum systems [5], for example, modelling complex molecules and chemical reactions [6]. The amount of computing power required for such applications, however, greatly exceeds the resources of currently available quantum computing devices. For example, factoring a 2048-bit RSA key requires about 20 million noisy qubits [7], whereas currently available noisy intermediate-scale quantum (NISQ) devices have about 50–100 qubits [8]. Quantum computing can also be considered in the context of data processing [9] and machine learning applications [10], where the resources required for solving practical problems are expected to be more modest. Still, a caveat of quantum machine learning is the input/output problem [11]: although quantum algorithms can provide sizable speedups for processing data, they do not provide advantages in reading classical input data. The cost of reading the input may then, in some cases, dominate over the advantage of the quantum algorithm. One may note that various approaches have been suggested, notably amplitude encoding [12], but the problem of converting classical data into quantum data in the general case remains open [11].
The quantum-classical (variational) model has emerged as a leading strategy for the use of NISQ devices [13, 14]. In this framework, a classical optimizer is used to train a parameterized quantum circuit [13]. This helps to address the constraints of current NISQ devices, specifically the limited numbers of qubits and the noise processes limiting circuit depths. An interesting link between the quantum-classical (variational) model and the architectures of artificial neural networks opens up prospects for applying this approach to machine learning problems [15–22]. The workflow of variational quantum algorithms, where circuit parameters are iteratively updated (optimized), resembles classical learning procedures [19].
A cornerstone problem of various machine-learning-based approaches is classification, which is why it has been widely considered from the viewpoint of potential speedups using quantum computing. As demonstrated in Refs. [9, 23], kernel-based quantum algorithms may provide efficient solutions to the classification problem. Specifically, the quantum version of the support vector machine [9] can be used as an optimized binary classifier with complexity logarithmic in the size of the vectors and the number of training examples. A distance-based quantum binary classifier has been proposed in Ref. [24]. Alternative versions of binary quantum classifiers have been considered in Refs. [25–29] (for a review, see also Ref. [30]). A natural next step is multiclass classification, which has been addressed recently in Ref. [31] with a demonstration of performance on the IBMQX quantum computing platform. This method uses single-qubit encoding and amplitude encoding with embedding of data, and the obtained results are of quite high accuracy for the 3-class classification task. Very recently, an approach based on quantum convolutional neural networks (QCNNs) [32] has been used for binary classification, although a way to extend it to the multiclass case was also discussed. We also note that some of the proposed quantum machine learning algorithms have been tested in practically relevant settings, for example, analyzing NMR readings [33, 34] with a trapped-ion quantum computer, learning for the classification of lung cancer patients [35] and classifying and ranking DNA to RNA transcription factors [36] using a quantum annealer, weather forecasting [37] on a superconducting quantum computer, and many others [38].
In this work, we present a quantum multiclass classifier based on the QCNN architecture. The developed approach follows the traditional structure of convolutional neural networks, in which a few fully connected layers are placed after several convolutional layers. The corresponding learning procedure is implemented via TensorFlow Quantum [39] as a hybrid quantum-classical (variational) model, where the quantum output results are fed to the softmax activation function, and the cross-entropy loss is minimized by optimizing the parameters of the quantum circuit. We then discuss a modification of the quantum perceptron, which enables us to obtain highly accurate results using quantum circuits with a relatively small number of parameters. The obtained results demonstrate the successful solution of the classification problem for four classes of MNIST images.
Our paper is organized as follows. In Section 2, we present a general description of the proposed quantum algorithm for multiclass classification. In Section 3, we provide a detailed discussion of the layers of the proposed quantum machine learning algorithm. In Section 4, we present the results of applying the proposed algorithm to multiclass image classification for hand-written digits from the MNIST dataset and clothing images from the fashion MNIST dataset. We conclude in Section 5.
2 General scheme
The core concept that we use here is the hybrid (variational, or quantum-classical) approach (for a review, see Refs. [13, 14]). This approach uses parameterized (variational) quantum circuits, where the parameters of the quantum gates within the circuit can be tuned. The general structure of our variational circuit is presented in Figure 1. Below we describe the proposed approach to multiclass classification based on the classical-quantum scheme.
General structure of the proposed quantum neural network, consisting of several steps: preliminary scanning using n-qubit filters, pooling, and regular layers.
In the first step, we realize amplitude encoding of the input data, in our case MNIST images. In fact, due to the high cost of this step [11], we generate a set of encoding circuits and store their parameters and structure in memory, thereby creating a quantum version of the dataset. We consider MNIST images rescaled from 28 × 28 to 16 × 16 pixels, so that 8 qubits are needed. In terms of the corresponding qubit states, encoded images can be expressed as follows: |Ψ_k⟩ = ∑_{m=0}^{N} C_m^k |m⟩, where k is the index of the image, |m⟩ is a state of the 8-qubit register that encodes the index m, and N = 255. The coefficients C_m^k are equal to the elements of the normalized flattened image vectors. In general, this approach enables us to pack a vector of N double-precision numbers into log_{2}(N) qubits and thus significantly reduce the size of the processed data. It should be noted, however, that existing algorithms for amplitude encoding scale exponentially with N; further study is needed to overcome this problem.
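The normalization behind this encoding step can be sketched in a few lines of numpy (a minimal illustrative sketch of the amplitude construction only, not the actual circuit-generation code used with TensorFlow Quantum):

```python
import numpy as np

def amplitude_encode(image):
    """Return the amplitude vector C_m of the encoded 8-qubit state.

    A 16x16 grayscale image (256 pixels) is flattened and normalized;
    its entries become the amplitudes of |Psi> = sum_m C_m |m>.
    """
    flat = np.asarray(image, dtype=float).reshape(-1)  # 256 pixel values
    norm = np.linalg.norm(flat)
    if norm == 0:
        raise ValueError("cannot encode an all-zero image")
    return flat / norm  # amplitudes over the 2^8 = 256 basis states |m>

rng = np.random.default_rng(0)
image = rng.random((16, 16))      # stand-in for a rescaled MNIST image
state = amplitude_encode(image)

assert state.shape == (256,)                 # 2^8 amplitudes, 8 qubits
assert np.isclose(np.sum(state**2), 1.0)     # a valid quantum state
```

This illustrates why each image vector must be normalized to one before encoding, as discussed in Section 4.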
We first employ the amplitude encoding procedure [12], where ancilla qubits are used for one-hot encoding of the class of target images. Preliminary analysis of the encoded images is performed with 3 convolutional layers with filter sizes equal to 4, 3, and 2, respectively. Each such layer consists of 2 sublayers, which are needed to maintain translational invariance (at least partially), and all filters of the same size contain identical trainable parameters, as is the case for classical convolutional neural networks (CCNNs). We note that for filters of size 3 we need a virtual qubit, which is always set to zero; this trick is needed to fit the filter into 8 qubits in a translationally invariant manner. A convolutional layer with pooling is then placed after the preliminary layers; at this step, the first reduction of the number of qubits is realized.
As in the classical setup, several fully connected layers are added after the convolutional layers (9 layers in our case). A further reduction of the number of qubits is realized after the regular layers and the subsequent pooling (in the same way as after the convolutional filters).
The final filter is needed to mix the information from the two parts of the divided circuit. During learning, the output of the final filter comes to contain the codes of the classes: |00⟩, |01⟩, |10⟩, and |11⟩. The output cascade contains four Toffoli gates, which activate the corresponding ancilla qubits; at the end of the quantum circuit, the class of the image is one-hot encoded by the ancillas. The measurement results of the ancilla qubits are passed to the softmax activation function. The categorical cross-entropy is then used as the cost function. The gradients of the cost function with respect to the gate parameters are computed using the parameter-shift rule, and the new parameters of the quantum gates are obtained by a gradient descent step. The detailed structure of all layers is described below.
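The classical post-processing chain just described (softmax, categorical cross-entropy, parameter-shift gradients) can be sketched as follows. The single-qubit expectation ⟨Z⟩ = cos Θ after an RY(Θ) rotation of |0⟩ is used as a toy stand-in for the actual circuit outputs; the function names are ours:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the ancilla readout values.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(probs, one_hot_label):
    # Categorical cross-entropy used as the cost function.
    return -np.sum(one_hot_label * np.log(probs + 1e-12))

def expectation(theta):
    # Toy circuit output: <0| RY(theta)^dag Z RY(theta) |0> = cos(theta),
    # whose exact derivative is -sin(theta).
    return np.cos(theta)

def parameter_shift_grad(f, theta):
    # Parameter-shift rule: df/dtheta = [f(theta+pi/2) - f(theta-pi/2)] / 2.
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.3
grad = parameter_shift_grad(expectation, theta)
assert np.isclose(grad, -np.sin(theta))  # matches the analytic derivative

outputs = np.array([0.1, 0.7, 0.1, 0.1])   # toy ancilla measurement results
loss = cross_entropy(softmax(outputs), np.array([0, 1, 0, 0]))
assert loss > 0
```

The key point of the parameter-shift rule is that the gradient is obtained from two additional evaluations of the same circuit at shifted parameter values, which is exactly what the hybrid training loop performs on quantum hardware or a simulator.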
3 Structures of layers
Here we present the detailed description of the layers that are used in our quantum machine learning algorithm.
3.1 Preliminary scanning using n-qubit filters
The structure of the 4-qubit filters is presented in Figure 2A. First, RY(Θ1), RY(Θ2), RY(Θ3), and RY(Θ4) rotations are applied in order to rotate each of the four qubits separately. We propose to use controlled parameterized rotations RY(Φ) for the entanglement, which is an essential new element in the structure of the quantum perceptron. We note that in Ref. [31] the authors use standard controlled-X gates for this purpose. Here, as we demonstrate, the parameterized entanglement scheme provides higher accuracy of image classification due to the more flexible learning algorithm.
Quantum circuits for preliminary scanning: (A) the 4-qubit filter with 4-qubit entanglement; (B) the stack of 3-qubit filters with 4-qubit entanglement; (C) the 2-qubit filters with 4-qubit entanglement.
In classical machine learning, the output of the linear perceptron is passed through a certain non-linear function, which is essential for the learning process. In the quantum case, instead of summation over neurons, we use entanglement of qubits. The degree of entanglement is controlled by the parameters Φ, which makes the learning process more flexible, so the classification procedure may become more accurate. In fact, many classical activation functions, like sigmoid or tanh, behave akin to switches: their values change from 0 to 1 or from −1 to 1 within a certain region. In the quantum domain, we can switch from a separable (non-entangled) state to an entangled one, which could play the role of the non-linearity in classical learning. Thus, individual rotations followed by parameterized entanglement can be considered an analog of the perceptron with a non-linearity.
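This switching behavior can be illustrated numerically. Below is a hedged numpy sketch of a minimal two-qubit "quantum perceptron": individual RY rotations followed by a controlled RY(Φ) entangler. The specific angles and the choice of entanglement entropy as the measure are our illustrative assumptions, not part of the circuit of Figure 2:

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def cry(phi):
    # Controlled-RY(phi): rotates the target only when the control is |1>.
    u = np.eye(4)
    u[2:, 2:] = ry(phi)
    return u

def entanglement_entropy(state):
    # Schmidt coefficients of a 2-qubit state via SVD of its 2x2 reshape.
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def perceptron(theta1, theta2, phi):
    # Individual rotations of |00>, then the parameterized entangler.
    state0 = np.kron(ry(theta1) @ [1, 0], ry(theta2) @ [1, 0])
    return cry(phi) @ state0

# Phi = 0: the entangler is off and the state stays separable,
# like a perceptron whose activation has not switched on.
assert np.isclose(entanglement_entropy(perceptron(1.0, 0.5, 0.0)), 0.0)
# Phi > 0: entanglement is switched on gradually, controlled by Phi.
assert entanglement_entropy(perceptron(1.0, 0.5, np.pi)) > 0.1
```

The trainable Φ thus interpolates continuously between the separable and entangled regimes, which is the sense in which it mimics a tunable activation.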
After the 4-qubit scanning, smaller-scale filters are applied to analyze the obtained quantum feature map in more detail. The structure of the layers with 3-qubit filters is presented in Figure 2B. In order to rotate 3 individual qubits, RY(Θ1), RY(Θ2), and RY(Θ3) gates are applied. Similarly to the case of 4-qubit filters, the individual rotations are performed by parameterized RY gates. We note that even in the case of 3-qubit filters, we use entanglement of 4 qubits. Although entanglement of 3 qubits looks more intuitive in this case, as we show below, the 4-qubit entanglement provides more accurate image recognition results. More detailed scanning of images is performed by the layer with 2-qubit filters; the corresponding circuit is given in Figure 2C (see also Ref. [32]). As in all previous cases, we use 4-qubit entanglement, and the filter consists of 4 individual rotations with additional entanglement by CNOT gates. The idea of using 4-qubit entanglement is inspired by classical CNNs, where a new feature map is generated by summing the previous feature maps contracted with weights and subsequently applying a non-linearity.
3.2 Quantum convolutional neural network layer with pooling
After the preliminary scanning step, the obtained quantum state of 8 qubits contains an encoding of the feature maps. The role of the next layer (see Figure 3) is to analyze these maps in more detail and pick out the most important of them. The scheme of the layer is given in Figure 3B, where the convolutional filter is the same as in Figure 3A. We note that in the pooling circuit, the controlled RZ rotation is activated if the first qubit is in state 1, while the controlled RX gate is applied when the upper qubit is in state 0. This is conceptually similar to the structure proposed in Ref. [32].
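The pooling element can be sketched in numpy as follows. This is our illustrative construction of the stated convention (RZ fires on control |1⟩, RX fires on control |0⟩); in the trained circuit, the rotation angles are learnable parameters and the control qubit is subsequently discarded:

```python
import numpy as np

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

P0 = np.diag([1.0, 0.0])  # projector onto |0> of the control qubit
P1 = np.diag([0.0, 1.0])  # projector onto |1> of the control qubit

def pooling_unitary(a, b):
    # Controlled-RZ(a): acts on the target when the control is |1>.
    crz = np.kron(P0, np.eye(2)) + np.kron(P1, rz(a))
    # "Anti-controlled" RX(b): acts on the target when the control is |0>.
    arx = np.kron(P0, rx(b)) + np.kron(P1, np.eye(2))
    return arx @ crz   # RZ part first, then the RX part

u = pooling_unitary(0.4, 1.1)
assert np.allclose(u @ u.conj().T, np.eye(4))  # a valid two-qubit unitary
```

After this unitary, the control qubit carries no further role and is traced out, halving the number of active qubits, which is the quantum analog of dimensionality reduction in classical pooling.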
In (A) the convolutional layer with pooling is shown. In (B) the structure of regular layers is illustrated.
3.3 Regular layers
Similarly to the CCNN case, several regular layers are placed after the convolutional layers; in our case we add 8 such layers, as shown in Figure 3A. In order to obtain more accurate results, double entanglement is added after the individual rotations. The second reduction of the number of qubits in the circuit is done by two pooling procedures, as in the case of the convolutional layers. In order to obtain the required structure, we add a final filter at the end of the quantum circuit. As shown below, the use of the final filter is essential for obtaining more accurate image classification results.
3.4 Toffoli and controlled rotation gates
The practical realization of high-fidelity two-qubit operations on quantum hardware is still a challenging task. The situation is typically even more difficult for three-qubit gates, such as the Toffoli gate. Thus, it is necessary to decompose these gates into single- and two-qubit gates, which can be performed in practice. The general algorithm for n-controlled rotations is presented in Ref. [40]; for the case of a single-controlled rotation, it can be expressed as shown in Figure 4A. In order to implement the Toffoli gate, we consider the qubit inversion as a rotation around the X or Y axis, and in our case a doubly-controlled RY(Θ) gate is used with Θ = 2π. The circuit is presented in Figure 4B; it corresponds to the representation of a sum of parameterized n-controlled rotations considered in Ref. [40]. The Toffoli gate, in fact, can be considered as a sum of such rotations with n = 2, where the Θ angles of all rotations, except the one controlled by the |11⟩ combination, are set to zero. The α angles are defined along the lines of the procedure of Ref. [40]; they are obtained from the Θ angles by a simple matrix transformation.
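For the single-controlled rotation of Figure 4A, the standard decomposition uses two CNOT gates and two half-angle RY rotations on the target. The following numpy check is an illustrative sketch of that identity under our own naming, not the exact circuit of Ref. [40]:

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# CNOT with the first qubit as control, second as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def cry_direct(theta):
    # Controlled-RY(theta) written directly as a block matrix.
    u = np.eye(4)
    u[2:, 2:] = ry(theta)
    return u

def cry_decomposed(theta):
    # Circuit in time order: CNOT, RY(-theta/2) on target, CNOT,
    # RY(+theta/2) on target; the matrix product reads right to left.
    # With control |1>, X RY(-theta/2) X = RY(+theta/2), so the two
    # half-angle rotations add up to RY(theta); with control |0>, they cancel.
    a = np.kron(np.eye(2), ry(theta / 2))
    b = np.kron(np.eye(2), ry(-theta / 2))
    return a @ CNOT @ b @ CNOT

theta = 0.7
assert np.allclose(cry_direct(theta), cry_decomposed(theta))
```

The same cancellation mechanism, extended with the matrix-transformed α angles of Ref. [40], underlies the multi-controlled decomposition of Figure 4B.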
(A) Single-controlled rotation gates in terms of rotations and CNOT gates. (B) Decomposition of the Toffoli gate in terms of RY rotations and CNOT gates.
We note that multiqubit gate decompositions can be further improved using qudits, which are multilevel quantum systems. As has been shown, the upper levels of qudits can be used instead of ancilla qubits in the decomposition [41–45].
4 Classification results
We benchmark the proposed quantum machine learning algorithm using hand-written digits from the MNIST dataset and clothing images from the fashion MNIST dataset. Examples are presented in Figure 5.
Examples of MNIST digit (top) and fashion MNIST (bottom) images used in the experiments.
All simulations are performed using the Cirq Python library for constructing quantum circuits; the TensorFlow Quantum library [39] is used for implementing the machine learning algorithm with parameterized quantum circuits. We use the Adam version of gradient descent with a learning rate of 0.00005; the overall number of trainable parameters in the QCNN circuit is 149. As a metric of model performance, we simply use the recognition accuracy, and for a more detailed analysis, two sets of experiments are performed. In all experiments, the parameterized quantum circuits are trained for 50 epochs.
In the first set, training and classification are done for the case when the dataset consists of images that have a certain similarity, so that the classification problem becomes more difficult. We use MNIST images of the digits 3, 4, 5, and 6 for this part. Fashion MNIST images with labels 0, 1, 2, and 3 are also used for this purpose.
The second experimental set is focused on images that differ strongly from each other, making the recognition process easier; MNIST digits 0, 1, 2, and 3 and fashion MNIST images with labels 1, 2, 8, and 9 are considered. The total number of images of each type is given in Table 1.
Table 1. Number of images of each type.

MNIST digits      0      1      2      3      4      5      6
Training       5923   6742   5958   6132   5842   5421   5918
Test            980   1135   1032   1010    982    892    958

MNIST fashion     0      1      2      3      8      9
Training       6000   6000   6000   6000   6000   6000
Test           1000   1000   1000   1000   1000   1000
Each image vector is normalized to one, since only such vectors can be used by the amplitude encoding algorithm. The results of image classification are given in Table 2. Quantum circuits for multiclass classification were considered in Ref. [31]; the QCNN examples provided in the TensorFlow Quantum documentation [39] can also be generalized relatively simply to multiclass classification tasks. In the second column of Table 2, we provide the results of experiments with circuits similar to those of Ref. [31]. In order to obtain these results, we replace all the RY(Θ) gates used in the entanglement steps by CNOT gates. We also remove all parts of the circuit of Figure 1 that are placed after the regular layers, i.e., the pooling layers, the final layers, and the part with the Toffoli gates. Entanglement of the ancilla qubits with the regular layers is done via CNOT gates according to Figure 2 of Ref. [31]. The third column of Table 2 contains the results obtained with the full circuit of Figure 1. The significant improvement in classification accuracy is caused by two factors. The first is the use of parameterized entanglement in our circuit. Secondly, the increase in performance may be connected with the fact that our circuit is constructed in a way similar to classical neural networks: we use a qubit reduction procedure, in analogy with the reduction of the number of layer outputs in the classical case, until the number of outputs becomes equal to the number of target classes. Note that in Figure 1 the ancilla qubits are used only at the read-out step, and no entanglement is needed between the ancillas and the other qubits during the computational procedure, which can significantly relax the requirements on the corresponding quantum hardware. We also compare the obtained quantum results with the results of a CCNN with a similar number of parameters, which is 188 in our case. The structure of the CCNN is presented in Table 3.
Clearly, the classical results are more accurate, which indicates that, with a similar number of parameters, the classical model is still more expressive. An analysis of possible quantum advantage in machine learning tasks is presented in Ref. [46]. In that study, the authors analyze machine learning models based on kernel functions and show that, with enough data provided, classical methods become more powerful than the corresponding quantum algorithms. Thus, additional study is still needed to find machine learning tasks in which quantum algorithms will outperform their classical analogs.
Table 2. Accuracies of classification for quantum and classical convolutional neural networks.

                        Quantum (Ref. [31]-like)   Quantum (Figure 1)   Classical
MNIST digits (3456)            71.44                     85.14            94.25
MNIST digits (0123)            77.64                     90.03            95.85
MNIST fashion (0123)           71.15                     85.93            89.69
MNIST fashion (1289)           79.33                     93.63            97.42
Table 3. Structure of the used classical convolutional neural network with 188 parameters.

Layer type   Output shape        Number of parameters
Conv2D       (None, 14, 14, 1)   10
Conv2D       (None, 12, 12, 1)   10
Pooling      (None, 6, 6, 1)     0
Flatten      (None, 36)          0
Dense        (None, 4)           148
Dense        (None, 4)           20
Overall, the QCNN can produce multiclass classification accuracies that are qualitatively similar to those of the classical model when the numbers of parameters are comparable. We note that a similar level of accuracy was achieved in Ref. [31] for the 3-class classification problem. Here we have demonstrated this level of accuracy for 4-class classification tasks, which, to the best of our knowledge, is the first such demonstration.
5 Conclusion
Here we have demonstrated a quantum multiclass classifier based on the QCNN architecture. The main conceptual improvements that we have realized are the new model of the quantum perceptron and the optimized structure of the quantum circuit. We have shown the use of the proposed approach for 4-class classification of MNIST images. As we have presented, the results obtained with the QCNN are comparable with those of a CCNN when the numbers of parameters are comparable. We expect that further optimizations of the perceptron can be studied in the future in order to make this approach more efficient. Moreover, since the scheme requires the use of multiqubit gates, qudit processors, where multiqubit gate decompositions can be implemented more efficiently, can be of interest for the realization of such algorithms.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found here: https://deepai.org/dataset/mnist.
We thank A. Gircha for useful comments. AM and AF acknowledge the support of the Russian Science Foundation (19-71-10092). This work was also supported by the Priority 2030 program at the National University of Science and Technology “MISIS” under the project K1-2022-027 (analysis of the method) and by the Russian Roadmap on Quantum Computing (testing the MNIST images).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References

[1] Fedorov AK, Gisin N, Beloussov SM, Lvovsky AI. Quantum computing at the quantum advantage threshold: A down-to-business review (2022). arXiv:2203.17181 [quant-ph]. 10.48550/arXiv.2203.17181
[2] Ladd TD, Jelezko F, Laflamme R, Nakamura Y, Monroe C, O'Brien JL. Quantum computers.
[3] Brassard G, Chuang I, Lloyd S, Monroe C. Quantum computing.
[4] Shor P. In: Proceedings of the 35th Annual Symposium on Foundations of Computer Science (1994). p. 124–34.
[5] Lloyd S. Universal quantum simulators.
[6] McArdle S, Endo S, Aspuru-Guzik A, Benjamin SC, Yuan X. Quantum computational chemistry.
[7] Gidney C, Ekerå M. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits.
[8] Preskill J. Quantum computing in the NISQ era and beyond.
[9] Rebentrost P, Mohseni M, Lloyd S. Quantum support vector machine for big data classification.
[10] Dunjko V, Briegel HJ. Machine learning & artificial intelligence in the quantum domain: A review of recent progress.
[11] Biamonte J, Wittek P, Pancotti N, Rebentrost P, Wiebe N, Lloyd S. Quantum machine learning.
[12] Mottonen M, Vartiainen JJ, Bergholm V, Salomaa MM. Transformation of quantum states using uniformly controlled rotations.
[13] Cerezo M, Arrasmith A, Babbush R, Benjamin SC, Endo S, Fujii K, et al. Variational quantum algorithms.
[14] Bharti K, Cervera-Lierta A, Kyaw TH, Haug T, Alperin-Lea S, Anand A, et al. Noisy intermediate-scale quantum algorithms.
[15] Lloyd S, Mohseni M, Rebentrost P. Quantum algorithms for supervised and unsupervised machine learning (2013). arXiv:1307.0411 [quant-ph]. 10.48550/arXiv.1307.0411
[16] Dunjko V, Taylor JM, Briegel HJ. Quantum-enhanced machine learning.
[17] Amin MH, Andriyash E, Rolfe J, Kulchytskyy B, Melko R. Quantum Boltzmann machine.
[18] Cong I, Choi S, Lukin MD. Quantum convolutional neural networks.
[19] Zoufal C, Lucchi A, Woerner S. Quantum generative adversarial networks for learning and loading random distributions.
[20] Abbas A, Sutter D, Zoufal C, Lucchi A, Figalli A, Woerner S. The power of quantum neural networks.
[21] Schuld M, Killoran N. Quantum machine learning in feature Hilbert spaces.
[22] Schuld M, Bergholm V, Gogolin C, Izaac J, Killoran N. Evaluating analytic gradients on quantum hardware.
[23] Mengoni R, Di Pierro A. Kernel methods in quantum machine learning.
[24] Schuld M, Fingerhuth M, Petruccione F. Implementing a distance-based classifier with a quantum interference circuit.
[25] Benedetti M, Realpe-Gómez J, Perdomo-Ortiz A. Quantum-assisted Helmholtz machines: A quantum-classical deep learning framework for industrial datasets in near-term devices.
[26] Grant E, Benedetti M, Cao S, Hallam A, Lockhart J, Stojevic V, et al. Hierarchical quantum classifiers.
[27] Havlíček V, Córcoles AD, Temme K, Harrow AW, Kandala A, Chow JM, et al. Supervised learning with quantum-enhanced feature spaces.
[28] Tacchino F, Macchiavello C, Gerace D, Bajoni D. An artificial neuron implemented on an actual quantum processor.
[29] Johri S, Debnath S, Mocherla A, Singh A, Prakash A, Kim J, et al. Nearest centroid classification on a trapped ion quantum computer (2020). arXiv:2012.04145 [quant-ph]. 10.48550/arXiv.2012.04145
[30] Li W, Deng D-L. Recent advances for quantum classifiers.
[31] Chalumuri A, Kune R, Manoj BS. A hybrid classical-quantum approach for multi-class classification.
[32] Hur T, Kim L, Park DK. Quantum convolutional neural network for classical data classification.
[33] Sels D, Dashti H, Mora S, Demler O, Demler E. Quantum approximate Bayesian computation for NMR model inference.
[34] Seetharam K, Biswas D, Noel C, Risinger A, Zhu D, Katz O, et al. Digital quantum simulation of NMR experiments (2021). arXiv:2109.13298 [quant-ph]. 10.48550/arXiv.2109.13298
[35] Jain S, Ziauddin J, Leonchyk P, Yenkanchi S, Geraci J. Quantum and classical machine learning for the classification of non-small-cell lung cancer patients.
[36] Albash T, Lidar DA. Demonstration of a scaling advantage for a quantum annealer over simulated annealing.
[37] Enos GR, Reagor MJ, Henderson MP, Young CH, Horton K, Birch M, et al. Synthetic weather radar using hybrid quantum-classical machine learning (2021). arXiv:2111.15605 [quant-ph]. 10.48550/arXiv.2111.15605
[38] Perdomo-Ortiz A, Benedetti M, Realpe-Gómez J, Biswas R. Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers.
[39] Broughton M, Verdon G, McCourt T, Martinez AJ, Yoo JH, Isakov SV, et al. TensorFlow Quantum: A software framework for quantum machine learning (2020). arXiv:2003.02989 [quant-ph]. 10.48550/arXiv.2003.02989
[40] Möttönen M, Vartiainen JJ, Bergholm V, Salomaa MM. Quantum circuits for general multiqubit gates.
[41] Kiktenko EO, Nikolaeva AS, Xu P, Shlyapnikov GV, Fedorov AK. Scalable quantum computing with qudits on a graph.
[42] Liu W-Q, Wei H-R, Kwek L-C. Implementation of CNOT and Toffoli gates with higher-dimensional spaces (2021). arXiv:2105.10631 [quant-ph].
[43] Nikolaeva AS, Kiktenko EO, Fedorov AK. Efficient realization of quantum algorithms with qudits (2021). arXiv:2111.04384 [quant-ph]. 10.48550/arXiv.2111.04384
[44] Nikolaeva AS, Kiktenko EO, Fedorov AK. Decomposing the generalized Toffoli gate with qutrits.
[45] Gokhale P, Baker JM, Duckering C, Brown NC, Brown KR, Chong FT. In: Proceedings of the 46th International Symposium on Computer Architecture, ISCA '19. New York, NY, USA: Association for Computing Machinery (2019). p. 554–66.
[46] Huang HY, Broughton M, Mohseni M, Babbush R, Boixo S, Neven H, et al. Power of data in quantum machine learning.