Abstract: We propose the Quantum Graph Attention Network (QGAT), a hybrid graph neural network that integrates variational quantum circuits into the attention mechanism. At its core, QGAT employs strongly entangling quantum circuits with amplitude-encoded node features to enable expressive nonlinear interactions. Unlike classical multi-head attention, which computes each head separately, QGAT uses a single quantum circuit to generate multiple attention coefficients simultaneously. This quantum parallelism enables parameter sharing across heads, substantially reducing computational overhead and model complexity. Classical projection weights and quantum circuit parameters are optimized jointly in an end-to-end manner, allowing flexible adaptation to the learning task. Empirical results demonstrate QGAT's effectiveness in capturing complex structural dependencies and its improved generalization in inductive settings, highlighting its potential for scalable quantum-enhanced learning in domains such as chemistry, biology, and network analysis. Furthermore, experiments confirm that the quantum embedding enhances robustness against feature and structural noise, suggesting advantages for handling real-world noisy data. QGAT's modularity also permits straightforward integration into existing architectures, so it can readily augment classical attention-based models.
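
As a rough illustration of the attention-scoring idea, the PennyLane sketch below amplitude-encodes the concatenated features of a node pair, applies strongly entangling variational layers, and reads one Pauli-Z expectation per wire as the raw coefficient of one attention head. The qubit count, circuit depth, and one-head-per-wire readout are our own illustrative assumptions, not the paper's exact construction.

import pennylane as qml
import torch

n_qubits = 4        # amplitude encoding packs 2**4 = 16 features into 4 qubits
n_heads = n_qubits  # assumption: one attention coefficient per measured wire
n_layers = 2

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def attention_logits(pair_features, weights):
    # Amplitude-encode the concatenated (source, target) node features.
    qml.AmplitudeEmbedding(pair_features, wires=range(n_qubits), normalize=True)
    # Entangling variational layers whose parameters are shared by all heads.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value per wire -> one raw attention logit per head.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Circuit parameters, trained jointly with the classical projection weights.
shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = torch.randn(shape, requires_grad=True)

pair = torch.randn(2 ** n_qubits)         # features of one (i, j) node pair
logits = attention_logits(pair, weights)  # n_heads raw coefficients at once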

Abstract: In this study, we propose a novel architecture, the Quantum Pointwise Convolution, which incorporates pointwise convolution within a quantum neural network framework. Our approach leverages the strengths of pointwise convolution to efficiently integrate information across feature channels while adjusting channel outputs. By using quantum circuits, we map data to a higher-dimensional space, capturing more complex feature relationships. To address the current limitations of quantum machine learning in the Noisy Intermediate-Scale Quantum (NISQ) era, we implement several design optimizations: amplitude encoding for data embedding, which allows more information to be processed with fewer qubits, and a weight-sharing mechanism that accelerates the quantum pointwise convolution by removing the need to retrain for each input pixel. In our experiments, we applied the quantum pointwise convolution layer to classification tasks on the FashionMNIST and CIFAR10 datasets, where our model achieved performance competitive with its classical counterpart. These optimizations not only improve the efficiency of the quantum pointwise convolutional layer but also make it readily deployable in various CNN-based and other deep learning models, broadening its potential applications across architectures.
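
A hedged sketch of the quantum pointwise (1x1) convolution, again in PennyLane: the channel vector at each pixel is amplitude-encoded, a single set of variational weights is reused at every pixel position (the weight-sharing mechanism mentioned above), and one expectation value per wire serves as one output channel. The circuit structure and readout are assumptions for illustration, not the paper's exact design.

import pennylane as qml
import torch

n_qubits = 3             # 2**3 = 8 input channels fit into 3 qubits
out_channels = n_qubits  # assumption: one output channel per measured wire
n_layers = 2

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def pointwise_circuit(channel_vec, weights):
    # Amplitude encoding: 2**n_qubits channel values on n_qubits qubits.
    qml.AmplitudeEmbedding(channel_vec, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def quantum_pointwise_conv(feature_map, weights):
    # feature_map: (H, W, C) with C == 2**n_qubits. The same weights are
    # applied at every pixel (weight sharing), so the circuit is trained
    # once rather than per pixel.
    H, W, _ = feature_map.shape
    out = torch.zeros(H, W, out_channels)
    for i in range(H):
        for j in range(W):
            out[i, j] = torch.stack(list(pointwise_circuit(feature_map[i, j], weights)))
    return out

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = torch.randn(shape, requires_grad=True)
fmap = torch.rand(4, 4, 2 ** n_qubits)       # toy 4x4 feature map
out = quantum_pointwise_conv(fmap, weights)  # shape (4, 4, out_channels)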