The fabric-based pneumatic exosuit has become a prominent research topic because it is lighter and softer than traditional rigid exoskeletons. Existing research has focused more on the mechanical properties of the exosuit (e.g., torque and speed) and less on its wearability (e.g., appearance and comfort). This work presents a new design concept for fabric-based pneumatic exosuits, Volume Transfer, which shifts the volume of the pneumatic actuators from outside the garment's profile to the inside. This allows for a concealed appearance and a larger stress area while maintaining adequate torque. To verify this concept, we develop a fabric-based pneumatic exosuit for knee extension assistance. Its profile is only 26 mm, and its stress area wraps around almost half of the leg. We use a mathematical model and simulation to determine the parameters of the exosuit, avoiding multiple prototype iterations. Experimental results show that the exosuit generates a torque of 7.6 Nm at a pressure of 90 kPa and produces a significant reduction in the electromyographic activity of the knee extensor muscles. We believe that Volume Transfer could be widely adopted in future fabric-based pneumatic exosuit designs to substantially improve wearability.
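As a rough illustration of the kind of model-based parameter estimation the abstract alludes to (not the paper's actual model), a first-order estimate of pneumatic assistive torque multiplies pressure by an effective area and a moment arm. The effective area and moment arm below are hypothetical placeholders, chosen only to land on the same order as the reported 7.6 Nm:

```python
# Illustrative sketch only: a simplified first-order torque estimate
# for a pneumatic actuator, tau ~ P * A_eff * r. The area and moment-arm
# values are hypothetical placeholders, not parameters from the paper.

def assist_torque(pressure_pa: float, effective_area_m2: float, moment_arm_m: float) -> float:
    """Estimate assistive torque (Nm) as pressure x effective area x moment arm."""
    return pressure_pa * effective_area_m2 * moment_arm_m

# Example: 90 kPa with a hypothetical 28 cm^2 effective area and 3 cm moment arm
tau = assist_torque(90e3, 28e-4, 0.03)
print(f"{tau:.2f} Nm")  # 7.56 Nm, on the order of the reported torque
```

A designer could sweep such a model over candidate geometries in simulation before sewing a prototype, which is the workflow the abstract describes.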
Quantum computing promises to enhance machine learning and artificial intelligence, and a variety of quantum algorithms have been proposed to improve a wide spectrum of machine learning tasks. Yet recent theoretical works show that, like traditional classifiers based on deep classical neural networks, quantum classifiers suffer from a vulnerability problem: adding tiny, carefully crafted perturbations to legitimate data samples can induce incorrect predictions at a notably high confidence level. This poses serious problems for future quantum machine learning applications in safety- and security-critical scenarios. Here, we report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits. We train quantum classifiers, built upon variational quantum circuits consisting of ten transmon qubits with average lifetimes of 150 $\mu$s and average fidelities of simultaneous single- and two-qubit gates above 99.94% and 99.4%, respectively, on both real-life images (e.g., medical magnetic resonance imaging scans) and quantum data. We demonstrate that these well-trained classifiers (with testing accuracy up to 99%) can be practically deceived by small adversarial perturbations, whereas adversarial training significantly enhances their robustness to such perturbations. Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial attack and demonstrate an effective defense strategy, providing a valuable guide for quantum artificial intelligence applications on both near-term and future quantum devices.
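To make the notion of a "tiny carefully crafted perturbation" concrete, here is a minimal classical sketch (not the paper's quantum implementation) of a gradient-sign attack on a toy logistic classifier: the perturbation follows the sign of the loss gradient with respect to the input, which provably pushes the classifier's score away from the true label. All weights and data below are random stand-ins:

```python
import numpy as np

# Illustrative sketch: a gradient-sign adversarial perturbation on a toy
# logistic classifier. Weights and input are random placeholders, not data
# or models from the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # stand-in for "trained" weights
x = rng.normal(size=5)   # a legitimate input sample
y = 1.0                  # its true label (1 or 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, x, y):
    # Gradient of the binary cross-entropy loss with respect to the input x
    return (sigmoid(w @ x) - y) * w

eps = 0.3  # perturbation budget
x_adv = x + eps * np.sign(loss_grad_x(w, x, y))

print("clean score:      ", sigmoid(w @ x))
print("adversarial score:", sigmoid(w @ x_adv))  # strictly lower for y = 1
```

Adversarial training, the defense the abstract evaluates, would augment the training set with such perturbed samples so the classifier learns to score them correctly.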