Automatic radiology report generation is essential to computer-aided diagnosis. Building on the success of image captioning, medical report generation has become achievable. However, the lack of annotated disease labels remains the bottleneck in this area. In addition, the image-text data bias problem and complex sentences make it more difficult to generate accurate reports. To address these gaps, we present a self-guided framework (SGF), a suite of unsupervised and supervised deep learning methods that mimics the process of human learning and writing. In detail, our framework obtains domain knowledge from medical reports without extra disease labels and guides itself to extract fine-grained visual features associated with the text. Moreover, SGF improves the accuracy and length of generated medical reports by incorporating a similarity comparison mechanism that imitates human self-improvement through comparative practice. Extensive experiments demonstrate the utility of our SGF in the majority of cases, showing its superior performance over state-of-the-art methods. Our results highlight the capacity of the proposed framework to distinguish fine-grained visual details between words and verify its advantage in generating medical reports.
Temperature drift, stress birefringence, and low-frequency vibration lead to randomness and fluctuation in the output of an optical voltage sensor (OVS). To solve this problem, this study adopts lock-in amplifier technology with the aid of a high-speed rotating electrode to realize electric field modulation. This technique shifts the measured signal band from near 50 Hz to several kilohertz, so that the output signal avoids interference from low-frequency temperature drift, stress birefringence, and vibration, leading to higher stability and reliability. Electro-optic coupled-wave theory and the electrostatic field finite element method are used to investigate the shape of the modulation wave. The simulation results prove that the lock-in technique can protect the measured voltage signal from large step-signal interference and recover the original signal faithfully, while the sampling rate is reduced to the value of the modulation frequency.
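The demodulation principle the abstract relies on can be illustrated numerically. The sketch below is a toy with hypothetical amplitudes and frequencies, not the OVS hardware chain: a signal carried at a several-kHz modulation frequency is mixed with quadrature references and averaged over an integer number of periods, so the low-frequency 50 Hz-band drift averages out while the modulated amplitude is recovered.

```python
import math

# Toy numbers (hypothetical, for illustration only)
fs = 100_000            # sample rate, Hz
fm = 5_000              # modulation frequency: several kHz, per the abstract
A = 0.8                 # amplitude carried at fm that we want to recover
n = fs // 50            # 20 ms window = whole periods of both 50 Hz and fm

# Measured signal: the useful amplitude A at fm, plus 50 Hz-band drift
sig = [A * math.sin(2 * math.pi * fm * t / fs)
       + 0.5 * math.sin(2 * math.pi * 50 * t / fs)
       for t in range(n)]

# Lock-in demodulation: mix with quadrature references, then low-pass (average)
i = sum(s * math.sin(2 * math.pi * fm * t / fs) for t, s in enumerate(sig)) / n
q = sum(s * math.cos(2 * math.pi * fm * t / fs) for t, s in enumerate(sig)) / n
amp = 2 * math.hypot(i, q)      # recovered amplitude, drift rejected
print(round(amp, 3))
```

Averaging over whole modulation periods plays the role of the post-mixer low-pass filter, which is also why the effective output rate can drop to the modulation frequency.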
Object detection is a difficult downstream task in computer vision. For onboard edge computing platforms, a giant model can hardly achieve real-time detection, while a lightweight model built from a large number of depth-wise separable convolutional layers cannot achieve sufficient accuracy. We introduce a new method, GSConv, to lighten the model while maintaining accuracy; GSConv strikes a better balance between the model's accuracy and speed. We also provide a design paradigm, slim-neck, to achieve higher computational cost-effectiveness for detectors. In experiments, our method obtains state-of-the-art results (e.g., 70.9% mAP0.5 on SODA10M at a speed of ~100 FPS on a Tesla T4) compared with the original networks. The code will be open-sourced.
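The accuracy/speed trade-off comes down to per-layer parameter (and FLOP) counts. The arithmetic below contrasts a standard convolution, a depth-wise separable convolution (DSC), and our reading of GSConv's composition (a standard convolution to half the output channels followed by a depth-wise convolution, then concatenation and channel shuffle); bias terms are ignored and kernel sizes are assumed, so treat this as a back-of-the-envelope sketch rather than the released implementation.

```python
def conv_params(c1, c2, k):
    # standard convolution: every output channel sees every input channel
    return c1 * c2 * k * k

def dsc_params(c1, c2, k):
    # depth-wise separable convolution: depth-wise k x k + point-wise 1 x 1
    return c1 * k * k + c1 * c2

def gsconv_params(c1, c2, k):
    # GSConv as we read it: standard conv to c2/2 channels, then a depth-wise
    # conv on those channels, concatenated and channel-shuffled to c2 channels
    half = c2 // 2
    return c1 * half * k * k + half * k * k

c1, c2, k = 128, 256, 3   # hypothetical layer sizes
print(conv_params(c1, c2, k), dsc_params(c1, c2, k), gsconv_params(c1, c2, k))
```

With these numbers GSConv sits near half the standard convolution's cost, consistent with the abstract's framing of trading a little capacity for speed without collapsing to the pure DSC extreme.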
Transformer, the latest technological advance of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask the question: can Transformer models transform medical imaging? In this paper, we attempt to respond to this inquiry. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and present current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, etc. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer and the CNN, and on the type of architecture, which specifies the manner in which the Transformer and the CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with a discussion of future perspectives.
The point cloud completion task aims to predict the missing part of incomplete point clouds and generate complete point clouds with details. In this paper, we propose a novel transformer-based point cloud completion network, CompleteDT. CompleteDT can learn features within neighborhoods and explore the relationships among these neighborhoods. By sampling the incomplete point cloud to obtain point clouds of different resolutions, we extract features from these point clouds in a self-guided manner and convert them into a series of patches based on the geometric structure. To help the transformer leverage sufficient information about point clouds, we provide a plug-and-play module named the Relation-Augment Attention Module (RAA), consisting of a Point Cross-Attention Module (PCA) and a Point Dense Multi-Scale Attention Module (PDMA). These two modules enhance the ability to learn features within patches and to consider the correlations among them. RAA thus learns the structure of incomplete point clouds and helps infer the local details of the generated complete point clouds. In addition, we predict the complete shape from patches with an efficient generation module, namely the Multi-resolution Point Fusion Module (MPF). MPF gradually generates complete point clouds from patches and updates the patches based on these generated point clouds. Experimental results show that our method largely outperforms state-of-the-art methods.
In recent years, research on rehabilitation robot technology has become a hotspot in the fields of rehabilitation medicine engineering and robotics. To assist active rehabilitation of patients with unilateral lower-limb injury, we propose a new self-service rehabilitation training method that controls the injured lower limb through the contralateral healthy upper limb. Firstly, the movement data of the upper and lower limbs of healthy subjects in a normal walking state are obtained through gait measurement experiments. Secondly, the eigenvectors of upper-limb and lower-limb movements within a single movement cycle are extracted. Thirdly, the linear mapping between upper-limb and lower-limb movement is identified using the least-squares method. Finally, a simulation experiment of self-service rehabilitation training is implemented in MATLAB/Simulink. The results indicate that the identified linear mapping model achieves good accuracy and adaptability. The self-service rehabilitation training method is effective in helping patients with unilateral limb injury perform rehabilitation training on their own.
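The least-squares identification in the third step reduces, in the simplest scalar case, to a linear fit between one upper-limb feature and one lower-limb feature. A minimal sketch with hypothetical joint-angle data follows (the actual method fits eigenvector features over a full gait cycle):

```python
# Hypothetical single-cycle gait samples (degrees): shoulder swing angle of the
# healthy upper limb (input) and hip angle of the lower limb (output).
upper = [-20, -10, 0, 10, 20, 10, 0, -10]
lower = [1.5 * u + 8.0 for u in upper]   # assume a roughly linear coupling

n = len(upper)
sx, sy = sum(upper), sum(lower)
sxx = sum(u * u for u in upper)
sxy = sum(u * l for u, l in zip(upper, lower))

# Least-squares identification of the linear mapping: lower ~ k * upper + c
k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - k * sx) / n
print(round(k, 3), round(c, 3))
```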
Recently, the development of mobile edge computing has enabled exhilarating edge artificial intelligence (AI) with fast response and low communication cost. The location information of edge devices is essential to support edge AI in many scenarios, such as smart homes, intelligent transportation systems, and integrated health care. Taking advantage of deep learning intelligence, centralized machine learning (ML)-based positioning techniques have received considerable attention from both academia and industry. However, potential issues such as location information leakage and huge data traffic limit their application. Fortunately, a newly emerging privacy-preserving distributed ML mechanism, named federated learning (FL), is expected to alleviate these concerns. In this article, we illustrate a framework for an FL-based localization system and the involved entities at edge networks, and we elaborate on the advantages of such a system. For its practical implementation, we investigate field-specific issues along with system-level solutions, which are further demonstrated on a real-world database. Finally, challenging open problems in this field are outlined.
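The core privacy argument is that only model parameters, never raw positioning data, leave a device. A minimal single-round federated-averaging sketch with hypothetical clients and a toy linear signal-to-position model (an illustration of the FL mechanism, not the article's actual system) looks like:

```python
def local_fit(points):
    # exact least squares on one client's private (feature, position) samples
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return k, (sy - k * sx) / n

clients = [[(0, 1.0), (1, 3.0), (2, 5.0)],       # device A's private samples
           [(3, 7.0), (5, 11.0), (6, 13.0)]]     # device B's private samples
models = [local_fit(c) for c in clients]          # only these leave the device

# Server: weight each client's model by its sample count and average (FedAvg)
total = sum(len(c) for c in clients)
k = sum(len(c) * m[0] for c, m in zip(clients, models)) / total
b = sum(len(c) * m[1] for c, m in zip(clients, models)) / total
print(round(k, 3), round(b, 3))
```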
This paper studies the cooperative learning of two generative flow models, in which the two models are iteratively updated based on jointly synthesized examples. The first is a normalizing flow that transforms an initial simple density into a target density by applying a sequence of invertible transformations. The second is a Langevin flow that runs finite steps of gradient-based MCMC toward an energy-based model. We start by proposing a generative framework that trains an energy-based model with a normalizing flow serving as an amortized sampler to initialize the MCMC chains of the energy-based model. In each learning iteration, we generate synthesized examples by using a normalizing flow initialization followed by a short-run Langevin flow revision toward the current energy-based model. We then treat the synthesized examples as fair samples from the energy-based model and update the model parameters with the maximum likelihood learning gradient, while the normalizing flow learns directly from the synthesized examples by maximizing its tractable likelihood. Under the short-run non-mixing MCMC scenario, the estimation of the energy-based model is shown to follow the perturbation of maximum likelihood, and the short-run Langevin flow and the normalizing flow form a two-flow generator that we call CoopFlow. We provide an understanding of the CoopFlow algorithm through information geometry and show that it is a valid generator, as it converges to a moment matching estimator. We demonstrate that the trained CoopFlow is capable of synthesizing realistic images, reconstructing images, and interpolating between images.
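The cooperative loop can be made concrete in one dimension. The sketch below is a drastically simplified toy, not the paper's image model: the "normalizing flow" is an affine map of a Gaussian (whose maximum-likelihood fit is closed-form), the "energy-based model" is a quadratic energy with a learnable mean, and each iteration performs flow initialization, short-run Langevin revision, an EBM maximum-likelihood update, and a flow update on the synthesized examples.

```python
import math
import random
import statistics

random.seed(0)

data = [random.gauss(3.0, 1.0) for _ in range(500)]  # observed examples

mu = 0.0                    # EBM parameter: energy E(x) = (x - mu)^2 / 2
a, b = 1.0, 0.0             # flow parameters: x = a * z + b, z ~ N(0, 1)
lr, step, ksteps = 0.1, 0.1, 5   # EBM rate, Langevin step size, Langevin steps

for _ in range(200):
    # 1) normalizing flow initializes the MCMC chains
    xs = [a * random.gauss(0, 1) + b for _ in range(500)]
    # 2) short-run Langevin flow revises samples toward the current EBM
    for _ in range(ksteps):
        xs = [x - step * (x - mu) + math.sqrt(2 * step) * random.gauss(0, 1)
              for x in xs]
    # 3) EBM: maximum-likelihood gradient, treating xs as fair samples
    mu += lr * (statistics.mean(data) - statistics.mean(xs))
    # 4) flow: closed-form MLE on the synthesized examples
    b = statistics.mean(xs)
    a = statistics.pstdev(xs)

print(round(mu, 2))   # mu should approach the data mean near 3.0
```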
Self-supervised grasp learning, i.e., learning to grasp by trial and error, has made great progress. However, training such a model is still time-consuming, and applying it in practice remains a challenge. This work presents a method for accelerating robotic grasp learning by pretraining on coarse affordance maps of the objects to be grasped, based on a small dataset. The pretrained model is harnessed as an initialization policy to warm-start grasp learning, guiding the robot to capture more effective rewards at the beginning of training. An object in its coarse affordance map is annotated with a single key point, which greatly alleviates the labeling burden. Extensive experiments in simulation and on a real robot are conducted to evaluate the proposed method. The simulation results show that it accelerates grasp learning by nearly three times over a vanilla Deep Q-Network-based method. On a real UR3 robot, it reaches a grasp success rate of 89.5% with only 500 grasp trials in about two hours, which is four times faster than its competitor. In addition, it shows outstanding generalization to grasping previously unseen novel objects. It outperforms several existing methods and has the potential to be directly applied to a robot for real-world grasp learning tasks.
Prior self-supervised learning research has mainly selected image-level instance discrimination as the pretext task. It achieves classification performance comparable to supervised learning methods, but with degraded transfer performance on downstream tasks such as object detection. To bridge this performance gap, we propose a novel object-level self-supervised learning method, called Contrastive learning with Downstream background invariance (CoDo). The pretext task is converted to focus on instance location modeling over various backgrounds, especially those of downstream datasets; background invariance is considered vital for object detection. Firstly, a data augmentation strategy is proposed that pastes instances onto background images and then jitters the bounding boxes to involve background information. Secondly, we align the architecture of our pretraining network with mainstream detection pipelines. Thirdly, hierarchical and multi-view contrastive learning is designed to improve the performance of visual representation learning. Experiments on MSCOCO demonstrate that CoDo with a common backbone, ResNet50-FPN, yields strong transfer learning results for object detection.
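The bounding-box jitter in the first step can be sketched directly. The function below uses hypothetical jitter ranges and a clamping policy of our own choosing, not the paper's exact augmentation: it perturbs each box edge by a fraction of the box size and clamps to the image, so a crop taken from the jittered box includes some background context.

```python
import random

random.seed(0)   # hypothetical reproducibility seed

def jitter_bbox(box, img_w, img_h, ratio=0.1):
    """Shift each edge of (x1, y1, x2, y2) by up to `ratio` of the box size,
    then clamp to the image, so the pasted instance keeps background context."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    dx1 = random.uniform(-ratio, ratio) * w
    dy1 = random.uniform(-ratio, ratio) * h
    dx2 = random.uniform(-ratio, ratio) * w
    dy2 = random.uniform(-ratio, ratio) * h
    nx1 = max(0.0, min(x1 + dx1, x2 + dx2 - 1))   # keep at least 1 px of width
    ny1 = max(0.0, min(y1 + dy1, y2 + dy2 - 1))
    nx2 = min(float(img_w), max(x2 + dx2, nx1 + 1))
    ny2 = min(float(img_h), max(y2 + dy2, ny1 + 1))
    return nx1, ny1, nx2, ny2

box = jitter_bbox((50, 60, 150, 180), img_w=224, img_h=224)
print(box)
```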