Data augmentation is a critical component of training deep learning models. Although data augmentation has been shown to significantly improve image classification, its potential has not been thoroughly investigated for object detection. Given the additional cost of annotating images for object detection, data augmentation may be of even greater importance for this computer vision task. In this work, we study the impact of data augmentation on object detection. We first demonstrate that data augmentation operations borrowed from image classification may be helpful for training detection models, but the improvement is limited. We therefore investigate how learned, specialized data augmentation policies improve generalization performance for detection models. Importantly, these augmentation policies only affect training and leave a trained model unchanged during evaluation. Experiments on the COCO dataset indicate that an optimized data augmentation policy improves detection accuracy by more than +2.3 mAP and allows a single inference model to achieve a state-of-the-art accuracy of 50.7 mAP. Notably, the best policy found on COCO may be transferred unchanged to other detection datasets and models to improve predictive accuracy. For example, the best augmentation policy identified on COCO improves a strong baseline on PASCAL-VOC by +2.7 mAP. Our results also reveal that a learned augmentation policy is superior to state-of-the-art architecture regularization methods for object detection, even when considering strong baselines. Code for training with the learned policy is available online at https://github.com/tensorflow/tpu/tree/master/models/official/detection
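As a concrete illustration, a learned policy can be represented as a small set of sub-policies, each a sequence of (operation, probability, magnitude) tuples; one sub-policy is sampled per image during training, while evaluation applies no augmentation. The sketch below uses two toy, bounding-box-aware operations rather than the actual operations and magnitudes searched over on COCO.

```python
import random
import numpy as np

# Toy bounding-box-aware ops; the real search space uses color and
# geometric transforms specialized for detection.
def flip_lr(img, boxes, mag):
    # boxes: (N, 4) array of (ymin, xmin, ymax, xmax) in [0, 1]
    boxes = boxes.copy()
    boxes[:, [1, 3]] = 1.0 - boxes[:, [3, 1]]
    return img[:, ::-1], boxes

def brightness(img, boxes, mag):
    return np.clip(img * (1.0 + 0.1 * mag), 0.0, 255.0), boxes

OPS = {"flip_lr": flip_lr, "brightness": brightness}

# Hypothetical policy: sub-policies of (op, probability, magnitude).
POLICY = [[("flip_lr", 0.5, 0), ("brightness", 0.6, 2)],
          [("brightness", 0.8, 4)]]

def augment(img, boxes, training=True):
    if not training:                    # the trained model is unchanged at eval
        return img, boxes
    for op, prob, mag in random.choice(POLICY):  # one sub-policy per image
        if random.random() < prob:
            img, boxes = OPS[op](img, boxes, mag)
    return img, boxes
```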
High-frequency ultrasound (HFU) is well suited for imaging embryonic mice due to its noninvasive and real-time characteristics. However, manual segmentation of the brain ventricles (BVs) and body requires substantial time and expertise. This work proposes a novel deep-learning-based, end-to-end auto-context refinement framework consisting of two stages. The first stage produces a low-resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map, providing context to the refinement segmentation network. Joint training of the two stages yields a significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (from 0.818 to 0.906 for the BV, and from 0.919 to 0.934 for the body). The proposed method drastically reduces inference time (from 102.36 to 0.09 s per volume, roughly 1000x faster) while slightly improving segmentation accuracy over previous sliding-window approaches.
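A minimal 2D sketch of the ROI-cropping step (the actual framework operates on 3D HFU volumes): the stage-1 probability map both locates the crop and is passed along as auto-context; the threshold and margin values are illustrative.

```python
import numpy as np

def crop_roi(image, prob_map, thresh=0.5, margin=8):
    """Crop an ROI around the object suggested by the stage-1 probability map."""
    ys, xs = np.nonzero(prob_map > thresh)
    if ys.size == 0:                    # nothing found: keep the full field of view
        return image, prob_map
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
    # Both the raw image and the probability map are cropped, so the
    # refinement network sees the target plus stage-1 context.
    return image[y0:y1, x0:x1], prob_map[y0:y1, x0:x1]
```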
Meta-learning methods, most notably Model-Agnostic Meta-Learning (MAML), have achieved great success in adapting quickly to new tasks after having been trained on similar tasks. The mechanism behind their success, however, is poorly understood. We begin this work with an empirical analysis of MAML, finding that deep models are crucial for its success, even on sets of simple tasks where a linear model would suffice for any individual task. Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks. Following these findings, we propose a method that makes better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are used only for adaptation and are not part of the forward model. We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.
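The abstract does not spell out the parameterization, but one plausible instantiation, in the spirit of context-parameter methods, keeps the network weights frozen at inference and adapts only a small per-task vector; the conditioning interface below is an assumption for illustration, not the paper's method.

```python
import torch

def adapt_to_task(model, support_x, support_y, context_dim=16,
                  inner_lr=0.1, steps=5):
    """Inference-time adaptation sketch: only `context` is updated on the
    support set; the shared model weights never change. Assumes `model`
    accepts a context vector as a second argument (a hypothetical interface)."""
    context = torch.zeros(context_dim, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(
            model(support_x, context), support_y)
        (grad,) = torch.autograd.grad(loss, context)
        context = (context - inner_lr * grad).detach().requires_grad_(True)
    return context
```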
Keeping fit has become increasingly important. However, people may not achieve the expected exercise results without professional guidance, and hiring a personal trainer is expensive. In this paper, an effective real-time system called Fitness Done Right (FDR) is proposed to help people exercise correctly on their own. The system detects human body parts, recognizes the exercise pose, detects errors in test poses, and gives correction advice. A two-branch, multi-stage CNN is trained to learn human body parts and their associations. For the two poses considered in our model, plank and squat, we design a detection algorithm combining Euclidean and angular distances to determine the pose in an image. Finally, the pose-error detection module computes key values for the characteristic features of each pose, from which correction advice is derived. We run our system in a real-time setting with an error rate as low as $1.2\%$, and screenshots of experimental results are also presented.
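A sketch of the combined distance (the exact weighting and joint set used in FDR are assumptions): the Euclidean term compares joint positions directly, while the angular term compares the orientations of limb segments.

```python
import numpy as np

def pose_distance(test_kps, ref_kps, alpha=0.5):
    """test_kps, ref_kps: (N, 2) arrays of normalized joint coordinates.
    alpha balances the Euclidean and angular terms (illustrative value)."""
    eucl = np.linalg.norm(test_kps - ref_kps, axis=1).mean()
    # Orientation of each limb segment between consecutive joints.
    vt, vr = np.diff(test_kps, axis=0), np.diff(ref_kps, axis=0)
    at = np.arctan2(vt[:, 1], vt[:, 0])
    ar = np.arctan2(vr[:, 1], vr[:, 0])
    # Wrapped angular difference, mapped into [0, pi].
    ang = np.abs(np.arctan2(np.sin(at - ar), np.cos(at - ar))).mean()
    return alpha * eucl + (1.0 - alpha) * ang

# The reference template (plank or squat) with the smallest distance is
# the recognized pose; large per-joint terms flag candidate errors.
```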
In image compression, the aim is to reduce the number of bits required to represent an image by removing spatial and spectral redundancies. The discrete wavelet transform and wavelet packets have recently emerged as popular techniques for image compression, and the wavelet transform is one of the major processing components of such schemes. Compression results vary with the basis and the number of taps of the wavelet used. We propose that proper selection of the mother wavelet, based on the nature of the images, remarkably improves both quality and compression ratio. We present a novel technique based on a wavelet-packet best tree built with threshold entropy, combined with an enhanced run-length encoding. The method reduces the time complexity of wavelet-packet decomposition, since the complete tree is not decomposed; our algorithm decomposes and selects only the sub-bands that contain significant information according to the threshold entropy. The proposed enhanced run-length encoding yields better results than standard RLE, and the results compare favorably with JPEG-2000.
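The selective decomposition can be sketched with PyWavelets (an illustration of the idea, not the paper's implementation): a node is split further only while its threshold entropy, here the count of coefficients above a threshold, indicates significant remaining information; the parameter values are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def threshold_entropy(coeffs, t=0.1):
    # Threshold entropy: number of coefficients with magnitude above t.
    return int(np.sum(np.abs(coeffs) > t))

def best_tree_subbands(img, wavelet="db4", max_level=3, t=0.1, min_keep=64):
    """Grow the wavelet-packet tree selectively instead of decomposing it
    completely; t and min_keep are illustrative parameters."""
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=max_level)
    selected, frontier = [], [wp]       # start from the root node
    while frontier:
        node = frontier.pop()
        if node.level < max_level and threshold_entropy(node.data, t) > min_keep:
            frontier += [node[k] for k in ("a", "h", "v", "d")]  # split further
        else:
            selected.append(node)       # leaf of the best tree -> encode (RLE)
    return selected
```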
Recent work by Brock et al. (2018) suggests that Generative Adversarial Networks (GANs) benefit disproportionately from large mini-batch sizes. Unfortunately, using large batches is slow and expensive on conventional hardware. Thus, it would be nice if we could generate batches that were effectively large though actually small. In this work, we propose a method to do this, inspired by the use of Coreset-selection in active learning. When training a GAN, we draw a large batch of samples from the prior and then compress that batch using Coreset-selection. To create effectively large batches of 'real' images, we create a cached dataset of Inception activations of each training image, randomly project them down to a smaller dimension, and then use Coreset-selection on those projected activations at training time. We conduct experiments showing that this technique substantially reduces training time and memory usage for modern GAN variants, that it reduces the fraction of dropped modes in a synthetic dataset, and that it allows GANs to reach a new state of the art in anomaly detection.
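The core selection step is a greedy k-center (farthest-first) Coreset; a minimal sketch over the randomly projected activations might look like this.

```python
import numpy as np

def coreset_select(x, k, seed=0):
    """Greedy k-center selection: pick k rows of x so that every row is
    close to some selected row (farthest-first traversal)."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(x)))]
    dist = np.linalg.norm(x - x[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())        # farthest point from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(x - x[nxt], axis=1))
    return np.array(chosen)

# Sketch of use (names and sizes are illustrative): compress a large
# prior batch into a small, effectively large one.
# z_big = np.random.randn(1024, 128)
# z_small = z_big[coreset_select(z_big, 64)]
```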
This paper presents an approach to address data scarcity problems in underwater image datasets for visual detection of marine debris. The proposed approach relies on a two-stage variational autoencoder (VAE) and a binary classifier to evaluate the generated imagery for quality and realism. From the images generated by the two-stage VAE, the binary classifier selects "good quality" images and augments the given dataset with them. Lastly, a multi-class classifier is used to evaluate the impact of the augmentation process by measuring the accuracy of an object detector trained on combinations of real and generated trash images. Our results show that the classifier trained with the augmented data outperforms the one trained only with the real data. This approach applies not only to the underwater trash classification problem presented in this paper, but also to any data-dependent task for which collecting more images is challenging or infeasible.
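The augmentation loop reduces to sampling from the VAE, scoring with the binary classifier, and keeping high-scoring images; all names and the acceptance threshold below are illustrative assumptions.

```python
import numpy as np

def augment_with_vae(real_images, decode, quality_score,
                     n_samples=1000, latent_dim=64, keep_thresh=0.9):
    """decode: maps latent codes to images (stage-2 VAE decoder);
    quality_score: binary classifier returning P(good quality) per image.
    Both are assumed callables; keep_thresh is a hypothetical cutoff."""
    z = np.random.randn(n_samples, latent_dim).astype("float32")
    fake = decode(z)                    # candidate synthetic trash images
    good = fake[quality_score(fake) > keep_thresh]
    return np.concatenate([real_images, good], axis=0)
```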
Automatic detection of shadow regions in an image is a difficult task due to the lack of prior information about the illumination source and the dynamics of the scene objects. To address this problem, we propose a deep-learning-based segmentation method that identifies shadow regions at the pixel level in a single RGB image. We exploit a novel Convolutional Neural Network (CNN) architecture to identify and extract shadow features in an end-to-end manner. The network preserves learned contexts during training and observes the entire image, detecting global and local shadow patterns simultaneously. The proposed method is evaluated on two publicly available datasets, SBU and UCF, improving the state-of-the-art Balanced Error Rate (BER) by 22\% and 14\%, respectively.
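For reference, BER averages the per-class error rates over shadow and non-shadow pixels, so it is insensitive to the heavy class imbalance typical of shadow masks:

```python
import numpy as np

def balanced_error_rate(pred, gt):
    """pred, gt: boolean pixel masks (True = shadow). Returns BER in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    shadow_err = (~pred & gt).sum() / max(int(gt.sum()), 1)        # missed shadow
    nonshadow_err = (pred & ~gt).sum() / max(int((~gt).sum()), 1)  # false shadow
    return 50.0 * (shadow_err + nonshadow_err)
```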
This study proposes an approach to segmenting humans in a depth image based on a histogram of depth values. The region of interest is first extracted using a predefined threshold on histogram regions. A region-growing process is then employed to separate multiple human bodies lying within the same depth interval. Our contribution is an adaptive growth threshold derived from the detected histogram region. To demonstrate the effectiveness of the proposed method, we introduce an application to driver-distraction detection. After extracting the driver's position inside the car, we use a simple solution to track the driver's motion: analyzing the difference between the initial and current frames, a change of cluster position or depth value in the region of interest that crosses a preset threshold is flagged as a distracted activity. The experimental results demonstrate that the algorithm detects typical distracted-driving activities, such as using a phone for calling or texting, adjusting internal devices, and drinking, in real time.
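A 4-connected region-growing sketch; tying the adaptive threshold to the width of the detected histogram region is our reading of the paper's adaptive rule, and the scale factor is an assumption.

```python
import numpy as np
from collections import deque

def grow_region(depth, seed, region_lo, region_hi, scale=0.05):
    """Grow from `seed` (y, x); a neighbor joins the region if its depth
    step is below a threshold adapted to the histogram region's width."""
    tau = scale * (region_hi - region_lo)   # adaptive growth threshold
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(depth[ny, nx] - depth[y, x]) < tau):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```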
The Spiking Neural Network (SNN), a brain-inspired machine learning model, is closer to the computing mechanism of the human brain and better suited to revealing the essence of intelligence than the Artificial Neural Network (ANN), and has attracted growing attention in recent years. In addition, SNNs process information in the form of discrete spikes, which gives them low power consumption. In this paper, we propose an efficient and strong unsupervised SNN named BioSNet, with high biological plausibility, to handle image classification tasks. In BioSNet, we propose a new biomimetic spiking neuron model named MRON, inspired by 'recognition memory' in the human brain; design an efficient and robust network architecture that mirrors biological characteristics of the human brain; and extend the traditional voting mechanism to a Vote-for-All (VFA) decoding layer to reduce information loss during decoding. Simulation results show that BioSNet not only achieves state-of-the-art unsupervised classification accuracy on the MNIST/EMNIST datasets, but also exhibits superior learning efficiency and high robustness. Specifically, BioSNet trained with only dozens of samples per class achieves a favorable classification accuracy of over 80%, and randomly deleting up to 95% of the synapses or neurons in BioSNet leads to only slight performance degradation.
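A sketch of VFA decoding as we read it (the learned weighting is an assumption): rather than mapping each output neuron to a single class, every neuron contributes a weighted vote to every class, so no neuron's activity is discarded.

```python
import numpy as np

def vfa_decode(spike_counts, vote_weights):
    """spike_counts: (n_neurons,) spikes emitted for one test image.
    vote_weights: (n_neurons, n_classes) per-class vote strengths,
    assumed to be learned during training."""
    class_scores = spike_counts @ vote_weights  # every neuron votes for all classes
    return int(class_scores.argmax())
```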