Advances in deep learning have recently made it possible to recover full 3D meshes of human poses from individual images. However, extending this capability to videos for recovering temporally coherent poses remains unexplored. A major challenge in this regard is the lack of appropriately annotated video data for learning the desired deep models. Existing human pose datasets provide only 2D or 3D skeleton joint annotations, and are moreover recorded in constrained environments. We first contribute a technique to synthesize monocular action videos with rich 3D annotations that are suitable for learning computational models for full-mesh 3D human pose recovery. Compared to existing methods that simply "texture-map" clothes onto 3D human pose models, our approach incorporates physics-based, realistic cloth deformations with the human body movements. The generated videos cover a large variety of human actions, poses, and visual appearances, while the annotations record accurate human pose dynamics and human body surface information. Our second major contribution is an end-to-end trainable recurrent neural network for full pose mesh recovery from monocular video. Using the proposed video data and an LSTM-based recurrent structure, our network explicitly learns to model the temporal coherence in videos and imposes geometric consistency over the recovered meshes. We establish the effectiveness of the proposed model with quantitative and qualitative analysis on the proposed and benchmark datasets.
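To make the LSTM-based design concrete, below is a minimal sketch of a recurrent regressor that maps per-frame CNN features to per-frame body-model parameters, with a simple smoothness penalty standing in for the temporal-coherence objective. All dimensions (2048-D features, 85 SMPL-style parameters) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TemporalMeshRegressor(nn.Module):
    """Sketch: LSTM regressor mapping per-frame CNN features to
    per-frame 3D body-model parameters (e.g., SMPL-style pose/shape)."""
    def __init__(self, feat_dim=2048, hidden_dim=1024, param_dim=85):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, param_dim)

    def forward(self, x):          # x: (batch, frames, feat_dim)
        h, _ = self.lstm(x)        # temporal context for every frame
        return self.head(h)        # (batch, frames, param_dim)

def temporal_smoothness(params):
    """Penalize frame-to-frame parameter jumps to encourage
    temporally coherent meshes."""
    return (params[:, 1:] - params[:, :-1]).pow(2).mean()

x = torch.randn(2, 16, 2048)       # dummy features for 16 frames
params = TemporalMeshRegressor()(x)
loss = temporal_smoothness(params)
```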
We introduce the Label Universal Targeted Attack (LUTA), which makes a deep model predict a label of the attacker's choice for `any' sample of a given source class with high probability. Our attack stochastically maximizes the log-probability of the target label for the source class with first-order gradient optimization, while accounting for the gradient moments. It also suppresses the leakage of attack information to the non-source classes to avoid raising suspicion about the attack. The perturbations resulting from our attack achieve high fooling ratios on the large-scale ImageNet and VGGFace models, and transfer well to the physical world. Given full control over the perturbation scope in LUTA, we also demonstrate it as a tool for deep model autopsy. The proposed attack reveals interesting perturbation patterns and observations regarding the deep models.
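The optimization described above can be sketched as a moment-accumulated (Adam-like) first-order ascent on the target-label log-probability over source-class batches. The code below is a simplified illustration in that spirit, not the authors' reference implementation; the leakage-suppression term is omitted, and the interface, step budget, and bound `eps` are assumptions.

```python
import torch
import torch.nn.functional as F

def targeted_universal_perturbation(model, source_loader, target_label,
                                    eps=10/255, steps=500, lr=0.01,
                                    beta1=0.9, beta2=0.999):
    """Sketch of a label-universal targeted attack: stochastically maximize
    the target-label log-probability over source-class samples, using
    first-order gradients with running moment estimates."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    m = torch.zeros_like(delta); v = torch.zeros_like(delta)
    for t, (x, _) in zip(range(1, steps + 1), source_loader):
        logp = F.log_softmax(model(x + delta), dim=1)
        loss = -logp[:, target_label].mean()        # maximize target log-prob
        grad, = torch.autograd.grad(loss, delta)
        m = beta1 * m + (1 - beta1) * grad          # first gradient moment
        v = beta2 * v + (1 - beta2) * grad.pow(2)   # second gradient moment
        step = m / (1 - beta1**t) / ((v / (1 - beta2**t)).sqrt() + 1e-8)
        delta = (delta - lr * step).clamp(-eps, eps)   # keep perturbation bounded
        delta = delta.detach().requires_grad_(True)
    return delta.detach()
```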
We propose the construction of a prototype scanner designed to capture multispectral images of documents. A standard sheet-feed scanner is modified by disconnecting its internal light source and connecting an external multispectral light source comprising narrow-band light-emitting diodes (LEDs). A document is scanned by illuminating the scanner light guide successively with different LEDs and capturing a scan of the document under each. The system is portable and can be used for potential applications in the verification of questioned documents, cheques, receipts, and banknotes.
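Since the document is scanned once per LED, the per-band scans must be assembled into a single multispectral cube for analysis. A minimal sketch of that step follows; the wavelength set is hypothetical, and registered scans are assumed (the sheet-feed transport would need to be repeatable, otherwise alignment is required first).

```python
import numpy as np

def assemble_multispectral_cube(scans):
    """Stack per-LED grayscale scans of the same document into a
    (height, width, bands) multispectral cube. Assumes the scans
    are spatially registered; align them first if they are not."""
    bands = [np.asarray(s, dtype=np.float32) for s in scans]
    assert len({b.shape for b in bands}) == 1, "scans must be registered"
    return np.stack(bands, axis=-1)

# One scan per LED wavelength (nm); the band set here is illustrative.
wavelengths = [465, 525, 590, 625, 850]
cube = assemble_multispectral_cube(
    [np.random.rand(64, 64) for _ in wavelengths])
```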
Objective: Monitoring athlete internal workload exposure, including for the prevention of catastrophic non-contact knee injuries, relies on the existence of a custom early-warning detection system. This system must be able to estimate accurate, reliable, and valid musculoskeletal joint loads for sporting maneuvers in near real-time and during match play. However, current methods are constrained to laboratory instrumentation, are labor- and cost-intensive, and require highly trained specialist knowledge, thereby limiting their ecological validity and large-scale deployment. Methods: Here we show that kinematic data obtained from wearable accelerometers, in lieu of embedded force platforms, can leverage recent supervised learning techniques to predict in-game, near real-time, multidimensional ground reaction forces and moments (GRF/M). Competing convolutional neural network (CNN) deep learning models were trained using laboratory-derived stance-phase GRF/M data and simulated sensor accelerations for running and sidestepping maneuvers derived from nearly half a million legacy motion trials. Predictions were then made from each model driven by five sensor accelerations recorded during independent inter-laboratory data capture sessions. Results: Despite adversarial conditions, the proposed deep learning workbench achieved correlations to ground truth, by GRF component, of 0.9663 (vertical) and 0.9579 (anterior), both for running, and 0.8737 (lateral) for sidestepping. Conclusion: The lessons learned from this study will facilitate the use of wearable sensors in conjunction with deep learning to accurately estimate near real-time, on-field GRF/M. Significance: Coaching, medical, and allied health staff can use this technology to monitor a range of joint loading indicators during game play, with the ultimate aim of minimizing the occurrence of non-contact injuries in elite and community-level sports.
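As a rough illustration of the mapping being learned, the sketch below regresses multi-sensor acceleration traces onto stance-phase GRF/M waveforms with a small 1D CNN. The channel counts, layer sizes, and 100-sample stance normalization are assumptions for the example, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GRFNet(nn.Module):
    """Illustrative 1D CNN mapping wearable-sensor acceleration traces
    to ground reaction force/moment waveforms over stance phase."""
    def __init__(self, n_sensors=5, n_axes=3, out_components=3):
        super().__init__()
        in_ch = n_sensors * n_axes
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, out_components, kernel_size=3, padding=1),
        )

    def forward(self, acc):   # acc: (batch, sensors*axes, time)
        return self.net(acc)  # (batch, GRF components, time)

acc = torch.randn(4, 15, 100)   # 5 sensors x 3 axes, 100 stance samples
grf = GRFNet()(acc)             # predicted vertical/anterior/lateral GRF
```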
We propose an octree-guided neural network architecture and a spherical convolutional kernel for machine learning from arbitrary 3D point clouds. The network architecture capitalizes on the sparse nature of irregular point clouds and hierarchically coarsens the data representation with space partitioning. At the same time, the proposed spherical kernels systematically quantize point neighborhoods to identify local geometric structures in the data, while maintaining the properties of translation invariance and asymmetry. We specify spherical kernels with the help of network neurons that are in turn associated with spatial locations. We exploit this association to avoid dynamic kernel generation during network training, which enables efficient learning with high-resolution point clouds. The effectiveness of the proposed technique is established on the benchmark tasks of 3D object classification and segmentation, achieving new state-of-the-art results on the ShapeNet and RueMonge2014 datasets.
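The core of the spherical kernel is the quantization of a neighbor's offset from the kernel center into a (radial, azimuth, elevation) bin, with one learnable weight matrix per bin. A minimal sketch of that bin indexing is given below; the bin counts and kernel radius are illustrative assumptions, and details such as a dedicated self-bin for the center point are omitted.

```python
import numpy as np

def spherical_bin(rel_xyz, n_r=3, n_theta=8, n_phi=2, radius=0.2):
    """Sketch of spherical-kernel quantization: map a neighbor's offset
    from the kernel center to a flat (radial, azimuth, elevation) bin
    index. Neighbors in the same bin share one weight matrix, giving
    translation-invariant, asymmetric filters."""
    x, y, z = rel_xyz
    r = np.sqrt(x*x + y*y + z*z) + 1e-12
    theta = np.arctan2(y, x) % (2*np.pi)      # azimuth in [0, 2*pi)
    phi = np.arccos(np.clip(z / r, -1, 1))    # elevation in [0, pi]
    i = min(int(r / radius * n_r), n_r - 1)   # radial shell
    j = int(theta / (2*np.pi) * n_theta) % n_theta
    k = min(int(phi / np.pi * n_phi), n_phi - 1)
    return i * n_theta * n_phi + j * n_phi + k

print(spherical_bin((0.05, 0.02, -0.01)))     # bin for one neighbor offset
```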
Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful design of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs). Our method embeds rich temporal dynamics in visual features by hierarchically applying the Short Fourier Transform to CNN features of the whole video. It additionally derives high-level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish new state-of-the-art results on the MSVD and MSR-VTT datasets for the METEOR and ROUGE_L metrics.
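One way to read "hierarchically applying the Short Fourier Transform" is to encode the frame-feature sequence at multiple temporal scales: the whole clip, then its halves, and so on, keeping a few low-frequency Fourier coefficients per segment. The sketch below implements that reading; the level count, coefficient count, and feature dimensions are assumptions for illustration.

```python
import numpy as np

def hierarchical_fourier_encoding(feats, levels=3, k=4):
    """Sketch: encode per-frame CNN features at multiple temporal scales.
    At each level the video is split into halves, and the first k Fourier
    coefficient magnitudes of each segment are kept per feature dimension."""
    def encode(segment):
        spec = np.abs(np.fft.rfft(segment, axis=0))[:k]   # (k, feat_dim)
        return spec.flatten()

    codes, segments = [], [feats]
    for _ in range(levels):
        codes.extend(encode(s) for s in segments)
        segments = [half for s in segments
                    for half in np.array_split(s, 2, axis=0)]
    return np.concatenate(codes)

feats = np.random.rand(32, 512)    # 32 frames of 512-D CNN features
code = hierarchical_fourier_encoding(feats)   # multi-scale temporal code
```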
Multiple Object Tracking (MOT) plays an important role in solving many fundamental problems in video analysis in computer vision. Most MOT methods employ two steps: object detection and data association. The first step detects objects of interest in every frame of a video, and the second establishes correspondence between the detected objects in different frames to obtain their tracks. Object detection has made tremendous progress in the last few years due to deep learning. However, data association for tracking still relies on hand-crafted constraints such as appearance, motion, spatial proximity, and grouping to compute affinities between the objects in different frames. In this paper, we harness the power of deep learning for data association in tracking by jointly modeling object appearances and their affinities between different frames in an end-to-end fashion. The proposed Deep Affinity Network (DAN) learns compact, yet comprehensive features of pre-detected objects at several levels of abstraction, and performs exhaustive pairing permutations of those features in any two frames to infer object affinities. DAN also accounts for multiple objects appearing and disappearing between video frames. We exploit the resulting efficient affinity computations to associate objects in the current frame deep into the previous frames for reliable online tracking. Our technique is evaluated on the popular multiple object tracking challenges MOT15, MOT17, and UA-DETRAC. Comprehensive benchmarking under twelve evaluation metrics demonstrates that our approach is among the best performing techniques on the leader boards for these challenges. The open-source implementation of our work is available at https://github.com/shijieS/SST.git.
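The "exhaustive pairing permutations" step can be sketched as forming every (i, j) feature pairing between the objects of two frames and scoring each pair with a small network, yielding an affinity matrix. The code below illustrates this idea only; the scorer and feature dimensions are stand-ins, not DAN's actual layers (see the open-source implementation above for those).

```python
import torch
import torch.nn as nn

def pairwise_affinity(feat_a, feat_b, scorer):
    """Sketch of DAN-style association: score every feature pairing
    between the detected objects of two frames."""
    Na, D = feat_a.shape
    Nb, _ = feat_b.shape
    pairs = torch.cat([
        feat_a.unsqueeze(1).expand(Na, Nb, D),   # object i from frame A
        feat_b.unsqueeze(0).expand(Na, Nb, D),   # object j from frame B
    ], dim=-1)                                   # (Na, Nb, 2D) pairings
    return scorer(pairs).squeeze(-1)             # (Na, Nb) affinity matrix

scorer = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))
A = pairwise_affinity(torch.randn(3, 512), torch.randn(5, 512), scorer)
# Row-wise softmax over A, with an extra column for objects that
# disappear, yields association probabilities between the two frames.
```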
Vision-based automatic counting of people has widespread applications in intelligent transportation systems, security, and logistics. However, there is currently no large-scale public dataset for benchmarking approaches on this problem. This work fills this gap by introducing the first real-world RGB-D People Counting DataSet (PCDS), containing over 4,500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportation using depth videos. The proposed method computes a point cloud from the depth video frame and re-projects it onto the ground plane to normalize the depth information. The resulting depth image is analyzed to identify potential human heads. The human head proposals are then refined using a 3D human model. The proposals in each frame of the continuous video stream are tracked to trace their trajectories, and the trajectories are further refined to ensure reliable counting. People are eventually counted by accumulating the head trajectories leaving the scene. To enable effective head and trajectory identification, we also propose two different compound features. A thorough evaluation on PCDS demonstrates that our technique is able to count people in cluttered scenes with high accuracy at 45 fps on a 1.7 GHz processor, and hence it can be deployed for effective real-time people counting in intelligent transportation systems.
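The first stage of this pipeline, back-projecting the depth frame to a point cloud and re-projecting onto the ground plane, can be sketched as follows. The camera intrinsics/extrinsics and grid resolution are assumed known here; in the resulting top-view height map, human heads appear as local maxima.

```python
import numpy as np

def depth_to_ground_plane_map(depth, fx, fy, cx, cy, R, t, cell=0.02):
    """Sketch: back-project a depth frame to a 3D point cloud, transform
    it to ground-plane coordinates, and rasterize a top-view height map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    pts = np.stack([(u - cx) * z / fx,          # pinhole back-projection
                    (v - cy) * z / fy, z], -1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0] @ R.T + t          # camera -> ground frame
    gx = (pts[:, 0] / cell).astype(int)
    gy = (pts[:, 1] / cell).astype(int)
    gx -= gx.min(); gy -= gy.min()
    height = np.zeros((gy.max() + 1, gx.max() + 1), np.float32)
    np.maximum.at(height, (gy, gx), pts[:, 2])  # tallest point per cell
    return height

depth = np.full((120, 160), 2.5, np.float32)    # toy frame: flat surface
hm = depth_to_ground_plane_map(depth, 160, 160, 80, 60,
                               np.eye(3), np.zeros(3))
```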
In sports analytics, an understanding of accurate on-field 3D knee joint moments (KJM) could provide an early warning system for athlete workload exposure and knee injury risk. Traditionally, this analysis has relied on captive laboratory force plates and associated downstream biomechanical modeling, and many researchers have approached the problem of portability by extrapolating models built on linear statistics. An alternative approach is to capitalize on recent advances in deep learning. In this study, using the pre-trained CaffeNet convolutional neural network (CNN) model, multivariate regression models of marker-based motion capture to 3D KJM for three sports-related movement types were compared. The strongest overall mean correlation to source modeling of 0.8895 was achieved over the initial 33% of stance phase for sidestepping. The accuracy of these mean predictions of the three critical KJM associated with anterior cruciate ligament (ACL) injury demonstrates the feasibility of on-field knee injury assessment using deep learning in lieu of laboratory-embedded force plates. This multidisciplinary research approach significantly advances machine representation of real-world physical models, with practical application for both community and professional level athletes.
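The transfer-learning setup described here amounts to reusing a pre-trained image CNN as a feature extractor and swapping its classification head for a multivariate regression head. A minimal sketch follows; AlexNet stands in for CaffeNet, and the output dimensions (three KJM components over an assumed stance-phase sampling) are illustrative, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: pre-trained backbone (AlexNet as a CaffeNet stand-in) with its
# 1000-way classifier replaced by a multivariate regression head that
# outputs three KJM components over the stance-phase samples of interest.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
n_kjm, n_samples = 3, 33                  # illustrative output dimensions
backbone.classifier[-1] = nn.Linear(4096, n_kjm * n_samples)

image = torch.randn(1, 3, 224, 224)       # mocap trial rendered as an image
kjm = backbone(image).view(1, n_kjm, n_samples)
loss = nn.MSELoss()(kjm, torch.zeros_like(kjm))   # regression objective
```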