Jason J. Corso

Learning to Estimate External Forces of Human Motion in Video

Jul 12, 2022
Nathan Louis, Tylan N. Templin, Travis D. Eliason, Daniel P. Nicolella, Jason J. Corso

Analyzing sports performance or preventing injuries requires capturing the ground reaction forces (GRFs) exerted by the human body during certain movements. Standard practice pairs physical markers with force plates in a controlled environment, but this approach suffers from high costs, lengthy implementation time, and variance across repeated experiments; hence, we propose inferring GRFs from video. While recent work has used LSTMs to estimate GRFs from 2D viewpoints, these models can be limited in their modeling and representation capacity. We are the first to apply a transformer architecture to the GRF-from-video task, and we introduce a new loss to minimize high-impact peaks in the regressed curves. We also show that pre-training and multi-task learning on 2D-to-3D human pose estimation improves generalization to unseen motions, and that pre-training on this auxiliary task provides good initial weights for fine-tuning on smaller (rarer) GRF datasets. We evaluate on LAAS Parkour and our newly collected ForcePose dataset, showing up to a 19% decrease in error compared to prior approaches.
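
A minimal sketch of what a peak-aware GRF regression loss could look like, assuming curves are handled as (batch, time) tensors; the function name, the overshoot penalty, and the beta weight are illustrative assumptions, not the paper's formulation:

```python
import torch

def grf_loss_with_peak_penalty(pred, target, beta=1.0):
    """MSE on regressed GRF curves plus a penalty on spurious high-impact peaks.

    Hypothetical form: besides the usual regression error, predicted values
    that overshoot the per-curve ground-truth maximum are penalized, which
    discourages exaggerated impact peaks in the regressed curves.
    pred, target: (batch, time) tensors of GRF values.
    """
    mse = torch.mean((pred - target) ** 2)
    peak = target.amax(dim=1, keepdim=True)   # per-curve ground-truth peak force
    overshoot = torch.relu(pred - peak)       # only values above that peak are penalized
    return mse + beta * torch.mean(overshoot ** 2)
```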

* Accepted to ACMMM 2022 

Q-TART: Quickly Training for Adversarial Robustness and in-Transferability

Apr 14, 2022
Madan Ravi Ganesh, Salimeh Yasaei Sekeh, Jason J. Corso

Raw deep neural network (DNN) performance is not enough; in real-world settings, computational load, training efficiency, and adversarial security are just as important, if not more so. We propose to tackle performance, efficiency, and robustness simultaneously with our algorithm Q-TART: Quickly Train for Adversarial Robustness and in-Transferability. Q-TART follows the intuition that samples highly susceptible to noise strongly affect the decision boundaries learned by DNNs, which in turn degrades their performance and adversarial robustness. By identifying and removing such samples, we demonstrate improved performance and adversarial robustness while using only a subset of the training data. Our experiments highlight Q-TART's strong performance across multiple dataset-DNN combinations, including ImageNet, and provide insights into its complementary behavior alongside existing adversarial training approaches, increasing robustness by over 1.3% while using up to 17.9% less training time.
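
A rough sketch of the sample-pruning idea, under the assumption that susceptibility can be proxied by how much a sample's loss fluctuates under small input noise; the scoring function and the 80% keep ratio below are illustrative, not Q-TART's actual criterion:

```python
import torch

def noise_susceptibility(model, x, y, loss_fn, sigma=0.05, trials=4):
    """Score each sample by how much its loss changes under Gaussian input noise."""
    model.eval()
    with torch.no_grad():
        base = loss_fn(model(x), y)                     # per-sample loss (reduction='none')
        deltas = []
        for _ in range(trials):
            noisy = x + sigma * torch.randn_like(x)
            deltas.append((loss_fn(model(noisy), y) - base).abs())
    return torch.stack(deltas).mean(dim=0)              # higher = more noise-susceptible

# Illustrative usage: drop the most susceptible 20% of the training set.
# scores = noise_susceptibility(net, images, labels,
#                               torch.nn.CrossEntropyLoss(reduction='none'))
# keep_idx = scores.argsort()[: int(0.8 * len(scores))]
```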

* 13 pages 

Come Again? Re-Query in Referring Expression Comprehension

Oct 19, 2021
Stephan J. Lemmer, Jason J. Corso

To build a shared perception of the world, humans rely on the ability to resolve misunderstandings by requesting and accepting clarifications. However, when evaluating visiolinguistic models, metrics such as accuracy enforce the assumption that a decision must be made based on a single piece of evidence. In this work, we relax this assumption for the task of referring expression comprehension by allowing the model to request help when its confidence is low. We consider two ways in which this help can be provided: multimodal re-query, where the user may point or click to give the model additional information, and rephrase re-query, where the user may only provide another referring expression. We demonstrate the importance of re-query by showing that providing the best referring expression for all objects can increase accuracy by up to 21.9%, and that this accuracy can be matched by re-querying only 12% of initial referring expressions. We further evaluate re-query functions for both multimodal and rephrase re-query across three modern approaches and demonstrate combined replacement for rephrase re-query, which improves average single-query performance by up to 6.5% and converges to within 1.6% of the single-query upper bound.
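
The gating idea can be stated in a few lines; this is only an interface sketch, where a model returning a (region, confidence) pair and a get_clarification callback supplying a click or a rephrased expression are assumptions made for illustration:

```python
def answer_with_requery(model, image, expression, threshold=0.5, get_clarification=None):
    """Return a referred region, re-querying the user only when confidence is low."""
    region, confidence = model(image, expression)
    if confidence >= threshold or get_clarification is None:
        return region
    extra = get_clarification()                   # a click/point or another referring expression
    region, _ = model(image, expression, extra)   # second pass with the added evidence
    return region
```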

* 17 pages, 3 figures 

The DEVIL is in the Details: A Diagnostic Evaluation Benchmark for Video Inpainting

May 11, 2021
Ryan Szeto, Jason J. Corso

Quantitative evaluation has increased dramatically among recent video inpainting work, but the video and mask content used to gauge performance has received relatively little attention. Although attributes such as camera and background scene motion inherently change the difficulty of the task and affect methods differently, existing evaluation schemes fail to control for them, thereby providing minimal insight into inpainting failure modes. To address this gap, we propose the Diagnostic Evaluation of Video Inpainting on Landscapes (DEVIL) benchmark, which consists of two contributions: (i) a novel dataset of videos and masks labeled according to several key inpainting failure modes, and (ii) an evaluation scheme that samples slices of the dataset characterized by a fixed content attribute, and scores performance on each slice according to reconstruction, realism, and temporal consistency quality. By revealing systematic changes in performance induced by particular characteristics of the input content, our challenging benchmark enables more insightful analysis into video inpainting methods and serves as an invaluable diagnostic tool for the field. Our code is available at https://github.com/MichiganCOG/devil .
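
A schematic of the slice-then-score protocol; the dictionary layout of results and the metric interface are assumptions made for the sake of a compact example:

```python
from collections import defaultdict

def evaluate_by_attribute(results, metric_fns):
    """Average each metric over dataset slices that share a content-attribute value."""
    scores = defaultdict(lambda: defaultdict(list))
    for item in results:                             # item: {"attributes": {...}, "pred": ..., "gt": ...}
        for attr, value in item["attributes"].items():
            for name, fn in metric_fns.items():      # e.g. reconstruction, realism, temporal consistency
                scores[(attr, value)][name].append(fn(item["pred"], item["gt"]))
    return {s: {n: sum(v) / len(v) for n, v in per_metric.items()}
            for s, per_metric in scores.items()}
```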

Depth from Camera Motion and Object Detection

Mar 02, 2021
Brent A. Griffin, Jason J. Corso

This paper addresses the problem of learning to estimate the depth of detected objects given some measurement of camera motion (e.g., from robot kinematics or vehicle odometry). We achieve this by 1) designing a recurrent neural network (DBox) that estimates the depth of objects using a generalized representation of bounding boxes and uncalibrated camera movement and 2) introducing the Object Depth via Motion and Detection Dataset (ODMD). ODMD training data are extensible and configurable, and the ODMD benchmark includes 21,600 examples across four validation and test sets. These sets include mobile robot experiments using an end-effector camera to locate objects from the YCB dataset and examples with perturbations added to camera motion or bounding box data. In addition to the ODMD benchmark, we evaluate DBox in other monocular application domains, achieving state-of-the-art results on existing driving and robotics benchmarks and estimating the depth of objects using a camera phone.
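
The learned DBox model aside, the underlying pinhole-camera intuition admits a closed form in the simplest case; the following assumes a static object, purely axial camera motion toward it, and noiseless boxes:

```python
def depth_from_scale_change(h1, h2, delta_z):
    """Initial object depth from bounding-box growth under forward camera motion.

    Pinhole model: box height is proportional to 1/depth, so
    h1 * z1 = h2 * (z1 - delta_z)  =>  z1 = h2 * delta_z / (h2 - h1).
    """
    if h2 <= h1:
        raise ValueError("expects the box to grow as the camera moves toward the object")
    return h2 * delta_z / (h2 - h1)

# e.g. a box growing from 50 px to 60 px over 0.2 m of forward motion
# implies an initial depth of 60 * 0.2 / (60 - 50) = 1.2 m
```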

* CVPR 2021 

Temporally Guided Articulated Hand Pose Tracking in Surgical Videos

Jan 12, 2021
Nathan Louis, Luowei Zhou, Steven J. Yule, Roger D. Dias, Milisa Manojlovich, Francis D. Pagani, Donald S. Likosky, Jason J. Corso

Articulated hand pose tracking is an underexplored problem with potential uses in an extensive number of applications, especially in the medical domain. With a robust and accurate tracking system for in-vivo surgical videos, the motion dynamics and movement patterns of the hands can be captured and analyzed for rich tasks including skills assessment, training surgical residents, and temporal action recognition. In this work, we propose a novel hand pose estimation model, Res152-CondPose, which improves tracking accuracy by incorporating a hand pose prior into its pose prediction. By following a temporally guided approach that effectively leverages past predictions, we show improvements over state-of-the-art methods that make frame-wise independent predictions. Additionally, we collect Surgical Hands, the first dataset providing multi-instance articulated hand pose annotations for in-vivo surgical videos. Our dataset contains 76 video clips from 28 publicly available surgical videos and over 8.1k annotated hand pose instances. We provide bounding boxes, articulated hand pose annotations, and tracking IDs to enable multi-instance area-based and articulated tracking. Evaluated on Surgical Hands, our method outperforms the state of the art on both mean Average Precision (mAP), which measures pose estimation accuracy, and Multiple Object Tracking Accuracy (MOTA), which assesses pose tracking performance.
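
One simple way to condition per-frame estimation on past predictions is to feed the previous frame's heatmaps back in as extra channels; this is a generic sketch of that pattern, not the Res152-CondPose architecture, and the channel counts are placeholders:

```python
import torch
import torch.nn as nn

class PosePriorFusion(nn.Module):
    """Predict joint heatmaps from image features plus a prior from the previous frame."""

    def __init__(self, feat_channels=256, num_joints=21):
        super().__init__()
        self.head = nn.Conv2d(feat_channels + num_joints, num_joints, kernel_size=1)

    def forward(self, feats, prev_heatmaps):
        # feats: (B, C, H, W) backbone features; prev_heatmaps: (B, J, H, W) prior from frame t-1
        return self.head(torch.cat([feats, prev_heatmaps], dim=1))
```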

* 10 pages 

Integrating Human Gaze into Attention for Egocentric Activity Recognition

Nov 08, 2020
Kyle Min, Jason J. Corso

It is well known that human gaze carries significant information about visual attention. However, there are three main difficulties in incorporating the gaze data in an attention mechanism of deep neural networks: 1) the gaze fixation points are likely to have measurement errors due to blinking and rapid eye movements; 2) it is unclear when and how much the gaze data is correlated with visual attention; and 3) gaze data is not always available in many real-world situations. In this work, we introduce an effective probabilistic approach to integrate human gaze into spatiotemporal attention for egocentric activity recognition. Specifically, we represent the locations of gaze fixation points as structured discrete latent variables to model their uncertainties. In addition, we model the distribution of gaze fixations using a variational method. The gaze distribution is learned during the training process so that the ground-truth annotations of gaze locations are no longer needed in testing situations since they are predicted from the learned gaze distribution. The predicted gaze locations are used to provide informative attentional cues to improve the recognition performance. Our method outperforms all the previous state-of-the-art approaches on EGTEA, which is a large-scale dataset for egocentric activity recognition provided with gaze measurements. We also perform an ablation study and qualitative analysis to demonstrate that our attention mechanism is effective.
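
A loose sketch of treating gaze as a discrete spatial latent, using a Gumbel-softmax sample as the attention map; the paper's variational training objective is omitted and the tensor shapes are assumptions:

```python
import torch.nn.functional as F

def gaze_attention(logits, features, tau=1.0, hard=False):
    """Pool features with a (relaxed) one-hot gaze sample over spatial cells.

    logits: (B, H*W) scores over spatial locations; features: (B, C, H, W).
    Returns a (B, C) feature vector attended at the sampled gaze location.
    """
    b, c, h, w = features.shape
    attn = F.gumbel_softmax(logits, tau=tau, hard=hard)   # differentiable discrete sample
    return (features * attn.view(b, 1, h, w)).sum(dim=(2, 3))
```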

* WACV 2021 camera ready (Supplementary material: on CVF soon) 

The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation

Oct 23, 2020
Shurjo Banerjee, Jesse Thomason, Jason J. Corso

Autonomous robot systems for applications from search and rescue to assistive guidance should be able to engage in natural language dialog with people. To study such cooperative communication, we introduce Robot Simultaneous Localization and Mapping with Natural Language (RobotSlang), a benchmark of 169 natural language dialogs between a human Driver controlling a robot and a human Commander providing guidance towards navigation goals. In each trial, the pair first cooperates to localize the robot on a global map visible to the Commander, then the Driver follows Commander instructions to move the robot to a sequence of target objects. We introduce a Localization from Dialog History (LDH) and a Navigation from Dialog History (NDH) task where a learned agent is given dialog and visual observations from the robot platform as input and must localize in the global map or navigate towards the next target object, respectively. RobotSlang comprises nearly 5k utterances and over 1k minutes of robot camera and control streams. We present an initial model for the NDH task, and show that an agent trained in simulation can follow the RobotSlang dialog-based navigation instructions for controlling a physical robot platform. Code and data are available at https://umrobotslang.github.io/.
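
For concreteness, one way a Navigation-from-Dialog-History example could be organized is sketched below; the field names are assumptions about the task inputs described above, not the released data format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NDHExample:
    """One illustrative NDH training example: dialog so far, observations, next target."""
    dialog_history: List[str]                    # alternating Driver/Commander utterances
    observations: List[str]                      # paths to robot camera frames seen so far
    target_object: str                           # next object the Driver must reach
    actions: List[str] = field(default_factory=list)   # e.g. "forward", "left", "stop"
```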

* Conference on Robot Learning 2020 

DAER to Reject Seeds with Dual-loss Additional Error Regression

Sep 16, 2020
Stephan J. Lemmer, Jason J. Corso

Many vision tasks require side information at inference time (a seed) to fully specify the problem. For example, an initial object segmentation is needed for video object segmentation. To date, all such work makes the tacit assumption that the seed is a good one. However, in practice, from crowd-sourced seeds to noisy automated ones, this is not the case. We hence propose the novel problem of seed rejection: determining whether to reject a seed based on the expected degradation relative to a gold-standard seed. We provide a formal definition of this problem and focus on two challenges: distinguishing poor primary inputs from poor seeds, and understanding the model's response to noisy seeds conditioned on the primary input. With these challenges in mind, we propose a novel training method and evaluation metrics for the seed rejection problem. We then validate these metrics and methods on two problems that use seeds as a source of additional information: keypoint-conditioned viewpoint estimation with crowdsourced seeds and hierarchical scene classification with automated seeds. In these experiments, we show our method reduces the number of seeds that need to be reviewed to reach a target performance by up to 23% over strong baselines.
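
A minimal sketch of how a dual-loss objective for seed rejection might look, assuming the additional error can be measured against a prediction made from a gold-standard seed; the exact formulation and the lam weight are assumptions, not the paper's method:

```python
import torch
import torch.nn.functional as F

def dual_loss(task_pred, task_target, err_pred, gold_pred, lam=1.0):
    """Task loss plus regression of the 'additional error' caused by the given seed.

    err_pred: scalar tensor from an error-regression head.
    gold_pred: task output obtained from a gold-standard seed (no gradient needed).
    """
    task_loss = F.mse_loss(task_pred, task_target)            # usual task objective
    gold_loss = F.mse_loss(gold_pred, task_target)            # error with the gold-standard seed
    additional_error = (task_loss - gold_loss).detach().clamp(min=0.0)
    regression_loss = F.mse_loss(err_pred, additional_error)  # second head predicts the gap
    return task_loss + lam * regression_loss

# At inference, a seed would be rejected when the predicted additional
# error exceeds a chosen review budget.
```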

* 10 pages, 6 figures 

Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection

Jul 29, 2020
Duygu Sarikaya, Jason J. Corso, Khurshid A. Guru

Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill levels of surgeons presents an interesting problem, and the insights drawn may be applied to effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgery. We propose a solution to the open problem of tool detection and localization in RAS video understanding, using a strictly computer-vision approach and recent advances in deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To our knowledge, this is the first approach to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a Region Proposal Network (RPN) and a multimodal two-stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an Average Precision (AP) of 91% and a mean computation time of 0.1 seconds per test frame, show that our approach outperforms methods conventionally used in medical imaging while also highlighting the benefits of using an RPN for precision and efficiency. We also introduce a new dataset, ATLAS Dione, for RAS video understanding. Our dataset provides video data of ten surgeons from Roswell Park Cancer Institute (RPCI) (Buffalo, NY) performing six different surgical tasks on the daVinci Surgical System (dVSS), with per-frame annotations of robotic tools.
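
A schematic of the multimodal two-stream idea (appearance plus motion) ahead of the proposal/detection stage; the backbones, channel counts, and 1x1 fusion below are placeholders rather than the paper's exact network:

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Concatenate RGB and motion (e.g. optical-flow) feature maps, then fuse them."""

    def __init__(self, rgb_backbone, flow_backbone, stream_channels=512):
        super().__init__()
        self.rgb_backbone = rgb_backbone      # each backbone is assumed to emit
        self.flow_backbone = flow_backbone    # (B, stream_channels, H, W) feature maps
        self.fuse = nn.Conv2d(2 * stream_channels, stream_channels, kernel_size=1)

    def forward(self, rgb, flow):
        fused = torch.cat([self.rgb_backbone(rgb), self.flow_backbone(flow)], dim=1)
        return self.fuse(fused)               # fed to the RPN / detection head downstream
```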

* IEEE Transactions on Medical Imaging 36 (2017) 1542-1549  