
Rohan Chandra

UT Austin

B-GAP: Behavior-Guided Action Prediction for Autonomous Navigation

Nov 07, 2020

BoMuDA: Boundless Multi-Source Domain Adaptive Segmentation in Unconstrained Environments

Oct 13, 2020

Emotions Don't Lie: A Deepfake Detection Method using Audio-Visual Affective Cues

Mar 17, 2020

EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege's Principle

Mar 14, 2020

CMetric: A Driving Behavior Measure Using Centrality Functions

Mar 09, 2020

DenseCAvoid: Real-time Navigation in Dense Crowds using Anticipatory Behaviors

Feb 07, 2020

Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs

Dec 02, 2019

M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues

Nov 22, 2019

Take an Emotion Walk: Perceiving Emotions from Gaits Using Hierarchical Attention Pooling and Affective Mapping

Nov 20, 2019

STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits

Oct 28, 2019