Yi Ding

Continuous Emotion Recognition using Visual-audio-linguistic information: A Technical Report for ABAW3

Mar 30, 2022
Su Zhang, Ruyi An, Yi Ding, Cuntai Guan


NURD: Negative-Unlabeled Learning for Online Datacenter Straggler Prediction

Mar 16, 2022
Yi Ding, Avinash Rao, Hyebin Song, Rebecca Willett, Henry Hoffmann


Statistical Learning for Individualized Asset Allocation

Jan 20, 2022
Yi Ding, Yingying Li, Rui Song


Programming with Neural Surrogates of Programs

Dec 12, 2021
Alex Renda, Yi Ding, Michael Carbin


Sparse Fusion for Multimodal Transformers

Nov 24, 2021
Yi Ding, Alex Rich, Mason Wang, Noah Stier, Matthew Turk, Pradeep Sen, Tobias Höllerer


Audio-visual Attentive Fusion for Continuous Emotion Recognition

Jul 09, 2021
Su Zhang, Yi Ding, Ziquan Wei, Cuntai Guan


LGGNet: Learning from Local-Global-Graph Representations for Brain-Computer Interface

May 05, 2021
Yi Ding, Neethu Robinson, Qiuhao Zeng, Cuntai Guan


TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG for Emotion Recognition

Apr 07, 2021
Yi Ding, Neethu Robinson, Qiuhao Zeng, Cuntai Guan
