
Lijun Yin

Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding

Mar 31, 2023
Xiang Zhang, Taoyue Wang, Xiaotian Li, Huiyuan Yang, Lijun Yin

A Transformer-based Deep Learning Algorithm to Auto-record Undocumented Clinical One-Lung Ventilation Events

Feb 16, 2023
Zhihua Li, Alexander Nagrebetsky, Sylvia Ranjeva, Nan Bi, Dianbo Liu, Marcos F. Vidal Melo, Timothy Houle, Lijun Yin, Hao Deng

Multimodal Learning with Channel-Mixing and Masked Autoencoder on Facial Action Unit Detection

Sep 25, 2022
Xiang Zhang, Huiyuan Yang, Taoyue Wang, Xiaotian Li, Lijun Yin

Knowledge-Spreader: Learning Facial Action Unit Dynamics with Extremely Limited Labels

Mar 30, 2022
Xiaotian Li, Xiang Zhang, Taoyue Wang, Lijun Yin

An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis

Mar 29, 2022
Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin

Your "Attention" Deserves Attention: A Self-Diversified Multi-Channel Attention for Facial Action Analysis

Mar 23, 2022
Xiaotian Li, Zhihua Li, Huiyuan Yang, Geran Zhao, Lijun Yin

Multi-Modal Learning for AU Detection Based on Multi-Head Fused Transformers

Mar 22, 2022
Xiang Zhang, Lijun Yin

The First Vision For Vitals (V4V) Challenge for Non-Contact Video-Based Physiological Estimation

Sep 22, 2021
Ambareesh Revanur, Zhihua Li, Umur A. Ciftci, Lijun Yin, Laszlo A. Jeni

How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals

Aug 26, 2020
Umur Aybars Ciftci, Ilke Demir, Lijun Yin

FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

Feb 14, 2017
Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic
