
Lijun Yin

Inter-Stance: A Dyadic Multimodal Corpus for Conversational Stance Analysis (Apr 24, 2026)

You Only Need One Stage: Novel-View Synthesis From A Single Blind Face Image (Mar 01, 2026)

Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding (Mar 31, 2023)

A Transformer-based Deep Learning Algorithm to Auto-record Undocumented Clinical One-Lung Ventilation Events (Feb 16, 2023)

Multimodal Learning with Channel-Mixing and Masked Autoencoder on Facial Action Unit Detection (Sep 25, 2022)

Knowledge-Spreader: Learning Facial Action Unit Dynamics with Extremely Limited Labels (Mar 30, 2022)

An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis (Mar 29, 2022)

Your "Attention" Deserves Attention: A Self-Diversified Multi-Channel Attention for Facial Action Analysis (Mar 23, 2022)

Multi-Modal Learning for AU Detection Based on Multi-Head Fused Transformers (Mar 22, 2022)

The First Vision For Vitals Challenge for Non-Contact Video-Based Physiological Estimation (Sep 22, 2021)