
Yi Zhou

COCA: Classifier-Oriented Calibration for Source-Free Universal Domain Adaptation via Textual Prototype

Aug 21, 2023

Boosting Multi-modal Model Performance with Adaptive Gradient Modulation

Aug 15, 2023

Spatio-Temporal Calibration for Omni-Directional Vehicle-Mounted Event Cameras

Jul 19, 2023

A Fast Maximum $k$-Plex Algorithm Parameterized by the Degeneracy Gap

Jun 23, 2023

Multi-Loss Convolutional Network with Time-Frequency Attention for Speech Enhancement

Jun 15, 2023

Together We Make Sense -- Learning Meta-Sense Embeddings from Pretrained Static Sense Embeddings

May 30, 2023

An Efficient Multilingual Language Model Compression through Vocabulary Trimming

May 24, 2023

Solving Cosine Similarity Underestimation between High Frequency Words by L2 Norm Discounting

May 17, 2023

Accented Text-to-Speech Synthesis with Limited Data

May 08, 2023

Dual Residual Attention Network for Image Denoising

May 07, 2023