
Junyi Peng

Probing Self-supervised Learning Models with Target Speech Extraction

Feb 17, 2024

Target Speech Extraction with Pre-trained Self-supervised Learning Models

Feb 17, 2024

Improving Speaker Verification with Self-Pretrained Transformer Models

May 17, 2023

Probing Deep Speaker Embeddings for Speaker-related Tasks

Dec 14, 2022

Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters

Oct 28, 2022

An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification

Oct 03, 2022