Haode Zhang

Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training

Jun 08, 2023
Haode Zhang, Haowen Liang, Liming Zhan, Xiao-Ming Wu, Albert Y. S. Lam


Asymmetric feature interaction for interpreting model predictions

May 12, 2023
Xiaolei Lu, Jianghong Ma, Haode Zhang


New Intent Discovery with Pre-training and Contrastive Learning

May 25, 2022
Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, Albert Y. S. Lam


Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization

May 15, 2022
Haode Zhang, Haowen Liang, Yuwei Zhang, Liming Zhan, Xiao-Ming Wu, Xiaolei Lu, Albert Y. S. Lam


Effectiveness of Pre-training for Few-shot Intent Classification

Sep 13, 2021
Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Xiao-Ming Wu, Albert Y. S. Lam
