Haoyu Wang

LIVABLE: Exploring Long-Tailed Classification of Software Vulnerability Types

Jun 12, 2023
Xin-Cheng Wen, Cuiyun Gao, Feng Luo, Haoyu Wang, Ge Li, Qing Liao

Prompt Injection attack against LLM-integrated Applications

Jun 08, 2023
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

Task-Agnostic Structured Pruning of Speech Representation Models

Jun 02, 2023
Haoyu Wang, Siyuan Wang, Wei-Qiang Zhang, Hongbin Suo, Yulong Wan

DistilXLSR: A Light Weight Cross-Lingual Speech Representation Model

Jun 02, 2023
Haoyu Wang, Siyuan Wang, Wei-Qiang Zhang, Jinfeng Bai

Beyond One-Model-Fits-All: A Survey of Domain Specialization for Large Language Models

May 31, 2023
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Carl Yang, Liang Zhao

Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning

May 25, 2023
Guozheng Ma, Linrui Zhang, Haoyu Wang, Lu Li, Zilin Wang, Zhen Wang, Li Shen, Xueqian Wang, Dacheng Tao

Whisper-KDQ: A Lightweight Whisper via Guided Knowledge Distillation and Quantization for Efficient ASR

May 18, 2023
Hang Shao, Wei Wang, Bei Liu, Xun Gong, Haoyu Wang, Yanmin Qian

Glocal Energy-based Learning for Few-Shot Open-Set Recognition

Apr 24, 2023
Haoyu Wang, Guansong Pang, Peng Wang, Lei Zhang, Wei Wei, Yanning Zhang

STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training

Apr 13, 2023
Ziyan Huang, Haoyu Wang, Zhongying Deng, Jin Ye, Yanzhou Su, Hui Sun, Junjun He, Yun Gu, Lixu Gu, Shaoting Zhang, Yu Qiao

Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency

Mar 27, 2023
Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin, Libing Wu, Chaowei Xiao