Yuxiang Lu

Learning Multiple Representations with Inconsistency-Guided Detail Regularization for Mask-Guided Matting

Mar 28, 2024
Weihao Jiang, Zhaozhi Xie, Yuxiang Lu, Longjie Qi, Jingyong Cai, Hiroyuki Uchiyama, Bin Chen, Yue Ding, Hongtao Lu


Task Indicating Transformer for Task-conditional Dense Predictions

Mar 01, 2024
Yuxiang Lu, Shalayiding Sirejiding, Bayram Bayramli, Suizhi Huang, Yue Ding, Hongtao Lu


YOLO-MED: Multi-Task Interaction Network for Biomedical Images

Mar 01, 2024
Suizhi Huang, Shalayiding Sirejiding, Yuxiang Lu, Yue Ding, Leheng Liu, Hui Zhou, Hongtao Lu


Federated Multi-Task Learning on Non-IID Data Silos: An Experimental Study

Feb 20, 2024
Yuwen Yang, Yuxiang Lu, Suizhi Huang, Shalayiding Sirejiding, Hongtao Lu, Yue Ding


Towards Hetero-Client Federated Multi-Task Learning

Nov 22, 2023
Yuxiang Lu, Suizhi Huang, Yuwen Yang, Shalayiding Sirejiding, Yue Ding, Hongtao Lu


Prompt Guided Transformer for Multi-Task Dense Prediction

Jul 28, 2023
Yuxiang Lu, Shalayiding Sirejiding, Yue Ding, Chunlin Wang, Hongtao Lu


ERNIE-ViLG 2.0: Improving Text-to-Image Diffusion Model with Knowledge-Enhanced Mixture-of-Denoising-Experts

Oct 27, 2022
Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang


ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

May 18, 2022
Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang


ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

Mar 23, 2022
Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang


ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

Dec 23, 2021
Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, Dianhai Yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
