Haoyu Lu

DeepSeek-VL: Towards Real-World Vision-Language Understanding

Mar 11, 2024
Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan


DeepSeek LLM: Scaling Open-Source Language Models with Longtermism

Jan 05, 2024
DeepSeek-AI, Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, Huazuo Gao, Kaige Gao, Wenjun Gao, Ruiqi Ge, Kang Guan, Daya Guo, Jianzhong Guo, Guangbo Hao, Zhewen Hao, Ying He, Wenjie Hu, Panpan Huang, Erhang Li, Guowei Li, Jiashi Li, Yao Li, Y. K. Li, Wenfeng Liang, Fangyun Lin, A. X. Liu, Bo Liu, Wen Liu, Xiaodong Liu, Xin Liu, Yiyuan Liu, Haoyu Lu, Shanghao Lu, Fuli Luo, Shirong Ma, Xiaotao Nie, Tian Pei, Yishi Piao, Junjie Qiu, Hui Qu, Tongzheng Ren, Zehui Ren, Chong Ruan, Zhangli Sha, Zhihong Shao, Junxiao Song, Xuecheng Su, Jingxiang Sun, Yaofeng Sun, Minghui Tang, Bingxuan Wang, Peiyi Wang, Shiyu Wang, Yaohui Wang, Yongji Wang, Tong Wu, Y. Wu, Xin Xie, Zhenda Xie, Ziwei Xie, Yiliang Xiong, Hanwei Xu, R. X. Xu, Yanhong Xu, Dejian Yang, Yuxiang You, Shuiping Yu, Xingkai Yu, B. Zhang, Haowei Zhang, Lecong Zhang, Liyue Zhang, Mingchuan Zhang, Minghua Zhang, Wentao Zhang, Yichao Zhang, Chenggang Zhao, Yao Zhao, Shangyan Zhou, Shunfeng Zhou, Qihao Zhu, Yuheng Zou


Speech and Noise Dual-Stream Spectrogram Refine Network with Speech Distortion Loss for Robust Speech Recognition

May 30, 2023
Haoyu Lu, Nan Li, Tongtong Song, Longbiao Wang, Jianwu Dang, Xiaobao Wang, Shiliang Zhang


VDT: An Empirical Study on Video Diffusion with Transformers

May 22, 2023
Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, Mingyu Ding


UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling

Feb 13, 2023
Haoyu Lu, Mingyu Ding, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Masayoshi Tomizuka, Wei Zhan


Monolingual Recognizers Fusion for Code-switching Speech Recognition

Nov 02, 2022
Tongtong Song, Qiang Xu, Haoyu Lu, Longbiao Wang, Hao Shi, Yuqin Lin, Yanbing Yang, Jianwu Dang


LGDN: Language-Guided Denoising Network for Video-Language Modeling

Oct 03, 2022
Haoyu Lu, Mingyu Ding, Nanyi Fei, Yuqi Huo, Zhiwu Lu


Multimodal foundation models are better simulators of the human brain

Aug 17, 2022
Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen


COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

Apr 15, 2022
Haoyu Lu, Nanyi Fei, Yuqi Huo, Yizhao Gao, Zhiwu Lu, Ji-Rong Wen
