Daxin Jiang

Bridge the Gap between Language models and Tabular Understanding

Feb 16, 2023
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li

Figures 1–4.

LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval

Feb 06, 2023
Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwen Lin, Daxin Jiang

Figures 1–4.

Modeling Sequential Sentence Relation to Improve Cross-lingual Dense Retrieval

Feb 03, 2023
Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

Figures 1–4.

Fine-Grained Distillation for Long Document Retrieval

Dec 20, 2022
Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, Daxin Jiang

Figures 1–4.

Adam: Dense Retrieval Distillation with Adaptive Dark Examples

Dec 20, 2022
Chang Liu, Chongyang Tao, Xiubo Geng, Tao Shen, Dongyan Zhao, Can Xu, Binxing Jiao, Daxin Jiang

Figures 1–4.

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

Dec 15, 2022
Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

Figures 1–4.

LEAD: Liberal Feature-based Distillation for Dense Retrieval

Dec 10, 2022
Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jian Jiao, Jingwen Lu, Yan Zhang, Daxin Jiang, Linjun Yang, Rangan Majumder, Nan Duan

Figures 1–4.

Text Embeddings by Weakly-Supervised Contrastive Pre-training

Dec 07, 2022
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei

Figures 1–4.

VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning

Nov 21, 2022
Qiushi Zhu, Long Zhou, Ziqiang Zhang, Shujie Liu, Binxing Jiao, Jie Zhang, Lirong Dai, Daxin Jiang, Jinyu Li, Furu Wei

Figures 1–4.

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

Figures 1–4.