
Daxin Jiang

KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Few-Shot NLP

Jun 21, 2022

Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation

Jun 21, 2022

Towards Robust Ranker for Text Retrieval

Jun 16, 2022

Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval

Jun 07, 2022

Task-Specific Expert Pruning for Sparse Mixture-of-Experts

Jun 02, 2022

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption

Jun 02, 2022

Negative Sampling for Contrastive Representation Learning: A Review

Jun 01, 2022

UnifieR: A Unified Retriever for Large-Scale Retrieval

May 23, 2022

Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding

May 07, 2022

Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting

Apr 12, 2022