Md Arafat Sultan

An Empirical Investigation into the Effect of Parameter Choices in Knowledge Distillation

Jan 12, 2024
Md Arafat Sultan, Aashka Trivedi, Parul Awasthy, Avirup Sil

Multistage Collaborative Knowledge Distillation from Large Language Models

Nov 15, 2023
Jiachen Zhao, Wenlong Zhao, Andrew Drozdov, Benjamin Rozonoyer, Md Arafat Sultan, Jay-Yoon Lee, Mohit Iyyer, Andrew McCallum

Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs

Oct 21, 2023
Young-Suk Lee, Md Arafat Sultan, Yousef El-Kurdi, Tahira Naseem, Asim Munawar, Radu Florian, Salim Roukos, Ramón Fernandez Astudillo

Inference-time Re-ranker Relevance Feedback for Neural Information Retrieval

May 19, 2023
Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi

UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers

Mar 01, 2023
Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, Christopher Potts

Knowledge Distillation $\approx$ Label Smoothing: Fact or Fallacy?

Feb 06, 2023
Md Arafat Sultan

PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development

Jan 25, 2023
Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos

Moving Beyond Downstream Task Accuracy for Information Retrieval Benchmarking

Dec 02, 2022
Keshav Santhanam, Jon Saad-Falcon, Martin Franz, Omar Khattab, Avirup Sil, Radu Florian, Md Arafat Sultan, Salim Roukos, Matei Zaharia, Christopher Potts

SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers

Nov 29, 2022
Ameet Deshpande, Md Arafat Sultan, Anthony Ferritto, Ashwin Kalyan, Karthik Narasimhan, Avirup Sil
