Avirup Sil

CLAPNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems
Apr 02, 2024
Sara Rosenthal, Avirup Sil, Radu Florian, Salim Roukos

An Empirical Investigation into the Effect of Parameter Choices in Knowledge Distillation
Jan 12, 2024
Md Arafat Sultan, Aashka Trivedi, Parul Awasthy, Avirup Sil

Muted: Multilingual Targeted Offensive Speech Identification and Visualization
Dec 18, 2023
Christoph Tillmann, Aashka Trivedi, Sara Rosenthal, Santosh Borse, Rong Zhang, Avirup Sil, Bishwaranjan Bhattacharjee

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Oct 17, 2023
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi

Inference-time Re-ranker Relevance Feedback for Neural Information Retrieval
May 19, 2023
Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi

UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers
Mar 01, 2023
Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, Christopher Potts

PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development
Jan 25, 2023
Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos

Moving Beyond Downstream Task Accuracy for Information Retrieval Benchmarking
Dec 02, 2022
Keshav Santhanam, Jon Saad-Falcon, Martin Franz, Omar Khattab, Avirup Sil, Radu Florian, Md Arafat Sultan, Salim Roukos, Matei Zaharia, Christopher Potts

SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers
Nov 29, 2022
Ameet Deshpande, Md Arafat Sultan, Anthony Ferritto, Ashwin Kalyan, Karthik Narasimhan, Avirup Sil

Zero-Shot Dynamic Quantization for Transformer Inference
Nov 17, 2022
Yousef El-Kurdi, Jerry Quinn, Avirup Sil