Ramakanth Pasunuru

Complementary Explanations for Effective In-Context Learning

Nov 25, 2022

Improving In-Context Few-Shot Learning via Self-Supervised Training

May 03, 2022

Efficient Large Scale Language Modeling with Mixtures of Experts

Dec 20, 2021

Few-shot Learning with Multilingual Language Models

Dec 20, 2021

A Proposition-Level Clustering Approach for Multi-Document Summarization

Dec 16, 2021

Multi-Document Keyphrase Extraction: A Literature Review and the First Dataset

Oct 03, 2021

iFacetSum: Coreference-based Interactive Faceted Summarization for Multi-Document Exploration

Sep 23, 2021

Dual Reinforcement-Based Specification Generation for Image De-Rendering

Mar 02, 2021

Data Augmentation for Abstractive Query-Focused Multi-Document Summarization

Mar 02, 2021

DORB: Dynamically Optimizing Multiple Rewards with Bandits

Nov 15, 2020