
Michael Bendersky

Boosting Reward Model with Preference-Conditional Multi-Aspect Synthetic Data Generation

Jul 22, 2024

Multimodal Reranking for Knowledge-Intensive Visual Question Answering

Jul 17, 2024

Reliable Confidence Intervals for Information Retrieval Evaluation Using Generative AI

Jul 02, 2024

PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs

Jun 06, 2024

Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization

May 05, 2024

Consolidating Ranking and Relevance Predictions of Large Language Models through Post-Processing

Apr 17, 2024

Unlocking the 'Why' of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience

Feb 20, 2024

PRewrite: Prompt Rewriting with Reinforcement Learning

Jan 16, 2024

Bridging the Preference Gap between Retrievers and LLMs

Jan 13, 2024

Creator Context for Tweet Recommendation

Nov 29, 2023