Maarten de Rijke
RefNet: A Reference-aware Network for Background Based Conversation

Aug 18, 2019
Chuan Meng, Pengjie Ren, Zhumin Chen, Christof Monz, Jun Ma, Maarten de Rijke

Contrastive Explanations for Large Errors in Retail Forecasting Predictions through Monte Carlo Simulations

Jul 17, 2019
Ana Lucic, Hinda Haned, Maarten de Rijke

A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts

Jul 10, 2019
Jiahuan Pei, Pengjie Ren, Maarten de Rijke

Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?

Jul 08, 2019
Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke

Explaining Predictions from Tree-based Boosting Ensembles

Jul 04, 2019
Ana Lucic, Hinda Haned, Maarten de Rijke

SEntNet: Source-aware Recurrent Entity Network for Dialogue Response Selection

Jun 20, 2019
Jiahuan Pei, Arent Stienstra, Julia Kiseleva, Maarten de Rijke

Improving Background Based Conversation with Context-aware Knowledge Pre-selection

Jun 16, 2019
Yangjun Zhang, Pengjie Ren, Maarten de Rijke

Cascading Non-Stationary Bandits: Online Learning to Rank in the Non-Stationary Cascade Model

Jun 01, 2019
Chang Li, Maarten de Rijke
