Ashwin Paranjape

Stanford University

Lost in the Middle: How Language Models Use Long Contexts

Jul 31, 2023
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang

Evaluating Human-Language Model Interaction

Dec 20, 2022
Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang

When can I Speak? Predicting initiation points for spoken dialogue agents

Aug 07, 2022
Siyan Li, Ashwin Paranjape, Christopher D. Manning

Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent

Jul 25, 2022
Ethan A. Chi, Ashwin Paranjape, Abigail See, Caleb Chiam, Kathleen Kenealy, Swee Kiat Lim, Amelia Hardy, Chetanya Rastogi, Haojun Li, Alexander Iyabor, Yutong He, Hari Sowrirajan, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Jillian Tang, Avanika Narayan, Giovanni Campagna, Christopher D. Manning

You Only Need One Model for Open-domain Question Answering

Dec 14, 2021
Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, Kyoung-Gu Woo

Hindsight: Posterior-guided training of retrievers for improved open-ended generation

Oct 21, 2021
Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, Christopher D. Manning

Human-like informative conversations: Better acknowledgements using conditional mutual information

Apr 16, 2021
Ashwin Paranjape, Christopher D. Manning

Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

Sep 05, 2020
Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning
