Rui Yan

HSVI-based Online Minimax Strategies for Partially Observable Stochastic Games with Neural Perception Mechanisms
Apr 16, 2024
Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska

Interpreting Key Mechanisms of Factual Recall in Transformer-Based Language Models
Apr 09, 2024
Ang Lv, Kaiyi Zhang, Yuhan Chen, Yulong Wang, Lifeng Liu, Ji-Rong Wen, Jian Xie, Rui Yan

Selecting Query-bag as Pseudo Relevance Feedback for Information-seeking Conversations
Mar 22, 2024
Xiaoqing Zhang, Xiuying Chen, Shen Gao, Shuqi Li, Xin Gao, Ji-Rong Wen, Rui Yan

StyleChat: Learning Recitation-Augmented Memory in LLMs for Stylized Dialogue Generation
Mar 18, 2024
Jinpeng Li, Zekai Zhang, Quan Tu, Xin Cheng, Dongyan Zhao, Rui Yan

From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News
Mar 14, 2024
Yuhan Liu, Xiuying Chen, Xiaoqing Zhang, Xing Gao, Ji Zhang, Rui Yan

StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses
Mar 13, 2024
Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan

"In Dialogues We Learn": Towards Personalized Dialogue Without Pre-defined Profiles through In-Dialogue Learning
Mar 12, 2024
Chuanqi Cheng, Quan Tu, Wei Wu, Shuo Shang, Cunli Mao, Zhengtao Yu, Rui Yan

What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation
Mar 11, 2024
Zhuocheng Gong, Jiahao Liu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan