chatbots


Project Riley: Multimodal Multi-Agent LLM Collaboration with Emotional Reasoning and Voting

May 26, 2025

A Fully Generative Motivational Interviewing Counsellor Chatbot for Moving Smokers Towards the Decision to Quit

May 26, 2025

Get Experience from Practice: LLM Agents with Record & Replay

May 23, 2025

Assessing the performance of 8 AI chatbots in bibliographic reference retrieval: Grok and DeepSeek outperform ChatGPT, but none are fully accurate

May 23, 2025

EnSToM: Enhancing Dialogue Systems with Entropy-Scaled Steering Vectors for Topic Maintenance

May 22, 2025

PersonaBOT: Bringing Customer Personas to Life with LLMs and RAG

May 22, 2025

SWE-Dev: Evaluating and Training Autonomous Feature-Driven Software Development

May 22, 2025

X-MAS: Towards Building Multi-Agent Systems with Heterogeneous LLMs

May 22, 2025

Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses

May 21, 2025

AI vs. Human Judgment of Content Moderation: LLM-as-a-Judge and Ethics-Based Response Refusals

May 21, 2025