
Xiaozhong Liu

A Speaker Turn-Aware Multi-Task Adversarial Network for Joint User Satisfaction Estimation and Sentiment Analysis

Oct 12, 2024

LLM Cascade with Multi-Objective Optimal Consideration

Oct 10, 2024

Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration

Oct 03, 2024

PersonaMark: Personalized LLM watermarking for model protection and user attribution

Sep 15, 2024

Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models

Jul 18, 2024

Knowledge-Infused Legal Wisdom: Navigating LLM Consultation through the Lens of Diagnostics and Positive-Unlabeled Reinforcement Learning

Jun 05, 2024

Enhance Robustness of Language Models Against Variation Attack through Graph Integration

Apr 18, 2024

From Model-centered to Human-Centered: Revision Distance as a Metric for Text Evaluation in LLMs-based Applications

Apr 11, 2024

Personalized LLM Response Generation with Parameterized Memory Injection

Apr 04, 2024

Empowering Dual-Level Graph Self-Supervised Pretraining with Motif Discovery

Dec 19, 2023