Kai Shu

LMO-DP: Optimizing the Randomization Mechanism for Differentially Private Fine-Tuning (Large) Language Models

May 29, 2024

Integrating Mamba and Transformer for Long-Short Range Time Series Forecasting

Apr 23, 2024

Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors

Mar 14, 2024

Can Large Language Models Identify Authorship?

Mar 13, 2024

Can Large Language Model Agents Simulate Human Trust Behaviors?

Feb 07, 2024

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024

Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models

Dec 05, 2023

Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment

Nov 24, 2023

CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection

Nov 20, 2023

Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models

Oct 20, 2023