
Kai Shu

Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors

Mar 14, 2024
Guanghua Li, Wensheng Lu, Wei Zhang, Defu Lian, Kezhong Lu, Rui Mao, Kai Shu, Hao Liao

Can Large Language Models Identify Authorship?

Mar 13, 2024
Baixiang Huang, Canyu Chen, Kai Shu

Can Large Language Model Agents Simulate Human Trust Behaviors?

Feb 07, 2024
Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao

Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models

Dec 05, 2023
Yueqing Liang, Lu Cheng, Ali Payani, Kai Shu

Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment

Nov 24, 2023
Haoran Wang, Kai Shu

CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection

Nov 20, 2023
Yifan Li, Zhen Tan, Kai Shu, Zongsheng Cao, Yu Kong, Huan Liu
