
Chaowei Xiao

Preference Poisoning Attacks on Reward Model Learning

Feb 02, 2024
Junlin Wu, Jiongxiao Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao

A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management

Jan 22, 2024
Hong Guan, Summer Gautier, Deepti Gupta, Rajan Hari Ambrish, Yancheng Wang, Harsha Lakamsani, Dhanush Giriyan, Saajan Maslanka, Chaowei Xiao, Yingzhen Yang, Jia Zou

Instructional Fingerprinting of Large Language Models

Jan 21, 2024
Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen

RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios

Dec 19, 2023
Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Dec 12, 2023
Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao

Exploring the Limits of ChatGPT in Software Security Applications

Dec 08, 2023
Fangzhou Wu, Qingzhao Zhang, Ati Priya Bajaj, Tiffany Bao, Ning Zhang, Ruoyu "Fish" Wang, Chaowei Xiao

Dolphins: Multimodal Language Model for Driving

Dec 01, 2023
Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao

Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking

Nov 16, 2023
Nan Xu, Fei Wang, Ben Zhou, Bang Zheng Li, Chaowei Xiao, Muhao Chen
