
Xuandong Zhao

Evaluating Durability: Benchmark Insights into Multimodal Watermarking

Jun 06, 2024

Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature

Jun 04, 2024

MarkLLM: An Open-Source Toolkit for LLM Watermarking

May 16, 2024

Mapping the Increasing Use of LLMs in Scientific Papers

Apr 01, 2024

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

Mar 11, 2024

GumbelSoft: Diversified Language Model Watermarking via the GumbelMax-trick

Feb 25, 2024

Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models

Feb 18, 2024

DE-COP: Detecting Copyrighted Content in Language Models Training Data

Feb 15, 2024

Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs

Feb 08, 2024

Weak-to-Strong Jailbreaking on Large Language Models

Feb 05, 2024