Ziyi Liu

SCORE: A framework for Self-Contradictory Reasoning Evaluation

Nov 16, 2023
Ziyi Liu, Isabelle Lee, Yongkang Du, Soumya Sanyal, Jieyu Zhao

Dual Defense: Adversarial, Traceable, and Invisible Robust Watermarking against Face Swapping

Oct 25, 2023
Yunming Zhang, Dengpan Ye, Caiyun Xie, Long Tang, Chuanxi Chen, Ziyi Liu, Jiacheng Deng

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

Oct 17, 2023
Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

SOAR: Scene-debiasing Open-set Action Recognition

Sep 03, 2023
Yuanhao Zhai, Ziyi Liu, Zhenyu Wu, Yi Wu, Chunluan Zhou, David Doermann, Junsong Yuan, Gang Hua

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

May 11, 2023
Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren

Handling Concept Drift in Global Time Series Forecasting

Apr 04, 2023
Ziyi Liu, Rakshitha Godahewa, Kasun Bandara, Christoph Bergmeir

Clustered Federated Learning based on Nonconvex Pairwise Fusion

Nov 08, 2022
Xue Yu, Ziyi Liu, Yifan Sun, Wu Wang

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models

Oct 30, 2022
Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, Kiran Narahari, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren

Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering

Sep 14, 2022
Jingjing Jiang, Ziyi Liu, Nanning Zheng

AiM: Taking Answers in Mind to Correct Chinese Cloze Tests in Educational Applications

Aug 26, 2022
Yusen Zhang, Zhongli Li, Qingyu Zhou, Ziyi Liu, Chao Li, Mina Ma, Yunbo Cao, Hongzhi Liu
