
Wei Cheng

Open-ended Commonsense Reasoning with Unrestricted Answer Scope

Oct 18, 2023
Chen Ling, Xuchao Zhang, Xujiang Zhao, Yanchi Liu, Wei Cheng, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao

Dynamic DAG Discovery for Interpretable Imitation Learning

Oct 12, 2023
Tianxiang Zhao, Wenchao Yu, Suhang Wang, Lu Wang, Xiang Zhang, Yuncong Chen, Yanchi Liu, Wei Cheng, Haifeng Chen

Zero-Shot Detection of Machine-Generated Codes

Oct 08, 2023
Xianjun Yang, Kexun Zhang, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng

Large Language Models Can Be Good Privacy Protection Learners

Oct 03, 2023
Yijia Xiao, Yiqiao Jin, Yushi Bai, Yue Wu, Xianjun Yang, Xiao Luo, Wenchao Yu, Xujiang Zhao, Yanchi Liu, Haifeng Chen, Wei Wang, Wei Cheng

Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks

Oct 03, 2023
Xu Zheng, Farhad Shirani, Tianchun Wang, Wei Cheng, Zhuomin Chen, Haifeng Chen, Hua Wei, Dongsheng Luo

Baichuan 2: Open Large-scale Language Models

Sep 20, 2023
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu

GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection

Sep 12, 2023
Yufei Li, Yanchi Liu, Haoyu Wang, Zhengzhang Chen, Wei Cheng, Yuncong Chen, Wenchao Yu, Haifeng Chen, Cong Liu

Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty

Sep 07, 2023
Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao

DNA-Rendering: A Diverse Neural Actor Repository for High-Fidelity Human-centric Rendering

Jul 19, 2023
Wei Cheng, Ruixiang Chen, Wanqi Yin, Siming Fan, Keyu Chen, Honglin He, Huiwen Luo, Zhongang Cai, Jingbo Wang, Yang Gao, Zhengming Yu, Zhengyu Lin, Daxuan Ren, Lei Yang, Ziwei Liu, Chen Change Loy, Chen Qian, Wayne Wu, Dahua Lin, Bo Dai, Kwan-Yee Lin

Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations

Jun 13, 2023
Tianxiang Zhao, Wenchao Yu, Suhang Wang, Lu Wang, Xiang Zhang, Yuncong Chen, Yanchi Liu, Wei Cheng, Haifeng Chen
