
Xiao Liu


SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions

Sep 13, 2023
Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, Minlie Huang


With the rapid development of Large Language Models (LLMs), increasing attention has been paid to their safety concerns. Consequently, evaluating the safety of LLMs has become an essential task for facilitating their broad application. Nevertheless, the absence of comprehensive safety evaluation benchmarks poses a significant impediment to effectively assessing and enhancing the safety of LLMs. In this work, we present SafetyBench, a comprehensive benchmark for evaluating the safety of LLMs, which comprises 11,435 diverse multiple-choice questions spanning 7 distinct categories of safety concerns. Notably, SafetyBench incorporates both Chinese and English data, facilitating evaluation in both languages. Our extensive tests over 25 popular Chinese and English LLMs in both zero-shot and few-shot settings reveal a substantial performance advantage for GPT-4 over its counterparts, as well as significant room for improving the safety of current LLMs. We believe SafetyBench will enable fast and comprehensive evaluation of LLMs' safety and foster the development of safer LLMs. Data and evaluation guidelines are available at https://github.com/thu-coai/SafetyBench. Submission entrance and leaderboard are available at https://llmbench.ai/safety.
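The multiple-choice format makes safety evaluation a simple pipeline: format each question with lettered options, extract a letter from the model's free-form reply, and score accuracy. The sketch below illustrates that pipeline; the prompt wording and extraction heuristic are illustrative assumptions, not SafetyBench's actual templates (those live in the linked repository).

```python
import re

def format_mc_prompt(question, options):
    # Zero-shot multiple-choice prompt; options are labeled (A), (B), ...
    letters = "ABCDEFG"
    lines = [f"Question: {question}"]
    lines += [f"({letters[i]}) {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with the letter of the single best option.")
    return "\n".join(lines)

def first_choice_letter(response, n_options):
    # Pull the first standalone option letter out of a free-form reply.
    valid = "ABCDEFG"[:n_options]
    m = re.search(rf"\b([{valid}])\b", response)
    return m.group(1) if m else None

def accuracy(predicted, gold):
    # Fraction of questions whose extracted letter matches the gold label.
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)
```

A real evaluation would loop these helpers over the benchmark's 11,435 questions, calling the model under test once per prompt.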

* 15 pages 

Probabilistic Differentiable Filters Enable Ubiquitous Robot Control with Smartwatches

Sep 12, 2023
Fabian C Weigend, Xiao Liu, Heni Ben Amor


Ubiquitous robot control and human-robot collaboration using smart devices pose a challenging problem, primarily due to strict accuracy requirements and sparse information. This paper presents a novel approach that incorporates a probabilistic differentiable filter, specifically the Differentiable Ensemble Kalman Filter (DEnKF), to facilitate robot control using only Inertial Measurement Unit (IMU) observations from a smartwatch and a smartphone. The implemented system estimates human-pose state accurately, reducing the Mean Per Joint Vertex Error (MPJVE) by 30.2% compared to the baseline. Our results establish smartwatches and smartphones as a cost-effective alternative for human-pose state estimation. Furthermore, experimental results from human-robot handover tasks underscore that smart devices allow for low-cost, versatile, and ubiquitous robot control.

* DiffPropRob IROS 2023 (Oral) 

Fidelity-Induced Interpretable Policy Extraction for Reinforcement Learning

Sep 12, 2023
Xiao Liu, Wubing Chen, Mao Tan


Deep Reinforcement Learning (DRL) has achieved remarkable success in sequential decision-making problems. However, existing DRL agents make decisions in an opaque fashion, hindering users from establishing trust and scrutinizing the agents' weaknesses. While recent research has developed Interpretable Policy Extraction (IPE) methods for explaining how an agent takes actions, their explanations are often inconsistent with the agent's behavior and thus frequently fail to explain it. To tackle this issue, we propose a novel method, Fidelity-Induced Policy Extraction (FIPE). Specifically, we start by analyzing the optimization mechanism of existing IPE methods, elaborating on the issue of ignoring consistency while increasing cumulative rewards. We then design a fidelity-induced mechanism by integrating a fidelity measurement into the reinforcement learning feedback. We conduct experiments in the complex control environment of StarCraft II, an arena typically avoided by current IPE methods. The experimental results demonstrate that FIPE outperforms the baselines in terms of interaction performance and consistency while remaining easy to understand.

* 10 pages, 3 figures, 2 tables 

LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding

Aug 28, 2023
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li


Although large language models (LLMs) demonstrate impressive performance on many language tasks, most of them can only handle texts a few thousand tokens long, limiting their application to longer inputs such as books, reports, and codebases. Recent works have proposed methods to improve LLMs' long-context capabilities via extended context windows and more sophisticated memory mechanisms. However, comprehensive benchmarks tailored to evaluating long context understanding have been lacking. In this paper, we introduce LongBench, the first bilingual, multi-task benchmark for long context understanding, enabling a more rigorous evaluation of long context understanding. LongBench comprises 21 datasets across 6 task categories in both English and Chinese, with an average length of 6,711 words (English) and 13,386 characters (Chinese). These tasks cover key long-text application areas, including single-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks, and code completion. All datasets in LongBench are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Upon comprehensive evaluation of 8 LLMs on LongBench, we find that: (1) the commercial model (GPT-3.5-Turbo-16k) outperforms open-source models, but still struggles on longer contexts; (2) scaled position embeddings and fine-tuning on longer sequences lead to substantial improvement in long context understanding; and (3) context compression techniques such as retrieval help models that are weak on long contexts, but their performance still lags behind models with strong long context understanding. The code and datasets are available at https://github.com/THUDM/LongBench.
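Evaluating models whose context windows are shorter than a benchmark input requires a truncation policy; for long-context benchmarks, one common choice is to cut from the middle of the prompt so that instructions and questions near both ends survive. A minimal sketch of that idea (the exact preprocessing LongBench uses may differ in detail):

```python
def truncate_middle(tokens, max_len):
    # Keep the head and tail of an over-long token sequence and drop the
    # middle, so content at both ends of the prompt is preserved.
    if len(tokens) <= max_len:
        return list(tokens)
    head = max_len // 2
    tail = max_len - head
    return list(tokens[:head]) + list(tokens[len(tokens) - tail:])
```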

* 18 pages, 6 figures 

Enhancing State Estimation in Robots: A Data-Driven Approach with Differentiable Ensemble Kalman Filters

Aug 19, 2023
Xiao Liu, Geoffrey Clark, Joseph Campbell, Yifan Zhou, Heni Ben Amor


This paper introduces a novel state estimation framework for robots using differentiable ensemble Kalman filters (DEnKF). DEnKF is a reformulation of the traditional ensemble Kalman filter that employs stochastic neural networks to model the process noise implicitly. Our work is an extension of previous research on differentiable filters, which has provided a strong foundation for our modular and end-to-end differentiable framework. This framework enables each component of the system to function independently, leading to improved flexibility and versatility in implementation. Through a series of experiments, we demonstrate the flexibility of this model across a diverse set of real-world tracking tasks, including visual odometry and robot manipulation. Moreover, we show that our model effectively handles noisy observations, is robust in the absence of observations, and outperforms state-of-the-art differentiable filters in terms of error metrics. Specifically, we observe a significant improvement of at least 59% in translational error when using DEnKF with noisy observations. Our results underscore the potential of DEnKF in advancing state estimation for robotics. Code for DEnKF is available at https://github.com/ir-lab/DEnKF
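DEnKF reformulates the classical ensemble Kalman filter with learned, stochastic neural-network components, but the measurement update it differentiates through has the same shape as the standard stochastic EnKF. Below is a minimal NumPy sketch of that classical update, with a fixed linear observation model standing in for DEnKF's learned components; it is a reference for the underlying algorithm, not the paper's implementation.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF measurement update.

    ensemble: (N, d) state ensemble, H: (m, d) observation matrix,
    y: (m,) observation, R: (m, m) observation-noise covariance.
    """
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0, keepdims=True)   # state anomalies
    HX = ensemble @ H.T                                   # predicted observations
    Y = HX - HX.mean(axis=0, keepdims=True)               # observation anomalies
    P_xy = X.T @ Y / (N - 1)                              # state-obs cross-covariance
    P_yy = Y.T @ Y / (N - 1) + R                          # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    # Perturb the observation per member so the posterior ensemble keeps
    # the correct spread (the "stochastic" in stochastic EnKF).
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - HX) @ K.T
```

In DEnKF, per the abstract, the process model and its noise are instead modeled implicitly by stochastic neural networks trained end to end through updates of this form.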

* 8 pages, 6 figures, 4 tables 

Learning Soft Robot Dynamics using Differentiable Kalman Filters and Spatio-Temporal Embeddings

Aug 19, 2023
Xiao Liu, Shuhei Ikemoto, Yuhei Yoshimitsu, Heni Ben Amor


This paper introduces a novel approach for modeling the dynamics of soft robots, utilizing a differentiable filter architecture. The proposed approach enables end-to-end training to learn system dynamics, noise characteristics, and temporal behavior of the robot. A novel spatio-temporal embedding process is introduced to handle observations with varying sensor placements and sampling frequencies. The efficacy of this approach is demonstrated on a tensegrity robot arm by learning end-effector dynamics from demonstrations with complex bending motions. The model is shown to be robust against missing modalities, diverse sensor placement, and varying sampling rates. Additionally, the proposed framework can identify physical interactions with humans during motion. The use of a differentiable filter presents a novel solution to the difficulties of modeling soft robot dynamics. Our approach shows substantial improvement in accuracy compared to state-of-the-art filtering methods, with at least a 24% reduction in mean absolute error (MAE). Furthermore, the predicted end-effector positions show an average MAE of 25.77 mm from the ground truth, highlighting the advantage of our approach. The code is available at https://github.com/ir-lab/soft_robot_DEnKF.

* 8 pages, 9 figures, 4 tables 

Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer

Aug 16, 2023
Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H. S. Torr, Xiao-Ping Zhang, Yansong Tang


Video-language pre-trained models have shown remarkable success in guiding video question-answering (VideoQA) tasks. However, due to the length of video sequences, training large-scale video-based models incurs considerably higher costs than training image-based ones. This motivates us to leverage knowledge from image-based pretraining, despite the obvious gaps between the image and video domains. To bridge these gaps, we propose Tem-Adapter, which enables the learning of temporal dynamics and complex semantics via a visual Temporal Aligner and a textual Semantic Aligner. Unlike conventional pretrained-knowledge adaptation methods that concentrate only on the downstream task objective, the Temporal Aligner introduces an extra language-guided autoregressive task to facilitate the learning of temporal dependencies, with the objective of predicting future states based on historical clues and language guidance that describes event progression. In addition, to reduce the semantic gap and adapt the textual representation for better event description, we introduce a Semantic Aligner that first designs a template to fuse question and answer pairs into event descriptions, and then learns a Transformer decoder, with the whole video sequence as guidance, for refinement. We evaluate Tem-Adapter and different pretraining-transfer methods on two VideoQA benchmarks, and the significant performance improvement demonstrates the effectiveness of our method.

* ICCV 2023 

Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches

Aug 13, 2023
Xin Lin, Chao Ren, Xiao Liu, Jie Huang, Yinjie Lei


Deep learning methods have shown remarkable performance in image denoising, particularly when trained on large-scale paired datasets. However, acquiring such paired datasets for real-world scenarios poses a significant challenge. Although unsupervised approaches based on generative adversarial networks (GANs) offer a promising solution for denoising without paired datasets, it is difficult for them to surpass the performance limits of conventional GAN-based unsupervised frameworks without significantly modifying existing structures or increasing the computational complexity of the denoisers. To address this problem, we propose a self-collaboration (SC) strategy for multiple denoisers. This strategy achieves significant performance improvement without increasing the inference complexity of the GAN-based denoising framework. Its basic idea is to iteratively replace the previous, less powerful denoiser in the filter-guided noise extraction module with the current, more powerful one. This process generates better synthetic clean-noisy image pairs, leading to a more powerful denoiser for the next iteration. This baseline ensures the stability and effectiveness of the training network. The experimental results demonstrate the superiority of our method over state-of-the-art unsupervised methods.

* Accepted to ICCV 2023 

AgentBench: Evaluating LLMs as Agents

Aug 07, 2023
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang


Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there is an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional, evolving benchmark that currently consists of 8 distinct environments for assessing an LLM-as-Agent's reasoning and decision-making abilities in a multi-turn, open-ended generation setting. Our extensive tests over 25 LLMs (including APIs and open-source models) show that, while top commercial LLMs display a strong ability to act as agents in complex environments, there is a significant performance disparity between them and their open-source competitors. AgentBench also serves as a component of an ongoing project with wider coverage and deeper consideration toward systematic LLM evaluation. Datasets, environments, and an integrated evaluation package for AgentBench are released at https://github.com/THUDM/AgentBench.
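At its core, LLM-as-Agent evaluation is a multi-turn loop: the environment emits observations, the agent (an LLM conditioned on the interaction history) emits textual actions, and the episode ends with a reward. The sketch below shows that loop with a toy environment and a rule-based agent standing in for AgentBench's real environments and models; all names here are illustrative.

```python
class ToyEnv:
    """Toy stand-in for an interactive environment: the agent earns
    reward 1.0 only if it explores for two turns before submitting."""

    def reset(self):
        self.turns = 0
        return "start"

    def step(self, action):
        self.turns += 1
        if action == "submit":
            return "done", True, (1.0 if self.turns >= 3 else 0.0)
        return f"observation-{self.turns}", False, 0.0

def run_episode(env, agent, max_turns=10):
    # Multi-turn open-ended loop: each turn the agent sees the whole
    # interaction history (as an LLM would see the dialogue so far).
    history = [env.reset()]
    for _ in range(max_turns):
        action = agent(history)
        obs, done, reward = env.step(action)
        history.append(obs)
        if done:
            return reward
    return 0.0  # episode truncated at the turn limit

def patient_agent(history):
    # Rule-based stand-in for an LLM policy.
    return "submit" if len(history) >= 3 else "look"
```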

* 38 pages 

Random Sub-Samples Generation for Self-Supervised Real Image Denoising

Jul 31, 2023
Yizhong Pan, Xiao Liu, Xiangyu Liao, Yuanzhouhan Cao, Chao Ren


Given sufficient paired training samples, supervised deep learning methods have attracted much attention in image denoising because of their superior performance. However, it is still very challenging to apply supervised methods widely in real cases due to the lack of paired noisy-clean images. Meanwhile, most self-supervised denoising methods are also ineffective on real-world denoising tasks because of the strict assumptions they make. For example, the original blind spot network (BSN), a typical self-supervised denoising method, assumes that the noise is pixel-wise independent, which is far from true in real cases. To solve this problem, we propose a novel self-supervised real image denoising framework named Sampling Difference As Perturbation (SDAP), based on Random Sub-samples Generation (RSG) with a cyclic sample-difference loss. Specifically, we dig deeper into the properties of BSN to make it more suitable for real noise. Surprisingly, we find that adding an appropriate perturbation to the training images can effectively improve the performance of BSN. Further, we propose that the sampling difference can be treated as this perturbation to achieve better results. Finally, we propose a new BSN framework in combination with our RSG strategy. The results show that it significantly outperforms other state-of-the-art self-supervised denoising methods on real-world datasets. The code is available at https://github.com/p1y2z3/SDAP.
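One common way to generate random sub-samples from a single noisy image in this line of self-supervised work is neighbor sub-sampling: pick two distinct pixels from each 2x2 cell to form a pair of half-resolution images whose differences carry mostly noise. The sketch below shows that idea; it is illustrative, and SDAP's exact RSG procedure is in the linked repository.

```python
import numpy as np

def random_subsample_pair(img, rng):
    # img: (H, W) array with even H and W. From each 2x2 cell, pick two
    # distinct pixels at random to build two half-resolution sub-images.
    H, W = img.shape
    cells = (img.reshape(H // 2, 2, W // 2, 2)
                .transpose(0, 2, 1, 3)
                .reshape(H // 2, W // 2, 4))
    idx1 = rng.integers(0, 4, size=(H // 2, W // 2))
    # Adding a nonzero offset mod 4 guarantees idx2 differs from idx1.
    idx2 = (idx1 + rng.integers(1, 4, size=idx1.shape)) % 4
    sub1 = np.take_along_axis(cells, idx1[..., None], axis=2)[..., 0]
    sub2 = np.take_along_axis(cells, idx2[..., None], axis=2)[..., 0]
    return sub1, sub2
```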

* Accepted to ICCV2023 