Lijun Li

Assessment of Multimodal Large Language Models in Alignment with Human Values

Mar 26, 2024
Zhelun Shi, Zhipin Wang, Hongxing Fan, Zaibin Zhang, Lijun Li, Yongting Zhang, Zhenfei Yin, Lu Sheng, Yu Qiao, Jing Shao

EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models

Mar 18, 2024
Weikang Zhou, Xiao Wang, Limao Xiong, Han Xia, Yingshuang Gu, Mingxu Chai, Fukang Zhu, Caishuang Huang, Shihan Dou, Zhiheng Xi, Rui Zheng, Songyang Gao, Yicheng Zou, Hang Yan, Yifan Le, Ruohui Wang, Lijun Li, Jing Shao, Tao Gui, Qi Zhang, Xuanjing Huang

SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models

Feb 08, 2024
Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, Jing Shao

From GPT-4 to Gemini and Beyond: Assessing the Landscape of MLLMs on Generalizability, Trustworthiness and Causality through Four Modalities

Jan 29, 2024
Chaochao Lu, Chen Qian, Guodong Zheng, Hongxing Fan, Hongzhi Gao, Jie Zhang, Jing Shao, Jingyi Deng, Jinlan Fu, Kexin Huang, Kunchang Li, Lijun Li, Limin Wang, Lu Sheng, Meiqi Chen, Ming Zhang, Qibing Ren, Sirui Chen, Tao Gui, Wanli Ouyang, Yali Wang, Yan Teng, Yaru Wang, Yi Wang, Yinan He, Yingchun Wang, Yixu Wang, Yongting Zhang, Yu Qiao, Yujiong Shen, Yurong Mou, Yuxi Chen, Zaibin Zhang, Zhelun Shi, Zhenfei Yin, Zhipin Wang

PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety

Jan 22, 2024
Zaibin Zhang, Yongting Zhang, Lijun Li, Hongzhi Gao, Lijun Wang, Huchuan Lu, Feng Zhao, Yu Qiao, Jing Shao

RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation

Sep 27, 2023
Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu, Chen Chen

DiffHand: End-to-End Hand Mesh Reconstruction via Diffusion Models

May 23, 2023
Lijun Li, Li'an Zhuo, Bang Zhang, Liefeng Bo, Chen Chen

One-stage Action Detection Transformer

Jun 21, 2022
Lijun Li, Li'an Zhuo, Bang Zhang

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks

May 13, 2019
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong
