Bei Liu

Spatiotemporal Predictive Pre-training for Robotic Motor Control

Mar 14, 2024
Jiange Yang, Bei Liu, Jianlong Fu, Bocheng Pan, Gangshan Wu, Limin Wang

Multi-task Manipulation Policy Modeling with Visuomotor Latent Diffusion

Mar 12, 2024
Wenhui Tan, Bei Liu, Junbo Zhang, Ruihua Song, Jianlong Fu

One-Shot Sensitivity-Aware Mixed Sparsity Pruning for Large Language Models

Oct 14, 2023
Hang Shao, Bei Liu, Yanmin Qian

ViCo: Engaging Video Comment Generation with Human Preference Rewards

Aug 22, 2023
Yuchong Sun, Bei Liu, Xu Chen, Ruihua Song, Jianlong Fu

Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations

Aug 21, 2023
Seogkyu Jeon, Bei Liu, Pilhyeon Lee, Kibeom Hong, Jianlong Fu, Hyeran Byun

Revisiting Latent Space of GAN Inversion for Real Image Editing

Jul 18, 2023
Kai Katsumata, Duc Minh Vo, Bei Liu, Hideki Nakayama

SINC: Self-Supervised In-Context Learning for Vision-Language Tasks

Jul 15, 2023
Yi-Syuan Chen, Yun-Zhu Song, Cheng Yu Yeo, Bei Liu, Jianlong Fu, Hong-Han Shuai

Pave the Way to Grasp Anything: Transferring Foundation Models for Universal Pick-Place Robots

Jun 25, 2023
Jiange Yang, Wenhui Tan, Chuhao Jin, Bei Liu, Jianlong Fu, Ruihua Song, Limin Wang
