Thomas H. Li

StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences

Nov 28, 2023
Shangkun Sun, Jiaming Liu, Thomas H. Li, Huaxia Li, Guoqing Liu, Wei Gao

Mug-STAN: Adapting Image-Language Pretrained Models for General Video Understanding

Nov 25, 2023
Ruyang Liu, Jingjia Huang, Wei Gao, Thomas H. Li, Ge Li

Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction

Oct 29, 2023
Zeshuai Deng, Zhuokun Chen, Shuaicheng Niu, Thomas H. Li, Bohan Zhuang, Mingkui Tan

FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation

Oct 11, 2023
Xinyu Sun, Peihao Chen, Jugang Fan, Thomas H. Li, Jian Chen, Mingkui Tan

One For All: Video Conversation is Feasible Without Video Instruction Tuning

Sep 27, 2023
Ruyang Liu, Chen Li, Yixiao Ge, Ying Shan, Thomas H. Li, Ge Li

$A^2$Nav: Action-Aware Zero-Shot Robot Navigation by Exploiting Vision-and-Language Ability of Foundation Models

Aug 15, 2023
Peihao Chen, Xinyu Sun, Hongyan Zhi, Runhao Zeng, Thomas H. Li, Gaowen Liu, Mingkui Tan, Chuang Gan

Learning Vision-and-Language Navigation from YouTube Videos

Jul 22, 2023
Kunyang Lin, Peihao Chen, Diwei Huang, Thomas H. Li, Mingkui Tan, Chuang Gan

Hard Sample Matters a Lot in Zero-Shot Quantization

Mar 24, 2023
Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan

Detecting the open-world objects with the help of the Brain

Mar 21, 2023
Shuailei Ma, Yuefeng Wang, Ying Wei, Peihao Chen, Zhixiang Ye, Jiaqi Fan, Enming Zhang, Thomas H. Li

LIO-PPF: Fast LiDAR-Inertial Odometry via Incremental Plane Pre-Fitting and Skeleton Tracking

Mar 02, 2023
Xingyu Chen, Peixi Wu, Ge Li, Thomas H. Li
