Guodong Guo

Implicit Subgoal Planning with Variational Autoencoders for Long-Horizon Sparse Reward Robotic Tasks

Dec 25, 2023
Fangyuan Wang, Anqing Duan, Peng Zhou, Shengzeng Huo, Guodong Guo, Chenguang Yang, David Navarro-Alarcon

NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition

Jun 29, 2023
Zichang Tan, Jun Li, Jinhao Du, Jun Wan, Zhen Lei, Guodong Guo

DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs

Jun 27, 2023
Yanjing Li, Sheng Xu, Xianbin Cao, Li'an Zhuo, Baochang Zhang, Tian Wang, Guodong Guo

FM-ViT: Flexible Modal Vision Transformers for Face Anti-Spoofing

May 05, 2023
Ajian Liu, Zichang Tan, Zitong Yu, Chenxu Zhao, Jun Wan, Yanyan Liang, Zhen Lei, Du Zhang, Stan Z. Li, Guodong Guo

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer

Apr 01, 2023
Sheng Xu, Yanjing Li, Mingbao Lin, Peng Gao, Guodong Guo, Jinhu Lu, Baochang Zhang

Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition

Dec 11, 2022
Fanglei Xue, Qiangchang Wang, Zichang Tan, Zhongsong Ma, Guodong Guo

Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer

Oct 13, 2022
Yanjing Li, Sheng Xu, Baochang Zhang, Xianbin Cao, Peng Gao, Guodong Guo

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop

Oct 03, 2022
Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma

Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention

Sep 28, 2022
Xiangcheng Liu, Tianyi Wu, Guodong Guo

Recurrent Bilinear Optimization for Binary Neural Networks

Sep 04, 2022
Sheng Xu, Yanjing Li, Tiancheng Wang, Teli Ma, Baochang Zhang, Peng Gao, Yu Qiao, Jinhu Lv, Guodong Guo
