Guodong Guo

Fusion-Mamba for Cross-modality Object Detection

Apr 14, 2024
Wenhao Dong, Haodong Zhu, Shaohui Lin, Xiaoyan Luo, Yunhang Shen, Xuhui Liu, Juan Zhang, Guodong Guo, Baochang Zhang

Implicit Subgoal Planning with Variational Autoencoders for Long-Horizon Sparse Reward Robotic Tasks

Dec 25, 2023
Fangyuan Wang, Anqing Duan, Peng Zhou, Shengzeng Huo, Guodong Guo, Chenguang Yang, David Navarro-Alarcon

NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition

Jun 29, 2023
Zichang Tan, Jun Li, Jinhao Du, Jun Wan, Zhen Lei, Guodong Guo

DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs

Jun 27, 2023
Yanjing Li, Sheng Xu, Xianbin Cao, Li'an Zhuo, Baochang Zhang, Tian Wang, Guodong Guo

FM-ViT: Flexible Modal Vision Transformers for Face Anti-Spoofing

May 05, 2023
Ajian Liu, Zichang Tan, Zitong Yu, Chenxu Zhao, Jun Wan, Yanyan Liang, Zhen Lei, Du Zhang, Stan Z. Li, Guodong Guo

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer

Apr 01, 2023
Sheng Xu, Yanjing Li, Mingbao Lin, Peng Gao, Guodong Guo, Jinhu Lu, Baochang Zhang

Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition

Dec 11, 2022
Fanglei Xue, Qiangchang Wang, Zichang Tan, Zhongsong Ma, Guodong Guo

Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer

Oct 13, 2022
Yanjing Li, Sheng Xu, Baochang Zhang, Xianbin Cao, Peng Gao, Guodong Guo

Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop

Oct 03, 2022
Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma

Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention

Sep 28, 2022
Xiangcheng Liu, Tianyi Wu, Guodong Guo
