Chuang Gan

Learning Physical Dynamics with Subequivariant Graph Neural Networks

Oct 13, 2022
Jiaqi Han, Wenbing Huang, Hengbo Ma, Jiachen Li, Joshua B. Tenenbaum, Chuang Gan

M$^3$Video: Masked Motion Modeling for Self-Supervised Video Representation Learning

Oct 12, 2022
Xinyu Sun, Peihao Chen, Liangwei Chen, Thomas H. Li, Mingkui Tan, Chuang Gan

MA2QL: A Minimalist Approach to Fully Decentralized Multi-Agent Reinforcement Learning

Sep 17, 2022
Kefan Su, Siyuan Zhou, Chuang Gan, Xiangjun Wang, Zongqing Lu

Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation

Jul 29, 2022
Hongbin Lin, Yifan Zhang, Zhen Qiu, Shuaicheng Niu, Chuang Gan, Yanxia Liu, Mingkui Tan

On-Device Training Under 256KB Memory

Jul 14, 2022
Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, Song Han

3D Concept Grounding on Neural Fields

Jul 13, 2022
Yining Hong, Yilun Du, Chunru Lin, Joshua B. Tenenbaum, Chuang Gan

Finding Fallen Objects Via Asynchronous Audio-Visual Integration

Jul 07, 2022
Chuang Gan, Yi Gu, Siyuan Zhou, Jeremy Schwartz, Seth Alter, James Traer, Dan Gutfreund, Joshua B. Tenenbaum, Josh McDermott, Antonio Torralba

Weakly Supervised Grounding for VQA in Vision-Language Transformers

Jul 05, 2022
Aisha Urooj Khan, Hilde Kuehne, Chuang Gan, Niels Da Vitoria Lobo, Mubarak Shah
