
Yezhou Yang

Arizona State University

Compressing Visual-linguistic Model via Knowledge Distillation

Apr 05, 2021

CAROM -- Vehicle Localization and Traffic Scene Reconstruction from Monocular Cameras on Road Infrastructures

Apr 02, 2021

Hierarchical and Partially Observable Goal-driven Policy Learning with Goals Relational Graph

Mar 30, 2021

SEED: Self-supervised Distillation For Visual Representation

Jan 12, 2021

Self-Supervised VQA: Answering Visual Questions using Images and Captions

Dec 04, 2020

Attribute-Guided Adversarial Training for Robustness to Natural Perturbations

Dec 03, 2020

Decentralized Attribution of Generative Models

Oct 27, 2020

Efficient Robotic Object Search via HIEM: Hierarchical Policy Learning with Intrinsic-Extrinsic Modeling

Oct 16, 2020

MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering

Sep 18, 2020

Low to High Dimensional Modality Hallucination using Aggregated Fields of View

Jul 13, 2020