Zhicheng Yan

Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics

Aug 26, 2021

Multiscale Vision Transformers

Apr 22, 2021

FP-NAS: Fast Probabilistic Neural Architecture Search

Nov 24, 2020

Decoupling Representation and Classifier for Long-Tailed Recognition

Oct 21, 2019

Only Time Can Tell: Discovering Temporal Data for Temporal Modeling

Jul 19, 2019

Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution

Apr 30, 2019

DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition

Jan 11, 2019

Graph-Based Global Reasoning Networks

Nov 30, 2018

SLAC: A Sparsely Labeled Dataset for Action Classification and Localization

Dec 26, 2017

Learning Concept Taxonomies from Multi-modal Data

Jun 29, 2016