Yizhao Gao

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

Feb 24, 2023

COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

Apr 15, 2022

A Roadmap for Big Model

Apr 02, 2022

WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model

Oct 27, 2021

HAO: Hardware-aware neural Architecture Optimization for Efficient Inference

Apr 26, 2021

WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training

Mar 19, 2021

Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning

Jan 23, 2021

CoDeNet: Algorithm-hardware Co-design for Deformable Convolution

Jun 12, 2020

Meta-Learning across Meta-Tasks for Few-Shot Learning

Mar 09, 2020

Algorithm-hardware Co-design for Deformable Convolution

Feb 19, 2020