Yizhao Gao

Co-designing a Sub-millisecond Latency Event-based Eye Tracking System with Submanifold Sparse CNN

Apr 22, 2024

Event-Based Eye Tracking. AIS 2024 Challenge Survey

Apr 17, 2024

Random resistive memory-based deep extreme point learning machine for unified visual processing

Dec 14, 2023

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

Feb 24, 2023

COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

Apr 15, 2022

A Roadmap for Big Model

Apr 02, 2022

WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model

Oct 27, 2021

HAO: Hardware-aware neural Architecture Optimization for Efficient Inference

Apr 26, 2021

WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training

Mar 19, 2021

Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning

Jan 23, 2021