Gu-Yeon Wei

Quantifying and Maximizing the Benefits of Back-End Noise Adaption on Attention-Based Speech Recognition Models

May 03, 2021

Machine Learning-Based Automated Design Space Exploration for Autonomous Aerial Robots

Feb 05, 2021

RecSSD: Near Data Processing for Solid State Drive Based Recommendation Inference

Jan 29, 2021

EdgeBERT: Optimizing On-Chip Inference for Multi-Task NLP

Dec 01, 2020

SMAUG: End-to-End Full-Stack Simulation Infrastructure for Deep Learning Workloads

Dec 11, 2019

A binary-activation, multi-level weight RNN and training algorithm for processing-in-memory inference with eNVM

Dec 03, 2019

MLPerf Training Benchmark

Oct 30, 2019

AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference

Oct 15, 2019

Benchmarking TPU, GPU, and CPU Platforms for Deep Learning

Aug 06, 2019

Learning Low-Rank Approximation for CNNs

May 24, 2019