
Dongjoo Seo

Demand Layering for Real-Time DNN Inference with Minimized Memory Usage

Oct 08, 2022

Hybrid Learning for Orchestrating Deep Learning Inference in Multi-user Edge-cloud Networks

Feb 21, 2022

Online Learning for Orchestration of Inference in Multi-User End-Edge-Cloud Networks

Feb 21, 2022

NSML: Meet the MLaaS platform with a real-world case study

Oct 08, 2018