Seungeun Oh


SplitAMC: Split Learning for Robust Automatic Modulation Classification

Apr 17, 2023
Jihoon Park, Seungeun Oh, Seong-Lyun Kim

Differentially Private CutMix for Split Learning with Vision Transformer

Oct 28, 2022
Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim

Federated Knowledge Distillation

Nov 04, 2020
Hyowoon Seo, Jihong Park, Seungeun Oh, Mehdi Bennis, Seong-Lyun Kim

Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

Jun 17, 2020
Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Distilling On-Device Intelligence at the Network Edge

Aug 16, 2019
Jihong Park, Shiqiang Wang, Anis Elgabli, Seungeun Oh, Eunjeong Jeong, Han Cha, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis

Multi-hop Federated Private Data Augmentation with Sample Compression

Jul 15, 2019
Eunjeong Jeong, Seungeun Oh, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data

Nov 28, 2018
Eunjeong Jeong, Seungeun Oh, Hyesung Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim
