Jaeyong Song

PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor

Mar 11, 2024
Jaewon Jung, Hongsun Jang, Jaeyong Song, Jinho Lee

Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

Mar 11, 2024
Hongsun Jang, Jaeyong Song, Jaewon Jung, Jaeyoung Park, Youngsok Kim, Jinho Lee

GraNNDis: Efficient Unified Distributed Training Framework for Deep GNNs on Large Clusters

Nov 12, 2023
Jaeyong Song, Hongsun Jang, Jaewon Jung, Youngsok Kim, Jinho Lee

Pipe-BD: Pipelined Parallel Blockwise Distillation

Jan 29, 2023
Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee

SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators

Jan 25, 2023
Mingi Yoo, Jaeyong Song, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression

Jan 24, 2023
Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, Hyung-Jin Kim, Youngsok Kim, Jinho Lee

Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators

Jan 24, 2023
Mingi Yoo, Jaeyong Song, Hyeyoon Lee, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee
