Nam Sung Kim

Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation

Feb 03, 2023
Hyoungwook Nam, Raghavendra Pradyumna Pothukuchi, Bo Li, Nam Sung Kim, Josep Torrellas

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling

Mar 26, 2022
Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Mar 20, 2022
Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin

Harmony: Overcoming the hurdles of GPU memory capacity to train massive DNN models on commodity servers

Feb 02, 2022
Youjie Li, Amar Phanishayee, Derek Murray, Jakub Tarnawski, Nam Sung Kim

Bit-Parallel Vector Composability for Neural Acceleration

Apr 11, 2020
Soroush Ghodrati, Hardik Sharma, Cliff Young, Nam Sung Kim, Hadi Esmaeilzadeh

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018
Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018
Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr

GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks

May 10, 2018
Amir Yazdanbakhsh, Hajar Falahati, Philip J. Wolfe, Kambiz Samadi, Nam Sung Kim, Hadi Esmaeilzadeh
