Can Karakus

MADA: Meta-Adaptive Optimizers through hyper-gradient Descent

Jan 17, 2024
Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher

Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training

Nov 10, 2021
Can Karakus, Rahul Huilgol, Fei Wu, Anirudh Subramanian, Cade Daniel, Derya Cavdar, Teng Xu, Haohan Chen, Arash Rahnama, Luis Quintela

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

Jun 06, 2019
Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi

Densifying Assumed-sparse Tensors: Improving Memory Efficiency and MPI Collective Performance during Tensor Accumulation for Parallelized Training of Neural Machine Translation Models

May 10, 2019
Derya Cavdar, Valeriu Codreanu, Can Karakus, John A. Lockman III, Damian Podareanu, Vikram Saletore, Alexander Sergeev, Don D. Smith II, Victor Suthichai, Quy Ta, Srinivas Varadharajan, Lucas A. Wilson, Rengan Xu, Pei Yang

Differentially Private Consensus-Based Distributed Optimization

Mar 19, 2019
Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Privacy-Utility Trade-off of Linear Regression under Random Projections and Additive Noise

Feb 13, 2019
Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning

Mar 14, 2018
Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin

Straggler Mitigation in Distributed Optimization Through Data Encoding

Jan 22, 2018
Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin
