Rajarshi Saha

Matrix Compression via Randomized Low Rank and Low Precision Factorization

Oct 17, 2023
Rajarshi Saha, Varun Srivastava, Mert Pilanci


Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy

Feb 28, 2023
Rajarshi Saha, Mohamed Seif, Michal Yemini, Andrea J. Goldsmith, H. Vincent Poor


Semi-Decentralized Federated Learning with Collaborative Relaying

May 23, 2022
Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith


Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying

Feb 24, 2022
Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith


Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms

Feb 23, 2022
Rajarshi Saha, Mert Pilanci, Andrea J. Goldsmith


Partner-Aware Algorithms in Decentralized Cooperative Bandit Teams

Oct 02, 2021
Erdem Bıyık, Anusha Lalitha, Rajarshi Saha, Andrea Goldsmith, Dorsa Sadigh


Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints

Mar 13, 2021
Rajarshi Saha, Mert Pilanci, Andrea J. Goldsmith
