Zhao Song

Sublinear Time Algorithm for Online Weighted Bipartite Matching
Aug 05, 2022
Hang Hu, Zhao Song, Runzhou Tao, Zhaozhuo Xu, Danyang Zhuo

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Jun 26, 2022
Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff

Smoothed Online Combinatorial Optimization Using Imperfect Predictions
Apr 23, 2022
Kai Wang, Zhao Song, Georgios Theocharous, Sridhar Mahadevan

Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning
Apr 15, 2022
Mayee F. Chen, Daniel Y. Fu, Avanika Narayan, Michael Zhang, Zhao Song, Kayvon Fatahalian, Christopher Ré

Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
Dec 14, 2021
Zhao Song, Lichen Zhang, Ruizhe Zhang

On Convergence of Federated Averaging Langevin Dynamics
Dec 09, 2021
Wei Deng, Yi-An Ma, Zhao Song, Qian Zhang, Guang Lin

Fast Graph Neural Tangent Kernel via Kronecker Sketching
Dec 04, 2021
Shunhua Jiang, Yunze Man, Zhao Song, Zheng Yu, Danyang Zhuo

Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Nov 30, 2021
Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, Sanjeev Arora

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Nov 30, 2021
Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Re

Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures
Nov 30, 2021
Anshumali Shrivastava, Zhao Song, Zhaozhuo Xu
