
Tuo Zhao

Picasso: A Sparse Learning Library for High Dimensional Data Analysis in R and Python

Jun 27, 2020

The huge Package for High-dimensional Undirected Graph Estimation in R

Jun 26, 2020

Towards Understanding Hierarchical Learning: Benefits of Neural Representations

Jun 24, 2020

Deep Reinforcement Learning with Smooth Policy

Mar 24, 2020

Transformer Hawkes Process

Feb 21, 2020

Differentiable Top-k Operator with Optimal Transport

Feb 18, 2020

Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective

Feb 14, 2020

Statistical Guarantees of Generative Adversarial Networks for Distribution Estimation

Feb 10, 2020

On Computation and Generalization of Generative Adversarial Imitation Learning

Jan 12, 2020

SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

Nov 08, 2019