
Liqiang Wang

Depthwise Convolution is All You Need for Learning Multiple Visual Domains

Feb 19, 2019

Learning to Adaptively Scale Recurrent Neural Networks

Feb 15, 2019

Asynchronous Delay-Aware Accelerated Proximal Coordinate Descent for Nonconvex Nonsmooth Problems

Feb 05, 2019

AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data

Jan 14, 2019

A Proximal Zeroth-Order Algorithm for Nonconvex Nonsmooth Problems

Oct 17, 2018

How Local is the Local Diversity? Reinforcing Sequential Determinantal Point Processes with Dynamic Ground Sets for Supervised Video Summarization

Aug 24, 2018

A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels

Mar 21, 2018

Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect

Mar 05, 2018