
Wu-Jun Li

Multiple Code Hashing for Efficient Image Retrieval

Aug 04, 2020

ExchNet: A Unified Hashing Network for Large-Scale Fine-Grained Image Retrieval

Aug 04, 2020

Stochastic Normalized Gradient Descent with Momentum for Large Batch Training

Jul 28, 2020

TOMA: Topological Map Abstraction for Reinforcement Learning

May 11, 2020

BASGD: Buffered Asynchronous SGD for Byzantine Learning

Mar 03, 2020

Stagewise Enlargement of Batch Size for SGD-based Learning

Feb 27, 2020

Weight Normalization based Quantization for Deep Neural Network Compression

Jul 01, 2019

ADASS: Adaptive Sample Selection for Training Acceleration

Jun 11, 2019

Clustered Reinforcement Learning

Jun 06, 2019

On the Convergence of Memory-Based Distributed SGD

May 30, 2019