Mu Li

Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference

Jun 04, 2020
Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang


Learning Context-Based Non-local Entropy Modeling for Image Compression

May 10, 2020
Mu Li, Kai Zhang, Wangmeng Zuo, Radu Timofte, David Zhang


Improving Semantic Segmentation via Self-Training

May 06, 2020
Yi Zhu, Zhongyue Zhang, Chongruo Wu, Zhi Zhang, Tong He, Hang Zhang, R. Manmatha, Mu Li, Alexander Smola


ResNeSt: Split-Attention Networks

Apr 19, 2020
Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, Alexander Smola


AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data

Mar 13, 2020
Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, Alexander Smola


GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing

Jul 09, 2019
Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng


Efficient and Effective Context-Based Convolutional Entropy Modeling for Image Compression

Jun 24, 2019
Mu Li, Kede Ma, Jane You, David Zhang, Wangmeng Zuo


Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources

May 02, 2019
Haibin Lin, Hang Zhang, Yifei Ma, Tong He, Zhi Zhang, Sheng Zha, Mu Li


Language Models with Transformers

Apr 20, 2019
Chenguang Wang, Mu Li, Alexander J. Smola
