Tao Luo

Optimizing for In-memory Deep Learning with Emerging Memory Technology

Dec 01, 2021
Zhehui Wang, Tao Luo, Rick Siow Mong Goh, Wei Zhang, Weng-Fai Wong

Embedding Principle: a hierarchical structure of loss landscape of deep neural networks

Nov 30, 2021
Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Tao Luo, Zhi-Qin John Xu

E3NE: An End-to-End Framework for Accelerating Spiking Neural Networks with Emerging Neural Encoding on FPGAs

Nov 19, 2021
Daniel Gerlinghoff, Zhehui Wang, Xiaozhe Gu, Rick Siow Mong Goh, Tao Luo

MOD-Net: A Machine Learning Approach via Model-Operator-Data Network for Solving PDEs

Jul 08, 2021
Lulu Zhang, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu, Zheng Ma

Privacy Budget Scheduling

Jun 29, 2021
Tao Luo, Mingen Pan, Pierre Tholoniat, Asaf Cidon, Roxana Geambasu, Mathias Lécuyer

An Upper Limit of Decaying Rate with Respect to Frequency in Deep Neural Network

Jun 03, 2021
Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, Yaoyu Zhang

Embedding Principle of Loss Landscape of Deep Neural Networks

May 30, 2021
Yaoyu Zhang, Zhongwang Zhang, Tao Luo, Zhi-Qin John Xu

Towards Understanding the Condensation of Two-layer Neural Networks at Initial Training

May 29, 2021
Zhi-Qin John Xu, Hanxu Zhou, Tao Luo, Yaoyu Zhang

DTNN: Energy-efficient Inference with Dendrite Tree Inspired Neural Networks for Edge Vision Applications

May 25, 2021
Tao Luo, Wai Teng Tang, Matthew Kay Fei Lee, Chuping Qu, Weng-Fai Wong, Rick Goh