Weiwen Jiang

Edge-InversionNet: Enabling Efficient Inference of InversionNet on Edge Devices

Oct 18, 2023
Zhepeng Wang, Isaacshubhanand Putla, Weiwen Jiang, Youzuo Lin

Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models

Aug 26, 2023
Yi Sheng, Junhuan Yang, Lei Yang, Yiyu Shi, Jingtong Hu, Weiwen Jiang

A Novel Spatial-Temporal Variational Quantum Circuit to Enable Deep Learning on NISQ Devices

Jul 19, 2023
Jinyang Li, Zhepeng Wang, Zhirui Hu, Prasanna Date, Ang Li, Weiwen Jiang

QuMoS: A Framework for Preserving Security of Quantum Machine Learning Model

Apr 23, 2023
Zhepeng Wang, Jinyang Li, Zhirui Hu, Blake Gage, Elizabeth Iwasawa, Weiwen Jiang

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022
Yifan Gong, Zheng Zhan, Pu Zhao, Yushu Wu, Chao Wu, Caiwen Ding, Weiwen Jiang, Minghai Qin, Yanzhi Wang

QuEst: Graph Transformer for Quantum Circuit Reliability Estimation

Oct 30, 2022
Hanrui Wang, Pengyu Liu, Jinglei Cheng, Zhiding Liang, Jiaqi Gu, Zirui Li, Yongshan Ding, Weiwen Jiang, Yiyu Shi, Xuehai Qian, David Z. Pan, Frederic T. Chong, Song Han

Towards Real-Time Temporal Graph Learning

Oct 12, 2022
Deniz Gurevin, Mohsin Shan, Tong Geng, Weiwen Jiang, Caiwen Ding, Omer Khan

Towards Sparsification of Graph Neural Networks

Sep 11, 2022
Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, Omer Khan, Caiwen Ding

A Length Adaptive Algorithm-Hardware Co-design of Transformer on FPGA Through Sparse Attention and Dynamic Pipelining

Aug 07, 2022
Hongwu Peng, Shaoyi Huang, Shiyang Chen, Bingbing Li, Tong Geng, Ang Li, Weiwen Jiang, Wujie Wen, Jinbo Bi, Hang Liu, Caiwen Ding

Quantum Neural Network Compression

Jul 05, 2022
Zhirui Hu, Peiyan Dong, Zhepeng Wang, Youzuo Lin, Yanzhi Wang, Weiwen Jiang
