Jie Zhang

On the Performance of Data Compression in Clustered Fog Radio Access Networks

Jul 01, 2022
Haonan Hu, Yan Jiang, Jiliang Zhang, Yanan Zheng, Qianbin Chen, Jie Zhang

DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation

Jun 22, 2022
Zhu Sun, Hui Fang, Jie Yang, Xinghua Qu, Hongyang Liu, Di Yu, Yew-Soon Ong, Jie Zhang

Sampling Efficient Deep Reinforcement Learning through Preference-Guided Stochastic Exploration

Jun 20, 2022
Wenhui Huang, Cong Zhang, Jingda Wu, Xiangkun He, Jie Zhang, Chen Lv

Joint Training of Speech Enhancement and Self-supervised Model for Noise-robust ASR

May 26, 2022
Qiu-Shi Zhu, Jie Zhang, Zi-Qiang Zhang, Li-Rong Dai

QEKD: Query-Efficient and Data-Free Knowledge Distillation from Black-box Models

May 23, 2022
Jie Zhang, Chen Chen, Jiahua Dong, Ruoxi Jia, Lingjuan Lyu

On Scheduling Mechanisms Beyond the Worst Case

Apr 20, 2022
Yansong Gao, Jie Zhang

PICASSO: Unleashing the Potential of GPU-centric Training for Wide-and-deep Recommender Systems

Apr 17, 2022
Yuanxing Zhang, Langshi Chen, Siran Yang, Man Yuan, Huimin Yi, Jie Zhang, Jiamang Wang, Jianbo Dong, Yunlong Xu, Yue Song, Yong Li, Di Zhang, Wei Lin, Lin Qu, Bo Zheng

Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression

Apr 14, 2022
Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang

Adaptive Modulation for Wobbling UAV Air-to-Ground Links in Millimeter-wave Bands

Apr 13, 2022
Songjiang Yang, Zitian Zhang, Jiliang Zhang, Xiaoli Chu, Jie Zhang

Recommender May Not Favor Loyal Users

Apr 12, 2022
Yitong Ji, Aixin Sun, Jie Zhang, Chenliang Li