Jie Zhou

Plot Retrieval as an Assessment of Abstract Semantic Association

Nov 03, 2023
Shicheng Xu, Liang Pang, Jiangnan Li, Mo Yu, Fandong Meng, Huawei Shen, Xueqi Cheng, Jie Zhou


Exploring Unified Perspective For Fast Shapley Value Estimation

Nov 02, 2023
Borui Zhang, Baotong Tian, Wenzhao Zheng, Jie Zhou, Jiwen Lu


MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory

Oct 27, 2023
Yinan Liang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu


Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

Oct 24, 2023
Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou


Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

Oct 20, 2023
Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou


Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

Oct 19, 2023
Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou


DCRNN: A Deep Cross approach based on RNN for Partial Parameter Sharing in Multi-task Learning

Oct 18, 2023
Jie Zhou, Qian Yu


RethinkingTMSC: An Empirical Study for Target-Oriented Multimodal Sentiment Classification

Oct 14, 2023
Junjie Ye, Jie Zhou, Junfeng Tian, Rui Wang, Qi Zhang, Tao Gui, Xuanjing Huang


XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners

Oct 09, 2023
Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang


C^2M-DoT: Cross-modal consistent multi-view medical report generation with domain transfer network

Oct 09, 2023
Ruizhi Wang, Xiangtao Wang, Jie Zhou, Thomas Lukasiewicz, Zhenghua Xu
