Jie Zhou

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning

Oct 24, 2022
Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

ROSE: Robust Selective Fine-tuning for Pre-trained Language Models

Oct 18, 2022
Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Rui Jiang

Cerebrovascular Segmentation via Vessel Oriented Filtering Network

Oct 17, 2022
Zhanqiang Guo, Yao Luan, Jianjiang Feng, Wangsheng Lu, Yin Yin, Guangming Yang, Jie Zhou

Towards Robust k-Nearest-Neighbor Machine Translation

Oct 17, 2022
Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, Jinsong Su

Dynamics-aware Adversarial Attack of Adaptive Neural Networks

Oct 15, 2022
An Tao, Yueqi Duan, Yingqi Wang, Jiwen Lu, Jie Zhou

Categorizing Semantic Representations for Neural Machine Translation

Oct 13, 2022
Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang

Token-Label Alignment for Vision Transformers

Oct 12, 2022
Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

WeLM: A Well-Read Pre-trained Language Model for Chinese

Oct 12, 2022
Hui Su, Xiao Zhou, Houjin Yu, Yuwen Chen, Zilin Zhu, Yang Yu, Jie Zhou

OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions

Oct 11, 2022
Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models

Oct 11, 2022
Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
