Jie Zhou

Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model

May 22, 2023
Xiao Wang, Weikang Zhou, Qi Zhang, Jie Zhou, Songyang Gao, Junzhe Wang, Menghan Zhang, Xiang Gao, Yunwen Chen, Tao Gui

D$^2$TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization

May 22, 2023
Yunlong Liang, Fandong Meng, Jiaan Wang, Jinan Xu, Yufeng Chen, Jie Zhou

A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition

May 21, 2023
Limao Xiong, Jie Zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan

Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

May 20, 2023
Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang

GFDC: A Granule Fusion Density-Based Clustering with Evidential Reasoning

May 20, 2023
Mingjie Cai, Zhishan Wu, Qingguo Li, Feng Xu, Jie Zhou

VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks

May 18, 2023
Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, Jifeng Dai

Personality Understanding of Fictional Characters during Book Reading

May 17, 2023
Mo Yu, Jiangnan Li, Shunyu Yao, Wenjie Pang, Xiaochen Zhou, Zhou Xiao, Fandong Meng, Jie Zhou

Towards Unifying Multi-Lingual and Cross-Lingual Summarization

May 16, 2023
Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou

Recyclable Tuning for Continual Pre-training

May 15, 2023
Yujia Qin, Cheng Qian, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training

May 13, 2023
Chulun Zhou, Yunlong Liang, Fandong Meng, Jinan Xu, Jinsong Su, Jie Zhou
