
"Text": models, code, and papers

Co-Driven Recognition of Semantic Consistency via the Fusion of Transformer and HowNet Sememes Knowledge

Feb 21, 2023
Fan Chen, Yan Huang, Xinfang Zhang, Kang Luo, Jinxuan Zhu, Ruixian He

Adapting Pretrained Language Models for Solving Tabular Prediction Problems in the Electronic Health Record

Mar 27, 2023
Christopher McMaster, David FL Liew, Douglas EV Pires

Latent Prompt Tuning for Text Summarization

Nov 03, 2022
Yubo Zhang, Xingxing Zhang, Xun Wang, Si-qing Chen, Furu Wei

Table-To-Text generation and pre-training with TabT5

Oct 17, 2022
Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Yasemin Altun

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

Oct 12, 2022
Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates for Mesh Physics

Apr 01, 2023
Brian R. Bartoldson, Yeping Hu, Amar Saini, Jose Cadena, Yucheng Fu, Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen

Improving Patient Pre-screening for Clinical Trials: Assisting Physicians with Large Language Models

Apr 14, 2023
Danny M. den Hamer, Perry Schoor, Tobias B. Polak, Daniel Kapitan

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

Mar 28, 2023
Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, Yu Qiao

Evaluation of ChatGPT for NLP-based Mental Health Applications

Mar 28, 2023
Bishal Lamichhane

Towards Universal Vision-language Omni-supervised Segmentation

Mar 12, 2023
Bowen Dong, Jiaxi Gu, Jianhua Han, Hang Xu, Wangmeng Zuo
