"Text": models, code, and papers

The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design

Oct 09, 2021
Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua

On the Effectiveness of Pretrained Models for API Learning

Apr 05, 2022
Mohammad Abdul Hadi, Imam Nur Bani Yusuf, Ferdian Thung, Kien Gia Luong, Jiang Lingxiao, Fatemeh H. Fard, David Lo

Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering

Mar 14, 2022
Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, Lei Chen

Multilevel Text Alignment with Cross-Document Attention

Oct 03, 2020
Xuhui Zhou, Nikolaos Pappas, Noah A. Smith

Leveraging Auxiliary Text for Deep Recognition of Unseen Visual Relationships

Oct 27, 2019
Gal Sadeh Kenigsfield, Ran El-Yaniv

Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines

Oct 08, 2020
Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell

NumHTML: Numeric-Oriented Hierarchical Transformer Model for Multi-task Financial Forecasting

Jan 05, 2022
Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, Barry Smyth

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

Feb 27, 2022
Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari

t-SS3: a text classifier with dynamic n-grams for early risk detection over text streams

Nov 11, 2019
Sergio G. Burdisso, Marcelo Errecalde, Manuel Montes-y-Gómez

Benchmarking Modern Named Entity Recognition Techniques for Free-text Health Record De-identification

Mar 25, 2021
Abdullah Ahmed, Adeel Abbasi, Carsten Eickhoff
