"Text": models, code, and papers

A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch

Aug 05, 2022
Patsorn Sangkloy, Wittawat Jitkrittum, Diyi Yang, James Hays

IRT2: Inductive Linking and Ranking in Knowledge Graphs of Varying Scale

Jan 02, 2023
Felix Hamann, Adrian Ulges, Maurice Falk

Gradient-Boosted Based Structured and Unstructured Learning

Feb 28, 2023
Andrea Treviño Gavito, Diego Klabjan, Jean Utke

GLM-Dialog: Noise-tolerant Pre-training for Knowledge-grounded Dialogue Generation

Feb 28, 2023
Jing Zhang, Xiaokang Zhang, Daniel Zhang-Li, Jifan Yu, Zijun Yao, Zeyao Ma, Yiqi Xu, Haohua Wang, Xiaohan Zhang, Nianyi Lin, Sunrui Lu, Juanzi Li, Jie Tang

ConTra: (Con)text (Tra)nsformer for Cross-Modal Video Retrieval

Oct 09, 2022
Adriano Fragomeni, Michael Wray, Dima Damen

Answering Numerical Reasoning Questions in Table-Text Hybrid Contents with Graph-based Encoder and Tree-based Decoder

Sep 16, 2022
Fangyu Lei, Shizhu He, Xiang Li, Jun Zhao, Kang Liu

A Watermark for Large Language Models

Jan 24, 2023
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

Proactive Prioritization of App Issues via Contrastive Learning

Mar 12, 2023
Moghis Fereidouni, Adib Mosharrof, Umar Farooq, AB Siddique

Large Language Models as Corporate Lobbyists

Jan 17, 2023
John J. Nay

MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective

Oct 26, 2022
Zhe Hu, Hou Pong Chan, Lifu Huang
