William Cohen

Instruct-Imagen: Image Generation with Multi-modal Instruction

Jan 03, 2024
Hexiang Hu, Kelvin C. K. Chan, Yu-Chuan Su, Wenhu Chen, Yandong Li, Kihyuk Sohn, Yang Zhao, Xue Ben, Boqing Gong, William Cohen, Ming-Wei Chang, Xuhui Jia

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

Jan 25, 2023
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William Cohen

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

Dec 15, 2022
Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William Cohen

Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

Apr 10, 2022
Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, William Cohen

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention

Oct 12, 2021
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen

Explainable Entity-based Recommendations with Knowledge Graphs

Jul 12, 2017
Rose Catherine, Kathryn Mazaitis, Maxine Eskenazi, William Cohen

TransNets: Learning to Transform for Recommendation

Jun 30, 2017
Rose Catherine, William Cohen

Multi-Task Cross-Lingual Sequence Tagging from Scratch

Aug 09, 2016
Zhilin Yang, Ruslan Salakhutdinov, William Cohen

Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs

Apr 20, 2016
Zhilin Yang, Jie Tang, William Cohen

Scaling Graph-based Semi Supervised Learning to Large Number of Labels Using Count-Min Sketch

Feb 27, 2014
Partha Pratim Talukdar, William Cohen
