Nicholas FitzGerald

University of Washington

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

GLIMMER: generalized late-interaction memory reranker

Jun 17, 2023

Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute

Jan 25, 2023

FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference

Dec 15, 2022

Mention Memory: incorporating textual knowledge into Transformers through entity mention attention

Oct 12, 2021

MOLEMAN: Mention-Only Linking of Entities with a Mention Annotation Network

Jun 02, 2021

Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking

May 28, 2020

Entities as Experts: Sparse Memory Access with Entity Supervision

Apr 15, 2020

Learning Cross-Context Entity Representations from Text

Jan 11, 2020