Jianmo Ni

Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts

Oct 10, 2022
Cicero Nogueira dos Santos, Zhe Dong, Daniel Cer, John Nham, Siamak Shakeri, Jianmo Ni, Yun-hsuan Sung

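The soft-prompt technique named in this title can be illustrated with a minimal, generic sketch (an illustration of the general idea, not the paper's specific method): a small set of trainable embedding vectors is prepended to the token embeddings of an otherwise frozen language model, and only those vectors are updated during training. The wrapper class, sizes, and `inputs_embeds` interface below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Generic soft-prompt sketch: prepend trainable vectors to token embeddings.

    Illustrative only; `base_model` is assumed to accept `inputs_embeds`
    (as most transformer implementations do) and stays frozen, so the
    prompt vectors are the only trainable parameters.
    """

    def __init__(self, base_model, embed_layer, n_prompt_tokens=20, d_model=768):
        super().__init__()
        self.base_model = base_model
        self.embed_layer = embed_layer
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)
        for p in self.base_model.parameters():
            p.requires_grad = False  # keep the language model frozen

    def forward(self, input_ids):
        tok = self.embed_layer(input_ids)                        # [B, T, d]
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        embeds = torch.cat([prompt, tok], dim=1)                 # [B, P+T, d]
        return self.base_model(inputs_embeds=embeds)
```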

Promptagator: Few-shot Dense Retrieval From 8 Examples

Sep 23, 2022
Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

Knowledge-aware Neural Collective Matrix Factorization for Cross-domain Recommendation

Jun 27, 2022
Li Zhang, Yan Ge, Jun Ma, Jianmo Ni, Haiping Lu

Exploring Dual Encoder Architectures for Question Answering

Apr 14, 2022
Zhe Dong, Jianmo Ni, Dan Bikel, Enrique Alfonseca, Yuan Wang, Chen Qu, Imed Zitouni

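The dual-encoder setup this paper explores can be sketched generically (a sketch of the common retrieval pattern, not the specific architectures the paper compares): questions and candidate passages are embedded independently, relevance is scored by a similarity function such as cosine similarity, and retrieval reduces to a nearest-neighbor search over precomputed passage vectors. The random vectors below stand in for real encoder outputs.

```python
import numpy as np

def cosine_scores(query_vecs: np.ndarray, passage_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between every query vector and every passage vector.

    query_vecs:   [num_queries, dim]  output of the question encoder
    passage_vecs: [num_passages, dim] output of the passage encoder
    Returns a [num_queries, num_passages] score matrix.
    """
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    return q @ p.T

def top_k(scores: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k highest-scoring passages for each query."""
    return np.argsort(-scores, axis=1)[:, :k]

# Toy usage: random vectors stand in for real encoder outputs.
rng = np.random.default_rng(0)
print(top_k(cosine_scores(rng.normal(size=(2, 768)), rng.normal(size=(100, 768)))))
```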

Scaling Up Models and Data with t5x and seqio

Mar 31, 2022
Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, Andrea Gesmundo

Transformer Memory as a Differentiable Search Index

Feb 16, 2022
Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

LongT5: Efficient Text-To-Text Transformer for Long Sequences

Dec 15, 2021
Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang

Large Dual Encoders Are Generalizable Retrievers

Dec 15, 2021
Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

Nov 22, 2021
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

Aug 26, 2021
Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang

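For readers who want to try a Sentence-T5 style encoder, a minimal usage sketch follows; it assumes the sentence-transformers package is installed and that the publicly released sentence-transformers/sentence-t5-base checkpoint is the one you want (larger variants also exist).

```python
from sentence_transformers import SentenceTransformer, util

# Assumed public checkpoint name; swap in a larger Sentence-T5 variant if needed.
model = SentenceTransformer("sentence-transformers/sentence-t5-base")

sentences = [
    "Dual encoders map text to fixed-size vectors.",
    "Sentence embeddings can be compared with cosine similarity.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```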