Marc Najork

Born Again Neural Rankers

Sep 30, 2021
Zhen Qin, Le Yan, Yi Tay, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Marc Najork

Dynamic Language Models for Continuously Evolving Content

Jun 11, 2021
Spurthi Amba Hombaiah, Tao Chen, Mingyang Zhang, Michael Bendersky, Marc Najork

Rethinking Search: Making Experts out of Dilettantes

May 05, 2021
Donald Metzler, Yi Tay, Dara Bahri, Marc Najork

Privacy-Adaptive BERT for Natural Language Understanding

Apr 15, 2021
Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, Marc Najork

WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning

Mar 03, 2021
Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, Marc Najork

Scalable Bottom-Up Hierarchical Clustering

Nov 04, 2020
Nicholas Monath, Avinava Dubey, Guru Guruganesh, Manzil Zaheer, Amr Ahmed, Andrew McCallum, Gokhan Mergen, Marc Najork, Mert Terzihan, Bryon Tjanaka, Yuan Wang, Yuchen Wu

DiPair: Fast and Accurate Distillation for Trillion-Scale Text Matching and Pair Modeling

Oct 07, 2020
Jiecao Chen, Liu Yang, Karthik Raman, Michael Bendersky, Jung-Jung Yeh, Yun Zhou, Marc Najork, Danyang Cai, Ehsan Emadzadeh

Active Learning for Skewed Data Sets

May 23, 2020
Abbas Kazerouni, Qi Zhao, Jing Xie, Sandeep Tata, Marc Najork

Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Document Matching

Apr 26, 2020
Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork
