David Weiss

Scene Transformer: A unified multi-task model for behavior prediction and planning

Jun 15, 2021
Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens

Learning Cross-Context Entity Representations from Text

Jan 11, 2020
Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text

Oct 09, 2018
Yuan Zhang, Jason Riesa, Daniel Gillick, Anton Bakalov, Jason Baldridge, David Weiss

Linguistically-Informed Self-Attention for Semantic Role Labeling

Aug 28, 2018
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum

State-of-the-art Chinese Word Segmentation with Bi-LSTMs

Aug 24, 2018
Ji Ma, Kuzman Ganchev, David Weiss

Adversarial Neural Networks for Cross-lingual Sequence Tagging

Aug 14, 2018
Heike Adel, Anton Bryl, David Weiss, Aliaksei Severyn

Natural Language Processing with Small Feed-Forward Networks

Aug 01, 2017
Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov

SyntaxNet Models for the CoNLL 2017 Shared Task

Mar 15, 2017
Chris Alberti, Daniel Andor, Ivan Bogatyy, Michael Collins, Dan Gillick, Lingpeng Kong, Terry Koo, Ji Ma, Mark Omernick, Slav Petrov, Chayut Thanapirom, Zora Tung, David Weiss

DRAGNN: A Transition-based Framework for Dynamically Connected Neural Networks

Mar 13, 2017
Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bogatyy, David Weiss

Globally Normalized Transition-Based Neural Networks

Jun 08, 2016
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins
