Chris Dyer

Continuous diffusion for categorical data

Dec 15, 2022
Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler

MAD for Robust Reinforcement Learning in Machine Translation

Jul 18, 2022
Domenic Donato, Lei Yu, Wang Ling, Chris Dyer

Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale

Mar 01, 2022
Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer

Enabling arbitrary translation objectives with Adaptive Tree Search

Feb 23, 2022
Wang Ling, Wojciech Stokowiec, Domenic Donato, Laurent Sartran, Lei Yu, Austin Matthews, Chris Dyer

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Dec 08, 2021
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving

End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering

Jun 09, 2021
Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, Dani Yogatama

Diverse Pretrained Context Encodings Improve Document Translation

Jun 07, 2021
Domenic Donato, Lei Yu, Chris Dyer

Exposing the Implicit Energy Networks behind Masked Language Models via Metropolis--Hastings

Jun 04, 2021
Kartik Goyal, Chris Dyer, Taylor Berg-Kirkpatrick
