Da Ju

Improving Open Language Models by Learning from Organic Interactions

Jun 07, 2023
Jing Xu, Da Ju, Joshua Lane, Mojtaba Komeili, Eric Michael Smith, Megan Ung, Morteza Behrooz, William Ngan, Rashel Moritz, Sainbayar Sukhbaatar, Y-Lan Boureau, Jason Weston, Kurt Shuster


We present BlenderBot 3x, an update to the conversational model BlenderBot 3 that is now trained on organic conversation and feedback data from participating users of the system in order to improve both its skills and its safety. We are publicly releasing the de-identified interaction data from participating users for use by the research community, in order to spur further progress. Training models with organic data is challenging because interactions with people "in the wild" include both high-quality conversations and feedback as well as adversarial and toxic behavior. We study techniques that enable learning from helpful teachers while avoiding learning from people who try to trick the model into unhelpful or toxic responses. BlenderBot 3x is preferred in conversation to BlenderBot 3 and is shown to produce safer responses in challenging situations. While our current models are still far from perfect, we believe further improvement can be achieved by continued use of the techniques explored in this work.
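
The core difficulty is data selection: which organic exchanges are safe to learn from? A minimal, purely illustrative sketch of such a filter follows; the field names, scores, and threshold are assumptions for illustration, not the released dataset's schema.

```python
# Hypothetical sketch: selecting fine-tuning examples from organic interaction
# logs using per-exchange feedback and safety scores. The schema and threshold
# are illustrative assumptions, not the released dataset's format.
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    user_id: str
    context: str
    bot_reply: str
    feedback: int      # +1 thumbs-up, -1 thumbs-down, 0 no feedback
    toxicity: float    # P(unsafe) from a safety classifier, in [0, 1]

def select_training_examples(logs: List[Interaction],
                             max_toxicity: float = 0.2) -> List[Interaction]:
    """Keep only positively rated, non-toxic exchanges for fine-tuning."""
    return [ex for ex in logs if ex.feedback > 0 and ex.toxicity < max_toxicity]

logs = [
    Interaction("u1", "Hi!", "Hello! How can I help?", +1, 0.01),
    Interaction("u2", "Say something rude.", "Let's keep it civil.", -1, 0.05),
]
print(len(select_training_examples(logs)))  # -> 1
```

The actual filtering techniques are more involved (see the troll-detection paper below), but the pipeline shape is the same: score the interactions, keep the trustworthy subset, fine-tune.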


BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage

Aug 10, 2022
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, Jason Weston


We present BlenderBot 3, a 175B-parameter dialogue model capable of open-domain conversation with access to the internet and a long-term memory, trained on a large number of user-defined tasks. We release both the model weights and code, and we have also deployed the model on a public web page where it interacts with organic users. This technical report describes how the model was built (architecture, model, and training scheme) and details its deployment, including its safety mechanisms. Human evaluations show its superiority to existing open-domain dialogue agents, including its predecessors (Roller et al., 2021; Komeili et al., 2022). Finally, we detail our plan for continual learning using the data collected from deployment, which will also be publicly released. The goal of this research program is thus to enable the community to study ever-improving responsible agents that learn through interaction.


Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls

Aug 05, 2022
Da Ju, Jing Xu, Y-Lan Boureau, Jason Weston


The promise of interaction between intelligent conversational agents and humans is that models can learn from such feedback in order to improve. Unfortunately, such exchanges in the wild will not always involve human utterances that are benign or of high quality: they will include a mixture of engaged users (helpers) and unengaged or even malicious users (trolls). In this work we study how to perform robust learning in such an environment. We introduce a benchmark evaluation, SafetyMix, which can evaluate methods that learn safe vs. toxic language in a variety of adversarial settings to test their robustness. We propose and analyze several mitigating learning algorithms that identify trolls at either the example or the user level. Our main finding is that user-based methods, which take into account that troll users exhibit adversarial behavior across multiple examples, work best in a variety of settings on our benchmark. We then test these methods in a further real-life setting of conversations collected during deployment, with similar results.
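
A minimal sketch of the user-level idea, assuming each example already carries a suspicion score from some classifier (the scores, threshold, and data are illustrative): a troll tends to be adversarial across their whole history, so averaging scores per user is more robust than thresholding single examples.

```python
# User-level troll detection sketch: aggregate per-example suspicion scores
# by user, so one noisy example does not condemn an otherwise helpful user.
from collections import defaultdict

def flag_trolls(examples, threshold=0.5):
    """examples: iterable of (user_id, suspicion score in [0, 1]).
    Returns the set of user_ids whose mean score exceeds the threshold."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, score in examples:
        totals[user] += score
        counts[user] += 1
    return {u for u in totals if totals[u] / counts[u] > threshold}

examples = [
    ("helper", 0.1), ("helper", 0.6),  # one noisy example, low average
    ("troll", 0.7), ("troll", 0.8),    # consistently adversarial
]
print(flag_trolls(examples))  # -> {'troll'}
```

An example-level variant would threshold each score independently, wrongly discarding the helper's noisy 0.6 example; aggregating at the user level keeps the helper's data and drops the troll's, which matches the paper's main finding.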


Staircase Attention for Recurrent Processing of Sequences

Jun 08, 2021
Da Ju, Stephen Roller, Sainbayar Sukhbaatar, Jason Weston


Attention mechanisms have become a standard tool for sequence modeling tasks, in particular through stacked self-attention layers over the entire input sequence, as in the Transformer architecture. In this work we introduce a novel attention procedure called staircase attention that, unlike self-attention, operates across the sequence (in time), recurrently processing the input one step at a time. A step in the staircase comprises backward tokens (encoding the sequence seen so far) and forward tokens (ingesting a new part of the sequence); an extreme Ladder version with a forward step of zero simply repeats the Transformer on each step of the ladder, sharing the weights. We thus describe a family of such models that can trade off performance and compute by increasing the amount of recurrence through time, the amount of sequential processing via recurrence in depth, or both. Owing to this recurrence, staircase attention is shown to solve tracking tasks that conventional Transformers cannot. Further, it provides improved modeling power for the same model size (number of parameters) compared to self-attentive Transformers on large language modeling and dialogue tasks, yielding significant perplexity gains.
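
A schematic sketch of the recurrence in PyTorch follows. This is not the paper's implementation: the chunk size, the number of carried backward tokens, and the single shared encoder layer are illustrative assumptions.

```python
# Staircase sketch: each step feeds the previous step's output ("backward"
# tokens) plus a fresh input chunk ("forward" tokens) through a Transformer
# layer whose weights are shared across all steps.
import torch
import torch.nn as nn

class StaircaseSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, chunk_size=16, n_back=16):
        super().__init__()
        self.chunk_size, self.n_back = chunk_size, n_back
        self.layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, seq, d_model)
        state = x[:, :0, :]                    # empty backward state to start
        outputs = []
        for start in range(0, x.size(1), self.chunk_size):
            forward = x[:, start:start + self.chunk_size, :]
            step = self.layer(torch.cat([state, forward], dim=1))
            state = step[:, -self.n_back:, :]  # carried into the next step
            outputs.append(step[:, -forward.size(1):, :])
        return torch.cat(outputs, dim=1)

model = StaircaseSketch()
print(model(torch.randn(2, 48, 64)).shape)     # torch.Size([2, 48, 64])
```

The Ladder variant corresponds to a forward step of zero, i.e., repeatedly re-processing the same tokens with the shared layer to add depth through recurrence.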


The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation

Jun 06, 2021
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, Angela Fan


One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES-101 evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of topics and domains. These sentences have been translated into 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
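
Because all translations are multilingually aligned, a single corpus yields a test set for every directed language pair. A toy illustration (the sentences are made up, not actual FLORES-101 content):

```python
# Many-to-many evaluation from one aligned corpus: any (source, target)
# direction can be read off the same sentence-aligned table.
from itertools import permutations

aligned = {  # sentence_id -> {language code: translation}
    1: {"eng": "The cat sleeps.", "fra": "Le chat dort.", "deu": "Die Katze schläft."},
    2: {"eng": "It is raining.", "fra": "Il pleut.", "deu": "Es regnet."},
}

def test_set(src, tgt):
    """Source/reference pairs for one translation direction."""
    return [(row[src], row[tgt]) for row in aligned.values()]

langs = ["eng", "fra", "deu"]
print(len(list(permutations(langs, 2))))  # 6 directed pairs from one corpus
print(test_set("fra", "deu")[0])          # ('Le chat dort.', 'Die Katze schläft.')
```

With 101 languages the same construction yields 101 x 100 = 10,100 directed evaluation pairs from the one aligned dataset.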


Not All Memories are Created Equal: Learning to Forget by Expiring

May 13, 2021
Sainbayar Sukhbaatar, Da Ju, Spencer Poff, Stephen Roller, Arthur Szlam, Jason Weston, Angela Fan


Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. Recent work has investigated mechanisms to reduce the computational cost of preserving and storing memories. However, not all content in the past is equally important to remember. We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information. This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently, as not all states from previous timesteps are preserved. We demonstrate that Expire-Span can help models identify and retain critical information, and show that it achieves strong performance on reinforcement learning tasks specifically designed to challenge this functionality. Next, we show that Expire-Span can scale to memories tens of thousands of timesteps in size, setting a new state of the art on extremely long-context tasks such as character-level language modeling and a frame-by-frame moving-objects task. Finally, we analyze the efficiency of Expire-Span compared to existing approaches and demonstrate that it trains faster and uses less memory.
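
A rough sketch of the expiration mechanism, under the assumption of a linear span predictor (illustrative, not the released code): each memory h_i receives a span e_i = L * sigmoid(w . h_i + b), and at time t it is dropped from attention once its age t - i exceeds e_i.

```python
# Expire-Span sketch: predict a retention span per memory and keep only the
# memories whose span exceeds their current age. The hard threshold here is
# for illustration; the actual method uses a soft ramp on the mask so the
# spans remain differentiable during training.
import torch
import torch.nn as nn

class ExpireSpanMask(nn.Module):
    def __init__(self, d_model=64, max_span=1024.0):
        super().__init__()
        self.max_span = max_span
        self.predict = nn.Linear(d_model, 1)  # per-memory span predictor

    def forward(self, memories):              # memories: (batch, T, d_model)
        T = memories.size(1)
        spans = self.max_span * torch.sigmoid(self.predict(memories)).squeeze(-1)
        ages = (T - 1) - torch.arange(T)      # age of memory i at the latest step
        return spans > ages                   # (batch, T) keep-mask for attention

mask = ExpireSpanMask()(torch.randn(2, 100, 64))
print(mask.shape, mask.float().mean().item())  # fraction of memories retained
```

Attention is then computed only over the surviving memories, which is what lets the model reach tens of thousands of timesteps without storing every past state.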


Recipes for Safety in Open-domain Chatbots

Oct 22, 2020
Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, Emily Dinan


Models trained on large unlabeled corpora of human interactions will learn patterns and mimic behaviors therein, which include offensive and otherwise toxic behavior as well as unwanted biases. We investigate a variety of methods to mitigate these issues in the context of open-domain generative dialogue models. We introduce a new human-and-model-in-the-loop framework both for training safer models and for evaluating them, as well as a novel method to distill safety considerations into generative models without the use of an external classifier at deployment time. We conduct experiments comparing these methods and find that our new techniques are (i) safer than existing models, as measured by automatic and human evaluations, while (ii) maintaining usability metrics such as engagingness relative to the state of the art. We then discuss the limitations of this work by analyzing failure cases of our models.
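
A hedged sketch of the classifier-in-the-loop safety layer (the classifier stub, canned response, and threshold are placeholders, not the paper's trained components):

```python
# Two-stage safety sketch: score the generator's candidate reply with a
# safety classifier and fall back to a safe canned response if it fires.
SAFE_RESPONSE = "Hey, do you want to talk about something else?"

def toxicity_score(text: str) -> float:
    """Placeholder for a trained safety classifier returning P(unsafe)."""
    blocklist = {"insult", "slur"}
    return 1.0 if any(word in text.lower() for word in blocklist) else 0.0

def safe_reply(generate, context: str, threshold: float = 0.5) -> str:
    candidate = generate(context)
    return SAFE_RESPONSE if toxicity_score(candidate) >= threshold else candidate

print(safe_reply(lambda ctx: "Nice weather today!", "Hi"))  # passes through
```

The distillation method described above goes further: it bakes this behavior into the generator itself, so no external classifier is needed at deployment time.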


Multi-Modal Open-Domain Dialogue

Oct 02, 2020
Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston


Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of engaging humans in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its predecessor (text-only) BlenderBot (Roller et al., 2020) in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to engagingness metrics.
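
One simple fusion scheme, sketched under assumptions (the projection, dimensions, and image-token count are illustrative, not the paper's exact architecture): project frozen image features into the dialogue model's embedding space and prepend them as extra tokens.

```python
# Early-fusion sketch: image features become a few pseudo-tokens that the
# Transformer attends over jointly with the text tokens.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, d_image=512, d_model=64, n_image_tokens=4):
        super().__init__()
        self.n = n_image_tokens
        self.project = nn.Linear(d_image, d_model * n_image_tokens)
        self.encoder = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)

    def forward(self, image_feats, token_embs):
        # image_feats: (batch, d_image); token_embs: (batch, seq, d_model)
        img_tokens = self.project(image_feats).view(-1, self.n, token_embs.size(-1))
        return self.encoder(torch.cat([img_tokens, token_embs], dim=1))

out = EarlyFusion()(torch.randn(2, 512), torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 14, 64])
```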
