Arthur Szlam

DiPaCo: Distributed Path Composition

Mar 15, 2024
Arthur Douillard, Qixuan Feng, Andrei A. Rusu, Adhiguna Kuncoro, Yani Donchev, Rachita Chhaparia, Ionel Gog, Marc'Aurelio Ranzato, Jiajun Shen, Arthur Szlam

Asynchronous Local-SGD Training for Language Modeling

Jan 17, 2024
Bo Liu, Rachita Chhaparia, Arthur Douillard, Satyen Kale, Andrei A. Rusu, Jiajun Shen, Arthur Szlam, Marc'Aurelio Ranzato

DiLoCo: Distributed Low-Communication Training of Language Models

Nov 14, 2023
Arthur Douillard, Qixuan Feng, Andrei A. Rusu, Rachita Chhaparia, Yani Donchev, Adhiguna Kuncoro, Marc'Aurelio Ranzato, Arthur Szlam, Jiajun Shen

A Data Source for Reasoning Embodied Agents

Sep 14, 2023
Jack Lanchantin, Sainbayar Sukhbaatar, Gabriel Synnaeve, Yuxuan Sun, Kavya Srinet, Arthur Szlam

Transforming Human-Centered AI Collaboration: Redefining Embodied Agents Capabilities through Interactive Grounded Language Instructions

May 18, 2023
Shrestha Mohanty, Negar Arabzadeh, Julia Kiseleva, Artem Zholus, Milagro Teruel, Ahmed Awadallah, Yuxuan Sun, Kavya Srinet, Arthur Szlam

Learning to Reason and Memorize with Self-Notes

May 01, 2023
Jack Lanchantin, Shubham Toshniwal, Jason Weston, Arthur Szlam, Sainbayar Sukhbaatar

Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models

Apr 26, 2023
Jimmy Wei, Kurt Shuster, Arthur Szlam, Jason Weston, Jack Urbanek, Mojtaba Komeili

Infusing Commonsense World Models with Graph Knowledge

Jan 13, 2023
Alexander Gurung, Mojtaba Komeili, Arthur Szlam, Jason Weston, Jack Urbanek

Collecting Interactive Multi-modal Datasets for Grounded Language Understanding

Nov 18, 2022
Shrestha Mohanty, Negar Arabzadeh, Milagro Teruel, Yuxuan Sun, Artem Zholus, Alexey Skrynnik, Mikhail Burtsev, Kavya Srinet, Aleksandr Panov, Arthur Szlam, Marc-Alexandre Côté, Julia Kiseleva

CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Oct 11, 2022
Nur Muhammad Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam
