Louis-Philippe Morency

WildMix Dataset and Spectro-Temporal Transformer Model for Monoaural Audio Source Separation

Nov 21, 2019
Amir Zadeh, Tianjun Ma, Soujanya Poria, Louis-Philippe Morency

To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations

Oct 05, 2019
Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, Yaser Sheikh

Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel

Aug 30, 2019
Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

M-BERT: Injecting Multimodal Information in the BERT Structure

Aug 15, 2019
Wasifur Rahman, Md Kamrul Hasan, Amir Zadeh, Louis-Philippe Morency, Mohammed Ehsan Hoque

Language2Pose: Natural Language Grounded Pose Forecasting

Jul 02, 2019
Chaitanya Ahuja, Louis-Philippe Morency

Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization

Jul 01, 2019
Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency

Deep Gamblers: Learning to Abstain with Portfolio Theory

Jun 29, 2019
Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda

Multimodal Transformer for Unaligned Multimodal Language Sequences

Jun 01, 2019
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov

Strong and Simple Baselines for Multimodal Utterance Embeddings

May 14, 2019
Paul Pu Liang, Yao Chong Lim, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency

UR-FUNNY: A Multimodal Language Dataset for Understanding Humor

Apr 14, 2019
Md Kamrul Hasan, Wasifur Rahman, Amir Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, Mohammed Ehsan Hoque
