Louis-Philippe Morency

Neural Methods for Point-wise Dependency Estimation

Jun 11, 2020
Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

Demystifying Self-Supervised Learning: An Information-Theoretical Framework

Jun 11, 2020
Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency

Improving Aspect-Level Sentiment Analysis with Aspect Extraction

May 03, 2020
Navonil Majumder, Rishabh Bhardwaj, Soujanya Poria, Amir Zadeh, Alexander Gelbukh, Amir Hussain, Louis-Philippe Morency

Interpretable Multimodal Routing for Human Multimodal Language

Apr 29, 2020
Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Ruslan Salakhutdinov, Louis-Philippe Morency

Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding

Apr 03, 2020
Seong Hyeon Park, Gyubok Lee, Manoj Bhat, Jimin Seo, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency

On Emergent Communication in Competitive Multi-Agent Teams

Mar 04, 2020
Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur

Learning Not to Learn in the Presence of Noisy Labels

Feb 16, 2020
Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda

Think Locally, Act Globally: Federated Learning with Local and Global Representations

Jan 06, 2020
Paul Pu Liang, Terrance Liu, Liu Ziyin, Ruslan Salakhutdinov, Louis-Philippe Morency

Context-Dependent Models for Predicting and Characterizing Facial Expressiveness

Dec 10, 2019
Victoria Lin, Jeffrey M. Girard, Louis-Philippe Morency
