Louis-Philippe Morency

Quantifying & Modeling Feature Interactions: An Information Decomposition Framework

Feb 23, 2023
Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency

Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment

Dec 20, 2022
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency

SeedBERT: Recovering Annotator Rating Distributions from an Aggregated Label

Nov 23, 2022
Aneesha Sampath, Victoria Lin, Louis-Philippe Morency

Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control

Nov 10, 2022
Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency

Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis

Oct 10, 2022
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency

Paraphrasing Is All You Need for Novel Object Captioning

Sep 25, 2022
Cheng-Fu Yang, Yao-Hung Hubert Tsai, Wan-Cyuan Fan, Ruslan Salakhutdinov, Louis-Philippe Morency, Yu-Chiang Frank Wang

Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

Sep 07, 2022
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency

Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides

Aug 17, 2022
Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency

Face-to-Face Contrastive Learning for Social Intelligence Question-Answering

Aug 15, 2022
Alex Wilf, Qianli M. Ma, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
