Paul Pu Liang

Multimodal Fusion Interactions: A Study of Human and Automatic Quantification

Jun 07, 2023
Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency


Difference-Masking: Choosing What to Mask in Continued Pretraining

May 23, 2023
Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency


HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer

May 21, 2023
Yubin Kim, Dong Won Lee, Paul Pu Liang, Sharifa Algohwinem, Cynthia Breazeal, Hae Won Park


Quantifying & Modeling Feature Interactions: An Information Decomposition Framework

Feb 23, 2023
Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency


Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals

Feb 12, 2023
Yue Wu, Yewen Fan, Paul Pu Liang, Amos Azaria, Yuanzhi Li, Tom M. Mitchell


Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment

Dec 20, 2022
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency


Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control

Nov 10, 2022
Xiang Fan, Yiwei Lyu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency


Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis

Oct 10, 2022
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency


Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

Sep 07, 2022
Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency


Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides

Aug 17, 2022
Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency
