Paul Pu Liang

Localized Symbolic Knowledge Distillation for Visual Commonsense Models

Dec 12, 2023
Jae Sung Park, Jack Hessel, Khyathi Raghavi Chandu, Paul Pu Liang, Ximing Lu, Peter West, Youngjae Yu, Qiuyuan Huang, Jianfeng Gao, Ali Farhadi, Yejin Choi

Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities

Nov 16, 2023
Alex Wilf, Sihyun Shawn Lee, Paul Pu Liang, Louis-Philippe Morency

MMOE: Mixture of Multimodal Interaction Experts

Nov 16, 2023
Haofei Yu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency

MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things

Nov 10, 2023
Shentong Mo, Paul Pu Liang, Russ Salakhutdinov, Louis-Philippe Morency

Comparative Knowledge Distillation

Nov 03, 2023
Alex Wilf, Alex Tianyi Xu, Paul Pu Liang, Alexander Obolenskiy, Daniel Fried, Louis-Philippe Morency

Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP

Aug 27, 2023
Vedant Palit, Rohan Pandey, Aryaman Arora, Paul Pu Liang

MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning

Jun 28, 2023
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov

Factorized Contrastive Learning: Going Beyond Multi-view Redundancy

Jun 08, 2023
Paul Pu Liang, Zihao Deng, Martin Ma, James Zou, Louis-Philippe Morency, Ruslan Salakhutdinov

Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions

Jun 07, 2023
Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency

Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications

Jun 07, 2023
Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Ruslan Salakhutdinov
