William Yang Wang

Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis

Dec 09, 2022
Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, William Yang Wang

Offline Reinforcement Learning with Closed-Form Policy Improvement Operators

Nov 29, 2022
Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, William Yang Wang

Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation

Nov 23, 2022
Tsu-Jui Fu, Licheng Yu, Ning Zhang, Cheng-Yang Fu, Jong-Chyi Su, William Yang Wang, Sean Bell

Bridging the Training-Inference Gap for Dense Phrase Retrieval

Oct 25, 2022
Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang

CPL: Counterfactual Prompt Learning for Vision and Language Models

Oct 19, 2022
Xuehai He, Diji Yang, Weixi Feng, Tsu-Jui Fu, Arjun Akula, Varun Jampani, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang

SafeText: A Benchmark for Exploring Physical Safety in Language Models

Oct 18, 2022
Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang

ULN: Towards Underspecified Vision-and-Language Navigation

Oct 18, 2022
Weixi Feng, Tsu-Jui Fu, Yujie Lu, William Yang Wang

Mitigating Covertly Unsafe Text within Natural Language Systems

Oct 17, 2022
Alex Mei, Anisha Kabir, Sharon Levy, Melanie Subbiah, Emily Allaway, John Judge, Desmond Patton, Bruce Bimber, Kathleen McKeown, William Yang Wang

Language Agnostic Multilingual Information Retrieval with Contrastive Learning

Oct 12, 2022
Xiyang Hu, Xinchi Chen, Peng Qi, Deguang Kong, Kunlun Liu, William Yang Wang, Zhiheng Huang

CLIP also Understands Text: Prompting CLIP for Phrase Understanding

Oct 11, 2022
An Yan, Jiacheng Li, Wanrong Zhu, Yujie Lu, William Yang Wang, Julian McAuley
