Ye Liu

Choice Models and Permutation Invariance
Jul 13, 2023
Amandeep Singh, Ye Liu, Hema Yoganarasimhan

Unified Conversational Models with System-Initiated Transitions between Chit-Chat and Task-Oriented Dialogues
Jul 04, 2023
Ye Liu, Stefan Ultes, Wolfgang Minker, Wolfgang Maier

The Scope of ChatGPT in Software Engineering: A Thorough Investigation
May 20, 2023
Wei Ma, Shangqing Liu, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, Yang Liu

Answering Complex Questions over Text by Hybrid Question Parsing and Execution
May 12, 2023
Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, Yingbo Zhou

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning
Apr 18, 2023
Zheng Lian, Haiyang Sun, Licai Sun, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao

Timestamps as Prompts for Geography-Aware Location Recommendation
Apr 09, 2023
Yan Luo, Haoyi Duan, Ye Liu, Fu-lai Chung

End-to-End Personalized Next Location Recommendation via Contrastive User Preference Modeling
Mar 22, 2023
Yan Luo, Ye Liu, Fu-lai Chung, Yu Liu, Chang Wen Chen

Just Noticeable Visual Redundancy Forecasting: A Deep Multimodal-driven Approach
Mar 18, 2023
Wuyuan Xie, Shukang Wang, Sukun Tian, Lirong Huang, Ye Liu, Miaohui Wang

Unsupervised Dense Retrieval Deserves Better Positive Pairs: Scalable Augmentation with Query Extraction and Generation
Dec 17, 2022
Rui Meng, Ye Liu, Semih Yavuz, Divyansh Agarwal, Lifu Tu, Ning Yu, Jianguo Zhang, Meghana Bhat, Yingbo Zhou

Grafting Pre-trained Models for Multimodal Headline Generation
Nov 14, 2022
Lingfeng Qiao, Chen Wu, Ye Liu, Haoyuan Peng, Di Yin, Bo Ren
