
Lixi Zhu


Knowledge Graph-enhanced Sampling for Conversational Recommender System

Oct 13, 2021
Mengyuan Zhao, Xiaowen Huang, Lixi Zhu, Jitao Sang, Jian Yu


Traditional recommendation systems mainly use offline user data to train offline models and then recommend items to online users; as a result, they suffer from unreliable estimation of user preferences based on sparse and noisy historical data. The Conversational Recommender System (CRS) uses the interactive form of dialogue systems to address these intrinsic problems of traditional recommendation. However, because they lack contextual information modeling, existing CRS models cannot handle the exploitation-and-exploration (E&E) problem well, which places a heavy burden on users. To address this issue, this work proposes a contextual information enhancement model tailored for CRS, called Knowledge Graph-enhanced Sampling (KGenSam). KGenSam integrates the dynamic graph of user interaction data with external knowledge into one heterogeneous Knowledge Graph (KG) that serves as the contextual information environment. Two samplers are then designed to enhance knowledge: a fuzzy sampler selects samples with high uncertainty so that user preferences can be acquired efficiently, and a negative sampler selects reliable negative samples for updating the recommender. Together, they provide a powerful solution for CRS to deal with the E&E problem. Experimental results on two real-world datasets demonstrate the superiority of KGenSam, with significant improvements over state-of-the-art methods.
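The abstract names two sampling components but gives no code. Below is a minimal, hypothetical Python sketch of the two ideas as described: picking "fuzzy" attributes whose predicted preference is most uncertain (highest entropy), and drawing negative items from the KG neighborhood of known positives. All names (preference_entropy, sample_fuzzy_attributes, sample_negatives) and the data layout are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def preference_entropy(p):
    """Binary entropy of the recommender's predicted like-probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def sample_fuzzy_attributes(candidate_attrs, predict_pref, k=3):
    """Pick the k attributes the model is most uncertain about
    (highest entropy) to ask the user about next."""
    scored = [(preference_entropy(predict_pref(a)), a) for a in candidate_attrs]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [a for _, a in scored[:k]]

def sample_negatives(kg_neighbors, positive_items, user_items, n=5):
    """Draw 'reliable' negatives: items adjacent in the KG to the user's
    positives (hence plausible) but never interacted with."""
    pool = set()
    for item in positive_items:
        pool.update(kg_neighbors.get(item, ()))
    pool -= set(user_items)
    return random.sample(sorted(pool), min(n, len(pool)))

# Toy usage: the recommender is unsure about "comedy", so ask about it first.
pref = {"action": 0.9, "comedy": 0.5, "horror": 0.1}
print(sample_fuzzy_attributes(pref, pref.get, k=1))  # -> ['comedy']
```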


Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution using Dynamic Filters

Aug 26, 2020
Qingyan Sun, Shuo Zhang, Song Chang, Lixi Zhu, Youfang Lin


Light field cameras have proven to be powerful tools for 3D reconstruction and virtual reality applications. However, the limited resolution of light field images hinders further information display and extraction. In this paper, we introduce a novel learning-based framework to improve the spatial resolution of light fields. First, features from different dimensions are extracted in parallel and fused together in our multi-dimension fusion architecture. These features are then used to generate dynamic filters, which extract sub-pixel information from micro-lens images while implicitly accounting for disparity. Finally, high-frequency details learned in a residual branch are added to the upsampled images to obtain the final super-resolved light fields. Experimental results show that the proposed method uses fewer parameters yet achieves better performance than other state-of-the-art methods on various datasets. Our reconstructed images also show sharp details and distinct lines in both sub-aperture images and epipolar plane images.
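As a rough illustration of the dynamic filter step: the idea amounts to predicting a small k x k filter for every pixel and applying it as a weighted sum over that pixel's neighborhood. The PyTorch sketch below shows one plausible way to apply such filters and add a residual branch; the tensor shapes, kernel size, softmax normalization, and bicubic upsampling are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def apply_dynamic_filters(feat, filters, k=5):
    """Apply a per-pixel k x k filter to a single-channel feature map.

    feat    : (B, 1, H, W) input feature map
    filters : (B, k*k, H, W) one filter predicted for each pixel
    """
    b, _, h, w = feat.shape
    # Gather the k x k neighborhood of every pixel: (B, k*k, H*W)
    patches = F.unfold(feat, kernel_size=k, padding=k // 2)
    patches = patches.view(b, k * k, h, w)
    # Normalize each per-pixel filter (an assumption) and take the weighted sum.
    weights = torch.softmax(filters, dim=1)
    return (patches * weights).sum(dim=1, keepdim=True)

# Toy usage: random features and filters, plus a stand-in residual branch.
feat = torch.randn(1, 1, 8, 8)
filters = torch.randn(1, 25, 8, 8)       # one 5x5 filter per pixel
filtered = apply_dynamic_filters(feat, filters)
upsampled = F.interpolate(filtered, scale_factor=2, mode="bicubic")
residual = torch.zeros_like(upsampled)   # stand-in for learned high-frequency details
sr = upsampled + residual                # final super-resolved output
print(sr.shape)                          # torch.Size([1, 1, 16, 16])
```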
