
Jose Sepulveda


Automated Data Denoising for Recommendation

May 26, 2023
Yingqiang Ge, Mostafa Rahmani, Athirai Irissappane, Jose Sepulveda, James Caverlee, Fei Wang

Figures 1–4 for Automated Data Denoising for Recommendation

In real-world scenarios, most platforms collect both large-scale, naturally noisy implicit feedback and small-scale yet highly relevant explicit feedback. Due to data sparsity, implicit feedback is often the default choice for training recommender systems (RS); however, such data can be very noisy because of the randomness and diversity of user behavior. For instance, a large portion of clicks may not reflect true user preferences, and many purchases may result in negative reviews or returns. Fortunately, by using the strengths of each type of feedback to compensate for the weaknesses of the other, we can mitigate this issue at almost no cost. In this work, we propose an Automated Data Denoising framework, \textbf{\textit{AutoDenoise}}, for recommendation, which uses a small amount of explicit data as a validation set to guide recommender training. Inspired by the generalized definition of curriculum learning (CL), AutoDenoise learns to automatically and dynamically assign the most appropriate (discrete or continuous) weight to each implicit data sample during training, under the guidance of validation performance. Specifically, we use a carefully designed controller network to generate the weights, combine the weights with the loss of each input sample to train the recommender system, and optimize the controller with reinforcement learning to maximize the expected accuracy of the trained RS on the noise-free validation set. Thorough experiments show that AutoDenoise boosts the performance of state-of-the-art recommendation algorithms on several public benchmark datasets.
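The controller-plus-REINFORCE loop the abstract describes can be sketched as a toy loop. Everything below is illustrative, not the paper's implementation: the "recommender" is a weighted logistic regression, the controller is a linear model over sample features, the noise rate and learning rates are made up, and discrete per-sample weights are sampled Bernoulli from the controller's probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit data: features X, clean labels y, 30% label noise as a
# stand-in for noisy implicit feedback; a small clean validation split
# stands in for the explicit feedback.
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -1.0, 0.5, 0.0])
y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
flip = rng.random(200) < 0.3
y_noisy = np.where(flip, 1 - y, y)
Xv, yv = X[:40], y[:40]               # noise-free "explicit" validation set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_recsys(weights, epochs=50, lr=0.5):
    """Weighted logistic regression standing in for the recommender."""
    w = np.zeros(4)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (weights * (p - y_noisy))) / weights.sum()
    return w

def val_acc(w):
    return ((sigmoid(Xv @ w) > 0.5) == yv).mean()

theta = np.zeros(4)                   # controller parameters
baseline = 0.0
for step in range(30):
    probs = sigmoid(X @ theta)        # controller's keep-probability per sample
    keep = (rng.random(200) < probs).astype(float)   # sampled discrete weights
    reward = val_acc(train_recsys(keep))             # accuracy on the clean set
    # REINFORCE: push keep-probabilities toward actions beating the baseline
    theta += 0.1 * (reward - baseline) * (X.T @ (keep - probs)) / 200
    baseline = 0.9 * baseline + 0.1 * reward

final_acc = val_acc(train_recsys(sigmoid(X @ theta)))  # continuous weights
```

The final line reuses the controller's probabilities directly as continuous weights, mirroring the abstract's discrete-or-continuous weighting options.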


Sense Beyond Expressions: Cuteness

Aug 17, 2015
Kang Wang, Tam V. Nguyen, Jiashi Feng, Jose Sepulveda

Figures 1–4 for Sense Beyond Expressions: Cuteness

With the development of Internet culture, cuteness has become a popular concept. Many people are curious about what factors make a person look cute. However, there is little research addressing this interesting question. In this work, we construct a dataset of personal images with comprehensively annotated cuteness scores and facial attributes to investigate this high-level concept in depth. Based on this dataset, through an automatic attribute-mining process, we find several critical attributes that determine the cuteness of a person. We also develop a novel Continuous Latent Support Vector Machine (C-LSVM) method to predict a person's cuteness score given only their image. Extensive evaluations validate the effectiveness of the proposed method for cuteness prediction.
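A latent-SVM-style scorer of the kind the abstract names can be sketched in a few lines. This is a guess at the structure only: the attribute names, the candidate-region latent variable, and the weights are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical attribute scores for 5 candidate face regions of one image;
# columns are invented attributes, e.g. [baby_face, big_eyes, small_nose].
regions = rng.random((5, 3))

w = np.array([0.6, 0.3, 0.1])   # illustrative attribute weights, not learned ones

def cuteness(regions, w):
    """Latent-SVM-style scoring: the latent variable selects the best-scoring
    region, and the image's score is that region's linear attribute score."""
    scores = regions @ w
    h = int(np.argmax(scores))  # inferred latent region
    return scores[h], h

score, h = cuteness(regions, w)
```

Training a real (C-)LSVM would alternate latent inference with max-margin weight updates; only the inference step is shown here.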

* 4 pages 

Salient Object Detection via Augmented Hypotheses

May 29, 2015
Tam V. Nguyen, Jose Sepulveda

Figures 1–4 for Salient Object Detection via Augmented Hypotheses

In this paper, we propose using \textit{augmented hypotheses} which consider objectness, foreground and compactness for salient object detection. Our algorithm consists of four basic steps. First, our method generates the objectness map via objectness hypotheses. Based on the objectness map, we estimate the foreground margin and compute the corresponding foreground map which prefers the foreground objects. From the objectness map and the foreground map, the compactness map is formed to favor the compact objects. We then derive a saliency measure that produces a pixel-accurate saliency map which uniformly covers the objects of interest and consistently separates fore- and background. We finally evaluate the proposed framework on two challenging datasets, MSRA-1000 and iCoSeg. Our extensive experimental results show that our method outperforms state-of-the-art approaches.
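The pipeline above fuses three cues into one saliency map. The sketch below shows one simple way such a fusion could look; the multiplicative combination and min–max normalization are illustrative assumptions, not the measure the paper derives, and the maps are random stand-ins for real objectness, foreground, and compactness estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 6, 8

# Stand-ins for the three hypothesis maps, each with values in [0, 1].
objectness = rng.random((H, W))
foreground = rng.random((H, W))
compactness = rng.random((H, W))

def saliency(o, f, c):
    """Illustrative fusion: a pixel is salient only if all three cues agree,
    hence the product; the result is rescaled to [0, 1]."""
    s = o * f * c
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

smap = saliency(objectness, foreground, compactness)
```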

* IJCAI 2015 paper 

Adaptive Nonparametric Image Parsing

May 07, 2015
Tam V. Nguyen, Canyi Lu, Jose Sepulveda, Shuicheng Yan

Figures 1–4 for Adaptive Nonparametric Image Parsing

In this paper, we present an adaptive nonparametric solution to the image parsing task, namely annotating each image pixel with its corresponding category label. For a given test image, a locality-aware retrieval set is first extracted from the training data based on super-pixel matching similarities, which are augmented with feature extraction for better differentiation of local super-pixels. Then, the category of each super-pixel is initialized by the majority vote of the $k$-nearest-neighbor super-pixels in the retrieval set. Instead of fixing $k$ as in traditional nonparametric approaches, we propose a novel adaptive nonparametric approach that determines a sample-specific $k$ for each test image. In particular, $k$ is adaptively set to the smallest number of nearest super-pixels that yields the best category prediction on the retrieval set. Finally, the initial super-pixel labels are refined by contextual smoothing. Extensive experiments on challenging datasets demonstrate the superiority of the new solution over other state-of-the-art nonparametric solutions.
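The adaptive choice of $k$ can be sketched as a small selection loop. This is a simplified stand-in for the paper's criterion: here the candidate $k$ whose majority vote best predicts the retrieval set's own labels is chosen, with ties broken toward the smallest $k$; the neighbor lists and labels are toy inputs.

```python
from collections import Counter

def majority_vote(labels):
    """Most frequent label among the given neighbors."""
    return Counter(labels).most_common(1)[0][0]

def adaptive_k(neighbors_per_item, true_labels, k_candidates=(1, 3, 5, 7)):
    """Pick the smallest k whose k-NN majority vote best predicts the
    retrieval set's labels (simplified stand-in for the paper's rule)."""
    best_k, best_acc = k_candidates[0], -1.0
    for k in k_candidates:
        preds = [majority_vote(nb[:k]) for nb in neighbors_per_item]
        acc = sum(p == t for p, t in zip(preds, true_labels)) / len(true_labels)
        if acc > best_acc:          # strict '>' keeps the smallest best k
            best_k, best_acc = k, acc
    return best_k
```

For example, if the single nearest neighbor of each item already carries the right label, the loop settles on $k=1$; if the nearest neighbor is misleading but the 3-neighbor vote is right, it settles on $k=3$.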

* 11 pages 