
Yong Liang


Online Joint Assortment-Inventory Optimization under MNL Choices

Apr 04, 2023
Yong Liang, Xiaojie Mao, Shiyuan Wang

Figures 1-4 for Online Joint Assortment-Inventory Optimization under MNL Choices

We study an online joint assortment-inventory optimization problem in which the choice behavior of each customer follows the Multinomial Logit (MNL) choice model and the attraction parameters are unknown a priori. The retailer makes periodic assortment and inventory decisions, dynamically learning the attraction parameters from realized demands while maximizing the expected total profit over time. In this paper, we propose a novel algorithm that effectively balances exploration and exploitation in the online decision-making of assortment and inventory. Our algorithm builds on a new estimator for the MNL attraction parameters, a novel approach to incentivize exploration by adaptively tuning certain known and unknown parameters, and an optimization oracle for static single-cycle assortment-inventory planning problems with given parameters. We establish a regret upper bound for our algorithm and a lower bound for the online joint assortment-inventory optimization problem, showing that our algorithm achieves a nearly optimal regret rate, provided that the static optimization oracle is exact. We then incorporate more practical approximate static optimization oracles into our algorithm and bound from above the impact of static optimization errors on its regret. Finally, we perform numerical studies to demonstrate the effectiveness of our proposed algorithm.
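As a concrete illustration of the demand model the abstract refers to, the sketch below computes MNL choice probabilities for an offered assortment. The attraction values are hypothetical placeholders (in the paper they are unknown and learned online); under the MNL model, a customer offered assortment S buys product i with probability v_i / (1 + Σ_{j∈S} v_j), where the leading 1 is the no-purchase option.

```python
import numpy as np

def mnl_choice_probs(v, assortment):
    """Choice probabilities under the MNL model.

    v          : attraction parameters of all products
    assortment : indices of the offered products (the set S)
    Returns a dict mapping each offered product (and "no_purchase")
    to its choice probability v_i / (1 + sum_{j in S} v_j).
    """
    v = np.asarray(v, dtype=float)
    denom = 1.0 + v[assortment].sum()  # the "1" is the outside option
    probs = {i: v[i] / denom for i in assortment}
    probs["no_purchase"] = 1.0 / denom
    return probs

# Hypothetical attraction parameters for four products; offer products 0 and 2.
probs = mnl_choice_probs([0.5, 1.0, 2.0, 0.3], [0, 2])
```

The probabilities over the offered products and the no-purchase option always sum to one, which is what lets the retailer infer the attraction parameters from realized purchase frequencies.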


RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining

Jul 14, 2021
Hong Wang, Qi Xie, Qian Zhao, Yong Liang, Deyu Meng

Figures 1-4 for RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining

As a common weather phenomenon, rain streaks adversely degrade image quality, so removing rain from an image has become an important problem in the field. To handle this ill-posed single image deraining task, in this paper we build a novel deep architecture, called the rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish an RCD model for representing rain streaks and utilize the proximal gradient descent technique to design an iterative algorithm, containing only simple operators, for solving the model. By unfolding this algorithm, we then build the RCDNet, in which every network module has a clear physical meaning and corresponds to an operation in the algorithm. This interpretability greatly facilitates visualization and analysis of what happens inside the network and why it works well at inference time. Moreover, to account for the domain gap in real scenarios, we further design a novel dynamic RCDNet, in which the rain kernels are dynamically inferred from the input rainy images and then help shrink the space for rain layer estimation to a few rain maps, ensuring good generalization when rain types differ between training and testing data. By training such an interpretable network end to end, all involved rain kernels and proximal operators are automatically extracted, faithfully characterizing the features of both the rain and clean background layers, and thus naturally lead to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially its strong generality to diverse testing scenarios and the good interpretability of all its modules. Code is available at \emph{\url{https://github.com/hongwang01/DRCDNet}}.
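The iterative algorithm that this kind of unfolding builds on can be sketched in a simplified form. The snippet below is not the paper's convolutional dictionary model; it is a minimal proximal gradient (ISTA) loop for the plain matrix sparse coding problem min_M 0.5·||R − D·M||² + λ·||M||₁, whose per-iteration operators (a gradient step followed by soft-thresholding) are the kind of simple operations each unfolded network module corresponds to.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink each entry toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(R, D, lam=0.1, n_iter=50):
    """Proximal gradient descent for min_M 0.5*||R - D M||_F^2 + lam*||M||_1.

    Each iteration is a gradient step on the smooth term followed by the
    l1 proximal map; unfolding these iterations into layers (with learned
    parameters) is the construction an interpretable network like this uses.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz const of grad
    M = np.zeros((D.shape[1], R.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ M - R)
        M = soft_threshold(M - step * grad, step * lam)
    return M

# Toy check with an identity dictionary: the solution is just the
# soft-thresholded observation.
D = np.eye(3)
R = np.array([[1.0], [0.05], [-2.0]])
M = ista(R, D, lam=0.1)
```

With D = I the iteration converges to soft_threshold(R, λ), so small entries (here 0.05) are zeroed out while large ones are shrunk by λ.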


Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction

Sep 03, 2020
Ziyi Yang, Jun Shu, Yong Liang, Deyu Meng, Zongben Xu

Figures 1-4 for Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction

Machine learning has made great progress in computer vision and many other fields, largely thanks to large amounts of high-quality training samples, but it does not work as well for genomic data analysis, where datasets are notoriously small. In this work, we focus on the few-shot disease subtype prediction problem: identifying subgroups of similar patients that can guide treatment decisions for a specific individual, using only small training data. In practice, doctors and clinicians address this problem by studying several interrelated clinical variables simultaneously. We attempt to simulate this clinical perspective and introduce meta learning techniques to develop a new model that can extract common experience or knowledge from interrelated clinical tasks and transfer it to new tasks. Our model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta learning machine for few-shot image classification. Observing that gene expression data are particularly high-dimensional and noisy compared with image data, we propose a new extension that appends two modules to address these issues. Concretely, we append a feature selection layer to automatically filter out disease-irrelevant genes and incorporate a sample reweighting strategy to adaptively remove noisy data, so that the extended model can learn from a limited number of training examples and still generalize well. Simulations and experiments on real gene expression data substantiate the superiority of the proposed method for predicting disease subtypes and identifying potential disease-related genes.
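The Prototypical Network meta-learner that the abstract builds on has a simple core, sketched below under simplifying assumptions: raw feature vectors stand in for the learned embedding, and the toy support/query points are hypothetical. Each class prototype is the mean of its support embeddings, and a query is assigned to the class of the nearest prototype in Euclidean distance.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Prototypical-Network-style few-shot classification.

    support_x : (n, d) support embeddings
    support_y : length-n class labels for the support set
    query_x   : (m, d) query embeddings
    Returns the predicted label for each query: the class whose prototype
    (mean support embedding) is closest in Euclidean distance.
    """
    support_y = np.array(support_y)
    classes = sorted(set(support_y.tolist()))
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Pairwise squared distances between queries and prototypes.
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]

# Two hypothetical classes (e.g. disease subtypes) with two support
# samples each, and two queries near one cluster apiece.
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
support_y = [0, 0, 1, 1]
query_x = np.array([[0.2, 0.3], [5.1, 5.2]])
labels = prototype_classify(support_x, support_y, query_x)
```

The paper's extension would sit in front of this core: a feature selection layer shaping which dimensions enter the embedding, and a sample reweighting scheme downweighting noisy support points before the prototype mean is taken.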

* 11 pages 

Structural Residual Learning for Single Image Rain Removal

May 19, 2020
Hong Wang, Yichen Wu, Qi Xie, Qian Zhao, Yong Liang, Deyu Meng

Figures 1-4 for Structural Residual Learning for Single Image Rain Removal

To alleviate the adverse effect of rain streaks on image processing tasks, CNN-based single image rain removal methods have recently been proposed. However, the performance of these deep learning methods largely relies on how well the pre-collected rainy-clean training image pairs cover the range of possible rain shapes. This makes them prone to overfitting the training samples and unable to generalize to practical rainy images with complex and diverse rain streaks. To address this generalization issue, this study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures. This structural residual setting ensures that the rain layer extracted by the network complies with the prior knowledge of general rain streaks, and thus regularizes sound rain shapes that can be well extracted from rainy images in both the training and prediction stages. Such a general regularization naturally leads to both better training accuracy and better testing generalization, even for unseen rain configurations. This superiority is comprehensively substantiated, both visually and quantitatively, by experiments on synthetic and real datasets in comparison with current state-of-the-art methods.
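The structural residual idea can be illustrated with a minimal sketch, under the assumption (common in convolutional rain models, not the paper's exact formulation) that the rain layer is a sum of sparse rain maps convolved with small streak-shaped kernels. Constraining the residual this way forces it to look like rain before it is subtracted; the maps and kernels below are hypothetical toy values.

```python
import numpy as np

def conv2d_same(x, k):
    # Direct 'same'-padding 2-D correlation with a small kernel.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def derain(rainy, maps, kernels):
    """Structural residual: the rain layer is constrained to be a sum of
    rain maps convolved with small rain kernels, then subtracted, instead
    of letting the network predict an unconstrained residual."""
    rain = sum(conv2d_same(m, k) for m, k in zip(maps, kernels))
    return rainy - rain, rain

# Toy example: one point-like rain map and a 3x1 vertical streak kernel.
clean = np.zeros((4, 4))
rain_map = np.zeros((4, 4)); rain_map[1, 2] = 1.0
kernel = np.array([[1.0], [1.0], [1.0]])       # vertical streak shape
rainy = clean + conv2d_same(rain_map, kernel)  # synthesize a rainy image
derained, rain_est = derain(rainy, [rain_map], [kernel])
```

The single active map entry is smeared into a vertical streak by the kernel, so the estimated rain layer has streak structure by construction, and subtracting it recovers the clean image exactly in this toy setting.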
