Zhihao Liu

A Versatile Multi-Agent Reinforcement Learning Benchmark for Inventory Management

Jun 13, 2023
Xianliang Yang, Zhihao Liu, Wei Jiang, Chuheng Zhang, Li Zhao, Lei Song, Jiang Bian


Multi-agent reinforcement learning (MARL) models multiple agents that interact and learn within a shared environment. This paradigm is applicable to industrial scenarios such as autonomous driving, quantitative trading, and inventory management. However, applying MARL to these real-world scenarios is impeded by challenges such as scaling up, complex agent interactions, and non-stationary dynamics. To incentivize research on these challenges, we develop MABIM (Multi-Agent Benchmark for Inventory Management), a multi-echelon, multi-commodity inventory management simulator that can generate versatile tasks with these challenging properties. Based on MABIM, we evaluate classic operations research (OR) methods and popular MARL algorithms on these challenging tasks to highlight their weaknesses and potential.
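The abstract does not spell out MABIM's API, so the sketch below is purely illustrative: a minimal Gym-style multi-echelon environment in which each (echelon, SKU) pair acts as an agent placing replenishment orders, upstream stock limits downstream shipments, and stochastic demand arrives at the bottom echelon. All class, parameter, and cost names are hypothetical, not MABIM's.

```python
import numpy as np

class MultiEchelonInventoryEnv:
    """Toy multi-echelon, multi-commodity inventory environment.

    Each (echelon, SKU) pair is one agent whose action is an order
    quantity. Upstream echelons supply downstream ones, so rewards are
    coupled -- the interaction property that stresses MARL methods.
    """

    def __init__(self, n_echelons=3, n_skus=2, capacity=100, seed=0):
        self.n_echelons, self.n_skus = n_echelons, n_skus
        self.capacity = capacity
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # inventory[e, s]: stock of SKU s held at echelon e.
        self.inventory = np.full((self.n_echelons, self.n_skus), 20.0)
        return self.inventory.copy()

    def step(self, orders):
        # orders[e, s]: units echelon e requests from echelon e + 1
        # (the top echelon draws from an unlimited outside supplier).
        orders = np.clip(np.asarray(orders, dtype=float), 0, self.capacity)
        shipped = np.zeros_like(orders)
        for e in range(self.n_echelons):
            avail = np.inf if e == self.n_echelons - 1 else self.inventory[e + 1]
            shipped[e] = np.minimum(orders[e], avail)
            if e < self.n_echelons - 1:
                self.inventory[e + 1] -= shipped[e]
        self.inventory += shipped  # zero lead time, for simplicity
        # Stochastic customer demand hits the most downstream echelon.
        demand = self.rng.poisson(8.0, size=self.n_skus)
        sales = np.minimum(self.inventory[0], demand)
        self.inventory[0] -= sales
        # Per-agent reward: holding cost everywhere, plus sales revenue
        # and a lost-sales penalty at the bottom echelon.
        reward = -0.1 * self.inventory
        reward[0] += 2.0 * sales - 1.0 * (demand - sales)
        return self.inventory.copy(), reward, False, {}

# Joint order of 10 units per (echelon, SKU) agent.
env = MultiEchelonInventoryEnv()
obs, reward, done, info = env.step(np.full((3, 2), 10.0))
```

A real benchmark like MABIM adds lead times, capacity coupling across SKUs, and non-stationary demand; the point here is only the shared-resource structure that makes the problem genuinely multi-agent.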


CRFormer: A Cross-Region Transformer for Shadow Removal

Jul 04, 2022
Jin Wan, Hui Yin, Zhenyao Wu, Xinyi Wu, Zhihao Liu, Song Wang


Shadow removal aims to restore the original intensity of shadow regions in an image and make them compatible with the remaining non-shadow regions without leaving a trace; it is a very challenging problem that benefits many downstream image/video-related tasks. Recently, transformers have shown strong capability in various applications by capturing global pixel interactions, a capability that is highly desirable in shadow removal. However, applying transformers to shadow removal is non-trivial for two reasons: 1) the patchify operation is not suitable for shadow removal due to irregular shadow shapes; 2) shadow removal only needs one-way interaction from the non-shadow region to the shadow region instead of the common two-way interactions among all pixels in the image. In this paper, we propose a novel cross-region transformer, namely CRFormer, for shadow removal, which differs from existing transformers by only considering the pixel interactions from the non-shadow region to the shadow region without splitting images into patches. This is achieved by a carefully designed region-aware cross-attention operation that aggregates the recovered shadow-region features conditioned on the non-shadow-region features. Extensive experiments on the ISTD, AISTD, SRD, and Video Shadow Removal datasets demonstrate the superiority of our method over other state-of-the-art methods.
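The one-way interaction can be made concrete with a short PyTorch sketch. This is a plausible reading of the region-aware cross-attention described above, not CRFormer's actual implementation: keys and values inside the shadow mask are blocked, so shadow pixels aggregate information only from non-shadow pixels, and non-shadow features pass through unchanged.

```python
import torch
import torch.nn as nn

class RegionAwareCrossAttention(nn.Module):
    """Sketch: attention flows only from non-shadow to shadow pixels."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, feats, shadow_mask):
        # feats: (B, N, C) per-pixel features (no patchify step);
        # shadow_mask: (B, N) bool, True inside the shadow region.
        # Assumes every image has at least one non-shadow pixel.
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        attn = (q @ k.transpose(-2, -1)) * self.scale        # (B, N, N)
        # Mask out shadow-region keys: queries can only attend to
        # non-shadow pixels, making the interaction one-way.
        attn = attn.masked_fill(shadow_mask.unsqueeze(1), float("-inf"))
        out = attn.softmax(dim=-1) @ v
        # Update shadow pixels only; keep non-shadow features as-is.
        return torch.where(shadow_mask.unsqueeze(-1), out, feats)

# Example: 2 images, 256 pixels, 64-dim features.
feats = torch.randn(2, 256, 64)
mask = torch.rand(2, 256) > 0.7
out = RegionAwareCrossAttention(64)(feats, mask)
```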


From Shadow Generation to Shadow Removal

Mar 24, 2021
Zhihao Liu, Hui Yin, Xinyi Wu, Zhenyao Wu, Yang Mi, Song Wang


Shadow removal is a computer-vision task that aims to restore the image content in shadow regions. While almost all recent shadow-removal methods require shadow-free images for training, at ECCV 2020 Le and Samaras introduced an innovative approach without this requirement by cropping patches with and without shadows from shadow images as training samples. However, it is still laborious and time-consuming to construct a large number of such unpaired patches. In this paper, we propose a new G2R-ShadowNet, which leverages shadow generation for weakly-supervised shadow removal using only a set of shadow images and their corresponding shadow masks for training. The proposed G2R-ShadowNet consists of three sub-networks for shadow generation, shadow removal, and refinement, respectively, which are jointly trained in an end-to-end fashion. In particular, the shadow-generation sub-net stylises non-shadow regions into shadow ones, yielding paired data for training the shadow-removal sub-net. Extensive experiments on the ISTD dataset and the Video Shadow Removal dataset show that the proposed G2R-ShadowNet achieves competitive performance against the current state of the art and outperforms Le and Samaras' patch-based shadow-removal method.
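The generation-to-removal data flow can be sketched as one weakly-supervised training step. Everything below is a deliberately simplified stand-in: the three sub-networks are reduced to single conv layers, and any adversarial or refinement-specific losses are collapsed into a single L1 term; only the flow (stylise a non-shadow region, then learn to remove the synthetic shadow against the original pixels) follows the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the three sub-networks; any image-to-image
# CNN (e.g. a small U-Net) could replace these single conv layers.
gen_net = nn.Conv2d(4, 3, 3, padding=1)      # shadow generation
removal_net = nn.Conv2d(4, 3, 3, padding=1)  # shadow removal
refine_net = nn.Conv2d(3, 3, 3, padding=1)   # refinement
l1 = nn.L1Loss()
params = (list(gen_net.parameters()) + list(removal_net.parameters())
          + list(refine_net.parameters()))
opt = torch.optim.Adam(params, lr=2e-4)  # all three trained jointly

def train_step(image, mask):
    """One step from a shadow image and a mask over a non-shadow region.

    Because the masked region is shadow-free, the original pixels act as
    ground truth for the removal branch -- no shadow-free image needed.
    """
    # 1) Stylise the non-shadow region into a synthetic shadow,
    #    producing a paired (shadowed, clean) sample.
    fake_shadow = gen_net(torch.cat([image, mask], dim=1))
    # 2) Remove the synthetic shadow, then refine.
    removed = removal_net(torch.cat([fake_shadow, mask], dim=1))
    refined = refine_net(removed)
    # 3) Supervise against the original pixels in the stylised region.
    loss = l1(refined * mask, image * mask)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

image = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.7).float()
print(train_step(image, mask))
```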

* Accepted by CVPR 2021

e-ACJ: Accurate Junction Extraction For Event Cameras

Jan 27, 2021
Zhihao Liu, Yuqian Fu


Junctions reflect important geometrical structure information in an image and are of primary significance to applications such as image matching and motion analysis. Previous event-based feature extraction methods mainly focus on corners: they find their locations but ignore geometrical structure information such as the orientations and scales of edges. This paper adapts the frame-based a-contrario junction detector (ACJ) to event data, proposing the event-based a-contrario junction detector (e-ACJ), which yields junctions' locations while also giving the scales and orientations of their branches. The proposed method relies on an a-contrario model and can operate on asynchronous events directly without generating synthesized event frames. We evaluate the performance on public event datasets. The results show that our method successfully finds the orientations and scales of branches while maintaining high accuracy in junction localization.
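The a-contrario principle shared by ACJ and e-ACJ is compact enough to state in code: a candidate junction is accepted when its Number of False Alarms (NFA), the expected count of equally strong detections under a background noise model, falls below a threshold (typically 1). The snippet below is a generic binomial-tail NFA, illustrating the principle rather than e-ACJ's exact statistical model; all parameter values are made up.

```python
from math import comb

def nfa(n_tests, k, n, p):
    """Generic a-contrario Number of False Alarms.

    k of n observations (e.g. events along a candidate branch) support
    the structure hypothesis, each with probability p under the noise
    model; n_tests is the number of candidates examined. The candidate
    is considered meaningful when nfa(...) <= 1.
    """
    # Binomial tail: probability of k or more supporting observations
    # arising purely by chance.
    tail = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
    return n_tests * tail

# 45 of 60 events support a branch, each with chance 0.25 under noise,
# out of 10_000 candidates tested: the NFA is far below 1, so accept.
print(nfa(10_000, 45, 60, 0.25))
```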

* Submitted to ICIP 2021

Shadow Removal by a Lightness-Guided Network with Training on Unpaired Data

Jun 28, 2020
Zhihao Liu, Hui Yin, Yang Mi, Mengyang Pu, Song Wang


Shadow removal can significantly improve image visual quality and has many applications in computer vision. Deep learning methods based on CNNs have become the most effective approach for shadow removal, training on either paired data, where both the shadow and underlying shadow-free versions of an image are known, or unpaired data, where the shadow and shadow-free training images are entirely different, with no correspondence. In practice, training on unpaired data is preferred given the ease of collecting training data. In this paper, we present a new Lightness-Guided Shadow Removal Network (LG-ShadowNet) for shadow removal trained on unpaired data. In this method, we first train a CNN module to compensate for the lightness and then train a second CNN module, guided by lightness information from the first, for the final shadow removal. We also introduce a loss function to further utilise the colour prior of existing data. Extensive experiments on the widely used ISTD, adjusted ISTD, and USR datasets demonstrate that the proposed method outperforms state-of-the-art methods trained on unpaired data.
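The two-stage guidance pattern can be sketched in a few lines of PyTorch. The module bodies below are placeholders (the abstract does not give LG-ShadowNet's layers, losses, or colour-space handling), and while the paper trains the lightness module first and the removal module second, the sketch shows both in a single forward pass to expose the data flow.

```python
import torch
import torch.nn as nn

class LGShadowNetSketch(nn.Module):
    """Placeholder sketch of lightness-guided shadow removal."""

    def __init__(self):
        super().__init__()
        # Stage 1: compensate the lightness (L) channel.
        self.lightness_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        # Stage 2: remove the shadow, guided by the compensated lightness.
        self.removal_net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, lightness):
        # image: (B, 3, H, W); lightness: (B, 1, H, W) L channel.
        compensated = self.lightness_net(lightness)
        # Guidance: concatenate the compensated lightness into stage 2.
        return self.removal_net(torch.cat([image, compensated], dim=1))

net = LGShadowNetSketch()
out = net(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
```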

* Submitted to IEEE TIP 