Chenhao Li

Inverse Rendering of Translucent Objects using Physical and Neural Renderers

May 15, 2023
Chenhao Li, Trung Thanh Ngo, Hajime Nagahara

In this work, we propose an inverse rendering model that jointly estimates 3D shape, spatially-varying reflectance, homogeneous subsurface scattering parameters, and environment illumination from only a pair of captured images of a translucent object. To address the ambiguity problem of inverse rendering, we use a physically-based renderer and a neural renderer for scene reconstruction and material editing. Because both renderers are differentiable, we can compute a reconstruction loss to assist parameter estimation. To strengthen the supervision of the proposed neural renderer, we also propose an augmented loss. In addition, we use a flash/no-flash image pair as input. To supervise the training, we constructed a large-scale synthetic dataset of translucent objects consisting of 117K scenes. Qualitative and quantitative results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed model.

* Accepted to CVPR2023 
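The core idea, that a differentiable renderer lets a reconstruction loss directly supervise parameter estimation by gradient descent, can be illustrated with a toy sketch. Everything here (the single-pixel `toy_renderer`, its Beer-Lambert attenuation term, and the parameter names) is a hypothetical stand-in, not the paper's model:

```python
import math

def toy_renderer(albedo, sigma_t, light):
    # Hypothetical stand-in for a differentiable renderer: one pixel shaded
    # with exponential (Beer-Lambert) attenuation through the material.
    return albedo * light * math.exp(-sigma_t)

def reconstruction_loss(params, target, light):
    albedo, sigma_t = params
    return (toy_renderer(albedo, sigma_t, light) - target) ** 2

# Ground-truth scene parameters and the observed pixel they produce.
true_albedo, true_sigma, light = 0.8, 0.5, 2.0
observed = toy_renderer(true_albedo, true_sigma, light)

# Gradient descent on the reconstruction loss, using analytic gradients,
# which is exactly what differentiability of the renderer buys us.
albedo, sigma = 0.3, 1.0
lr = 0.05
for _ in range(2000):
    r = toy_renderer(albedo, sigma, light)
    d = 2.0 * (r - observed)
    grad_albedo = d * light * math.exp(-sigma)
    grad_sigma = d * (-albedo * light * math.exp(-sigma))
    albedo -= lr * grad_albedo
    sigma -= lr * grad_sigma
```

Note that many (albedo, sigma) pairs reproduce the same observation, since only the product `albedo * exp(-sigma)` is constrained: a miniature version of the ambiguity problem that motivates combining two renderers and a flash/no-flash input pair.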

Efficient and Secure Federated Learning for Financial Applications

Mar 15, 2023
Tao Liu, Zhi Wang, Hui He, Wei Shi, Liangliang Lin, Wei Shi, Ran An, Chenhao Li

Conventional machine learning (ML) and deep learning approaches need to share customers' sensitive information with an external credit bureau to generate a prediction model, which opens the door to privacy leakage. This leakage risk poses an enormous challenge to cooperation among financial companies. Federated learning is a machine learning setting that can protect data privacy, but high communication cost is often the bottleneck of federated systems, especially for large neural networks. Limiting the number and size of communications is necessary for the practical training of large neural structures. Gradient sparsification, which uploads only significant gradients and accumulates insignificant gradients locally, has received increasing attention as a way to reduce communication cost. However, gradient sparsification cannot be used directly within a secure aggregation framework. This article proposes two sparsification methods to reduce the communication cost of federated learning. The first is a time-varying hierarchical sparsification method for model parameter updates, which maintains model accuracy even at high sparsity ratios and can significantly reduce the cost of a single communication. The second applies sparsification to the secure aggregation framework: we sparsify the encryption mask matrix to reduce communication cost while protecting privacy. Experiments show that under different non-IID settings, our method reduces the upload communication cost to about 2.9% to 18.9% of that of the conventional federated learning algorithm at a sparsity rate of 0.01.
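The building block the abstract starts from, top-k gradient sparsification with local accumulation of the skipped entries, can be sketched as follows (function and variable names are illustrative, not the paper's API):

```python
import numpy as np

def sparsify_with_residual(grad, residual, sparsity=0.01):
    """Top-k gradient sparsification with local error accumulation.

    Only the largest-magnitude entries are uploaded; everything else is
    kept in a local residual and folded back in before the next selection,
    so no gradient mass is permanently lost.
    """
    acc = grad + residual                        # fold in previously skipped mass
    k = max(1, int(sparsity * acc.size))
    idx = np.argpartition(np.abs(acc), -k)[-k:]  # indices of the top-k entries
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                       # what gets communicated
    new_residual = acc - sparse                  # what stays local
    return sparse, new_residual

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
residual = np.zeros_like(grad)
sparse, residual = sparsify_with_residual(grad, residual, sparsity=0.01)
```

At a sparsity rate of 0.01, only 1% of the entries (here 10 of 1000) are transmitted per round; the secure-aggregation variant described in the abstract additionally has to sparsify the encryption mask so that masked uploads stay compatible with this selection.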

Versatile Skill Control via Self-supervised Adversarial Imitation of Unlabeled Mixed Motions

Sep 16, 2022
Chenhao Li, Sebastian Blaes, Pavel Kolev, Marin Vlastelica, Jonas Frey, Georg Martius

Learning diverse skills is one of the main challenges in robotics. To this end, imitation learning approaches have achieved impressive results. However, these methods require explicitly labeled datasets or assume consistent skill execution to enable learning and active control of individual behaviors, which limits their applicability. In this work, we propose a cooperative adversarial method for obtaining a single versatile policy with a controllable skill set from unlabeled datasets containing diverse state transition patterns, by maximizing the discriminability of those patterns. Moreover, we show that by incorporating unsupervised skill discovery into the generative adversarial imitation learning framework, novel and useful skills emerge alongside successful task fulfillment. Finally, the obtained versatile policies are tested on an agile quadruped robot called Solo 8 and faithfully replicate the diverse skills encoded in the demonstrations.
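The unsupervised-skill-discovery ingredient, rewarding the policy when a classifier can tell which latent skill produced a state, can be sketched in a DIAYN-style form. The linear classifier `W` and all names here are hypothetical stand-ins for a learned network, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def skill_discovery_reward(state_feats, skill_id, W, n_skills):
    """Intrinsic reward for unsupervised skill discovery:
    reward = log q(z | s) - log p(z), with p(z) uniform over skills.

    States that the classifier confidently attributes to the active skill
    get positive reward, which pushes skills to be discriminable.
    """
    logits = state_feats @ W          # classifier scores, one per skill
    q = softmax(logits)
    return np.log(q[skill_id] + 1e-8) - np.log(1.0 / n_skills)

# Toy check: a classifier whose feature dimensions align with the skills.
n_skills = 4
W = 5.0 * np.eye(n_skills)
state = np.array([1.0, 0.0, 0.0, 0.0])
r_correct = skill_discovery_reward(state, 0, W, n_skills)  # matching skill
r_wrong = skill_discovery_reward(state, 1, W, n_skills)    # mismatched skill
```

In the cooperative adversarial setup the abstract describes, a reward of this kind is combined with the imitation (discriminator) reward so that the single policy both matches the demonstration distribution and keeps its skills separable and individually controllable.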

Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations

Jun 23, 2022
Chenhao Li, Marin Vlastelica, Sebastian Blaes, Jonas Frey, Felix Grimminger, Georg Martius

Learning agile skills is one of the main challenges in robotics. To this end, reinforcement learning approaches have achieved impressive results. However, these methods require explicit task information, either as a reward function or as an expert that can be queried in simulation to provide a target control output, which limits their applicability. In this work, we propose a generative adversarial method for inferring reward functions from partial and potentially physically incompatible demonstrations, enabling skill acquisition when reference or expert demonstrations are not easily accessible. Moreover, we show that by using a Wasserstein GAN formulation and taking transitions from demonstrations with rough and partial information as input, we can extract policies that are robust and capable of imitating the demonstrated behaviors. Finally, the obtained skills, such as a backflip, are tested on an agile quadruped robot called Solo 8 and faithfully replicate hand-held human demonstrations.
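The Wasserstein-GAN ingredient can be sketched as a critic trained to score demonstration transitions above policy transitions, whose score then serves as a learned reward for reinforcement learning. This is a minimal sketch of the general technique; `critic` is any callable mapping a batch of transitions to scalar scores, and none of the names below are the paper's API:

```python
import numpy as np

def wgan_critic_loss(critic, expert_batch, policy_batch):
    """Wasserstein critic objective (to be minimized): drive the critic's
    score up on expert (demonstration) transitions and down on policy
    transitions. Practical WGAN training also regularizes the critic,
    e.g. with a gradient penalty, omitted here for brevity."""
    return critic(policy_batch).mean() - critic(expert_batch).mean()

def style_reward(critic, transition):
    # Dense imitation reward for the RL agent: the critic's score for a
    # single policy transition (batched to shape (1, d), then unbatched).
    return critic(transition[None, :])[0]

# Toy check with a fixed linear critic over 2-D transition features.
w = np.array([1.0, -1.0])
critic = lambda batch: batch @ w
expert = np.array([[1.0, 0.0]])   # expert-like transition
policy = np.array([[0.0, 1.0]])   # policy transition, scored lower
loss = wgan_critic_loss(critic, expert, policy)
reward = style_reward(critic, np.array([1.0, 0.0]))
```

Because the critic only needs transitions, not a queryable expert or a hand-designed reward, this fits the setting the abstract describes: rough, partial, possibly physically incompatible demonstrations.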