Bei Jiang

Gaussian Differential Privacy on Riemannian Manifolds

Nov 09, 2023
Yangdi Jiang, Xiaotian Chang, Yi Liu, Lei Ding, Linglong Kong, Bei Jiang

Class Interference of Deep Neural Networks

Oct 31, 2022
Dongcui Diao, Hengshuai Yao, Bei Jiang

Conformalized Fairness via Quantile Regression

Oct 05, 2022
Meichen Liu, Lei Ding, Dengdeng Yu, Wulong Liu, Linglong Kong, Bei Jiang

How Does Value Distribution in Distributional Reinforcement Learning Help Optimization?

Sep 29, 2022
Ke Sun, Bei Jiang, Linglong Kong

Sigmoidally Preconditioned Off-policy Learning: a new exploration method for reinforcement learning

May 20, 2022
Xing Chen, Dongcui Diao, Hechang Chen, Hengshuai Yao, Jielong Yang, Haiyin Piao, Zhixiao Sun, Bei Jiang, Yi Chang

Distributional Reinforcement Learning via Sinkhorn Iterations

Feb 16, 2022
Ke Sun, Yingnan Zhao, Yi Liu, Bei Jiang, Linglong Kong

Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving

Dec 09, 2021
Lei Ding, Dengdeng Yu, Jinhan Xie, Wenxing Guo, Shenggang Hu, Meichen Liu, Linglong Kong, Hongsheng Dai, Yanchun Bao, Bei Jiang

Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization

Oct 20, 2021
Ke Sun, Yafei Wang, Yi Liu, Yingnan Zhao, Bo Pan, Shangling Jui, Bei Jiang, Linglong Kong
