Amin Rakhsha

Maximum Entropy Model Correction in Reinforcement Learning

Nov 29, 2023
Amin Rakhsha, Mete Kemertas, Mohammad Ghavamzadeh, Amir-massoud Farahmand

Operator Splitting Value Iteration

Nov 25, 2022
Amin Rakhsha, Andrew Wang, Mohammad Ghavamzadeh, Amir-massoud Farahmand

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments

Feb 16, 2021
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks

Nov 21, 2020
Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

Mar 28, 2020
Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla
