Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Feb 26, 2020
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Diversity-Promoting Deep Reinforcement Learning for Interactive Recommendation

Mar 19, 2019
Yong Liu, Yinan Zhang, Qiong Wu, Chunyan Miao, Lizhen Cui, Binqiang Zhao, Yin Zhao, Lu Guan

Ethically Aligned Opportunistic Scheduling for Productive Laziness

Jan 02, 2019
Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel, Cyril Leung

* Proceedings of the 2nd AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES-19), 2019 
