Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks

Dec 25, 2020
Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

* Accepted by AAAI 2021 

Certified Robustness of Nearest Neighbors against Data Poisoning Attacks

Dec 07, 2020
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

Nov 15, 2020
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

Oct 26, 2020
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

* Accepted by AsiaCCS'21 

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

Sep 04, 2020
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong

On the Intrinsic Differential Privacy of Bagging

Aug 22, 2020
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Backdoor Attacks to Graph Neural Networks

Jun 19, 2020
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Stealing Links from Graph Neural Networks

May 05, 2020
Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing

Feb 09, 2020
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong

* Accepted by WWW'20; this is the technical report version 

Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

Dec 20, 2019
Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong

* ICLR 2020; code is available at https://github.com/jjy1994/Certify_Topk 

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

Nov 26, 2019
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

* Submitted to the USENIX Security Symposium in February 2019; to appear in USENIX Security Symposium 2020 

IPGuard: Protecting the Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary

Oct 30, 2019
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

Sep 26, 2019
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

* To appear in CCS'19 

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges

Sep 19, 2019
Jinyuan Jia, Neil Zhenqiang Gong

* Book chapter. arXiv admin note: substantial text overlap with arXiv:1805.04810 

Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation

Dec 06, 2018
Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

* To appear in the 26th Annual Network and Distributed System Security Symposium (NDSS), Feb 2019 

AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning

May 13, 2018
Jinyuan Jia, Neil Zhenqiang Gong

* 27th USENIX Security Symposium; privacy protection using adversarial examples 