Machine learning (ML) and artificial intelligence are being widely deployed across many aspects of society. Traditional ML research focuses mainly on developing new methods that optimize accuracy and efficiency; the security and privacy of ML are largely overlooked, even though they are essential for safety- and security-critical application domains such as self-driving cars, health care, and cybersecurity. Our goal is to build provably secure and privacy-preserving ML.

In ML systems, both users and model providers need confidentiality/privacy: users want privacy for their confidential training and testing data, while model providers want confidentiality for their proprietary models, learning algorithms, and training data, which represent intellectual property. We are interested in protecting confidentiality/privacy for both users and model providers.

For security, an attacker's goal is to manipulate an ML system so that it makes predictions the attacker desires. An attacker can manipulate the training phase (e.g., data poisoning) and/or the testing phase (e.g., adversarial examples) to achieve this goal. We aim to build ML systems that are provably secure against such attacks.
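As a toy illustration of the training-phase threat model above, the sketch below shows a label-flipping data poisoning attack against a simple 1-D threshold classifier. Everything here (the classifier, the data, the attack) is a hypothetical minimal example for intuition; it is not taken from any of the papers listed below.

```python
# Toy label-flipping data poisoning attack (illustrative only).
# The classifier predicts class 1 if x >= threshold; "training" picks the
# threshold with the highest accuracy on the training set.

def train_threshold(data):
    """Pick the candidate threshold with the highest training accuracy."""
    candidates = sorted(x for x, _ in data)
    best_t, best_acc = candidates[0], 0.0
    for t in candidates:
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    """Fraction of points classified correctly by threshold t."""
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

# Clean data: class 0 clusters below 0.5, class 1 above.
clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0),
         (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]
test = clean  # toy setup: evaluate on the same points

t_clean = train_threshold(clean)
print(accuracy(t_clean, test))  # prints 1.0

# The attacker flips the labels of two training points near the boundary,
# dragging the learned threshold away from its clean position.
poisoned = [(x, 1 - y) if x in (0.4, 0.6) else (x, y) for x, y in clean]
t_poisoned = train_threshold(poisoned)
print(accuracy(t_poisoned, test))  # accuracy drops below 1.0
```

Even this tiny attack degrades test accuracy; the provably secure defenses cited below (e.g., via bagging or randomized smoothing) aim to certify that predictions cannot change under a bounded number of such poisoned points.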
Publications
1 Privacy/Confidentiality/Intellectual Property of machine learning
- Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. "On the Intrinsic Differential Privacy of Bagging". In International Joint Conference on Artificial Intelligence (IJCAI), 2021.
- Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. "Stealing Links from Graph Neural Networks". In USENIX Security Symposium, 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Data Poisoning Attacks to Local Differential Privacy Protocols". In USENIX Security Symposium, 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary". In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
- Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. "Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes". In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
- Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, and Yinzhi Cao. "Practical Blind Membership Inference Attack via Differential Comparisons". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Jinyuan Jia and Neil Zhenqiang Gong. "Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges". Adaptive Autonomous Secure Cyber Systems. Springer, Cham, 2020.
- Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples". In ACM Conference on Computer and Communications Security (CCS), 2019. Code and data are available [here].
- Jinyuan Jia and Neil Zhenqiang Gong. "Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge". In IEEE International Conference on Computer Communications (INFOCOM), 2019.
- Jinyuan Jia and Neil Zhenqiang Gong. "AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning". In USENIX Security Symposium, 2018. Code and data are available [here].
- Binghui Wang and Neil Zhenqiang Gong. "Stealing Hyperparameters in Machine Learning". In IEEE Symposium on Security and Privacy (IEEE S&P), 2018.
- Bin Liu, Deguang Kong, Lei Cen, Neil Zhenqiang Gong, Hongxia Jin, and Hui Xiong. "Personalized Mobile App Recommendation: Reconciling App Functionality and User Privacy Preference". In ACM International Conference on Web Search and Data Mining (WSDM), 2015.
2 Security of machine learning
2.1 Security at training phase: poisoning attacks and defenses
Provably secure defenses against data poisoning attacks
- Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
- Binghui Wang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "On Certifying Robustness against Backdoor Attacks via Randomized Smoothing". CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision, 2020.
Poisoning attacks to federated data analytics and their defenses
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Data Poisoning Attacks to Local Differential Privacy Protocols". In USENIX Security Symposium, 2021.
- Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, and Jia Liu. "Data Poisoning Attacks and Defenses to Crowdsourcing Systems". In The Web Conference (WWW), 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Provably Secure Federated Learning against Malicious Clients". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
- Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning". In USENIX Security Symposium, 2020. *Equal contribution
Poisoning attacks to graph-based methods and their defenses
- Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021.
- Zaixi Zhang*, Jinyuan Jia*, Binghui Wang, and Neil Zhenqiang Gong. "Backdoor Attacks to Graph Neural Networks". In ACM Symposium on Access Control Models and Technologies (SACMAT), 2021. *Equal contribution
- Jinyuan Jia*, Binghui Wang*, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing". In The Web Conference (WWW), 2020. *Equal contribution
- Binghui Wang and Neil Zhenqiang Gong. "Attacking Graph-based Classification via Manipulating the Graph Structure". In ACM Conference on Computer and Communications Security (CCS), 2019.
Poisoning attacks to recommender systems and their defenses
- Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, and Mingwei Xu. "Data Poisoning Attacks to Deep Learning Based Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Minghong Fang, Neil Zhenqiang Gong, and Jia Liu. "Influence Function based Data Poisoning Attacks to Top-N Recommender Systems". In The Web Conference (WWW), 2020.
- Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. "Poisoning Attacks to Graph-Based Recommender Systems". In Annual Computer Security Applications Conference (ACSAC), 2018.
- Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. "Fake Co-visitation Injection Attacks to Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2017.
2.2 Security at testing phase (i.e., adversarial examples): attacks, defenses, and applications for privacy protection
Provably secure defenses against adversarial examples
- Hongbin Liu*, Jinyuan Jia*, and Neil Zhenqiang Gong. "PointGuard: Provably Robust 3D Point Cloud Classification". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. *Equal contribution
- Jinyuan Jia, Xiaoyu Cao, Binghui Wang, and Neil Zhenqiang Gong. "Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing". In International Conference on Learning Representations (ICLR), 2020.
- Xiaoyu Cao and Neil Zhenqiang Gong. "Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification". In Annual Computer Security Applications Conference (ACSAC), 2017.
Applications of adversarial examples for privacy protection
- Jinyuan Jia and Neil Zhenqiang Gong. "Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges". Adaptive Autonomous Secure Cyber Systems. Springer, Cham, 2020.
- Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples". In ACM Conference on Computer and Communications Security (CCS), 2019. Code and data are available [here].
- Jinyuan Jia and Neil Zhenqiang Gong. "AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning". In USENIX Security Symposium, 2018. Code and data are available [here].