Trustworthy Machine Learning

Machine learning (ML) and artificial intelligence are being widely deployed in many aspects of our society. Traditional ML research focuses mainly on developing new methods to optimize accuracy and efficiency, while the security and privacy of ML have been largely ignored, even though they are essential in safety- and security-critical application domains such as self-driving cars, health care, and cybersecurity. We aim to build provably secure and privacy-preserving ML.

In ML systems, both users and model providers desire confidentiality/privacy: users desire privacy for their confidential training and testing data, while model providers desire confidentiality for their proprietary models, learning algorithms, and training data, which represent intellectual property. We are interested in protecting confidentiality/privacy for both users and model providers. For security, an attacker's goal is to manipulate an ML system so that it makes the predictions the attacker desires; the attacker can manipulate the training phase (e.g., via data poisoning) and/or the testing phase (e.g., via adversarial examples) to achieve this goal. We aim to build ML systems that are provably secure against such attacks.
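To make the test-time half of this threat model concrete, below is a minimal sketch of crafting an adversarial example against a toy linear classifier with the fast gradient sign method; the weights, input, and perturbation budget are illustrative assumptions, not taken from any publication listed below.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w @ x + b > 0.
# The weights, bias, and inputs below are illustrative assumptions.
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, epsilon):
    """Fast gradient sign method for a linear score: the gradient of
    the score with respect to x is exactly w, so subtracting
    epsilon * sign(w) is the worst-case perturbation within an
    L-infinity ball of radius epsilon."""
    return x - epsilon * np.sign(w)

x = np.array([1.0, 0.2, -0.4])   # clean test input, predicted class 1
x_adv = fgsm(x, epsilon=0.5)     # small per-feature perturbation

print(predict(x), predict(x_adv))  # 1 -> 0: the prediction flips
```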

Publications

1  Privacy/confidentiality/intellectual property of machine learning


2  Security of machine learning

2.1 Security at the training phase: poisoning attacks and defenses

Provably secure defenses against data poisoning attacks
Poisoning attacks on federated data analytics and their defenses
Poisoning attacks on graph-based methods and their defenses
Poisoning attacks on recommender systems and their defenses (a minimal poisoning sketch follows this list)
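To illustrate the training-time threat model, here is a minimal label-flipping poisoning sketch against a toy logistic regression trained from scratch with NumPy; the data, the number of flipped labels, and the target test point are illustrative assumptions and do not reproduce any specific attack from the publications above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D training set (all values here are illustrative assumptions):
# class 0 clusters near -1, class 1 clusters near +1.
X = np.concatenate([rng.normal(-1, 0.3, 20), rng.normal(1, 0.3, 20)])
y = np.concatenate([np.zeros(20), np.ones(20)])

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain gradient-descent logistic regression on 1-D inputs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(w * X + b)))  # predicted probability of class 1
        w -= lr * np.mean((p - y) * X)      # gradient of the log loss w.r.t. w
        b -= lr * np.mean(p - y)            # gradient of the log loss w.r.t. b
    return w, b

w, b = train_logreg(X, y)

# Label flipping: relabel the class-1 training points closest to the
# decision boundary as class 0, dragging the learned threshold upward.
class1 = np.where(y == 1)[0]
flip = class1[np.argsort(X[class1])[:8]]    # 8 class-1 points nearest 0
y_poisoned = y.copy()
y_poisoned[flip] = 0

w_p, b_p = train_logreg(X, y_poisoned)

x0 = 0.2  # borderline test input
print("clean model predicts:   ", int(w * x0 + b > 0))      # expected: 1
print("poisoned model predicts:", int(w_p * x0 + b_p > 0))  # expected: 0
```

By flipping only a small fraction of training labels, the attacker changes the model's prediction on a clean test input, which is exactly the behavior a provably secure defense must bound.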

2.2 Security at the testing phase (i.e., adversarial examples): attacks, defenses, and applications for privacy protection

Provably secure defenses against adversarial examples (a minimal sketch of one provable defense follows this list)
Applications of adversarial examples for privacy protection
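As one concrete instance of a provable defense, below is a minimal sketch of randomized smoothing (Cohen et al., 2019): classify many Gaussian-noised copies of an input and take the majority vote. The toy base classifier, noise level, and sample count are illustrative assumptions; the publications above may use different certified defenses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base classifier (an assumption for this sketch); in practice this
# would be a neural network trained with Gaussian noise augmentation.
w = np.array([0.8, -0.5, 0.3])

def base_classify(x):
    return int(w @ x > 0)

def smoothed_classify(x, sigma=0.25, n=1000):
    """Randomized smoothing: take the majority vote of the base
    classifier over many Gaussian-noised copies of x. The smoothed
    prediction is provably stable within an L2 radius that grows with
    sigma and with the margin of the vote."""
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    votes = np.array([base_classify(x + delta) for delta in noise])
    counts = np.bincount(votes, minlength=2)
    return int(np.argmax(counts)), counts

x = np.array([1.0, 0.2, -0.4])
label, counts = smoothed_classify(x)
print(label, counts)  # majority class and the per-class vote counts
```

The appeal of this family of defenses is that the robustness guarantee holds for any base classifier, which is what makes the resulting certificates provable rather than empirical.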