Secure Learning in Adversarial Environments

Talk
Bo Li
University of Michigan
Time: 03.08.2017 11:00 to 12:00
Location: AVW 4172

Advances in machine learning have led to the rapid and widespread deployment of software-based inference and decision making in applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions and do not account for active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or classification models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce classification errors through poisoning attacks. In addition, by undermining the integrity of learning systems, adversaries can also compromise the privacy of users' data. In this talk, I will describe my recent research addressing evasion attacks, poisoning attacks, and privacy problems for machine learning systems in adversarial environments. The key approach is to use game-theoretic analysis to model the interactions between an intelligent adversary and a machine learning system as a Stackelberg game, which allows us to design robust learning strategies that explicitly account for the adversary's optimal response. Human-subject experiments are conducted to validate the mathematical models. I will also introduce a real-world malware detection system deployed based on adversarial machine learning analysis.
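To make the Stackelberg framing concrete, here is a minimal sketch (not the speaker's exact formulation) of robust logistic regression as a leader-follower game: the learner commits to weights, the adversary best-responds by perturbing malicious instances within a budget to evade detection, and the learner trains against that best response. The function names, the L-infinity budget eps, and the toy data are all illustrative assumptions.

```python
# Minimal sketch of Stackelberg-style robust training (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversary_best_response(X, y, w, eps):
    """Follower: each malicious point (y=1) shifts every feature by up to
    eps in the direction that most lowers its decision score w.x; under an
    L-infinity budget the optimal move is a signed step against w."""
    X_adv = X.copy()
    mask = (y == 1)
    X_adv[mask] -= eps * np.sign(w)
    return X_adv

def learner_loss_grad(X, y, w, reg=1e-2):
    """Leader: gradient of mean logistic loss with L2 regularization."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y) + reg * w

# Toy data: benign points (y=0) around -1, malicious points (y=1) around +1.
n = 200
X = np.vstack([rng.normal(-1, 1, size=(n, 2)), rng.normal(+1, 1, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = np.zeros(2)
eps, lr = 0.3, 0.5
for _ in range(500):
    # Inner step: adversary's optimal response to the committed model.
    X_adv = adversary_best_response(X, y, w, eps)
    # Outer step: leader updates against that response (robust training).
    w -= lr * learner_loss_grad(X_adv, y, w)

print("robust weights:", w)
```

The point of the sketch is the nesting: the adversary's optimization sits inside the learner's training loop, so the committed model is evaluated at the adversary's optimal evasion rather than at the clean data.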