Characterizing the Space of Adversarial Examples in Machine Learning

Date/Time

02/19/2018
11:00 am-12:00 pm

Location

CSE 252 (Informatics Institute)
432 NEWELL DR
GAINESVILLE, FL 32611

Details

Nicolas Papernot, The Pennsylvania State University

There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community’s understanding of the nature and extent of these vulnerabilities remains limited, though it is expanding. In this talk, I map out the threat model space of ML algorithms and systematically explore the vulnerabilities that result from the poor generalization of ML models when they are presented with inputs manipulated by adversaries. This characterization of the threat space prompts an investigation of defenses that address the lack of reliable confidence estimates for the predictions made on such inputs. In particular, we introduce a promising new approach to defensive measures tailored to the structure of deep learning. Through this research, we expose connections between the resilience of ML to adversaries, model interpretability, and training data privacy.
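
Background for attendees less familiar with adversarial examples: below is a minimal sketch of one widely cited way to manipulate inputs, the fast gradient sign method (FGSM). It is illustrative background only, not the speaker's method; the PyTorch classifier (model), the labeled input (x, y), and the perturbation budget epsilon are assumptions of the sketch.

    # Minimal FGSM sketch (illustrative; assumes a differentiable PyTorch classifier
    # that maps an input batch x to class logits, with integer labels y).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.1):
        """Return an adversarial copy of x produced by one signed-gradient step."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # loss the adversary wants to increase
        loss.backward()                          # gradient of the loss w.r.t. the input
        # Nudge each input feature in the direction that most increases the loss,
        # keeping the perturbation within an epsilon-sized box.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

The perturbed input typically looks unchanged to a human observer yet can change the model's prediction, which is the poor-generalization failure mode the talk examines.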

Hosted by

Kevin Butler, CISE