Guest Lecturer Nicholas Papernot Presents Characterizing the Space of Adversarial Examples in Machine Learning

Date/Time

02/19/2018
11:00 am-12:00 pm

Location

UF Informatics Institute
432 Newell Drive, CISE Bldg. Room E252
Gainesville, FL 32611-5585

Details

There is growing recognition that machine learning
(ML) exposes new security and privacy vulnerabilities
in software systems, yet the technical community’s
understanding of the nature and extent of these
vulnerabilities remains limited, though it continues to grow.

In this talk, I map out the threat model space of
ML algorithms and systematically examine the
vulnerabilities that result from the poor generalization
of ML models when they are presented with inputs
manipulated by adversaries. This characterization
of the threat space prompts an investigation of
defenses that address the lack of reliable confidence
estimates in model predictions.
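
To make this kind of vulnerability concrete, the sketch below (an
illustration, not an example drawn from the talk itself) applies the
well-known fast gradient sign method to a toy logistic-regression
classifier; the model, data, and perturbation budget are all assumptions
made for the sake of the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary classification data: two Gaussian blobs in 20 dimensions.
    X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)),
                   rng.normal(1.0, 1.0, (100, 20))])
    y = np.concatenate([np.zeros(100), np.ones(100)])

    # Fit a logistic-regression model with plain gradient descent.
    w, b = np.zeros(20), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        b -= 0.1 * np.mean(p - y)

    # Take a correctly classified input (true label 0) and perturb it in the
    # direction that increases the loss: x_adv = x + eps * sign(dL/dx).
    x = X[0]
    p_clean = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p_clean - 0.0) * w      # gradient of cross-entropy loss w.r.t. x
    x_adv = x + 1.5 * np.sign(grad_x)

    p_adv = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    print(f"clean score:       {p_clean:.3f}")
    print(f"adversarial score: {p_adv:.3f}")
    # A small, structured perturbation typically pushes the output across the
    # decision boundary even when the clean input was classified confidently.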

In particular, we introduce a promising new
defensive approach tailored to the structure of
deep learning models. Through this research,
we expose connections between the resilience of ML
to adversaries, model interpretability, and training
data privacy.
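
One generic way to derive such a confidence signal, shown below as a
rough sketch rather than the specific defense introduced in the talk, is
to measure how consistently a test input's nearest neighbours in some
feature space agree on a label and to abstain when agreement is low; the
feature space, the value of k, and the threshold are assumptions chosen
only for illustration.

    import numpy as np

    def knn_confidence(train_feats, train_labels, test_feat, k=5):
        """Majority label among the k nearest training points, plus the
        fraction of those neighbours that agree (a crude confidence score)."""
        dists = np.linalg.norm(train_feats - test_feat, axis=1)
        neighbours = train_labels[np.argsort(dists)[:k]]
        values, counts = np.unique(neighbours, return_counts=True)
        return values[np.argmax(counts)], counts.max() / k

    rng = np.random.default_rng(1)
    feats = np.vstack([rng.normal(-1.0, 1.0, (50, 8)),
                       rng.normal(1.0, 1.0, (50, 8))])
    labels = np.array([0] * 50 + [1] * 50)

    # An input lying between the classes (as an adversarially manipulated
    # input often does) tends to receive low neighbour agreement and can be
    # rejected instead of classified.
    label, conf = knn_confidence(feats, labels, np.zeros(8), k=5)
    print(f"predicted label {label} with confidence {conf:.2f}")
    if conf < 0.8:
        print("low confidence: abstain rather than return the prediction")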
