Cyber-Physical Systems Security through Robust Adaptive Possibilistic Algorithms: A Cross-Layered Framework

Principal Investigator: Arturo Bretas

Co-PIs: Alina Zare, Janice McNair

Sponsor: NSF

Start Date: August 15, 2018

End Date: July 31, 2021

Amount: $360,000


The goal of this project is to develop a cross-layer cyber-physical security framework for the smart grid. The proposed research will improve the quality of real-time monitoring of the smart grid through anomaly analysis, leading to more reliable data for control, better situational awareness for first responders, and other improved smart-grid applications. It will also improve the resilience of smart grids to cyber-attacks on meters, parameters, topology, and the communication infrastructure, as well as to large physical disturbances, by developing new techniques for distributed control of large, complex systems that guarantee secure and reliable performance. The project will foster education through curriculum enhancements that build bridges among communications, machine learning, power, and control systems. The PIs plan to teach short courses on smart grid security at conferences and to engage under-represented minority students in the project.

First, the project aims to develop a distributed nonlinear controller for transient stability enhancement. The new control layer will actuate on distributed energy storage systems, be robust to modeling uncertainties, and compensate for input time delays while remaining independent of operating conditions. Furthermore, the robust controller will not require exact knowledge of the system dynamics. Second, bad data analytics based on the innovation approach and on cross-layer information provided by a distributed software-defined network will be developed. The bad data analytics will account for the inherent interdependencies of the physical processes while providing a countermeasure. Third, an adaptive, distributed, robust machine learning approach will be developed. The overwhelming majority of supervised machine learning methods require large amounts of carefully labeled training data representative of the distribution to be seen at test time. In security applications, however, novel threats and malicious attacks are continuously being developed and attempted. Thus, approaches that rely solely on prior training data are unlikely to be robust to behaviors never seen before, as would be the case in a rapidly changing threat environment. The novel distributed machine intelligence method will therefore focus on rapidly adapting to identify and distinguish novel threats given even a single example of an anomalous novel threat.
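As a concrete illustration of the bad-data-analytics thrust: residual-based detection in power system state estimation is commonly posed as a chi-square test on the weighted residual of a weighted least-squares (WLS) estimate. The sketch below assumes a simple linear (DC) measurement model z = Hx + e with illustrative matrices; it is a minimal textbook-style example, not the project's actual implementation.

```python
import numpy as np
from scipy.stats import chi2

def wls_estimate(H, z, R):
    """Weighted least-squares state estimate for z = H x + e,
    where R is the measurement error covariance."""
    W = np.linalg.inv(R)                      # measurement weights
    G = H.T @ W @ H                           # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

def chi_square_test(H, z, R, alpha=0.05):
    """Flag bad data when the weighted residual norm J exceeds the
    chi-square threshold with (m - n) degrees of freedom."""
    m, n = H.shape
    x_hat = wls_estimate(H, z, R)
    r = z - H @ x_hat                         # measurement residual
    J = float(r @ np.linalg.inv(R) @ r)       # weighted residual norm
    threshold = chi2.ppf(1 - alpha, df=m - n)
    return J > threshold, J, threshold

# Toy 4-measurement, 2-state example (illustrative numbers only)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
R = 0.01 * np.eye(4)                          # measurement covariance
x_true = np.array([1.0, 0.5])
z = H @ x_true                                # noise-free measurements
z_bad = z.copy()
z_bad[2] += 1.0                               # inject a gross error
```

On the clean measurement vector the residual is zero and no alarm is raised; the injected gross error drives the weighted residual norm well above the chi-square threshold, flagging the corrupted measurement set.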
