Project Category: Infrastructure and the Built Environment

Learning in Mean Field Games

The objective of the proposed research is to develop fundamental mathematics for general state space Markov processes and controlled interacting particle systems. The mathematical goal is an existence, uniqueness, and regularity theory for a Poisson equation, including clarification of the underlying assumptions, regularity estimates, and the relationship to Lyapunov exponents. Several representations of the gradient of the solution of the Poisson equation are discussed: one based on the theory of elliptic PDEs together with compact embedding arguments for Sobolev spaces, a Lyapunov-based construction, a representation in terms of the generalized resolvent, and a construction in which the semigroup is approximated by a diffusion map. These representations of the gradient are used to obtain new algorithms for both reinforcement learning and nonlinear filtering, including a kernel-based algorithm built on the diffusion map approximation and a stochastic approximation algorithm rooted in ideas from approximate value iteration. A goal of the proposed research is to explore connections between these approximations and, more broadly, between the underlying mathematical concepts.
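For concreteness, one standard form of the Poisson equation referred to above, for a Markov process with generator and invariant probability measure, is sketched below; the notation is illustrative, chosen for this summary rather than taken from the proposal.

% Poisson equation for a Markov process with generator \mathcal{A},
% invariant probability measure \pi, and a given function f (illustrative notation):
\[
    \mathcal{A} h \;=\; -\tilde{f},
    \qquad \tilde{f} \;:=\; f - \pi(f).
\]
% One representation of the solution is through the resolvent
% R_\alpha = (\alpha I - \mathcal{A})^{-1} = \int_0^\infty e^{-\alpha t} P^t \, dt:
\[
    h \;=\; \lim_{\alpha \downarrow 0} R_\alpha \tilde{f},
    \qquad
    h(x) \;=\; \int_0^\infty \mathsf{E}_x\!\bigl[\tilde{f}(X_t)\bigr]\, dt,
\]
% where the second expression holds under suitable ergodicity assumptions.
% The gradient \nabla h is the object approximated by the kernel-based and
% stochastic approximation algorithms mentioned above.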

Reinforcement Learning and Kullback-Leibler Stochastic Optimal Control for Complex Networks

Natural and man-made networked systems are all around us. The power grid and the Internet are two examples of apparently complex interconnected systems, in which millions of “agents” are eager to extract value in the form of energy or bandwidth. While these systems are complex when measured in graph-theoretic terms, the behavior of communication and energy systems appears simple and highly predictable to end users (in most of the world). This success is due in part to distributed control loops that manage system-wide supply-demand balance; examples include TCP/IP in the Internet and automatic generation control (AGC) in most electric power grids. While distributed control protocols are highly developed and widely accepted in communication applications, this is less true in other networked systems such as electric power and natural gas distribution.

This project aims to advance control theory for complex interconnected systems. The application focus is on power systems, but the control techniques are general and are likely to have far broader impact.

Distributed Control for Demand Dispatch: The Creation of Virtual Energy Storage from Flexible Loads

The goal of this project is to create virtual energy storage resources via demand dispatch, to be used for grid-level regulation, ramping, peak smoothing, and even recovery from contingencies such as generation faults, while ensuring that quality of service (QoS) to consumers obeys strict constraints. Demand dispatch can be realized only by devising distributed control algorithms that meet multiple, potentially conflicting objectives: the grid needs high-quality resources for regulation, while the consumer expects that the water supply is not interrupted, the fish in the refrigerator stays fresh, and the climate within a building remains within desired bounds. The project aims to create a science for demand dispatch based on three essential ingredients.
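The flavor of distributed load-level control contemplated here can be conveyed with a toy simulation. The sketch below is a hypothetical illustration written for this summary, not the project's design: the load model, the broadcast signal zeta, and all parameter values (such as kappa) are assumptions. It shows a population of on/off loads whose nominal switching probabilities are nudged by a common grid signal so that aggregate power roughly tracks a regulation reference, while each load otherwise follows its own local logic.

# Hypothetical demand-dispatch illustration (assumed model, not the project's algorithm):
# many on/off loads perturb their switching probabilities using a broadcast signal.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000                 # number of flexible loads (e.g., water heaters)
P_ON = 1.0                 # power drawn by one load when on (kW)
p_on, p_off = 0.05, 0.05   # nominal per-step switching probabilities
kappa = 0.5                # assumed sensitivity to the broadcast signal

state = rng.random(N) < 0.5          # True = load is on
nominal = P_ON * N * 0.5             # nominal aggregate power (kW)

T = 200
# Regulation reference: desired deviation of aggregate power from nominal.
reference = 0.1 * nominal * np.sin(2 * np.pi * np.arange(T) / 100)

deviation = np.zeros(T)
for t in range(T):
    power = P_ON * state.sum()
    deviation[t] = power - nominal

    # Broadcast signal: normalized tracking error, computed centrally and sent to all loads.
    zeta = (reference[t] - deviation[t]) / nominal

    # Each load biases its own switching probabilities: positive zeta (more power needed)
    # makes turning on more likely and turning off less likely.
    p_turn_on = np.clip(p_on + kappa * zeta, 0.0, 1.0)
    p_turn_off = np.clip(p_off - kappa * zeta, 0.0, 1.0)

    u = rng.random(N)
    turn_on = (~state) & (u < p_turn_on)
    turn_off = state & (u < p_turn_off)
    state = (state | turn_on) & ~turn_off

print("mean absolute tracking error (kW):",
      np.abs(deviation - reference).mean())

In this caricature the only communication is a scalar broadcast, and each load's decision remains local and randomized; the research questions in the project concern how to design such local rules so the aggregate is a reliable grid resource without violating consumer QoS.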