Machine Learning and Feedback Control

Pratyush Kumar

Data science has become a part of a large number of engineering applications. This project aims to identify potential applications of Machine Learning and Artificial Intelligence in Feedback Control. We are investigating both model-based and model-free approaches that have been proposed in the literature.

Reinforcement Learning is a class of model-free approaches that aims to solve the optimal control problem without using a model of the system. For example, for linear systems it solves the Linear Quadratic Regulator (LQR) problem online without knowledge of the system matrices. We are analyzing the advantages and disadvantages of Reinforcement Learning and its applicability to process control applications.
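
As a concrete illustration, the following minimal sketch (Python/NumPy) implements Q-learning-based policy iteration for the LQR problem in the spirit of Bradtke, Ydstie, and Barto (1994). It is not the project's implementation: the plant matrices, cost weights, probing-noise level, and initial gain are illustrative placeholders. The quadratic Q-function of the current policy is estimated by least squares from simulated transitions, and the feedback gain is then improved from the estimated Q-function; the model matrices A and B appear only in the simulator, never in the learning updates.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete-time plant x+ = A x + B u; A and B are used only to
# simulate data and never enter the learning updates below.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
n, m = B.shape

def quad_features(x, u):
    # Features of the quadratic Q-function Q(x, u) = z' H z with z = [x; u].
    z = np.concatenate([x, u])
    return np.array([z[i] * z[j] for i in range(n + m) for j in range(i, n + m)])

def unpack_H(theta):
    # Rebuild the symmetric matrix H from the least-squares parameter vector.
    d = n + m
    H = np.zeros((d, d))
    k = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[k] if i == j else theta[k] / 2.0
            k += 1
    return H

def policy_evaluation(K, num_samples=200, noise=0.1):
    # Estimate H for the policy u = -K x from simulated closed-loop data.
    Phi, targets = [], []
    x = rng.standard_normal(n)
    for _ in range(num_samples):
        u = -K @ x + noise * rng.standard_normal(m)  # probing noise for excitation
        cost = x @ Q @ x + u @ R @ u
        x_next = A @ x + B @ u
        # Bellman equation: Q(x, u) - Q(x+, -K x+) equals the stage cost.
        Phi.append(quad_features(x, u) - quad_features(x_next, -K @ x_next))
        targets.append(cost)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    return unpack_H(theta)

K = np.array([[1.0, 2.0]])            # illustrative stabilizing initial gain
for _ in range(10):
    H = policy_evaluation(K)
    # Policy improvement: u = -K x with K = Huu^{-1} Hux minimizes z' H z over u.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])
print("learned feedback gain K:\n", K)

With a stabilizing initial gain and sufficiently exciting data, this iteration converges to the LQR gain, which is the property established by Bradtke et al.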

Explicit Model Predictive Control (MPC) was developed for Linear Time Invariant (LTI) systems with quadratic objective functions in order to reduce the online optimization burden. It reveals the piecewise affine nature of the MPC control law, but its storage requirements and online evaluation (region lookup) times grow rapidly with the state dimension. Interestingly, Neural Networks with Rectified Linear Unit (ReLU) activation functions also represent piecewise affine functions defined on polyhedral regions. In this work we are approximating the piecewise affine control law for linear systems using such networks.
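
To make the connection concrete, the short sketch below (Python/NumPy, with an illustrative gain and input bound standing in for an actual explicit MPC solution) shows that a one-hidden-layer ReLU network exactly reproduces a saturated linear feedback law, the simplest instance of the piecewise affine structure that explicit MPC produces.

import numpy as np

K = np.array([[1.0, 2.0]])   # illustrative state-feedback gain (1 input, 2 states)
umax = 1.0                   # illustrative input bound

def saturated_feedback(x):
    # Reference piecewise affine law: saturated linear state feedback.
    return np.clip(-K @ x, -umax, umax)

def relu(v):
    return np.maximum(v, 0.0)

def relu_controller(x):
    # Same law as a one-hidden-layer ReLU network:
    # clip(v, -umax, umax) = relu(v + umax) - relu(v - umax) - umax.
    v = -K @ x
    return relu(v + umax) - relu(v - umax) - umax

# The two controllers agree on randomly sampled states.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(1000, 2))
err = max(abs(saturated_feedback(x) - relu_controller(x)).max() for x in X)
print(f"max discrepancy over 1000 sampled states: {err:.2e}")   # ~0 up to roundoff

Approximating a full explicit MPC law proceeds in the same spirit: a deeper ReLU network is fit offline to state-input samples of the optimal control law and then evaluated online in place of the region lookup.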

This project is in collaboration with Johnson Controls, Inc.

References:

A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos.
The explicit linear quadratic regulator for constrained systems.
Automatica, 38(1):3-20, 2002.

S. J. Bradtke, B. E. Ydstie, and A. G. Barto.
Adaptive linear quadratic control using policy iteration.
In Proceedings of the American Control Conference, volume 3, pages 3475-3479. IEEE, 1994.