This post is part of a series on optimal control theory. We take a detailed look at how classical LQR control is derived, and a simple implementation is provided for clarity.
LQR is an extremely popular optimal control framework. This post closely follows (Duriez, Brunton, and Noack 2017).
Let’s consider the linear system
$$
\begin{aligned}
\dot{x} &= Ax + Bu \\
y &= Cx + Du
\end{aligned}
\tag{1}
$$
If the system in (1) is controllable, then a proportional controller can be designed as
$$u = -K_r x$$
Hence the closed-loop system becomes
$$\dot{x} = (A - BK_r)x$$
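To make the setup concrete, here is a minimal NumPy sketch that checks the controllability condition via the Kalman rank test. The double-integrator matrices $A$ and $B$ are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative double integrator: state x = [position, velocity], force input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def is_controllable(A, B):
    """Kalman rank condition: rank([B, AB, ..., A^(n-1) B]) == n."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(ctrb) == n

print(is_controllable(A, B))  # True: the double integrator is controllable
```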
We can construct a quadratic cost $J$ that balances regulation of the state $x$ against the cost of the control input $u$,
$$J(t) = \int_0^t \left[ x^T(\tau) Q x(\tau) + u^T(\tau) R u(\tau) \right] d\tau$$
where $Q \succeq 0$ weights state deviations and $R \succ 0$ penalizes control effort.
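Before computing the optimal gain, it can help to see how $J$ accumulates along a trajectory. The sketch below closes the loop with a hand-picked (stabilizing but not optimal) gain and integrates the running cost alongside the state; the matrices are the same illustrative double integrator as above.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)               # state penalty
R = np.array([[1.0]])       # control penalty
K = np.array([[1.0, 1.0]])  # hand-picked stabilizing gain, u = -Kx

def dynamics(t, z):
    # Augmented state z = [x; J]: integrate the closed loop and the running cost.
    x = z[:2]
    u = -K @ x
    dx = A @ x + B @ u
    dJ = x @ Q @ x + u @ R @ u
    return np.concatenate([dx, [dJ]])

# Start at x0 = [1, 0] with zero accumulated cost.
sol = solve_ivp(dynamics, (0.0, 20.0), [1.0, 0.0, 0.0], rtol=1e-8)
print("J ~", sol.y[2, -1])
```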
By solving the Algebraic Riccati Equation (ARE), we obtain the optimal gain
$$K_r = R^{-1} B^T P$$
where $P$ solves the ARE
$$A^T P + PA - PBR^{-1}B^T P + Q = 0$$
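SciPy ships a direct solver for the continuous-time ARE, so in Python the optimal gain takes only a couple of lines; the system and weights below are again the illustrative double integrator from the earlier sketches.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 for P.
P = solve_continuous_are(A, B, Q, R)

# Optimal gain K_r = R^{-1} B^T P.
Kr = np.linalg.solve(R, B.T @ P)
print("Kr =", Kr)

# The closed loop A - B Kr should be stable (eigenvalues in the left half-plane).
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ Kr))
```

For this example the gain works out analytically to $K_r = [1, \sqrt{3}]$, and both closed-loop eigenvalues have negative real part, confirming the regulator stabilizes the system.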
Duriez, Thomas, Steven L. Brunton, and Bernd R. Noack. 2017. *Machine Learning Control – Taming Nonlinear Dynamics and Turbulence*. Springer.