Robust Reinforcement Learning Decoupling Control Based on Integral Quadratic Constraints
Authors
Wang Teng
Abstract
To maintain stability during the reinforcement learning process, a novel robust reinforcement learning
decoupling control (RRLDC) scheme based on integral quadratic constraints (IQC) is presented in this paper. It
consists of a linear model that approximates the nonlinear plant, a state-feedback controller K that generates the
basic control law, and an adaptive critic unit that evaluates decoupling performance and tunes an actor unit to
compensate for the decoupling action, model uncertainty, and system nonlinearity. By characterizing the nonlinear
and time-varying behavior of the neural network, together with the model uncertainty, as IQCs, the stability of the
control loop is analyzed. As a result, the range of the adjustable parameters within which stability is
guaranteed is determined, the control system performance is improved through learning, and the convergence
of the algorithm is accelerated. The proposed RRLDC is applied to gas collector pressure control of coke ovens. The
simulation results show that the proposed control strategy not only achieves good performance, but also
avoids unstable behavior during the learning process. It is an effective multivariable decoupling control method for a
class of strongly coupled systems such as the gas collector pressure control of coke ovens.
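To make the control structure described above concrete, the sketch below (Python/NumPy) shows one possible arrangement of the pieces named in the abstract: a linear nominal model, a state-feedback gain K supplying the basic control law, an actor network adding a compensating action, and a critic evaluating performance and driving the actor update, with the actor weights projected into a bounded range that stands in for the IQC-certified stability region. All plant matrices, network sizes, learning rates, and the weight bound are illustrative assumptions, not values from the paper, and the update rules are simplified stand-ins for the paper's tuning laws.

```python
import numpy as np

# Illustrative 2x2 coupled nominal model x_{k+1} = A x_k + B u_k.
# Plant matrices, gains, and network sizes are assumed for illustration only.
A = np.array([[0.9, 0.2],
              [0.3, 0.8]])
B = np.array([[1.0, 0.1],
              [0.1, 1.0]])
K = np.array([[0.4, 0.1],
              [0.1, 0.3]])   # assumed state-feedback gain (basic control law)

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((4, 2))   # actor hidden-layer weights
W2 = 0.1 * rng.standard_normal((2, 4))   # actor output-layer weights
Wc = 0.1 * rng.standard_normal(2)        # critic weights on quadratic features

W_MAX = 1.0        # assumed actor-weight bound; in the paper this range would
                   # be the one certified by the IQC stability analysis
alpha_a, alpha_c, gamma = 0.01, 0.05, 0.9

def actor(x):
    """Compensating action produced by the actor network."""
    h = np.tanh(W1 @ x)
    return W2 @ h, h

def critic(x):
    """Scalar evaluation of decoupling performance (quadratic features)."""
    return Wc @ (x * x)

x = np.array([1.0, -0.5])
for k in range(200):
    u_a, h = actor(x)
    u = -K @ x + u_a                      # basic law plus learned compensation
    x_next = A @ x + B @ u                # nominal-model rollout (noise-free)

    cost = float(x_next @ x_next)         # stage cost: deviation from setpoint
    td = cost + gamma * critic(x_next) - critic(x)

    # Critic update: gradient of the TD error with respect to Wc.
    Wc -= alpha_c * td * (x * x)

    # Heuristic actor update driven by the critic signal (illustration only),
    # then projected back into the weight range assumed to keep the loop stable.
    grad_out = td * np.sign(u_a)
    W2 -= alpha_a * np.outer(grad_out, h)
    W1 -= alpha_a * np.outer((W2.T @ grad_out) * (1 - h * h), x)
    W1 = np.clip(W1, -W_MAX, W_MAX)
    W2 = np.clip(W2, -W_MAX, W_MAX)

    x = x_next
```

The projection step (np.clip) is the point of contact with the paper's idea: learning is free to improve the compensation, but the adjustable parameters are never allowed to leave the region in which stability has been established.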