### Robust Sub-Optimality of Linear-Saturated Control via Quadratic Zero-Sum Differential Games

#### Abstract

In this paper, we determine the approximation ratio of a linear-saturated control policy for a typical robust-stabilization problem. We consider a system whose state integrates the discrepancy between the unknown but bounded disturbance and the control. The control aims at keeping the state within a target set, whereas the disturbance aims at pushing the state outside the target set by opposing the control action. The literature often solves this kind of problem via a linear-saturated control policy. We show that this policy approximates the optimal control policy by reframing the problem in the context of quadratic zero-sum differential games. We prove that the considered approximation ratio is asymptotically bounded by 2, and that it is upper bounded by 2 in the case of a 1-dimensional system. In this last case, we also discuss how the approximation ratio may apparently change when the system's demand is subject to uncertainty. In conclusion, we compare the approximation ratio of the linear-saturated policy with that of a family of control policies which generalize the bang–bang one.
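To make the setting concrete, the following is a minimal sketch (not the paper's actual model or parameter values) of the 1-dimensional case: the state integrates the discrepancy between a bounded disturbance and the control, the disturbance acts adversarially by pushing the state away from the target (here the origin), and the control follows a linear-saturated policy. The gain `k` and the bounds `u_max`, `d_max` are illustrative assumptions.

```python
def saturate(v, u_max):
    """Clip the control to the admissible bound |u| <= u_max."""
    return max(-u_max, min(u_max, v))

def simulate(x0, k, u_max, d_max, T=10.0, dt=1e-3):
    """Forward-Euler integration of dx/dt = d - u, where the
    worst-case disturbance d = d_max * sign(x) pushes the state
    away from the origin and the linear-saturated control
    u = sat(k * x) opposes it. Returns the final state."""
    x = x0
    for _ in range(int(T / dt)):
        d = d_max if x >= 0 else -d_max  # adversarial disturbance
        u = saturate(k * x, u_max)       # linear-saturated policy
        x += dt * (d - u)
    return x
```

Under these illustrative parameters, whenever `u_max > d_max` the saturated control dominates the disturbance far from the origin, and the state settles near the residual equilibrium `x = d_max / k` where the linear term exactly cancels the disturbance.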
2020