Boundary Control Problems with Convex Cost and Dynamic Programming in Infinite Dimension Part II: Hamilton--Jacobi--Bellman Equation
FAGGIAN, Silvia
2005-01-01
Abstract
This is the second of two papers on boundary optimal control problems with linear state equation and convex cost, arising from the boundary control of PDEs, and on the associated Hamilton--Jacobi--Bellman equation. In the first paper we studied necessary and sufficient conditions of optimality (Pontryagin Maximum Principle). In this second paper we apply Dynamic Programming to show that the value function of the problem is a solution of an integral version of the HJB equation, and moreover that it is the pointwise limit of classical solutions of approximating equations.
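For orientation, the following is a minimal sketch of the abstract setting the abstract refers to, written in the standard notation for infinite-dimensional optimal control; the operators $A$, $B$, the convex costs $h$, $g$, $\varphi$, the spaces $H$, $U$, and the sign conventions are illustrative assumptions and are not taken from this record.

```latex
% Minimal sketch (illustrative assumptions, not the notation of the paper):
% linear state equation, convex cost, value function, and the HJB equation
% that the value function is shown to solve in an integral sense.
\begin{align*}
  & y'(s) = A\,y(s) + B\,u(s), \quad s \in (t,T], \qquad y(t) = x \in H,
      && \text{(linear state equation)}\\
  & J(t,x;u) = \int_t^T \big[\, h(u(s)) + g(y(s)) \,\big]\,ds + \varphi(y(T)),
      && \text{(convex cost)}\\
  & V(t,x) = \inf_{u \in L^2(t,T;U)} J(t,x;u),
      && \text{(value function)}\\
  & \partial_t v + \langle Ax, \nabla v\rangle + g(x) - h^{*}\!\big(-B^{*}\nabla v\big) = 0,
    \qquad v(T,\cdot) = \varphi,
      && \text{(HJB equation)}
\end{align*}
```

Here $h^{*}$ denotes the convex conjugate of the control cost. In the boundary control case the operator $B$ is unbounded, which is one reason the paper works with an integral version of the HJB equation rather than its classical form.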
Files in this record:

| File | Type | License | Size | Format |
|---|---|---|---|---|
| 03 DCDS rete.pdf (not available) | Post-print document | Closed access (personal) | 276.71 kB | Adobe PDF |
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.