An Improvement of the Pivoting Strategy in the Bunch and Kaufman Decomposition, Within Truncated Newton Methods

Fasano, G.
2022-01-01

Abstract

In this work we consider the solution of large-scale (possibly nonconvex) unconstrained optimization problems. We focus on Truncated Newton methods, which are among the most common methods for tackling such problems. In particular, we follow the approach detailed in Caliciotti et al. (Comput Optim Appl 77:627-651, 2020), where a modified version of the Bunch and Kaufman decomposition (Bunch and Kaufman, Math Comput 31:163-179, 1977) is proposed for solving the Newton equation. This decomposition is used within the SYMMBK routine, as proposed by Chandra (Conjugate gradient methods for partial differential equations, Ph.D. thesis, Yale University, New Haven, 1978; see also Conn et al., Trust-Region Methods, MPS-SIAM Series on Optimization, Philadelphia, PA, 2000; HSL: A collection of Fortran codes for large scale scientific computation, https://www.hsl.rl.ac.uk; Marcia, Appl Numer Math 58(4):449-458, 2008), for iteratively solving symmetric, possibly indefinite, linear systems. The proposal in Caliciotti et al. (Comput Optim Appl 77:627-651, 2020) made it possible to overcome a relevant drawback arising in nonconvex problems, namely that the computed search direction might not be gradient-related. Here we propose further extensions of this approach, aiming at improving the pivoting strategy of the Bunch and Kaufman decomposition and enhancing its flexibility.
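The abstract refers to the SYMMBK scheme, in which the Lanczos process reduces the Newton equation to a symmetric tridiagonal system that is then factored by a block LDL^T decomposition with 1x1 and 2x2 pivots. As an illustrative sketch only — not the authors' modified strategy — the following pure-Python routine solves such a tridiagonal system using the classical Bunch acceptance test with sigma = (sqrt(5)-1)/2; the function name and all identifiers are assumptions of this sketch.

```python
import math

def symmbk_tridiag_solve(alpha, beta, rhs):
    """Illustrative sketch (hypothetical helper, not the authors' code).

    Solve T y = rhs, where T is symmetric tridiagonal with diagonal
    `alpha` and off-diagonal `beta`, via a block LDL^T factorization
    with 1x1 and 2x2 pivots chosen by Bunch's test: a 1x1 pivot is
    accepted when sigma * omega * |a_k| >= b_k**2, with
    sigma = (sqrt(5) - 1) / 2 and omega = max |T_ij|; in the
    tridiagonal case no row interchanges are needed."""
    n = len(alpha)
    sigma = (math.sqrt(5.0) - 1.0) / 2.0
    omega = max(abs(x) for x in list(alpha) + list(beta)) if n > 1 else abs(alpha[0])
    a = list(alpha)        # working diagonal (updated Schur complements)
    z = list(rhs)          # forward-eliminated right-hand side (L z = rhs)
    steps = []             # pivot record for the D- and L^T-solves
    k = 0
    while k < n:
        b = beta[k] if k < n - 1 else 0.0
        if k == n - 1 or sigma * omega * abs(a[k]) >= b * b:
            # 1x1 pivot a[k]; eliminate the coupling to row k+1
            m = (b / a[k]) if b != 0.0 else 0.0
            if k < n - 1:
                a[k + 1] -= m * b
                z[k + 1] -= m * z[k]
            steps.append(('1x1', k, a[k], m))
            k += 1
        else:
            # 2x2 pivot P = [[a, b], [b, c]]; eliminate the coupling to row k+2
            c = a[k + 1]
            det = a[k] * c - b * b
            m1 = m2 = 0.0
            if k < n - 2:
                e = beta[k + 1]
                m1 = -b * e / det          # row k+2 of L: solves P (m1, m2)^T = (0, e)^T
                m2 = a[k] * e / det
                a[k + 2] -= m2 * e         # Schur complement: e**2 * a / det
                z[k + 2] -= m1 * z[k] + m2 * z[k + 1]
            steps.append(('2x2', k, (a[k], b, c, det), (m1, m2)))
            k += 2
    # Solve D w = z blockwise, then L^T y = w, in reverse pivot order.
    y = [0.0] * n
    for kind, k, piv, mul in reversed(steps):
        if kind == '1x1':
            y[k] = z[k] / piv - (mul * y[k + 1] if k < n - 1 else 0.0)
        else:
            aa, bb, cc, det = piv
            m1, m2 = mul
            tail = y[k + 2] if k < n - 2 else 0.0
            y[k] = (cc * z[k] - bb * z[k + 1]) / det - m1 * tail
            y[k + 1] = (aa * z[k + 1] - bb * z[k]) / det - m2 * tail
    return y
```

The point of the 2x2 pivot is visible in the indefinite case: where a plain 1x1 elimination would divide by a small or vanishing diagonal entry, Bunch's test switches to a well-conditioned 2x2 block, so the factorization proceeds stably on symmetric indefinite systems.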
2022
Optimization in Artificial Intelligence and Data Sciences
Files in this item:
FA-RO-AIRO2021_3_submitted.pdf (open access)
Type: Pre-print
License: Open access (no restrictions)
Size: 289.57 kB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5012203
Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: 0