
HELP: An LSTM-based approach to hyperparameter exploration in neural network learning

Pelillo, M;
2021-01-01

Abstract

Hyperparameter selection is critical to the success of deep neural network training. Random search over hyperparameters may take a long time to converge to good results, because a deep neural network with a huge number of parameters must be trained for every selected hyperparameter setting, which is very time-consuming. In this work, we propose the Hyperparameter Exploration LSTM-Predictor (HELP), an improved random exploration method that combines probability-based exploration with LSTM-based prediction. HELP has a higher probability of finding a better hyperparameter setting in less time. It takes a series of hyperparameters over a time period as input and predicts the fitness values of these hyperparameters; exploration directions in the hyperparameter space that yield higher predicted fitness values are then assigned higher probabilities of being explored in the next round. Experimental results on training both a Generative Adversarial Network and a Convolutional Neural Network show that HELP finds hyperparameters yielding better results and converges faster. (c) 2021 Elsevier B.V. All rights reserved.
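The abstract outlines HELP's exploration loop: a predictor (an LSTM in the paper) estimates fitness values for candidate hyperparameter directions, and directions with higher predicted fitness are sampled with higher probability in the next round. A minimal sketch of that probability-based sampling step, assuming a hypothetical stand-in predictor in place of the paper's LSTM (all function names and parameters here are illustrative, not the authors' implementation):

```python
import math
import random

def softmax(scores):
    """Turn predicted fitness values into a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def choose_direction(directions, predicted_fitness, rng=random):
    """Sample one exploration direction; higher predicted fitness
    means a higher probability of being explored next."""
    probs = softmax(predicted_fitness)
    return rng.choices(directions, weights=probs, k=1)[0]

# Illustrative loop: a stand-in predictor scores candidate perturbations of
# the current hyperparameter (the paper would feed the recent hyperparameter
# history to an LSTM instead of using this toy function).
def toy_predictor(history, candidate):
    # Pretend fitness peaks near a learning rate of 0.01 (hypothetical).
    return -abs(candidate - 0.01)

rng = random.Random(0)
lr, history = 0.1, []
for _ in range(20):
    candidates = [lr * 0.5, lr, lr * 2.0]  # exploration directions
    scores = [toy_predictor(history, c) for c in candidates]
    lr = choose_direction(candidates, scores, rng=rng)
    history.append(lr)
```

The sampling step, rather than greedy selection of the best predicted direction, is what keeps the method a (guided) random search: poorly-predicted directions can still be explored, just less often.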
2021
442
Files in this item:

Neurocomputing 2021.pdf (not available)

Type: Publisher's version
License: Closed access (personal)
Size: 1.93 MB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5004664
Citations
  • PMC: ND
  • Scopus: 38
  • ISI (Web of Science): 21