
Learning stochastic filtering

Auconi A.;
2022-01-01

Abstract

We quantify the performance of approximations to stochastic filtering by the Kullback-Leibler divergence to the optimal Bayesian filter. Using a two-state Markov process that drives a Brownian measurement process as a prototypical test case, we compare two stochastic filtering approximations: a static low-pass filter as baseline, and machine learning of Volterra expansions using nonlinear Vector Auto-Regression (nVAR). We highlight the crucial role of the chosen performance metric, and present two solutions to the specific challenge of predicting a likelihood bounded between 0 and 1.
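As a concrete illustration of the performance metric described in the abstract, the sketch below computes the Kullback-Leibler divergence between an approximate filter's posterior and the optimal Bayesian filter's posterior for a two-state process, where each posterior is a single probability bounded between 0 and 1 (i.e. a Bernoulli distribution). This is a minimal sketch under that assumption, not the paper's code; the posterior trajectories shown are illustrative values, not data from the paper.

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence D(p || q) between Bernoulli(p) and Bernoulli(q),
    i.e. between two two-state posteriors bounded in (0, 1).
    Clipping by eps guards against log(0) at the boundary."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Hypothetical posterior trajectories (illustrative values only):
# the optimal Bayesian filter vs. a static low-pass approximation.
p_optimal = np.array([0.9, 0.7, 0.2, 0.1, 0.6])
p_lowpass = np.array([0.8, 0.6, 0.3, 0.2, 0.5])

# Performance metric: time-averaged KL divergence to the optimal filter.
# Lower is better; zero means the approximation matches the optimum.
score = kl_bernoulli(p_optimal, p_lowpass).mean()
print(float(score))
```

The same scoring function applies to any approximation, so a low-pass baseline and a learned nVAR predictor can be compared on equal footing.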

Documents in ARCA are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5072704
Citations
  • Scopus: 0