
Leveraging Periodicity for Tabular Deep Learning

Matteo Rizzo; Ebru Ayyurek; Andrea Albarelli; Andrea Gasparetto
2025-01-01

Abstract

Deep learning has achieved remarkable success in various domains; however, its application to tabular data remains challenging due to the complex nature of feature interactions and patterns. This paper introduces novel neural network architectures that leverage intrinsic periodicity in tabular data to enhance prediction accuracy for regression and classification tasks. We propose FourierNet, which employs a Fourier-based neural encoder to capture periodic feature patterns, and ChebyshevNet, utilizing a Chebyshev-based neural encoder to model non-periodic patterns. Furthermore, we combine these approaches in two architectures: Periodic-Non-Periodic Network (PNPNet) and AutoPNPNet. PNPNet detects periodic and non-periodic features a priori, feeding them into separate branches, while AutoPNPNet automatically selects features through a learned mechanism. The experimental results on a benchmark of 53 datasets demonstrate that our methods outperform the current state-of-the-art deep learning technique on 34 datasets and show interesting properties for explainability.
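As a rough illustration of the idea behind a Fourier-based feature encoder (a sketch only, not the paper's actual FourierNet implementation; the frequency schedule is an assumption), a scalar tabular feature can be mapped to sine/cosine components at several frequencies so that periodic structure becomes linearly accessible to downstream layers:

```python
import numpy as np

def fourier_encode(x, n_frequencies=4):
    """Encode each scalar feature as sin/cos components at several
    frequencies, exposing periodic patterns to downstream layers.
    Illustrative sketch; the doubling frequency schedule is an assumption."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)   # shape (n_samples, 1)
    freqs = 2.0 ** np.arange(n_frequencies)         # 1, 2, 4, 8, ...
    angles = 2.0 * np.pi * x * freqs                # (n_samples, n_frequencies)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Two inputs one period apart receive identical encodings,
# which is what lets the network exploit periodicity.
enc = fourier_encode([0.25, 1.25])
```

With integer frequencies, any two inputs differing by a whole period map to the same encoding, so a model built on top of this representation can generalize across periods instead of memorizing each cycle.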
Files in this product:
electronics-14-01165.pdf — Publisher's version, open access (no restrictions), 834.42 kB, Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5105076