
A Fast Feature Selection for Interpretable Modeling Based on Fuzzy Inference Systems

Nobile, Marco S.
2024-01-01

Abstract

Large datasets are often beneficial for building predictive models with machine learning approaches. However, not all variables in a dataset necessarily contain useful information: some may be useless, redundant, misleading, or even harmful to performance, both in terms of accuracy and of computational effort. Because of this, Feature Selection (FS) is one of the most delicate and important steps in machine learning. It is even more relevant for interpretable models based on Fuzzy Inference Systems (FIS), for two reasons: on the one hand, FIS are generally built on top of a data partitioning obtained by clustering, which can suffer from high dimensionality; on the other hand, the knowledge base of a FIS, to be genuinely understandable, should not contain rules involving too many variables. FS can be performed with multiple approaches, most notably filter and wrapper methods. The latter are often based on evolutionary algorithms, in which a population of candidate solutions (each representing a possible set of selected variables) evolves towards the optimal selection. Although wrapper methods can be effective, they are generally computationally expensive. In this work, we propose a completely different, and more computationally efficient, algorithm based on Random Forest (RF) models. Specifically, we exploit RFs to rank variables according to their importance; we then use that information to perform a statistical analysis and determine the minimal set of features necessary to build an accurate FIS. We show the effectiveness of our approach on two (semi)synthetic datasets built from real-world data, and we validate it by applying the FS method to a medical dataset.
2024 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)
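The abstract describes ranking variables by Random Forest importance and then selecting a minimal subset. As a rough illustration of that ranking step, here is a minimal sketch using scikit-learn; the synthetic dataset and the 90% cumulative-importance cutoff are illustrative assumptions, standing in for the paper's actual statistical analysis, which is not reproduced here.

```python
# Hedged sketch of RF-based feature ranking (not the paper's exact method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative dataset: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables by impurity-based importance, descending.
order = np.argsort(rf.feature_importances_)[::-1]
ranked_importances = rf.feature_importances_[order]

# Keep the smallest prefix of the ranking covering 90% of total importance
# (hypothetical cutoff; the paper applies a statistical analysis instead).
cumulative = np.cumsum(ranked_importances)
n_selected = int(np.searchsorted(cumulative, 0.90) + 1)
selected = order[:n_selected]
print(f"Selected {n_selected} of {X.shape[1]} features:", sorted(selected))
```

The selected subset would then be used to build the FIS, keeping rules restricted to a small number of variables.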

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5081861