Automated Learning and Verification of Embedded Security Architectures

Matteo Busi; Riccardo Focardi; Flaminia Luccio
2023-01-01

Abstract

Techniques to verify or invalidate the security of computer systems have come a long way in recent years. While sophisticated tools are available to specify and formally verify the behavior of a system, attack techniques have evolved to the point of calling into question the possibility of obtaining adequate levels of security, especially in critical applications. In a recent paper, Bognar et al. have clearly highlighted this inconsistency between the two worlds: on one side, formal verification allows writing irrefutable proofs of the security of a system; on the other, concrete attacks make these proofs waver, exposing a gap between models and implementations that is very hard to bridge. In this extended abstract, we outline a new general methodology based on robust non-interference preservation that reduces this gap by bringing together some peculiarities of the two approaches. Our methodology uses automata learning to extract behavioral models of real systems, and model checking to analyze them and identify attacks or anomalies.
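To give a rough flavor of the learn-then-verify recipe described above, here is a minimal, self-contained Python sketch. It is not the authors' tool-chain: the names (Device, learn_model, check_noninterference), the input alphabet, and the bounded-exploration "learner" (a naive stand-in for a proper automata-learning algorithm such as L*) are all hypothetical. The sketch extracts a behavioral model of a black-box device by querying it on bounded input sequences, then checks a simple non-interference property by comparing the models learned for two secrets and reporting a counterexample trace when outputs depend on the secret.

```python
# Toy illustration only: learn a finite-state view of a black-box "device"
# by bounded exploration, then check a simple non-interference property on
# the learned models. All names here are hypothetical, not the paper's API.

from itertools import product

INPUTS = ["req", "auth", "read"]   # hypothetical public input alphabet


class Device:
    """A tiny stateful black box whose output may leak its secret."""

    def __init__(self, secret):
        self.secret = secret
        self.reset()

    def reset(self):
        self.authed = False

    def step(self, action):
        if action == "auth":
            self.authed = True
            return "ok"
        if action == "read":
            # Deliberate flaw: the output depends on the secret even
            # without authentication.
            return self.secret if self.authed or self.secret == "hi" else "denied"
        return "idle"


def learn_model(device, depth=3):
    """Naive stand-in for automata learning: record the output trace for
    every input sequence up to the given depth."""
    model = {}
    for n in range(1, depth + 1):
        for seq in product(INPUTS, repeat=n):
            device.reset()
            model[seq] = tuple(device.step(a) for a in seq)
    return model


def check_noninterference(model_a, model_b):
    """Model-checking step: the same public inputs must produce the same
    outputs regardless of the secret; otherwise return a witness trace."""
    for seq in model_a:
        if model_a[seq] != model_b[seq]:
            return seq, model_a[seq], model_b[seq]
    return None


if __name__ == "__main__":
    m_hi = learn_model(Device("hi"))
    m_lo = learn_model(Device("lo"))
    print("interference witness:", check_noninterference(m_hi, m_lo))
```

Run as-is, the sketch reports the one-step trace ("read",) as a witness, since the unauthenticated read answers differently for the two secrets; this is the kind of model-level counterexample that the methodology would flag as a candidate attack or anomaly on the real system.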
2023
Workshop on Principles of Secure Compilation
File in this record:
  • bfl_prisc23.pdf — Pre-print, closed access (personal license), 481.85 kB, Adobe PDF (not available for download)
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5021701