Towards Explainable Machine Learning in Intrusion Detection Systems
The work described in this website has been conducted within the project NeCS. This project has received funding from the European Union’s Horizon 2020 (H2020) research and innovation programme under Grant Agreement No. 675320. This website and the content displayed on it do not represent the opinion of the European Union, and the European Union is not responsible for any use that might be made of its content.
Author (ESR):
Ly Vu Duc (Università degli Studi di Trento)
Authors:
Chau D.M. Pham
Duc-Ly Vu
Fabio Massacci
Tran Khanh Dang
Sandro Etalle
Davide Fauri
Poster
The lack of semantics or reasonable explanations for machine learning predictions is one of the main barriers to its adoption in the field of intrusion detection. Without explanations, a machine learning approach fails to gain users’ trust in its predictions, especially after they have wasted effort examining false positives. We are therefore motivated to study a novel approach to applying machine learning that not only detects intrusions efficiently but also provides explanations for its decisions.
Venue:
ESSoS: Engineering Secure Software and Systems