SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Xu Q, Zhang L, Ou D, Yu W. Transp. Res. Rec. 2023; 2677(9): 421-437.

Copyright

(Copyright © 2023, Transportation Research Board, National Research Council, National Academy of Sciences USA, Publisher SAGE Publishing)

DOI

10.1177/03611981231159118

PMID

unavailable

Abstract

Along with providing several benefits, the unprecedented growth of connected and automated vehicles raises concerns about damaging cyber attacks. Network-based intrusion detection systems (IDSs) using deep learning methods can effectively mitigate these threats by promptly detecting malicious behaviors. However, the centralized learning mode may cause data leakage. Federated learning has emerged as a distributed machine learning training paradigm that preserves data privacy by allowing clients to train and validate models locally on their own data and then send only the model parameters to a central server. First, we propose a new scheme named DPFL-F2IDS for an edge inter-vehicle network that transmits Basic Safety Messages; it consists of Differentially Private Federated Learning (DPFL) and a Framework for IDS (F2IDS). DPFL can defend against the membership inference attacks faced by standard federated learning, but making a tradeoff between utility metrics and privacy metrics remains difficult. Second, experiments with centralized learning methods were performed on the VeReMi Extension dataset. Third, the performance of federated learning with different numbers of vehicles and different optimizers is evaluated, as is DPFL with different noise values.
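The DPFL approach described above (clients send model parameters, the server averages them, and Gaussian noise calibrated by a noise multiplier provides differential privacy) can be sketched as follows. This is a minimal illustration of differentially private federated averaging in general, not the paper's actual DPFL-F2IDS implementation; the function names, clipping norm, and update vectors are all illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's update to bound its L2 norm (per-client sensitivity)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(global_weights, client_updates, clip_norm=1.0,
                    noise_multiplier=0.5, rng=None):
    """One DP-FedAvg round: clip each client update, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Noise scale grows with the noise multiplier: larger multipliers give
    # stronger privacy but noisier (lower-utility) global models -- the
    # utility/privacy tradeoff the abstract refers to.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                       size=avg.shape)
    return global_weights + avg + noise

# Example: three vehicles send updates for a 4-parameter model (made-up values).
w = np.zeros(4)
updates = [np.array([0.20, -0.10, 0.05, 0.00]),
           np.array([0.10, 0.00, 0.10, -0.05]),
           np.array([0.15, -0.05, 0.00, 0.05])]
w_new = dp_fedavg_round(w, updates, noise_multiplier=0.1)
```

With `noise_multiplier=0` this reduces to plain FedAvg on the clipped updates, which is why the abstract evaluates several noise values to locate the largest multiplier that does not degrade model quality.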

RESULTS showed that the F1-scores reached 0.9915 and 0.9700 with long short-term memory (LSTM)-based intrusion detection for binary classification and multi-classification, respectively, in the centralized learning mode. The utility results achieved by federated averaging (FedAvg) with the Adabound optimizer were closer to the centralized learning mode than those of the classical FedAvg algorithm. Optimal noise-multiplier values were also found that preserve privacy without degrading model quality.
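The F1-score reported above is the harmonic mean of precision and recall. A quick illustration of the computation, with made-up confusion counts rather than values from the VeReMi Extension experiments:

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR/(P+R) from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector: 90 attacks caught, 10 false alarms, 10 attacks missed.
print(round(f1_score(90, 10, 10), 4))  # → 0.9
```

An F1 of 0.9915, as reported for binary classification, means the detector misses almost no attacks while raising very few false alarms.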


Language: en
