SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Seyedi S, Jiang Z, Levey A, Clifford GD. Biomed. Eng. Online 2022; 21(1): e67.

Copyright

(Copyright © 2022, Holtzbrinck Springer Nature Publishing Group - BMC)

DOI

10.1186/s12938-022-01035-1

PMID

36100851

Abstract

BACKGROUND: The expanding use of complex machine learning methods such as deep learning has led to an explosion in human activity recognition, particularly as applied to health. However, complex models that handle private and sometimes legally protected data raise concerns about the potential leakage of identifiable information. In this work, we focus on the case of a deep network model trained on images of individual faces.

MATERIALS AND METHODS: A previously published deep learning model, trained to estimate gaze from full-face image sequences, was stress-tested for personal information leakage using a white-box membership inference attack. Full-face video recordings from 493 individuals undergoing an eye-tracking-based evaluation of neurological function were used. Outputs, gradients, intermediate layer outputs, loss, and labels were used as inputs to a deep network with an added support vector machine emission layer to recognize membership in the training data.
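
A minimal sketch of such a white-box attack in PyTorch, under stated assumptions: the model, layer handle, loss function, and feature choices below are illustrative, not the authors' implementation, and the deep attack network from the abstract is collapsed into a single scikit-learn SVM stage for brevity.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.svm import SVC


def white_box_features(model, hidden_layer, x, y):
    """Collect per-sample signals visible to a white-box attacker: output
    probabilities, loss, overall gradient norm, and one intermediate activation."""
    captured = {}
    hook = hidden_layer.register_forward_hook(
        lambda mod, inp, out: captured.update(h=out.detach().flatten())
    )
    model.zero_grad()
    logits = model(x.unsqueeze(0))                  # one example at a time
    loss = F.cross_entropy(logits, y.unsqueeze(0))  # placeholder for the model's actual loss
    loss.backward()                                 # gradients are accessible in the white-box setting
    hook.remove()
    grad_norm = torch.cat(
        [p.grad.flatten() for p in model.parameters() if p.grad is not None]
    ).norm()
    return np.concatenate([
        logits.softmax(dim=1).detach().flatten().numpy(),
        [loss.item(), grad_norm.item()],
        captured["h"].numpy(),
    ])


# Feature vectors from known members (training samples) and non-members are
# then used to fit a membership classifier with an SVM final stage
# (hypothetical variable names):
# feats = np.stack([white_box_features(model, layer, x, y) for x, y in samples])
# attack = SVC(kernel="rbf", probability=True).fit(feats, member_labels)
```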

RESULTS: The inference attack method and associated mathematical analysis indicate that there is a low likelihood of unintended memorization of facial features in the deep learning model.
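
A minimal sketch, with illustrative placeholder data, of how attack success is commonly scored: a membership-inference AUC near 0.5 (chance level) on held-out member and non-member samples is evidence against memorization.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative placeholders: 1 = training-set member, 0 = non-member,
# paired with the attack classifier's predicted membership probabilities.
membership_true = np.array([1, 1, 0, 0, 1, 0])
membership_scores = np.array([0.52, 0.48, 0.51, 0.47, 0.49, 0.53])

auc = roc_auc_score(membership_true, membership_scores)
print(f"Attack AUC = {auc:.2f} (values near 0.5 indicate chance-level membership inference)")
```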

CONCLUSIONS: This study shows that the model in question preserves the integrity of its training data with reasonable confidence. The same procedure can be applied under similar conditions to other models.


Language: en

Keywords

Deep neural networks; Data leakage; Eye-tracking; Facial features
