Integrated safety and security analysis of cyber-physical systems

PhD Student: Jin Zhang

Project Duration: 2017-2021

Project Partner: NTNU

Abstract

Deep neural networks (DNNs) have been successfully adopted in a number of Safety-Critical Cyber-Physical Systems (SCCPSs), such as autonomous vehicles, drones, health-care devices, and robotic systems. Unfortunately, there is significant evidence that DNNs are intrinsically vulnerable to perturbations. This makes designing and validating the robustness of DNNs a crucial need in autonomous driving applications. Any incorrect conclusion from a deep learning algorithm, such as a missed object or an incorrect classification, can lead to a potentially fatal incident.

The robustness of a DNN is its ability to cope with erroneous inputs. Erroneous inputs can be naturally perturbed inputs, or inputs to which a small perturbation has been added intentionally to mislead the DNN's classification, so-called adversarial examples. At the DTU Engineering Systems Design Group, we systematically review the state-of-the-art testing and verification approaches employed to assure the robustness of DNNs. We propose a novel metric named CriticalGap to measure DNN robustness; it can guide software designers in designing a DNN model with an appropriate robustness level.
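
To make the notion of an adversarial example concrete, the following is a minimal PyTorch sketch of the well-known Fast Gradient Sign Method (FGSM) of Goodfellow et al.; it illustrates the general technique rather than the project's own method. The function name, the epsilon budget, and the assumption that inputs are image batches with pixel values in [0, 1] are all placeholders for this sketch.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Craft an adversarial example with the Fast Gradient Sign Method.

        model: a differentiable classifier returning logits
        x: input batch with values in [0, 1]; y: true labels
        epsilon: perturbation budget (assumed value for illustration)
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step each pixel in the direction that increases the loss most
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range

Even a visually imperceptible epsilon can be enough to flip the predicted class, which is precisely why robustness validation matters in safety-critical settings.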

Achieving DNN robustness is a challenging goal. Data augmentation, i.e., introducing perturbed samples into the training set, and increasing model complexity are commonly used approaches for improving robustness. However, the robustness improvement is not uniform across perturbation types.
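
As a toy illustration of the data augmentation idea, the sketch below appends a Gaussian-noise-perturbed copy of each training image to a batch. The function name and the noise level sigma are assumptions for this sketch, not the augmentation scheme studied in the project.

    import torch

    def augment_with_gaussian_noise(images, labels, sigma=0.1):
        """Append a noise-perturbed copy of each image to the batch,
        reusing the original labels for the perturbed copies."""
        noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
        return torch.cat([images, noisy]), torch.cat([labels, labels])

Training on such augmented batches tends to improve robustness against the perturbation type seen during training, but, as noted above, the gain does not transfer uniformly to other perturbation types.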

In collaboration with the computer science department at the Norwegian University of Science and Technology (NTNU), we at DTU RiskLab identify the critical insights behind robust DNNs and investigate the trade-offs between different data augmentation strategies.
