DeepXplore
DeepXplore: Automated Whitebox Testing of Deep Learning Systems.
Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs. We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques. DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model’s accuracy by up to 3%.
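The abstract describes two core ideas: neuron coverage as a testing metric, and a joint optimization that maximizes disagreement between similar models together with coverage via gradient ascent on the input. The following is a minimal sketch of those ideas, not the authors' implementation: the toy models, activation threshold, loss weight, and step size are hypothetical choices for illustration, and the real system applies domain-specific input constraints that are omitted here.

```python
# Minimal sketch (assumed setup, not DeepXplore's code) of:
#  (1) neuron coverage: fraction of hidden neurons activated above a threshold,
#  (2) differential testing: two similar models as cross-referencing oracles,
#  (3) a joint objective (disagreement + coverage) maximized by gradient ascent
#      on the input.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    # Two independently initialized copies stand in for "multiple DL systems
    # with similar functionality" used as cross-referencing oracles.
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                         nn.Linear(64, 10))

model_a, model_b = make_model(), make_model()

def hidden_activations(model, x):
    # ReLU activations of the hidden layer of the toy model above.
    return torch.relu(model[1](model[0](x)))

def neuron_coverage(model, x, threshold=0.5):
    # Fraction of hidden neurons activated above `threshold` by any input in x.
    act = hidden_activations(model, x)
    return (act > threshold).any(dim=0).float().mean()

# Start from a random seed input and ascend the joint objective.
x = torch.rand(1, 1, 28, 28, requires_grad=True)
lambda_cov = 0.1  # hypothetical weight trading off divergence vs. coverage

for step in range(100):
    out_a, out_b = model_a(x), model_b(x)
    label = out_a[0].argmax()
    # Reward disagreement: push model_b's score for model_a's predicted class down
    # relative to model_a's score for that class...
    divergence = out_a[0, label] - out_b[0, label]
    # ...while also pushing up model_a's mean hidden activation as a simple
    # coverage proxy (the paper targets specific not-yet-covered neurons).
    coverage_term = hidden_activations(model_a, x).mean()
    objective = divergence + lambda_cov * coverage_term
    grad, = torch.autograd.grad(objective, x)
    x = (x + 0.05 * grad.sign()).clamp(0, 1).detach().requires_grad_(True)

print("neuron coverage:", float(neuron_coverage(model_a, x)))
print("models agree?  ", bool(model_a(x).argmax() == model_b(x).argmax()))
```

If the two models end up disagreeing on the generated input, that input is a candidate corner-case test; in the paper such inputs are then checked against realistic domain constraints and used to retrain the models.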
References in zbMATH (referenced in 7 articles)
- Gu, Jiazhen; Luo, Xuchuan; Zhou, Yangfan; Wang, Xin: Muffin: testing deep learning libraries via neural architecture fuzzing (2022) arXiv
- Giunchiglia, Eleonora; Lukasiewicz, Thomas: Multi-label classification neural networks with hard logical constraints (2021)
- Mohammadinejad, Sara; Paulsen, Brandon; Deshmukh, Jyotirmoy V.; Wang, Chao: DiffRNN: differential verification of recurrent neural networks (2021)
- Wu, Huihui; Lv, Deyun; Cui, Tengxiang; Hou, Gang; Watanabe, Masahiko; Kong, Weiqiang: SDLV: verification of steering angle safety for self-driving cars (2021)
- Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)
- Ruthotto, Lars; Haber, Eldad: Deep neural networks motivated by partial differential equations (2020)
- Dreossi, Tommaso; Donzé, Alexandre; Seshia, Sanjit A.: Compositional falsification of cyber-physical systems with machine learning components (2019)