TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing.

Machine learning models are notoriously difficult to interpret and debug, and this is particularly true of neural networks. In this work, we introduce automated software testing techniques for neural networks that are well suited to discovering errors which occur only for rare inputs. Specifically, we develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF, random mutations of inputs to a neural network are guided by a coverage metric toward the goal of satisfying user-specified constraints. We describe how fast approximate nearest neighbor algorithms can provide this coverage metric. We then discuss the application of CGF to the following goals: finding numerical errors in trained neural networks, generating disagreements between neural networks and quantized versions of those networks, and surfacing undesirable behavior in character-level language models. Finally, we release an open source library called TensorFuzz that implements the described techniques.
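The CGF loop described above can be sketched as follows. This is a minimal illustration, not TensorFuzz's actual API: the `model` activation function, the `objective` error check, the Gaussian mutation, and the distance `threshold` are all illustrative assumptions, and the exact nearest-neighbor lookup stands in for the fast approximate nearest neighbor algorithms the paper uses.

```python
# Sketch of coverage-guided fuzzing (CGF) for a neural network.
# All names and parameters here are hypothetical, for illustration only.
import numpy as np

def model(x):
    # Hypothetical stand-in for a network: returns a hidden-layer
    # activation vector, which serves as the coverage signal.
    w = np.array([[0.5, -1.0], [1.5, 0.3]])
    return np.tanh(w @ x)

def objective(activations):
    # User-specified constraint: flag numerical errors (NaN/Inf).
    return not np.all(np.isfinite(activations))

def fuzz(seed, steps=200, threshold=0.2):
    rng = np.random.default_rng(0)
    corpus = [np.asarray(seed, dtype=float)]
    coverage = [model(corpus[0])]  # stored activation vectors
    errors = []
    for _ in range(steps):
        # Pick a corpus element and apply a random mutation.
        parent = corpus[rng.integers(len(corpus))]
        child = parent + rng.normal(scale=0.1, size=parent.shape)
        act = model(child)
        if objective(act):
            errors.append(child)
        # "New coverage" = activations far (in L2) from every stored
        # vector; the paper uses fast approximate nearest neighbors,
        # brute force suffices for this sketch.
        nearest = min(np.linalg.norm(act - c) for c in coverage)
        if nearest > threshold:
            corpus.append(child)
            coverage.append(act)
    return corpus, errors

corpus, errors = fuzz([0.0, 0.0])
```

Inputs that reach novel activation regions are kept in the corpus and mutated further, so the search gradually expands coverage toward rare behaviors.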
References in zbMATH (referenced in 4 articles)
- Gu, Jiazhen; Luo, Xuchuan; Zhou, Yangfan; Wang, Xin: Muffin: testing deep learning libraries via neural architecture fuzzing (2022) arXiv
- Liao, Zukang; Zhang, Pengfei; Chen, Min: ML4ML: automated invariance testing for machine learning models (2022) arXiv
- Mohammadinejad, Sara; Paulsen, Brandon; Deshmukh, Jyotirmoy V.; Wang, Chao: DiffRNN: differential verification of recurrent neural networks (2021)
- Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)