Net2Net: Accelerating Learning via Knowledge Transfer. We introduce techniques for rapidly transferring the information stored in one neural net into another neural net, with the main purpose of accelerating the training of a significantly larger network. In real-world workflows, one often trains many different neural networks during the experimentation and design process, and each new model is wastefully trained from scratch. Our Net2Net technique accelerates experimentation by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training, which altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state-of-the-art accuracy on the ImageNet dataset.
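The core idea of a function-preserving transformation can be illustrated for the widening case: a hidden unit is duplicated and its outgoing weights are split, so the larger network computes exactly the same function as the original before further training. The following is a minimal NumPy sketch of this Net2WiderNet-style operation on a hypothetical two-layer ReLU network; the `forward` and `widen` helpers are illustrative names, not part of the paper's published code.

```python
import numpy as np

# Hypothetical two-layer ReLU MLP: y = W2 @ relu(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # hidden size 3, input size 4
W2 = rng.normal(size=(2, 3))   # output size 2

def forward(x, W1, W2):
    h = np.maximum(0.0, W1 @ x)
    return W2 @ h

# Net2WiderNet-style widening: duplicate hidden unit j and halve its
# outgoing weights, so the represented function is unchanged.
def widen(W1, W2, j):
    W1_new = np.vstack([W1, W1[j:j + 1]])     # copy incoming weights of unit j
    W2_new = np.hstack([W2, W2[:, j:j + 1]])  # copy its outgoing column
    W2_new[:, j] /= 2.0                       # split the contribution between
    W2_new[:, -1] /= 2.0                      # the original and the copy
    return W1_new, W2_new

x = rng.normal(size=4)
W1w, W2w = widen(W1, W2, j=1)
# The widened net (4 hidden units) matches the original (3 hidden units).
assert np.allclose(forward(x, W1, W2), forward(x, W1w, W2w))
```

The same principle underlies the deepening case (Net2DeeperNet), where a new layer is initialized to the identity so that inserting it leaves the network's output unchanged.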
References in zbMATH (referenced in 4 articles)
- Chen, Yiming; Pan, Tianci; He, Cheng; Cheng, Ran: Efficient evolutionary deep neural architecture search (NAS) by noisy network morphism mutation (2020)
- Sodhani, Shagun; Chandar, Sarath; Bengio, Yoshua: Toward training recurrent neural networks for lifelong learning (2020)
- Tyukin, Ivan Yu.; Gorban, Alexander N.; Green, Stephen; Prokhorov, Danil: Fast construction of correcting ensembles for legacy artificial intelligence systems: algorithms and a case study (2019)
- Jin, Haifeng; Song, Qingquan; Hu, Xia: Auto-Keras: an efficient neural architecture search system (2018) arXiv