DARTS

DARTS: Differentiable Architecture Search. This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches that apply evolution or reinforcement learning over a discrete, non-differentiable search space, our method is based on a continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
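The core idea, relaxing the categorical choice of operation on each edge into a softmax over architecture parameters so that the search can proceed by gradient descent, can be illustrated with a short sketch. The snippet below is not the authors' released implementation; the `MixedOp` module, the toy candidate set, and the optimizer settings are illustrative assumptions, and only the first-order variant of the alternating update is hinted at.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of one edge: a softmax-weighted sum of candidate ops."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # One architecture parameter (alpha) per candidate operation on this edge.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Mixed output: sum_k softmax(alpha)_k * op_k(x)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Toy candidate set (placeholders standing in for the conv/pool/skip ops on a cell edge).
C = 16
candidates = [
    nn.Conv2d(C, C, 3, padding=1, bias=False),
    nn.Conv2d(C, C, 5, padding=2, bias=False),
    nn.Identity(),  # skip connection
]
edge = MixedOp(candidates)
out = edge(torch.randn(2, C, 8, 8))

# Alternating (first-order) updates: architecture parameters on validation data,
# operation weights on training data; the paper also describes a second-order variant.
arch_params = [edge.alpha]
weight_params = [p for name, p in edge.named_parameters() if name != "alpha"]
opt_alpha = torch.optim.Adam(arch_params, lr=3e-4)
opt_w = torch.optim.SGD(weight_params, lr=0.025, momentum=0.9)
```

After the bilevel search converges, the continuous architecture is discretized by keeping the strongest candidate operation(s) per edge, which yields the final convolutional or recurrent cell.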


References in zbMATH (referenced in 17 articles)


  1. Fraccaroli, Michele; Lamma, Evelina; Riguzzi, Fabrizio: Symbolic DNN-tuner (2022)
  2. Psaros, Apostolos F.; Kawaguchi, Kenji; Karniadakis, George Em: Meta-learning PINN loss functions (2022)
  3. Li, Changlin; Tang, Tao; Wang, Guangrun; Peng, Jiefeng; Wang, Bing; Liang, Xiaodan; Chang, Xiaojun: BossNAS: exploring hybrid CNN-transformers with block-wisely self-supervised neural architecture search (2021) arXiv
  4. Fernandes, Francisco E. jun.; Yen, Gary G.: Pruning deep convolutional neural networks architectures with evolution strategy (2021)
  5. García Trillos, Nicolás; Morales, Félix; Morales, Javier: Traditional and accelerated gradient descent for neural architecture search (2021)
  6. Hao, Jie; Zhu, William: Architecture self-attention mechanism: nonlinear optimization for neural architecture search (2021)
  7. Huang, Di; Zhang, Rui; Zhang, Xishan; Wu, Fan; Wang, Xianzhuo; Jin, Pengwei; Liu, Shaoli; Li, Ling; Chen, Yunji: A decomposable Winograd method for N-D convolution acceleration in video analysis (2021)
  8. Repin, Denis; Petrov, Tatjana: Automated deep abstractions for stochastic chemical reaction networks (2021)
  9. Gao, Chen; Chen, Yunpeng; Liu, Si; Tan, Zhenxiong; Yan, Shuicheng: AdversarialNAS: adversarial neural architecture search for GANs (2020) arXiv
  10. Chen, Yiming; Pan, Tianci; He, Cheng; Cheng, Ran: Efficient evolutionary deep neural architecture search (NAS) by noisy network morphism mutation (2020)
  11. Gu, Xue; Meng, Ziyao; Liang, Yanchun; Xu, Dong; Huang, Han; Han, Xiaosong; et al.: ESAE: evolutionary strategy-based architecture evolution (2020)
  12. Kandasamy, Kirthevasan; Vysyaraju, Karun Raju; Neiswanger, Willie; Paria, Biswajit; Collins, Christopher R.; Schneider, Jeff; Poczos, Barnabas; Xing, Eric P.: Tuning hyperparameters without grad students: scalable and robust Bayesian optimisation with Dragonfly (2020)
  13. Lin, Zhou-Chen: How can machine learning and optimization help each other better? (2020)
  14. Shao, Wenqi; Li, Jingyu; Ren, Jiamin; Zhang, Ruimao; Wang, Xiaogang; Luo, Ping: SSN: learning sparse switchable normalization via SparsestMax (2020)
  15. Gong, Xinyu; Chang, Shiyu; Jiang, Yifan; Wang, Zhangyang: AutoGAN: neural architecture search for generative adversarial networks (2019) arXiv
  16. Dai, Zihang; Yang, Zhilin; Yang, Yiming; Carbonell, Jaime; Le, Quoc V.; Salakhutdinov, Ruslan: Transformer-XL: attentive language models beyond a fixed-length context (2019) arXiv
  17. Jin, Haifeng; Song, Qingquan; Hu, Xia: Auto-Keras: an efficient neural architecture search system (2018) arXiv