Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
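Caffe's modularity shows in how networks are declared: a model is a stack of layers written in a protobuf text file (prototxt) rather than in code. The fragment below is a minimal hypothetical sketch of such a definition (layer names, shapes, and the file itself are illustrative, not taken from this entry):

```
# Hypothetical minimal classifier in Caffe's prototxt format.
name: "SimpleClassifier"
layer {
  name: "data"
  type: "Input"          # input blob: batch of 64 single-channel 28x28 images
  top: "data"
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "ip1"
  type: "InnerProduct"   # fully connected layer with 10 outputs
  bottom: "data"
  top: "ip1"
  inner_product_param { num_output: 10 }
}
layer {
  name: "prob"
  type: "Softmax"        # class probabilities
  bottom: "ip1"
  top: "prob"
}
```

Each layer names its input (`bottom`) and output (`top`) blobs, so layers can be swapped or recombined without touching the rest of the definition.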

References in zbMATH (referenced in 78 articles)

Showing results 61 to 78 of 78, sorted by year (citations).
  1. Hao Dong, Akara Supratak, Luo Mai, Fangde Liu, Axel Oehmichen, Simiao Yu, Yike Guo: TensorLayer: A Versatile Library for Efficient Deep Learning Development (2017) arXiv
  2. Haojin Yang, Martin Fritzsche, Christian Bartz, Christoph Meinel: BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet (2017) arXiv
  3. Li, Yao; Liu, Lingqiao; Shen, Chunhua; van den Hengel, Anton: Mining mid-level visual patterns with deep CNN activations (2017)
  4. Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, Ion Stoica: Ray: A Distributed Framework for Emerging AI Applications (2017) arXiv
  5. Rawat, Waseem; Wang, Zenghui: Deep convolutional neural networks for image classification: a comprehensive review (2017)
  6. Richard Wei, Vikram Adve, Lane Schwartz: DLVM: A modern compiler infrastructure for deep learning systems (2017) arXiv
  7. Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg: BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (2017) arXiv
  8. Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat: Accelerating deep neural network training with inconsistent stochastic gradient descent (2017)
  9. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices (2017) arXiv
  10. Diamond, Steven; Boyd, Stephen: Matrix-free convex optimization modeling (2016)
  11. Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor: Domain-adversarial training of neural networks (2016)
  12. Hao, Ningbo; Yang, Jie; Liao, Haibin; Dai, Wenhua: A unified factors analysis framework for discriminative feature extraction and object recognition (2016)
  13. Levine, Sergey; Finn, Chelsea; Darrell, Trevor; Abbeel, Pieter: End-to-end training of deep visuomotor policies (2016)
  14. Matthew Moskewicz, Forrest Iandola, Kurt Keutzer: Boda-RTC: Productive Generation of Portable, Efficient Code for Convolutional Neural Networks on Mobile Computing Platforms (2016) arXiv
  15. Patrick Doetsch, Albert Zeyer, Paul Voigtlaender, Ilya Kulikov, Ralf Schlüter, Hermann Ney: RETURNN: The RWTH Extensible Training framework for Universal Recurrent Neural Networks (2016) arXiv
  16. Zhou, Peicheng; Cheng, Gong; Liu, Zhenbao; Bu, Shuhui; Hu, Xintao: Weakly supervised target detection in remote sensing images based on transferred deep features and negative bootstrapping (2016)
  17. Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Dhruv Batra, Devi Parikh: VQA: Visual Question Answering (2015) arXiv
  18. Hyeonwoo Noh, Seunghoon Hong, Bohyung Han: Learning Deconvolution Network for Semantic Segmentation (2015) arXiv