Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
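
As a brief illustration of the framework's declarative style, the sketch below loads a trained network and runs a forward pass through Caffe's Python interface (pycaffe). The file names 'deploy.prototxt' and 'weights.caffemodel' and the blob names 'data' and 'prob' are placeholders, not files or names shipped with Caffe; the actual names depend on the model definition.

    # Minimal pycaffe sketch; file and blob names below are placeholders.
    import numpy as np
    import caffe

    caffe.set_mode_cpu()  # or caffe.set_device(0); caffe.set_mode_gpu() for GPU

    # The architecture is defined declaratively in a prototxt file and the
    # trained weights are loaded from a .caffemodel file.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Fill the input blob with a batch shaped (N, C, H, W) and run a forward pass.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    out = net.forward()
    print(out['prob'].argmax())  # top class index, assuming an output blob named 'prob'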


References in zbMATH (referenced in 80 articles)

Showing results 41 to 60 of 80.
Sorted by year (citations)
  41. Aggarwal, Charu C.: Neural networks and deep learning. A textbook (2018)
  42. Ahmad, Shahzor; Cheong, Loong-Fah: Robust detection and affine rectification of planar homogeneous texture for scene understanding (2018)
  43. Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant: Deep neural networks for texture classification -- a theoretical analysis (2018)
  44. Baydin, Atılım Güneş; Pearlmutter, Barak A.; Radul, Alexey Andreyevich; Siskind, Jeffrey Mark: Automatic differentiation in machine learning: a survey (2018)
  45. Wloka, Calden; Kunić, Toni; Kotseruba, Iuliia; Fahimi, Ramin; Frosst, Nicholas; Bruce, Neil D. B.; Tsotsos, John K.: SMILER: Saliency Model Implementation Library for Experimental Research (2018) arXiv
  46. Gudivada, Venkat N.; Arbabifard, Kamyar: Open-source libraries, application frameworks, and workflow systems for NLP (2018)
  47. Hazan, Hananel; Saunders, Daniel J.; Khan, Hassaan; Sanghavi, Darpan T.; Siegelmann, Hava T.; Kozma, Robert: BindsNET: A machine learning-oriented spiking neural networks library in Python (2018) arXiv
  48. Gardner, Jacob R.; Pleiss, Geoff; Bindel, David; Weinberger, Kilian Q.; Wilson, Andrew Gordon: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration (2018) arXiv
  49. Larsson, Måns; Arnab, Anurag; Zheng, Shuai; Torr, Philip; Kahl, Fredrik: Revisiting deep structured models for pixel-level labeling with gradient-based inference (2018)
  50. Liu, Yu; Chen, Xun; Cheng, Juan; Peng, Hu; Wang, Zengfu: Infrared and visible image fusion with convolutional neural networks (2018)
  51. Muir, Dylan R.: Feedforward approximations to dynamic recurrent network architectures (2018)
  52. Iyer, Rishabh; Dubal, Pratik; Dargan, Kunal; Kothawade, Suraj; Mahadev, Rohan; Kaushal, Vishal: Vis-DSS: An Open-Source toolkit for Visual Data Selection and Summarization (2018) arXiv
  53. Chard, Ryan; Li, Zhuozhao; Chard, Kyle; Ward, Logan; Babuji, Yadu; Woodard, Anna; Tuecke, Steve; Blaiszik, Ben; Franklin, Michael J.; Foster, Ian: DLHub: Model and Data Serving for Science (2018) arXiv
  54. Schäfer, Dirk; Hüllermeier, Eyke: Dyad ranking using Plackett-Luce models based on joint feature representations (2018)
  55. Bhardwaj, Shikhar; Curtin, Ryan R.; Edel, Marcus; Mentekidis, Yannis; Sanderson, Conrad: ensmallen: a flexible C++ library for efficient function optimization (2018) arXiv
  56. Jain, Shubham; Sengupta, Abhronil; Roy, Kaushik; Raghunathan, Anand: Rx-Caffe: Framework for evaluating and training Deep Neural Networks on Resistive Crossbars (2018) arXiv
  57. Yin, Penghang; Xin, Jack; Qi, Yingyong: Linear feature transform and enhancement of classification on deep neural network (2018)
  58. Howard, Andrew G.; Zhu, Menglong; Chen, Bo; Kalenichenko, Dmitry; Wang, Weijun; Weyand, Tobias; Andreetto, Marco; Adam, Hartwig: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (2017) arXiv
  59. Fernando, Basura; Gould, Stephen: Discriminatively learned hierarchical rank pooling networks (2017)
  60. Wang, Han; Zhang, Linfeng; Han, Jiequn; E, Weinan: DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics (2017) arXiv