DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

We evaluate whether features extracted from the activations of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks, and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state of the art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.

References in zbMATH (referenced in 29 articles)

Showing results 1 to 20 of 29.
Sorted by year (citations)


  1. Tang, Jingjing; He, Yiwei; Tian, Yingjie; Liu, Dalian; Kou, Gang; Alsaadi, Fawaz E.: Coupling loss and self-used privileged information guided multi-view transfer learning (2021)
  2. Zhang, Jianjia; Wang, Lei; Zhou, Luping; Li, Wanqing: Beyond covariance: SICE and kernel based visual feature representation (2021)
  3. Zhang, Ningshan; Mohri, Mehryar; Hoffman, Judy: Multiple-source adaptation theory and algorithms (2021)
  4. Gao, Depeng; Wu, Rui; Liu, Jiafeng; Fan, Xiaopeng; Tang, Xianglong: Finding robust transfer features for unsupervised domain adaptation (2020)
  5. Li, Aoxue; Lu, Zhiwu; Guan, Jiechao; Xiang, Tao; Wang, Liwei; Wen, Ji-Rong: Transferrable feature and projection learning with class hierarchy for zero-shot learning (2020)
  6. Liu, Li; Ouyang, Wanli; Wang, Xiaogang; Fieguth, Paul; Chen, Jie; Liu, Xinwang; Pietikäinen, Matti: Deep learning for generic object detection: a survey (2020)
  7. Yang, Liran; Zhong, Ping: Robust adaptation regularization based on within-class scatter for domain adaptation (2020)
  8. Gao, Depeng; Liu, Jiafeng; Wu, Rui; Cheng, Dansong; Fan, Xiaopeng; Tang, Xianglong: Utilizing relevant RGB-D data to help recognize RGB images in the target domain (2019)
  9. He, Lingxiao; Li, Haiqing; Zhang, Qi; Sun, Zhenan: Dynamic feature matching for partial face recognition (2019)
  10. Lenc, Karel; Vedaldi, Andrea: Understanding image representations by measuring their equivariance and equivalence (2019)
  11. Li, Shan; Deng, Weihong: Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition (2019)
  12. Shafieezadeh-Abadeh, Soroosh; Kuhn, Daniel; Esfahani, Peyman Mohajerin: Regularization via mass transportation (2019)
  13. Waegeman, Willem; Dembczyński, Krzysztof; Hüllermeier, Eyke: Multi-target prediction: a unifying view on problems and methods (2019)
  14. Zhou, Joey Tianyi; Pan, Sinno Jialin; Tsang, Ivor W.: A deep learning framework for hybrid heterogeneous transfer learning (2019)
  15. Ahmad, Shahzor; Cheong, Loong-Fah: Robust detection and affine rectification of planar homogeneous texture for scene understanding (2018)
  16. Flamary, Rémi; Cuturi, Marco; Courty, Nicolas; Rakotomamonjy, Alain: Wasserstein discriminant analysis (2018)
  17. Li, Jun; Chang, Heyou; Yang, Jian; Luo, Wei; Fu, Yun: Visual representation and classification by learning group sparse deep stacking network (2018)
  18. Qin, Yao; Feng, Mengyang; Lu, Huchuan; Cottrell, Garrison W.: Hierarchical cellular automata for visual saliency (2018)
  19. Wang, Wei; Wang, Hao; Zhang, Chen; Gao, Yang: Cross-domain metric and multiple kernel learning based on information theory (2018)
  20. Zheng, Charles; Achanta, Rakesh; Benjamini, Yuval: Extrapolating expected accuracies for large multi-class problems (2018)
