Cohn-Kanade

The Cohn-Kanade AU-Coded Facial Expression Database supports research in automatic facial image analysis and synthesis and in perceptual studies. Cohn-Kanade is available in two versions, and a third is in preparation.

Version 1, the original release (Kanade, Cohn, & Tian, 2000), includes 486 sequences from 97 posers. Each sequence begins with a neutral expression and proceeds to a peak expression. The peak expression of each sequence is fully FACS (Ekman, Friesen, & Hager, 2002; Ekman & Friesen, 1979) coded and given an emotion label. The emotion label refers to the expression that was requested rather than what may actually have been performed. For validated emotion labels, please use version 2, CK+, described below.

Version 2, referred to as CK+, includes both posed and non-posed (spontaneous) expressions and additional types of metadata. For posed expressions, the number of sequences is increased from the initial release by 22% and the number of subjects by 27%. As with the initial release, the target expression of each sequence is fully FACS coded. In addition, validated emotion labels have been added to the metadata, so sequences may be analyzed for both action units and prototypic emotions. The non-posed expressions are from Ambadar, Cohn, & Reed (2009). CK+ also provides protocols and baseline results for facial feature tracking and for action unit and emotion recognition. Tracking results for shape and appearance were obtained with the approach of Matthews & Baker (2004). For action unit and emotion recognition, a linear support vector machine (SVM) classifier with leave-one-subject-out cross-validation was used; a sketch of this evaluation protocol appears below. Both sets of results are included with the metadata. For a full description of CK+, please see P. Lucey et al. (2010).

Version 3 is planned for spring 2013. The original Cohn-Kanade data collection included synchronized frontal and 30-degree-from-frontal video; version 3 will add the synchronized 30-degree-from-frontal video.

To receive the database for research, non-commercial use, download, sign, and return an Agreement to the Affect Analysis Group. All student or non-faculty agreement forms must be co-signed by a faculty advisor.
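To make the baseline evaluation protocol concrete, the following is a minimal sketch of a linear SVM evaluated with leave-one-subject-out cross-validation. This code is not part of the CK+ distribution: it assumes scikit-learn is available and uses random placeholder features and labels in place of the AAM-derived shape and appearance features actually used for the published baselines.

```python
# Minimal sketch of the CK+ baseline evaluation protocol: a linear SVM
# with leave-one-subject-out cross-validation. The features and labels
# below are random placeholders standing in for the shape/appearance
# features tracked per sequence; feature extraction itself is not shown.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_sequences, n_features, n_subjects = 120, 50, 20
X = rng.normal(size=(n_sequences, n_features))            # placeholder feature vectors (one per peak frame)
y = rng.integers(0, 7, size=n_sequences)                  # placeholder labels (CK+ uses 7 emotion categories)
subjects = rng.integers(0, n_subjects, size=n_sequences)  # subject ID for each sequence

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Each fold holds out all sequences of one subject, so no subject
# ever appears in both the training and the test partition.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-subject-out accuracy: {np.mean(accuracies):.3f}")
```

Grouping folds by subject rather than by sequence is the essential design choice here: it prevents a classifier from exploiting subject identity, so the reported accuracy reflects generalization to unseen people.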

