Cohn-Kanade

The Cohn-Kanade AU-Coded Facial Expression Database supports research in automatic facial image analysis and synthesis and in perceptual studies. Cohn-Kanade is available in two versions, and a third is in preparation.

Version 1, the original release (Kanade, Cohn, & Tian, 2000), includes 486 sequences from 97 posers. Each sequence begins with a neutral expression and proceeds to a peak expression. The peak expression of each sequence is fully FACS coded (Ekman, Friesen, & Hager, 2002; Ekman & Friesen, 1979) and given an emotion label. The emotion label refers to the expression that was requested rather than what may actually have been performed; for validated emotion labels, use version 2, CK+, described below.

Version 2, referred to as CK+, includes both posed and non-posed (spontaneous) expressions and additional types of metadata. For posed expressions, the number of sequences is increased by 22% and the number of subjects by 27% relative to the initial release. As with the initial release, the target expression of each sequence is fully FACS coded. In addition, validated emotion labels have been added to the metadata, so sequences may be analyzed for both action units and prototypic emotions. The non-posed expressions are from Ambadar, Cohn, & Reed (2009). CK+ also provides protocols and baseline results for facial feature tracking and for action unit and emotion recognition. Tracking results for shape and appearance follow the approach of Matthews & Baker (2004); for action unit and expression recognition, a linear support vector machine (SVM) classifier was evaluated with leave-one-subject-out cross-validation. Both sets of results are included with the metadata. For a full description of CK+, see P. Lucey et al. (2010).

Version 3 is planned for spring 2013. The original Cohn-Kanade data collection included synchronized frontal and 30-degree-from-frontal video (Fig. 1); version 3 will add the synchronized 30-degree-from-frontal video.

To receive the database for research (non-commercial) use, download, sign, and return an Agreement to the Affect Analysis Group. All student or non-faculty agreement forms must be co-signed by a faculty advisor.
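To make the baseline evaluation protocol concrete, the sketch below illustrates leave-one-subject-out cross-validation of a linear SVM in Python with scikit-learn. This is an assumption-laden illustration, not the CK+ baseline implementation: the original work used its own AAM-derived shape and appearance features, while here features, labels, and subjects are synthetic placeholders. The key point is the grouping by poser ID, so that no subject appears in both the training and test folds.

# A minimal sketch of subject-wise leave-one-out evaluation with a linear SVM.
# `features`, `labels`, and `subjects` are synthetic stand-ins, NOT CK+ data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_sequences, n_features = 120, 64                       # hypothetical sizes
features = rng.normal(size=(n_sequences, n_features))   # stand-in per-sequence features
labels = rng.integers(0, 7, size=n_sequences)           # 7 emotion categories (hypothetical coding)
subjects = rng.integers(0, 30, size=n_sequences)        # poser ID for each sequence

# One fold per subject: each poser is held out in turn, so train and test
# sets never share a subject.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(features, labels, groups=subjects):
    clf = LinearSVC(max_iter=10000)
    clf.fit(features[train_idx], labels[train_idx])
    accuracies.append(clf.score(features[test_idx], labels[test_idx]))

print(f"mean accuracy over {len(accuracies)} subject folds: {np.mean(accuracies):.3f}")

Subject-wise splitting matters here because sequences from the same poser are highly correlated; an ordinary random split would let the classifier recognize the person rather than the expression and inflate the reported accuracy.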


References in zbMATH (referenced in 56 articles)

Showing results 1 to 20 of 56, sorted by year (citations).


  1. Daghyani, Masoud; Zamzami, Nuha; Bouguila, Nizar: Toward an efficient computation of log-likelihood functions in statistical inference: overdispersed count data clustering (2020)
  2. Daoudi, Mohamed; Alvarez Paiva, Juan-Carlos; Kacem, Anis: The Riemannian and affine geometry of facial expression and action recognition (2020)
  3. Liu, Yipeng; Ji, Zhongping; Zhang, Yu-Wei; Xu, Gang: Example-driven modeling of portrait bas-relief (2020)
  4. Najar, Fatma; Bourouis, Sami; Al-Azawi, Rula; Al-Badi, Ali: Online recognition via a finite mixture of multivariate generalized Gaussian distributions (2020)
  5. Qin, Shu; Zhu, Zhengzhou; Zou, Yuhang; Wang, Xiaowei: Facial expression recognition based on Gabor wavelet transform and 2-channel CNN (2020)
  6. Kucukoglu, Irem; Simsek, Buket; Simsek, Yilmaz: Multidimensional Bernstein polynomials and Bézier curves: analysis of machine learning algorithm for facial expression recognition based on curvature (2019)
  7. Lu, Yang; Wang, Shigang; Zhao, Wenting: Facial expression recognition based on discrete separable shearlet transform and feature selection (2019)
  8. Ahmed, Faisal; Kabir, Md. Hasanul: Facial expression recognition under difficult conditions: a comprehensive study on edge directional texture patterns (2018)
  9. Quost, Benjamin; Denoeux, Thierry; Li, Shoumei: Parametric classification with soft labels using the evidential EM algorithm: linear discriminant analysis versus logistic regression (2017)
  10. Gaidhane, Vilas H.; Hote, Yogesh V.; Singh, Vijander: Emotion recognition using eigenvalues and Levenberg-Marquardt algorithm-based classifier (2016)
  11. Susan, Seba; Aggarwal, Nandini; Chand, Shefali; Gupta, Ayush: Image coding based on maximum entropy partitioning for identifying improbable intensities related to facial expressions (2016)
  12. Kamaruzaman, Fadhlan; Shafie, Amir Akramin; Mustafah, Yasir M.: Coincidence detection using spiking neurons with application to face recognition (2015)
  13. Özöğür-Akyüz, Süreyya; Windeatt, Terry; Smith, Raymond: Pruning of error correcting output codes by optimization of accuracy-diversity trade off (2015)
  14. Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin: Towards an intelligent framework for multimodal affective data analysis (2015)
  15. An, Gaoyun; Liu, Shuai; Jin, Yi; Ruan, Qiuqi; Lu, Shan: Facial expression recognition based on discriminant neighborhood preserving nonnegative tensor factorization and ELM (2014)
  16. Fang, Hui; Mac Parthaláin, Neil; Aubrey, Andrew J.; Tam, Gary K. L.; Borgo, Rita; Rosin, Paul L.; Grant, Philip W.; Marshall, David; Chen, Min: Facial expression recognition in dynamic sequences: an integrated approach (2014)
  17. Farajzadeh, Nacer; Pan, Gang; Wu, Zhaohui: Facial expression recognition based on meta probability codes (2014)
  18. Senechal, Thibaud; Bailly, Kevin; Prevost, Lionel: Impact of action unit detection in automatic emotion recognition (2014)
  19. Wan, Shaohua; Aggarwal, J. K.: Spontaneous facial expression recognition: a robust metric learning approach (2014)
  20. Liao, Chia-Te; Chuang, Hui-Ju; Duan, Chih-Hsueh; Lai, Shang-Hong: Learning spatial weighting for facial expression analysis via constrained quadratic programming (2013)
