AutoClass - A Bayesian Approach to Classification.

We describe a Bayesian approach to the unsupervised discovery of classes in a set of cases, sometimes called finite mixture separation or clustering. The main difference between clustering and our approach is that we search for the "best" set of class descriptions rather than grouping the cases themselves. We describe our classes in terms of probability distribution or density functions and the locally maximal posterior probability parameters. We rate our classifications with an approximate posterior probability of the distribution function w.r.t. the data, obtained by marginalizing over all the parameters. Approximation is necessitated by the computational complexity of the joint probability, and our marginalization is w.r.t. a local maximum in the parameter space. This posterior probability rating allows direct comparison of alternative density functions that differ in the number of classes and/or the individual class density functions. We discuss the rationale behind our approach to classification, give the mathematical development for the basic mixture model, describe the approximations needed for computational tractability, give some specifics of models for several common attribute types, and describe some of the results achieved by the AutoClass program.
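The model-comparison idea in the abstract can be illustrated with a minimal sketch, which is not the AutoClass implementation: fit one-dimensional Gaussian mixtures with K = 1..3 classes by EM (finding locally maximal parameters), then rate each model with BIC, used here as a rough stand-in for the approximate marginal likelihood around a local maximum. The toy data and all names below are assumptions made for illustration only.

```python
# Hedged sketch of mixture-model class discovery via posterior-style
# model rating. NOT AutoClass; a toy 1-D Gaussian mixture with BIC.
import math
import random

random.seed(0)
# Synthetic cases drawn from two well-separated classes (assumption).
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])

def log_normal_pdf(x, mu, sigma):
    return (-0.5 * math.log(2.0 * math.pi * sigma * sigma)
            - (x - mu) ** 2 / (2.0 * sigma * sigma))

def em_fit(xs, k, iters=200):
    """Run EM for a k-class Gaussian mixture; return the data log-likelihood."""
    n = len(xs)
    lo, hi = min(xs), max(xs)
    # Crude initialization: means spread over the data range.
    mus = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    sigmas = [(hi - lo) / (2.0 * k)] * k
    weights = [1.0 / k] * k
    ll = float("-inf")
    for _ in range(iters):
        # E-step: per-case class responsibilities (log-sum-exp for stability).
        resp, ll = [], 0.0
        for x in xs:
            logs = [math.log(weights[j]) + log_normal_pdf(x, mus[j], sigmas[j])
                    for j in range(k)]
            m = max(logs)
            probs = [math.exp(l - m) for l in logs]
            z = sum(probs)
            resp.append([p / z for p in probs])
            ll += m + math.log(z)
        # M-step: locally maximal (here, maximum-likelihood) parameters.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-12))  # guard against collapse
    return ll

def bic_score(ll, k, n):
    # 3k - 1 free parameters: k means, k sigmas, k - 1 mixing weights.
    return ll - 0.5 * (3 * k - 1) * math.log(n)

# Compare alternative class counts directly, as the abstract describes
# for the (much more careful) marginal-likelihood rating.
scores = {k: bic_score(em_fit(data, k), k, len(data)) for k in (1, 2, 3)}
best = max(scores, key=scores.get)
print("best number of classes:", best)
```

Because BIC penalizes the extra parameters of larger mixtures, the two-class model should dominate the one-class model on this clearly bimodal sample, mirroring how the marginalized posterior lets models with different numbers of classes be compared on a single scale.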

References in zbMATH (referenced in 68 articles)

Showing results 1 to 20 of 68, sorted by year (citations).


  1. Sangam, Ravi Sankar; Om, Hari: Equi-Clustream: a framework for clustering time evolving mixed data (2018)
  2. Sangam, Ravi Sankar; Om, Hari: An equi-biased (k)-prototypes algorithm for clustering mixed-type data (2018)
  3. Ye, Mao; Zhang, Peng; Nie, Lizhen: Clustering sparse binary data with hierarchical Bayesian Bernoulli mixture model (2018)
  4. Cagnina, Leticia; Errecalde, Marcelo; Ingaramo, Diego; Rosso, Paolo: An efficient particle swarm optimization approach to cluster short texts (2014) ioport
  5. Jiang, Cuiqing; Liang, Kun; Chen, Hsinchun; Ding, Yong: Analyzing market performance via social media: a case study of a banking industry crisis (2014) ioport
  6. Lin, Kawuu W.; Lin, Chun-Hung; Hsiao, Chun-Yuan: A parallel and scalable CAST-based clustering algorithm on GPU (2014) ioport
  7. Marshall, Adele H.; Shaw, Barry: Computational learning of the conditional phase-type (C-Ph) distribution. Learning C-Ph distributions (2014)
  8. Cheung, Yiu-ming; Jia, Hong: Categorical-and-numerical-attribute data clustering based on a unified similarity metric without knowing cluster number (2013)
  9. Chen, Tao; Zhang, Nevin L.; Liu, Tengfei; Poon, Kin Man; Wang, Yi: Model-based multidimensional clustering of categorical data (2012)
  10. Kormaksson, Matthias; Booth, James G.; Figueroa, Maria E.; Melnick, Ari: Integrative model-based clustering of microarray methylation and expression data (2012)
  11. Micheloni, Christian; Rani, Asha; Kumar, Sanjeev; Foresti, Gian Luca: A balanced neural tree for pattern classification (2012) ioport
  12. Giannakopoulou, Dimitra; Bushnell, David H.; Schumann, Johann; Erzberger, Heinz; Heere, Karen: Formal testing for separation assurance (2011)
  13. Meng, Hai-Dong; Song, Yu-Chen; Song, Fei-Yan; Shen, Hai-Tao: Research and application of cluster and association analysis in geochemical data processing (2011)
  14. Bouguila, Nizar: On multivariate binary data clustering and feature weighting (2010)
  15. Li, Rui; Tian, Tai-Peng; Sclaroff, Stan; Yang, Ming-Hsuan: 3D human motion tracking with a coordinated mixture of factor analyzers (2010) ioport
  16. McNicholas, P. D.; Murphy, T. B.; McDaid, A. F.; Frost, D.: Serial and parallel implementations of model-based clustering via parsimonious Gaussian mixture models (2010)
  17. Bissantz, Nicolas; Hagedorn, Jürgen: Data mining (Datenmustererkennung) (2009) ioport
  18. Cariou, Véronique; Verdun, Stéphane; Diaz, Emmanuelle; Qannari, El Mostafa; Vigneau, Evelyne: Comparison of three hypothesis testing approaches for the selection of the appropriate number of clusters of variables (2009)
  19. Flores, M. Julia; Gámez, José A.; Martínez, Ana M.; Puerta, José M.: HODE: hidden one-dependence estimator (2009)
  20. Huang, Han-Shen; Yang, Bo-Hou; Chang, Yu-Ming; Hsu, Chun-Nan: Global and componentwise extrapolations for accelerating training of Bayesian networks and conditional random fields (2009) ioport
