UCI-ml

UC Irvine Machine Learning Repository. We currently maintain 251 data sets as a service to the machine learning community. You may view all data sets through our searchable interface; our old web site is still available for those who prefer the old format. For a general overview of the Repository, please visit our About page. For information about citing data sets in publications, please read our citation policy. If you wish to donate a data set, please consult our donation policy. For any other questions, feel free to contact the Repository librarians. We have also set up a mirror site for the Repository.

The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators used by the machine learning community for the empirical analysis of machine learning algorithms. The archive was created as an FTP archive in 1987 by David Aha and fellow graduate students at UC Irvine. Since that time, it has been widely used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of its impact, the archive has been cited over 1000 times, making it one of the top 100 most cited "papers" in all of computer science.

The current version of the web site was designed in 2007 by Arthur Asuncion and David Newman, in collaboration with Rexa.info at the University of Massachusetts Amherst. Funding support from the National Science Foundation is gratefully acknowledged. Many people deserve thanks for making the Repository a success, foremost among them the donors and creators of the databases and data generators. Special thanks also go to the past librarians of the Repository: David Aha, Patrick Murphy, Christopher Merz, Eamonn Keogh, Cathy Blake, Seth Hettich, and David Newman.
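
As a quick illustration of how a data set from the archive is typically consumed, here is a minimal Python sketch that loads the classic Iris data with pandas. The URL, column names, and file layout are assumptions about that particular archive file rather than anything specified in the entry above.

    # Minimal sketch: read one UCI archive data set into a DataFrame.
    # The URL and column names below are assumptions based on the
    # classic Iris file layout, not taken from the entry above.
    import pandas as pd

    IRIS_URL = (
        "https://archive.ics.uci.edu/ml/"
        "machine-learning-databases/iris/iris.data"
    )
    COLUMNS = [
        "sepal_length", "sepal_width",
        "petal_length", "petal_width", "class",
    ]

    def load_iris(url: str = IRIS_URL) -> pd.DataFrame:
        # The raw archive file has no header row, so column names
        # are supplied explicitly.
        return pd.read_csv(url, header=None, names=COLUMNS)

    if __name__ == "__main__":
        df = load_iris()
        print(df.shape)                    # expected: (150, 5)
        print(df["class"].value_counts())  # three species, 50 rows each

Most data sets in the archive follow a similar pattern (a plain CSV-like data file plus a .names description), so the same approach carries over with adjusted column names.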


References in zbMATH (referenced in 3162 articles)

Showing results 1 to 20 of 3162, sorted by year (citations).


  1. Ai, Mingyao; Wang, Fei; Yu, Jun; Zhang, Huiming: Optimal subsampling for large-scale quantile regression (2021)
  2. Allassonnière, Stéphanie; Chevallier, Juliette: A new class of stochastic EM algorithms. Escaping local maxima and handling intractable sampling (2021)
  3. Arachie, Chidubem; Huang, Bert: A general framework for adversarial label learning (2021)
  4. Aydemir, Onder: A new performance evaluation metric for classifiers: polygon area metric (2021)
  5. Bagirov, Adil M.; Taheri, Sona; Cimen, Emre: Incremental DC optimization algorithm for large-scale clusterwise linear regression (2021)
  6. Bénard, Clément; Biau, Gérard; Da Veiga, Sébastien; Scornet, Erwan: SIRUS: stable and interpretable RUle set for classification (2021)
  7. Bermanis, Amit; Salhov, Moshe; Averbuch, Amir: Geometric component analysis and its applications to data analysis (2021)
  8. Bertsimas, Dimitris; Dunn, Jack; Wang, Yuchen: Near-optimal nonlinear regression trees (2021)
  9. Blanquero, Rafael; Carrizosa, Emilio; Ramírez-Cobo, Pepa; Sillero-Denamiel, M. Remedios: A cost-sensitive constrained Lasso (2021)
  10. Blaser, Rico; Fryzlewicz, Piotr: Regularizing axis-aligned ensembles via data rotations that favor simpler learners (2021)
  11. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  12. Alakus, Cansu; Larocque, Denis; Labbe, Aurelie: RFpredInterval: an R package for prediction intervals with random forests and boosted forests (2021) arXiv
  13. Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: Mathematical optimization in classification and regression trees (2021)
  14. Chen, Hong; Guo, Changying; Xiong, Huijuan; Wang, Yingjie: Sparse additive machine with ramp loss (2021)
  15. Du, Yu; Lin, Xiaodong; Pham, Minh; Ruszczyński, Andrzej: Selective linearization for multi-block statistical learning (2021)
  16. Feurer, Matthias; van Rijn, Jan N.; Kadra, Arlind; Gijsbers, Pieter; Mallik, Neeratyoy; Ravi, Sahithya; Müller, Andreas; Vanschoren, Joaquin; Hutter, Frank: OpenML-Python: an extensible Python API for OpenML (2021)
  17. Frogner, Charlie; Claici, Sebastian; Chien, Edward; Solomon, Justin: Incorporating unlabeled data into distributionally robust learning (2021)
  18. Gan, Guojun; Ma, Chaoqun; Wu, Jianhong: Data clustering. Theory, algorithms, and applications (2021)
  19. Gao, Zheming; Fang, Shu-Cherng; Luo, Jian; Medhin, Negash: A kernel-free double well potential support vector machine with applications (2021)
  20. Gómez, Andrés; Prokopyev, Oleg A.: A mixed-integer fractional optimization approach to best subset selection (2021)
