ENDER

ENDER - A statistical framework for boosting decision rules. Induction of decision rules plays an important role in machine learning. The main advantage of decision rules is their simplicity and human-interpretable form. Moreover, they are capable of modeling complex interactions between attributes. In this paper, we thoroughly analyze a learning algorithm, called ENDER, which constructs an ensemble of decision rules. This algorithm is tailored for regression and binary classification problems. It uses the boosting approach for learning, which can be treated as a generalization of sequential covering. Each new rule is fitted by focusing on the examples that were hardest to classify correctly by the rules already present in the ensemble. We consider different loss functions and minimization techniques often encountered in the boosting framework. The minimization techniques are used to derive impurity measures which control the construction of single decision rules. Properties of four different impurity measures are analyzed with respect to the trade-off between misclassification (discrimination) and coverage (completeness) of the rule. Moreover, we consider regularization based on shrinkage and sampling. Finally, we compare the ENDER algorithm with other well-known decision rule learners such as SLIPPER, LRI, and RuleFit.
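The boosting scheme described in the abstract, where each new rule is fitted to the examples the current ensemble handles worst and shrinkage regularizes the updates, can be sketched as follows. This is a minimal illustration for regression with squared-error loss, not the authors' implementation: the function names, the single-attribute threshold form of the rules, and the exhaustive greedy search are all assumptions made for clarity.

```python
# Minimal sketch of boosting single decision rules (ENDER-style idea):
# each rule is a condition "x[j] <= t" (or "x[j] > t") with a constant
# response; rules are fitted sequentially to the current residuals, and
# shrinkage damps each rule's contribution.

def fit_rule(X, residuals):
    """Greedily pick (feature, threshold, direction) and a constant
    response minimizing squared error on the residuals."""
    best = None
    n_features = len(X[0])
    for j in range(n_features):
        for t in sorted({x[j] for x in X}):
            # default arguments freeze j/t in each candidate condition
            for op in (lambda v, t=t: v <= t, lambda v, t=t: v > t):
                covered = [r for x, r in zip(X, residuals) if op(x[j])]
                if not covered:
                    continue
                value = sum(covered) / len(covered)  # optimal constant
                err = sum((r - (value if op(x[j]) else 0.0)) ** 2
                          for x, r in zip(X, residuals))
                if best is None or err < best[0]:
                    best = (err, j, op, value)
    _, j, op, value = best
    return lambda x: value if op(x[j]) else 0.0

def boost_rules(X, y, n_rules=10, shrinkage=0.5):
    """Boosting loop: each rule focuses on the examples the ensemble
    currently predicts worst (largest residuals under squared loss)."""
    ensemble = []
    preds = [0.0] * len(y)
    for _ in range(n_rules):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        rule = fit_rule(X, residuals)
        ensemble.append(rule)
        preds = [p + shrinkage * rule(x) for p, x in zip(preds, X)]
    return lambda x: sum(shrinkage * r(x) for r in ensemble)
```

With squared-error loss the "hardest" examples are simply those with the largest residuals; other loss functions discussed in the paper would change how the pseudo-targets and the rule's response are computed, but not the overall sequential structure.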


References in zbMATH (referenced in 15 articles)

Showing results 1 to 15 of 15.
Sorted by year (citations)

  1. Bénard, Clément; Biau, Gérard; Da Veiga, Sébastien; Scornet, Erwan: SIRUS: stable and interpretable RUle set for classification (2021)
  2. Lejeune, Miguel; Lozin, Vadim; Lozina, Irina; Ragab, Ahmed; Yacout, Soumaya: Recent advances in the theory and practice of logical analysis of data (2019)
  3. Možina, Martin; Demšar, Janez; Bratko, Ivan; Žabkar, Jure: Extreme value correction: a method for correcting optimistic estimations in rule learning (2019)
  4. Alsolami, Fawaz; Amin, Talha; Chikalov, Igor; Moshkov, Mikhail: Bi-criteria optimization problems for decision rules (2018)
  5. Sikora, Marek; Wróbel, Łukasz; Gudyś, Adam: GuideR: a guided separate-and-conquer rule learning in classification, regression, and survival settings (2018) arXiv
  6. Nalenz, Malte; Villani, Mattias: Tree ensembles with rule structured horseshoe regularization (2018)
  7. Fokkema, Marjolein: pre: an R package for fitting prediction rule ensembles (2017) arXiv
  8. Caserta, Marco; Reiners, Torsten: A pool-based pattern generation algorithm for logical analysis of data with automatic fine-tuning (2016)
  9. Jawanpuria, Pratik; Nath, Jagarlapudi Saketha; Ramakrishnan, Ganesh: Generalized hierarchical kernel learning (2015)
  10. Amin, Talha; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata: Dynamic programming approach to optimization of approximate decision rules (2013)
  11. Amin, Talha; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata: Classifiers based on optimal decision rules (2013)
  12. Sikora, Marek; Gudyś, Adam: CHIRA - convex hull based iterative algorithm of rules aggregation (2013)
  13. Amin, Talha; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata: Dynamic programming approach for partial decision rule optimization (2012)
  14. Sikora, Marek; Sikora, Beata: Improving prediction models applied in systems monitoring natural hazards and machinery (2012)
  15. Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman: ENDER: a statistical framework for boosting decision rules (2010) ioport