AnnexML: Approximate Nearest Neighbor Search for Extreme Multi-label Classification. Extreme multi-label classification methods are widely used in Web-scale tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called "AnnexML". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. Prediction is performed efficiently by an approximate nearest neighbor search method that explores the learned k-nearest neighbor graph in the embedding space. We evaluated our method on several large-scale real-world data sets and compared it with recent state-of-the-art methods. Experimental results show that AnnexML significantly improves prediction accuracy, especially on data sets with a larger label space. In addition, AnnexML improves the trade-off between prediction time and accuracy: at the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, a state-of-the-art embedding-based method.
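The pipeline described in the abstract, building a k-nearest neighbor graph over vectors and then ranking labels by nearest-neighbor search at query time, can be sketched with a brute-force stand-in. This is an illustrative toy only: it uses exact Euclidean distances and plain label voting, not AnnexML's learned embedding or its approximate graph search, and all function names are ours.

```python
import math

def knn_graph(points, k):
    """Build a k-nearest-neighbor graph (Euclidean distance) over `points`.

    Returns, for each point, the indices of its k nearest neighbors.
    AnnexML would instead learn an embedding that reproduces this graph.
    """
    graph = []
    for i, p in enumerate(points):
        # Distance to every other point; exclude the point itself.
        dists = [(math.dist(p, q), j) for j, q in enumerate(points) if j != i]
        graph.append([j for _, j in sorted(dists)[:k]])
    return graph

def predict_labels(query, points, labels, k):
    """Brute-force k-NN prediction: rank labels by how often they occur
    among the k nearest training points (a stand-in for the approximate
    nearest neighbor search used at prediction time)."""
    dists = sorted((math.dist(query, p), i) for i, p in enumerate(points))
    votes = {}
    for _, i in dists[:k]:
        for lbl in labels[i]:
            votes[lbl] = votes.get(lbl, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

# Toy usage: four points, two labels.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (5.0, 5.0)]
labels = [["a"], ["a"], ["b"], ["b"]]
graph = knn_graph(points, 2)
prediction = predict_labels((0.0, 0.2), points, labels, 2)
```

The real method replaces the exact search above with a graph-based approximate search, which is where the reported speedup over exhaustive embedding methods comes from.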
References in zbMATH (referenced in 5 articles)
- Davoodi, Arash Gholami; Chang, Sean; Yoo, Hyun Gon; Baweja, Anubhav; Mongia, Mihir; Mohimani, Hosein: ForestDSH: a universal hash design for discrete probability distributions (2021)
- Panos, Aristeidis; Dellaportas, Petros; Titsias, Michalis K.: Large scale multi-label learning using Gaussian processes (2021)
- Khandagale, Sujay; Xiao, Han; Babbar, Rohit: Bonsai: diverse and shallow trees for extreme multi-label classification (2020)
- Babbar, Rohit; Schölkopf, Bernhard: Data scarcity, robustness and extreme multi-label classification (2019)
- Pronobis, Wiktor; Panknin, Danny; Kirschnick, Johannes; Srinivasan, Vignesh; Samek, Wojciech; Markl, Volker; Kaul, Manohar; Müller, Klaus-Robert; Nakajima, Shinichi: Sharing hash codes for multiple purposes (2018)