EasyMKL: A scalable multiple kernel learning algorithm. The goal of Multiple Kernel Learning (MKL) is to combine kernels derived from multiple sources in a data-driven way, with the aim of enhancing the accuracy of a target kernel machine. State-of-the-art MKL methods have the drawback that the time required to solve the associated optimization problem grows (typically more than linearly) with the number of kernels to combine. Moreover, it has been empirically observed that even sophisticated methods often do not significantly outperform the simple average of kernels. In this paper, we propose a time- and space-efficient MKL algorithm that can easily cope with hundreds of thousands of kernels or more. The proposed method has been compared with other baselines (random, average, etc.) and three state-of-the-art MKL methods, showing that our approach is often superior. We show empirically that the advantage of the proposed method is even clearer when noise features are added. Finally, we analyze how our algorithm's performance changes with respect to the number of training examples and the number of kernels combined.
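The "simple average of kernels" baseline mentioned in the abstract can be sketched in a few lines. The snippet below is an illustrative sketch, not the EasyMKL algorithm itself: it combines a list of precomputed Gram matrices by averaging, with per-kernel trace normalization (a common MKL preprocessing step, assumed here) so that no single source dominates the combination.

```python
import numpy as np

def average_kernel(kernels):
    """Combine precomputed kernel (Gram) matrices by simple averaging --
    the baseline MKL approach the abstract compares against."""
    K = np.zeros_like(kernels[0], dtype=float)
    for Km in kernels:
        # Trace-normalize each kernel so all sources contribute on the
        # same scale (an assumption; other normalizations are possible).
        K += Km / np.trace(Km)
    return K / len(kernels)

# Toy example: two kernels on three points.
K1 = np.array([[2.0, 1.0, 0.0],
               [1.0, 2.0, 1.0],
               [0.0, 1.0, 2.0]])
K2 = np.eye(3)
K = average_kernel([K1, K2])
```

The resulting matrix `K` would then be fed to any kernel machine that accepts a precomputed kernel. EasyMKL replaces the uniform weights with weights learned by a margin-based optimization, which is what makes it scale to very many kernels.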
References in zbMATH (referenced in 4 articles)
- Wang, Peiyan; Cai, Dongfeng: Multiple kernel learning by empirical target kernel (2020)
- Yaohao, Peng; Albuquerque, Pedro Henrique Melo: Non-linear interactions and exchange rate prediction: empirical evidence using support vector regression (2019)
- Donini, Michele; Aiolli, Fabio: Learning deep kernels in the space of dot product polynomials (2017)
- Li, Yujian; Zhang, Ting: Deep neural mapping support vector machines (2017)