ASKIT
ASKIT: an efficient, parallel library for high-dimensional kernel summations. Kernel-based methods are a powerful tool in a variety of machine learning and computational statistics applications. A key bottleneck in these methods is the computations involving the kernel matrix, which scale quadratically with the problem size. Previously, we introduced ASKIT (Approximate Skeletonization Kernel-Independent Treecode), an efficient, scalable, kernel-independent method for approximately evaluating kernel matrix-vector products. ASKIT is based on a novel, randomized method for efficiently factoring off-diagonal blocks of the kernel matrix using approximate nearest-neighbor information. In this context, ASKIT can be viewed as an algebraic fast multipole method for arbitrary dimensions. In this paper, we introduce our open-source implementation of ASKIT. Features of our ASKIT library include linear dependence on the input dimension of the data, the ability to approximate kernel functions with no prior information on the kernel, and scalability to tens of thousands of compute cores and to data with billions of points or hundreds of dimensions. We also introduce some new extensions and improvements of ASKIT that are included in our library: a method for adaptively selecting approximation ranks and a correct partitioning of the nearest-neighbor information, both of which improve the performance of ASKIT over our previous implementation. We describe the ASKIT algorithm in detail, and we collect and summarize our previous theoretical complexity and error bounds in one place. We present a brief selection of experimental results illustrating the accuracy and scalability of ASKIT, and we provide details and guidance for users of the library.
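To make the abstract's two key ingredients concrete, the sketch below illustrates (i) the exact O(N^2) kernel matrix-vector product that ASKIT approximates and (ii) what a column skeletonization of an off-diagonal kernel block computes. This is a minimal illustration, not ASKIT's implementation: the Gaussian kernel, the pivoted-QR factorization, and all function names are assumptions chosen for brevity, whereas ASKIT builds this kind of factorization from rows sampled with nearest-neighbor information, without forming the full block.

# Illustrative sketch only (not ASKIT's code). Assumes a Gaussian kernel
# and uses pivoted QR to build an interpolative decomposition of one
# off-diagonal block, so the quantities in the abstract have concrete form.
import numpy as np
from scipy.linalg import qr

def gauss_kernel(X, Y, h=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h ** 2))

def exact_summation(X, w, h=1.0):
    # The O(N^2) kernel matrix-vector product u = K w that ASKIT accelerates.
    return gauss_kernel(X, X, h) @ w

def skeletonize(K_block, rank):
    # Column interpolative decomposition: choose `rank` skeleton columns J
    # and an interpolation matrix P with K_block ~= K_block[:, J] @ P.
    _, R, piv = qr(K_block, mode='economic', pivoting=True)
    P = np.linalg.solve(R[:rank, :rank], R[:rank, :])
    P_unpiv = np.empty_like(P)
    P_unpiv[:, piv] = P          # undo the column pivoting
    return piv[:rank], P_unpiv

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))   # 200 points in 3 dimensions
w = rng.standard_normal(200)
u = exact_summation(X, w)
print(f"||u||_2 for the exact O(N^2) summation: {np.linalg.norm(u):.3f}")

# Compress the interaction of the first 100 points with the other 100.
K_off = gauss_kernel(X[:100], X[100:])
J, P = skeletonize(K_off, rank=10)
err = np.linalg.norm(K_off - K_off[:, J] @ P) / np.linalg.norm(K_off)
print(f"relative compression error of the off-diagonal block: {err:.2e}")

In ASKIT itself, skeletonizations of this kind are applied recursively to the off-diagonal blocks induced by a spatial tree, which is what gives the method its treecode structure and the scalability reported in the abstract.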
References in zbMATH (referenced in 11 articles, 1 standard article)
- Chen, Chao; Reiz, Severin; Yu, Chenhan D.; Bungartz, Hans-Joachim; Biros, George: Fast approximation of the Gauss-Newton Hessian matrix for the multilayer perceptron (2021)
- Robinson, Gregor; Grooms, Ian: A fast tunable blurring algorithm for scattered data (2020)
- Wang, Lei; Krasny, Robert; Tlupova, Svetlana: A kernel-independent treecode based on barycentric Lagrange interpolation (2020)
- Xing, Xin; Chow, Edmond: Interpolative decomposition via proxy points for kernel matrices (2020)
- Chen, Duan; Cai, Wei: An $O(N \log N)$ hierarchical random compression method for kernel matrices by sampling partial matrix entries (2019)
- Zaspel, Peter: Algorithmic patterns for $\mathcal{H}$-matrices on many-core processors (2019)
- Chen, Jie; Avron, Haim; Sindhwani, Vikas: Hierarchically compositional kernels for scalable nonparametric learning (2017)
- March, William B.; Biros, George: Far-field compression for fast kernel summation methods in high dimensions (2017)
- March, William B.; Xiao, Bo; Yu, Chenhan D.; Biros, George: ASKIT: an efficient, parallel library for high-dimensional kernel summations (2016)
- Xiao, Bo; Biros, George: Parallel algorithms for nearest neighbor search problems in high dimensions (2016)
- March, William B.; Xiao, Bo; Biros, George: ASKIT: approximate skeletonization kernel-independent treecode in high dimensions (2015)