TensorToolbox
Efficient MATLAB computations with sparse and factored tensors.

The term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation.

First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms.

Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
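The two storage ideas above can be illustrated with a minimal sketch. This is written in Python for brevity and is not the Tensor Toolbox API; all names (`coo_get`, `kruskal_entry`, the factor matrices `A`, `B`, `C`) are illustrative assumptions. A sparse tensor in coordinate format stores only its nonzero (index, value) pairs, and a Kruskal tensor stores one factor matrix per mode, assembling each entry as a sum over rank-1 terms.

```python
# Illustrative sketch only -- not the Tensor Toolbox API.

# Sparse tensor in coordinate (COO) format: store only nonzero entries.
shape = (3, 4, 2)                          # a 3-way tensor with 24 entries
coo = {(0, 1, 0): 2.0, (2, 3, 1): -1.5}    # only 2 nonzeros actually stored

def coo_get(coo, idx):
    """Return the stored value, or zero for any index not present."""
    return coo.get(idx, 0.0)

# Kruskal tensor: X[i,j,k] = sum_r A[i][r] * B[j][r] * C[k][r],
# i.e. a sum of R rank-1 outer products, held as one factor matrix per mode.
A = [[1.0, 0.5], [0.0, 1.0], [2.0, 0.0]]              # 3 x R, with R = 2
B = [[1.0, 1.0], [0.0, 2.0], [1.0, 0.0], [0.0, 1.0]]  # 4 x R
C = [[1.0, 0.0], [0.0, 1.0]]                          # 2 x R

def kruskal_entry(A, B, C, i, j, k):
    """Assemble one entry of the full tensor from the factor matrices."""
    R = len(A[0])
    return sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))

# Storage for the factors is (3 + 4 + 2) * R = 18 numbers versus 24 for the
# dense tensor; the gap widens rapidly as the dimensions grow.
dense = [[[kruskal_entry(A, B, C, i, j, k)
           for k in range(2)] for j in range(4)] for i in range(3)]
```

The point of both schemes, as the abstract notes, is that elementary operations can work directly on the stored components (nonzero list or factor matrices) without ever forming the dense array.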
References in zbMATH (referenced in 53 articles, 1 standard article)
Showing results 1 to 20 of 53, sorted by year.
- Brezinski, Claude; Redivo-Zaglia, Michela: The simplified topological $\varepsilon$-algorithms: software and applications (2017)
- Chen, Bilian; He, Simai; Li, Zhening; Zhang, Shuzhong: On new classes of nonnegative symmetric tensors (2017)
- Hackbusch, Wolfgang; Uschmajew, André: On the interconnection between the higher-order singular values of real tensors (2017)
- Zhao, Na; Yang, Qingzhi; Liu, Yajun: Computing the generalized eigenvalues of weakly symmetric tensors (2017)
- Bigoni, Daniele; Engsig-Karup, Allan P.; Marzouk, Youssef M.: Spectral tensor-train decomposition (2016)
- Chen, Zhongming; Qi, Liqun: A semismooth Newton method for tensor eigenvalue complementarity problem (2016)
- De Sterck, Hans; Howse, Alexander: Nonlinearly preconditioned optimization on Grassmann manifolds for computing approximate Tucker tensor decompositions (2016)
- Fan, H.-Y.; Zhang, L.; Chu, E.K.-w.; Wei, Y.: Q-less QR decomposition in inner product spaces (2016)
- Yang, Yuning; Feng, Yunlong; Huang, Xiaolin; Suykens, Johan A.K.: Rank-1 tensor properties with applications to a class of tensor optimization problems (2016)
- Yu, Gaohang; Yu, Zefeng; Xu, Yi; Song, Yisheng; Zhou, Yi: An adaptive gradient method for computing generalized tensor eigenpairs (2016)
- Batselier, Kim; Liu, Haotian; Wong, Ngai: A constructive algorithm for decomposing a tensor into a finite sum of orthonormal rank-1 terms (2015)
- Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory: Randomized interpolative decomposition of separated representations (2015)
- Chen, Bilian; Li, Zhening; Zhang, Shuzhong: On optimal low rank Tucker approximation for tensors: the case for an adjustable core size (2015)
- Da Silva, Curt; Herrmann, Felix J.: Optimization on the hierarchical Tucker manifold - applications to tensor completion (2015)
- De Sterck, Hans; Winlaw, Manda: A nonlinearly preconditioned conjugate gradient algorithm for rank-$R$ canonical tensor approximation (2015)
- Dreesen, Philippe; Ishteva, Mariya; Schoukens, Johan: Decoupling multivariate polynomials using first-order information and tensor decompositions (2015)
- Huang, Furong; Niranjan, U.N.; Hakeem, Mohammad Umar; Anandkumar, Animashree: Online tensor methods for learning latent variable models (2015)
- Kolda, Tamara G.: Numerical optimization for symmetric tensor decomposition (2015)
- Xu, Yangyang: Alternating proximal gradient method for sparse nonnegative Tucker decomposition (2015)
- Yu, Zhuliang; Feng, Bao; Gu, Zhenghui; Xue, Zhenxia; Li, Yuanqing; Wang, Cong: Voxel selection and neural decoding of fMRI data based on robust sparse programming with multi-dimensional derivative constraints (2015)