Efficient MATLAB computations with sparse and factored tensors

The term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
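
As a brief illustration of the structures described in the abstract, the following MATLAB sketch builds a sparse tensor in coordinate format, a Kruskal tensor, and a Tucker tensor. This is a minimal sketch rather than code from the paper: it assumes the Tensor Toolbox is installed on the MATLAB path, uses the constructor names documented by the toolbox (sptensor, ktensor, ttensor, tensor), and the sizes and values are invented for illustration.

    % Sparse tensor in coordinate format: store only subscripts and values.
    subs = [1 1 1; 2 3 1; 4 2 3];      % one row of subscripts per nonzero
    vals = [0.5; 1.0; 2.5];            % corresponding nonzero values
    X = sptensor(subs, vals, [4 3 3]); % 4 x 3 x 3 tensor with 3 nonzeros

    % Kruskal tensor: weighted sum of R rank-1 tensors, stored via its factors.
    R = 2;
    A = rand(4, R); B = rand(3, R); C = rand(3, R);
    lambda = [1; 2];
    K = ktensor(lambda, {A, B, C});    % the dense 4 x 3 x 3 array is never formed

    % Tucker tensor: a core multiplied by a matrix along each mode.
    G = tensor(rand(2, 2, 2));         % dense core
    T = ttensor(G, {rand(4, 2), rand(3, 2), rand(3, 2)});

    % Several operations work directly on the components, e.g.
    norm(K)          % norm of a Kruskal tensor computed from its factors
    innerprod(X, K)  % inner product of a sparse and a Kruskal tensor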

References in zbMATH (referenced in 45 articles, 1 standard article)

Showing results 1 to 20 of 45, sorted by year (citations).


  1. Chen, Zhongming; Qi, Liqun: A semismooth Newton method for tensor eigenvalue complementarity problem (2016)
  2. De Sterck, Hans; Howse, Alexander: Nonlinearly preconditioned optimization on Grassmann manifolds for computing approximate Tucker tensor decompositions (2016)
  3. Fan, H.-Y.; Zhang, L.; Chu, E.K.-w.; Wei, Y.: Q-less QR decomposition in inner product spaces (2016)
  4. Yang, Yuning; Feng, Yunlong; Huang, Xiaolin; Suykens, Johan A.K.: Rank-1 tensor properties with applications to a class of tensor optimization problems (2016)
  5. Batselier, Kim; Liu, Haotian; Wong, Ngai: A constructive algorithm for decomposing a tensor into a finite sum of orthonormal rank-1 terms (2015)
  6. Chen, Bilian; Li, Zhening; Zhang, Shuzhong: On optimal low rank Tucker approximation for tensors: the case for an adjustable core size (2015)
  7. Da Silva, Curt; Herrmann, Felix J.: Optimization on the hierarchical Tucker manifold - applications to tensor completion (2015)
  8. De Sterck, Hans; Winlaw, Manda: A nonlinearly preconditioned conjugate gradient algorithm for rank-$R$ canonical tensor approximation (2015)
  9. Dreesen, Philippe; Ishteva, Mariya; Schoukens, Johan: Decoupling multivariate polynomials using first-order information and tensor decompositions (2015)
  10. Huang, Furong; Niranjan, U.N.; Hakeem, Mohammad Umar; Anandkumar, Animashree: Online tensor methods for learning latent variable models (2015)
  11. Kolda, Tamara G.: Numerical optimization for symmetric tensor decomposition (2015)
  12. Xu, Yangyang: Alternating proximal gradient method for sparse nonnegative Tucker decomposition (2015)
  13. Zhang, Min; Yang, Lei; Huang, Zheng-Hai: Minimum $n$-rank approximation via iterative hard thresholding (2015)
  14. Kressner, Daniel; Steinlechner, Michael; Vandereycken, Bart: Low-rank tensor completion by Riemannian optimization (2014)
  15. Özay, Evrim Korkmaz; Demiralp, Metin: Reductive enhanced multivariance product representation for multi-way arrays (2014)
  16. Bebendorf, M.; Kühnemund, A.; Rjasanow, S.: An equi-directional generalization of adaptive cross approximation for higher-order tensors (2013)
  17. Bunker, Douglas; Han, Lixing; Zhang, Shuhua: A proximal ANLS algorithm for nonnegative tensor factorization with a periodic enhanced line search (2013)
  18. Cai, Xingju; Chen, Yannan; Han, Deren: Nonnegative tensor factorizations using an alternating direction method (2013)
  19. Chen, Zhen; Lu, Linzhang: A gradient based iterative solutions for Sylvester tensor equations (2013)
  20. Chen, Zhen; Lu, Linzhang: A tensor singular values and its symmetric embedding eigenvalues (2013)
