Structure in matrix/tensor decompositions
Lieven De Lathauwer
In applications that involve matrix/tensor decompositions, it is often a good idea to exploit available “structure”. We discuss various types of structure and recent advances in its exploitation.
First, the entries of factor matrices are often not completely arbitrary. For instance, factors may be expected to be reasonably smooth, they may contain a limited number of peaks, etc. We will elaborate on the possibilities that emerge when factors admit a good approximation by polynomials, rational functions, sums-of-exponentials, … Such models have potentially many advantages. They may allow a (significant) reduction of computational and storage requirements, so that (much) larger data sets can be handled. They may increase the signal-to-noise ratio, since noise is fitted less well by the model. Some models come with factorizations that are essentially unique, even in cases where only matrix data are available.
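As a toy illustration of the idea (a NumPy sketch, not code from the talk or from Tensorlab): a smooth factor vector of length 1000 can be replaced by a handful of polynomial coefficients, which both compresses the representation and smooths out noise that the model cannot fit.

```python
import numpy as np

# Hypothetical smooth "factor": a damped oscillation sampled at 1000 points.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
factor = np.exp(-3 * t) * np.sin(2 * np.pi * t)
noisy = factor + 0.01 * rng.standard_normal(t.size)   # noisy observation

# Approximate the noisy factor by a degree-7 polynomial:
# 8 coefficients instead of 1000 entries.
coeffs = np.polyfit(t, noisy, deg=7)
approx = np.polyval(coeffs, t)

# The polynomial model fits the underlying smooth factor better than
# the raw noisy data does, since the noise is fitted less well.
rel_err_noisy = np.linalg.norm(noisy - factor) / np.linalg.norm(factor)
rel_err_fit = np.linalg.norm(approx - factor) / np.linalg.norm(factor)
print(coeffs.size, rel_err_noisy, rel_err_fit)
```

The same parameterization idea carries over to rational and sum-of-exponential models, with the model coefficients taking the place of the raw factor entries.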
Second, we focus on low multilinear rank structure, i.e., we consider data tensors that admit a small Tucker core. Orthogonal Tucker compression is widely used as a preprocessing step in CANDECOMP/PARAFAC (CP) analysis, significantly speeding up the computations. However, for constrained CP analysis its use has so far been rather limited. For instance, in a CP analysis that involves nonnegative factors, an orthogonal compression would destroy the nonnegativity. We will discuss a way around this.
Third, we focus on the analysis of multi-set data, in which coupling induces structural constraints besides the classical constraints on individual factors.
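One elementary instance of such coupling (a generic sketch under assumed dimensions, not the talk's specific formulation) is two data matrices that share a common factor, X = A Bᵀ and Y = A Cᵀ; analyzing the concatenation [X  Y] jointly recovers the shared column space:

```python
import numpy as np

# Two coupled data sets sharing the factor A (e.g. common sources
# measured in two experiments).
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 3))   # shared factor
B = rng.standard_normal((40, 3))
C = rng.standard_normal((30, 3))
X, Y = A @ B.T, A @ C.T

# Joint analysis: factor the concatenation [X | Y] = A [B^T | C^T].
U, s, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
A_est = U[:, :3]                   # orthonormal basis for span(A)

# The shared column space is recovered exactly (up to rotation).
P = A_est @ A_est.T                # projector onto the estimated span
print(np.allclose(P @ A, A))
```

The coupling constraint (a factor shared across data sets) is what ties the individual decompositions together, on top of whatever constraints hold for each factor separately.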
Fourth, we discuss (surprising) implications for the analysis of partially sampled data / data with missing entries.
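As a basic illustration of fitting a decomposition to incomplete data (a generic NumPy sketch, not the talk's method): a rank-1 model can be fitted to a partially observed matrix by alternating least squares restricted to the observed positions, after which the model also predicts the missing entries.

```python
import numpy as np

# Rank-1 data matrix with roughly 40% of the entries missing.
rng = np.random.default_rng(2)
a_true = rng.standard_normal(30)
b_true = rng.standard_normal(20)
M = np.outer(a_true, b_true)
mask = rng.random(M.shape) < 0.6          # True where an entry is observed

# Alternating least squares over the observed entries only.
a, b = rng.standard_normal(30), rng.standard_normal(20)
for _ in range(200):
    for i in range(30):
        w = mask[i]
        a[i] = (M[i, w] @ b[w]) / (b[w] @ b[w])
    for j in range(20):
        w = mask[:, j]
        b[j] = (M[w, j] @ a[w]) / (a[w] @ a[w])

# The fitted model reconstructs the full matrix, missing entries included.
rel_err = np.linalg.norm(np.outer(a, b) - M) / np.linalg.norm(M)
print(rel_err)
```

With enough observed entries the decomposition is determined by the sampled data alone, which is what makes decomposition-based completion possible in the first place.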
In connection with chemometrics, aspects that we pay special attention to include: smoothness and peaks in spectra, nonnegativity of spectra and concentrations, large-scale problems, new grounds for factor uniqueness, data that are incomplete (e.g. because of scattering), and data fusion.
Mini-CV:
Lieven De Lathauwer received the Master’s degree in electromechanical engineering and the Ph.D. degree in applied sciences from KU Leuven, Belgium, in 1992 and 1997, respectively. From 2000 to 2007 he was a Research Associate with the French CNRS. He is currently a Professor at KU Leuven, affiliated both with the Group Science, Engineering and Technology of Kulak and with the STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics of the Electrical Engineering Department (ESAT). His research concerns the development of tensor tools for mathematical engineering. It centers on the following axes: (i) algebraic foundations, (ii) numerical algorithms, (iii) generic methods for signal processing, data analysis and system modelling, and (iv) specific applications. He was general chair of the Workshops on Tensor Decompositions and Applications (TDA 2005, 2010, 2016). In 2015 he became a Fellow of the IEEE for contributions to signal processing using tensor decompositions. In 2017 he became a Fellow of SIAM for fundamental contributions to the theory, computation, and application of tensor decompositions. His algorithms have been made available in Tensorlab (www.tensorlab.net), developed with N. Vervliet, O. Debals, L. Sorber and M. Van Barel.