Central Library of Sharif University of Technology
 
Nonnegative matrix and tensor factorizations : applications to exploratory multi-way data analysis and blind source separation

Bibliographic Information

Title : Nonnegative matrix and tensor factorizations : applications to exploratory multi-way data analysis and blind source separation
Author : Andrzej Cichocki, Rafal Zdunek, Anh Huy Phan, Shun-ichi Amari
Publisher : John Wiley & Sons
Pub. Year : 2009
Subjects : Computer algorithms. Data mining. Machine learning. Data structures (Computer science)
Call Number : QA 76 .9 .A43 .N65 2009

Table of Contents

  • NONNEGATIVE MATRIX AND TENSOR FACTORIZATIONS: APPLICATIONS TO EXPLORATORY MULTI-WAY DATA ANALYSIS AND BLIND SOURCE SEPARATION (5)
    • Contents (7)
    • Preface (13)
    • Acknowledgments (17)
    • Glossary of Symbols and Abbreviations (19)
    • 1 Introduction - Problem Statements and Models (25)
      • 1.1 Blind Source Separation and Linear Generalized Component Analysis (26)
      • 1.2 Matrix Factorization Models with Nonnegativity and Sparsity Constraints (31)
        • 1.2.1 Why Nonnegativity and Sparsity Constraints? (31)
        • 1.2.2 Basic NMF Model (32)
        • 1.2.3 Symmetric NMF (33)
        • 1.2.4 Semi-Orthogonal NMF (34)
        • 1.2.5 Semi-NMF and Nonnegative Factorization of Arbitrary Matrix (34)
        • 1.2.6 Three-factor NMF (34)
        • 1.2.7 NMF with Offset (Affine NMF) (37)
        • 1.2.8 Multi-layer NMF (38)
        • 1.2.9 Simultaneous NMF (38)
        • 1.2.10 Projective and Convex NMF (39)
        • 1.2.11 Kernel NMF (40)
        • 1.2.12 Convolutive NMF (40)
        • 1.2.13 Overlapping NMF (41)
      • 1.3 Basic Approaches to Estimate Parameters of Standard NMF (42)
        • 1.3.1 Large-scale NMF (45)
        • 1.3.2 Non-uniqueness of NMF and Techniques to Alleviate the Ambiguity Problem (46)
        • 1.3.3 Initialization of NMF (48)
        • 1.3.4 Stopping Criteria (49)
      • 1.4 Tensor Properties and Basis of Tensor Algebra (50)
        • 1.4.1 Tensors (Multi-way Arrays) – Preliminaries (50)
        • 1.4.2 Subarrays, Tubes and Slices (51)
        • 1.4.3 Unfolding – Matricization (52)
        • 1.4.4 Vectorization (55)
        • 1.4.5 Outer, Kronecker, Khatri-Rao and Hadamard Products (56)
        • 1.4.6 Mode-n Multiplication of Tensor by Matrix and Tensor by Vector, Contracted Tensor Product (58)
        • 1.4.7 Special Forms of Tensors (62)
      • 1.5 Tensor Decompositions and Factorizations (63)
        • 1.5.1 Why Multi-way Array Decompositions and Factorizations? (64)
        • 1.5.2 PARAFAC and Nonnegative Tensor Factorization (66)
        • 1.5.3 NTF1 Model (71)
        • 1.5.4 NTF2 Model (73)
        • 1.5.5 Individual Differences in Scaling (INDSCAL) and Implicit Slice Canonical Decomposition Model (IMCAND) (76)
        • 1.5.6 Shifted PARAFAC and Convolutive NTF (77)
        • 1.5.7 Nonnegative Tucker Decompositions (79)
        • 1.5.8 Block Component Decompositions (83)
        • 1.5.9 Block-Oriented Decompositions (86)
        • 1.5.10 PARATUCK2 and DEDICOM Models (87)
        • 1.5.11 Hierarchical Tensor Decomposition (89)
      • 1.6 Discussion and Conclusions (90)
      • Appendix 1.A: Uniqueness Conditions for Three-way Tensor Factorizations (90)
      • Appendix 1.B: Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) with Sparsity and/or Nonnegativity Constraints (91)
        • 1.B.1 Standard SVD and PCA (92)
        • 1.B.2 Sparse PCA (94)
        • 1.B.3 Nonnegative PCA (95)
      • Appendix 1.C: Determining a True Number of Components (95)
      • Appendix 1.D: Nonnegative Rank Factorization Using Wedderburn Theorem – Estimation of the Number of Components (98)
      • References (99)
    • 2 Similarity Measures and Generalized Divergences (105)
      • 2.1 Error-induced Distance and Robust Regression Techniques (106)
      • 2.2 Robust Estimation (108)
      • 2.3 Csiszár Divergences (114)
      • 2.4 Bregman Divergence (120)
        • 2.4.1 Bregman Matrix Divergences (127)
      • 2.5 Alpha-Divergences (128)
        • 2.5.1 Asymmetric Alpha-Divergences (128)
        • 2.5.2 Symmetric Alpha-Divergences (134)
      • 2.6 Beta-Divergences (136)
      • 2.7 Gamma-Divergences (140)
      • 2.8 Divergences Derived from Tsallis and Rényi Entropy (142)
        • 2.8.1 Concluding Remarks (143)
      • Appendix 2.A: Information Geometry, Canonical Divergence, and Projection (144)
        • 2.A.1 Space of Probability Distributions (144)
        • 2.A.2 Geometry of Space of Positive Measures (147)
      • Appendix 2.B: Probability Density Functions for Various Distributions (149)
      • References (151)
    • 3 Multiplicative Iterative Algorithms for NMF with Sparsity Constraints (155)
      • 3.1 Extended ISRA and EMML Algorithms: Regularization and Sparsity (156)
        • 3.1.1 Multiplicative NMF Algorithms Based on the Squared Euclidean Distance (156)
        • 3.1.2 Multiplicative NMF Algorithms Based on Kullback-Leibler I-Divergence (163)
      • 3.2 Multiplicative Algorithms Based on Alpha-Divergence (167)
        • 3.2.1 Multiplicative Alpha NMF Algorithm (167)
        • 3.2.2 Generalized Multiplicative Alpha NMF Algorithms (171)
      • 3.3 Alternating SMART: Simultaneous Multiplicative Algebraic Reconstruction Technique (172)
        • 3.3.1 Alpha SMART Algorithm (172)
        • 3.3.2 Generalized SMART Algorithms (174)
      • 3.4 Multiplicative NMF Algorithms Based on Beta-Divergence (175)
        • 3.4.1 Multiplicative Beta NMF Algorithm (175)
        • 3.4.2 Multiplicative Algorithm Based on the Itakura-Saito Distance (180)
        • 3.4.3 Generalized Multiplicative Beta Algorithm for NMF (180)
      • 3.5 Algorithms for Semi-orthogonal NMF and Orthogonal Three-Factor NMF (181)
      • 3.6 Multiplicative Algorithms for Affine NMF (183)
      • 3.7 Multiplicative Algorithms for Convolutive NMF (184)
        • 3.7.1 Multiplicative Algorithm for Convolutive NMF Based on Alpha-Divergence (186)
        • 3.7.2 Multiplicative Algorithm for Convolutive NMF Based on Beta-Divergence (186)
        • 3.7.3 Efficient Implementation of CNMF Algorithm (189)
      • 3.8 Simulation Examples for Standard NMF (190)
      • 3.9 Examples for Affine NMF (194)
      • 3.10 Music Analysis and Decomposition Using Convolutive NMF (200)
      • 3.11 Discussion and Conclusions (208)
      • Appendix 3.A: Fast Algorithms for Large-scale Data (211)
        • 3.A.1 Random Block-wise Processing Approach – Large-scale NMF (211)
        • 3.A.2 Multi-layer Procedure (212)
        • 3.A.3 Parallel Processing (212)
      • Appendix 3.B: Performance Evaluation (212)
        • 3.B.1 Signal-to-Interference Ratio (SIR) (212)
        • 3.B.2 Peak Signal-to-Noise-Ratio (PSNR) (214)
      • Appendix 3.C: Convergence Analysis of the Multiplicative Alpha NMF Algorithm (215)
      • Appendix 3.D: MATLAB Implementation of the Multiplicative NMF Algorithms (217)
        • 3.D.1 Alpha Algorithm (217)
        • 3.D.2 SMART Algorithm (219)
        • 3.D.3 ISRA Algorithm for NMF (221)
      • Appendix 3.E: Additional MATLAB Functions (222)
        • 3.E.1 Multi-layer NMF (222)
        • 3.E.2 MC Analysis with Distributed Computing Tool (223)
      • References (223)
    • 4 Alternating Least Squares and Related Algorithms for NMF and SCA Problems (227)
      • 4.1 Standard ALS Algorithm (227)
        • 4.1.1 Multiple Linear Regression – Vectorized Version of ALS Update Formulas (230)
        • 4.1.2 Weighted ALS (230)
      • 4.2 Methods for Improving Performance and Convergence Speed of ALS Algorithms (231)
        • 4.2.1 ALS Algorithm for Very Large-scale NMF (231)
        • 4.2.2 ALS Algorithm with Line-Search (232)
        • 4.2.3 Acceleration of ALS Algorithm via Simple Regularization (232)
      • 4.3 ALS Algorithm with Flexible and Generalized Regularization Terms (233)
        • 4.3.1 ALS with Tikhonov Type Regularization Terms (234)
        • 4.3.2 ALS Algorithms with Sparsity Control and Decorrelation (235)
      • 4.4 Combined Generalized Regularized ALS Algorithms (236)
      • 4.5 Wang-Hancewicz Modified ALS Algorithm (237)
      • 4.6 Implementation of Regularized ALS Algorithms for NMF (237)
      • 4.7 HALS Algorithm and its Extensions (238)
        • 4.7.1 Projected Gradient Local Hierarchical Alternating Least Squares (HALS) Algorithm (238)
        • 4.7.2 Extensions and Implementations of the HALS Algorithm (240)
        • 4.7.3 Fast HALS NMF Algorithm for Large-scale Problems (241)
        • 4.7.4 HALS NMF Algorithm with Sparsity, Smoothness and Uncorrelatedness Constraints (244)
        • 4.7.5 HALS Algorithm for Sparse Component Analysis and Flexible Component Analysis (246)
        • 4.7.6 Simplified HALS Algorithm for Distributed and Multi-task Compressed Sensing (251)
        • 4.7.7 Generalized HALS-CS Algorithm (255)
        • 4.7.8 Generalized HALS Algorithms Using Alpha-Divergence (257)
        • 4.7.9 Generalized HALS Algorithms Using Beta-Divergence (258)
      • 4.8 Simulation Results (260)
        • 4.8.1 Underdetermined Blind Source Separation Examples (260)
        • 4.8.2 NMF with Sparseness, Orthogonality and Smoothness Constraints (261)
        • 4.8.3 Simulations for Large-scale NMF (263)
        • 4.8.4 Illustrative Examples for Compressed Sensing (265)
      • 4.9 Discussion and Conclusions (273)
      • Appendix 4.A: MATLAB Source Code for ALS Algorithm (276)
      • Appendix 4.B: MATLAB Source Code for Regularized ALS Algorithms (277)
      • Appendix 4.C: MATLAB Source Code for Mixed ALS-HALS Algorithms (280)
      • Appendix 4.D: MATLAB Source Code for HALS CS Algorithm (283)
      • Appendix 4.E: Additional MATLAB Functions (285)
      • References (288)
    • 5 Projected Gradient Algorithms (291)
      • 5.1 Oblique Projected Landweber (OPL) Method (292)
      • 5.2 Lin’s Projected Gradient (LPG) Algorithm with Armijo Rule (294)
      • 5.3 Barzilai-Borwein Gradient Projection for Sparse Reconstruction (GPSR-BB) (295)
      • 5.4 Projected Sequential Subspace Optimization (PSESOP) (297)
      • 5.5 Interior Point Gradient (IPG) Algorithm (299)
      • 5.6 Interior Point Newton (IPN) Algorithm (300)
      • 5.7 Regularized Minimal Residual Norm Steepest Descent Algorithm (RMRNSD) (303)
      • 5.8 Sequential Coordinate-Wise Algorithm (SCWA) (305)
      • 5.9 Simulations (307)
      • 5.10 Discussions (313)
      • Appendix 5.A: Stopping Criteria (314)
      • Appendix 5.B: MATLAB Source Code for Lin’s PG Algorithm (316)
      • References (317)
    • 6 Quasi-Newton Algorithms for Nonnegative Matrix Factorization (319)
      • 6.1 Projected Quasi-Newton Optimization (320)
        • 6.1.1 Projected Quasi-Newton for Frobenius Norm (320)
        • 6.1.2 Projected Quasi-Newton for Alpha-Divergence (322)
        • 6.1.3 Projected Quasi-Newton for Beta-Divergence (327)
        • 6.1.4 Practical Implementation (329)
      • 6.2 Gradient Projection Conjugate Gradient (329)
      • 6.3 FNMA Algorithm (332)
      • 6.4 NMF with Quadratic Programming (334)
        • 6.4.1 Nonlinear Programming (335)
        • 6.4.2 Quadratic Programming (336)
        • 6.4.3 Trust-region Subproblem (338)
        • 6.4.4 Updates for A (340)
      • 6.5 Hybrid Updates (342)
      • 6.6 Numerical Results (343)
      • 6.7 Discussions (347)
      • Appendix 6.A: Gradient and Hessian of Cost Functions (348)
      • Appendix 6.B: MATLAB Source Codes (349)
      • References (357)
    • 7 Multi-Way Array (Tensor) Factorizations and Decompositions (361)
      • 7.1 Learning Rules for the Extended Three-way NTF1 Problem (361)
        • 7.1.1 Basic Approaches for the Extended NTF1 Model (362)
        • 7.1.2 ALS Algorithms for NTF1 (364)
        • 7.1.3 Multiplicative Alpha and Beta Algorithms for the NTF1 Model (365)
        • 7.1.4 Multi-layer NTF1 Strategy (367)
      • 7.2 Algorithms for Three-way Standard and Super Symmetric Nonnegative Tensor Factorization (368)
        • 7.2.1 Multiplicative NTF Algorithms Based on Alpha- and Beta-Divergences (369)
        • 7.2.2 Simple Alternative Approaches for NTF and SSNTF (374)
      • 7.3 Nonnegative Tensor Factorizations for Higher-Order Arrays (375)
        • 7.3.1 Alpha NTF Algorithm (377)
        • 7.3.2 Beta NTF Algorithm (379)
        • 7.3.3 Fast HALS NTF Algorithm Using Squared Euclidean Distance (379)
        • 7.3.4 Generalized HALS NTF Algorithms Using Alpha- and Beta-Divergences (382)
        • 7.3.5 Tensor Factorization with Additional Constraints (384)
      • 7.4 Algorithms for Nonnegative and Semi-Nonnegative Tucker Decompositions (385)
        • 7.4.1 Higher Order SVD (HOSVD) and Higher Order Orthogonal Iteration (HOOI) Algorithms (386)
        • 7.4.2 ALS Algorithm for Nonnegative Tucker Decomposition (389)
        • 7.4.3 HOSVD, HOOI and ALS Algorithms as Initialization Tools for Nonnegative Tensor Decomposition (390)
        • 7.4.4 Multiplicative Alpha Algorithms for Nonnegative Tucker Decomposition (390)
        • 7.4.5 Beta NTD Algorithm (394)
        • 7.4.6 Local ALS Algorithms for Nonnegative Tucker Decompositions (394)
        • 7.4.7 Semi-Nonnegative Tucker Decomposition (398)
      • 7.5 Nonnegative Block-Oriented Decomposition (399)
        • 7.5.1 Multiplicative Algorithms for NBOD (400)
      • 7.6 Multi-level Nonnegative Tensor Decomposition - High Accuracy Compression and Approximation (401)
      • 7.7 Simulations and Illustrative Examples (402)
        • 7.7.1 Experiments for Nonnegative Tensor Factorizations (402)
        • 7.7.2 Experiments for Nonnegative Tucker Decomposition (408)
        • 7.7.3 Experiments for Nonnegative Block-Oriented Decomposition (416)
        • 7.7.4 Multi-Way Analysis of High Density Array EEG – Classification of Event Related Potentials (419)
        • 7.7.5 Application of Tensor Decomposition in Brain Computer Interface – Classification of Motor Imagery Tasks (428)
        • 7.7.6 Image and Video Applications (433)
      • 7.8 Discussion and Conclusions (436)
      • Appendix 7.A: Evaluation of Interactions and Relationships Among Hidden Components for NTD Model (439)
      • Appendix 7.B: Computation of a Reference Tensor (440)
      • Appendix 7.C: Trilinear and Direct Trilinear Decompositions for Efficient Initialization (442)
      • Appendix 7.D: MATLAB Source Code for Alpha NTD Algorithm (444)
      • Appendix 7.E: MATLAB Source Code for Beta NTD Algorithm (445)
      • Appendix 7.F: MATLAB Source Code for HALS NTD Algorithm (447)
      • Appendix 7.G: MATLAB Source Code for ALS NTF1 Algorithm (449)
      • Appendix 7.H: MATLAB Source Code for ISRA BOD Algorithm (450)
      • Appendix 7.I: Additional MATLAB Functions (451)
      • References (453)
    • 8 Selected Applications (457)
      • 8.1 Clustering (457)
        • 8.1.1 Semi-Binary NMF (458)
        • 8.1.2 NMF vs. Spectral Clustering (459)
        • 8.1.3 Clustering with Convex NMF (460)
        • 8.1.4 Application of NMF to Text Mining (462)
        • 8.1.5 Email Surveillance (464)
      • 8.2 Classification (466)
        • 8.2.1 Musical Instrument Classification (466)
        • 8.2.2 Image Classification (467)
      • 8.3 Spectroscopy (471)
        • 8.3.1 Raman Spectroscopy (471)
        • 8.3.2 Fluorescence Spectroscopy (473)
        • 8.3.3 Hyperspectral Imaging (474)
        • 8.3.4 Chemical Shift Imaging (476)
      • 8.4 Application of NMF for Analyzing Microarray Data (479)
        • 8.4.1 Gene Expression Classification (479)
        • 8.4.2 Analysis of Time Course Microarray Data (483)
      • References (491)
    • Index (497)