Bibliographic Information
Title: Matrix differential calculus with applications in statistics and econometrics
Author: Magnus, Jan R.
Publisher: Wiley
Pub. Year: 1999
Subjects: Matrices. Differential calculus. Statistics. Econometrics.
Call Number: QA 188 .M345 1999
Table of Contents
MATRIX DIFFERENTIAL CALCULUS WITH APPLICATIONS IN STATISTICS AND ECONOMETRICS, 3RD ED. (1)
Back Cover (2)
Title Page (5)
Copyright Page (6)
Contents (7)
Preface (15)
Part I: Matrices (21)
Chapter 1. Basic properties of vectors and matrices (23)
  1 Introduction (23)
  2 Sets (23)
  3 Matrices: addition and multiplication (24)
  4 The transpose of a matrix (26)
  5 Square matrices (26)
  6 Linear forms and quadratic forms (27)
  7 The rank of a matrix (28)
  8 The inverse (29)
  9 The determinant (30)
  10 The trace (31)
  11 Partitioned matrices (31)
  12 Complex matrices (33)
  13 Eigenvalues and eigenvectors (34)
  14 Schur’s decomposition theorem (37)
  15 The Jordan decomposition (38)
  16 The singular-value decomposition (39)
  17 Further results concerning eigenvalues (40)
  18 Positive (semi)definite matrices (43)
  19 Three further results for positive definite matrices (45)
  20 A useful result (47)
  Miscellaneous exercises (47)
  Bibliographical notes (49)
Chapter 2. Kronecker products, the vec operator and the Moore–Penrose inverse (51)
  1 Introduction (51)
  2 The Kronecker product (51)
  3 Eigenvalues of a Kronecker product (53)
  4 The vec operator (54)
  5 The Moore–Penrose (MP) inverse (56)
  6 Existence and uniqueness of the MP inverse (57)
  7 Some properties of the MP inverse (58)
  8 Further properties (59)
  9 The solution of linear equation systems (61)
  Miscellaneous exercises (63)
  Bibliographical notes (65)
Chapter 3. Miscellaneous matrix results (67)
  1 Introduction (67)
  2 The adjoint matrix (67)
  3 Proof of Theorem 1 (69)
  4 Bordered determinants (71)
  5 The matrix equation AX = 0 (71)
  6 The Hadamard product (73)
  7 The commutation matrix K_mn (74)
  8 The duplication matrix D_n (76)
  9 Relationship between D_{n+1} and D_n, I (78)
  10 Relationship between D_{n+1} and D_n, II (80)
  11 Conditions for a quadratic form to be positive (negative) subject to linear constraints (81)
  12 Necessary and sufficient conditions for r(A:B) = r(A) + r(B) (84)
  13 The bordered Gramian matrix (86)
  14 The equations X₁A + X₂B′ = G₁, X₁B = G₂ (88)
  Miscellaneous exercises (91)
  Bibliographical notes (91)
Part II: Differentials: the theory (93)
Chapter 4. Mathematical preliminaries (95)
  1 Introduction (95)
  2 Interior points and accumulation points (95)
  3 Open and closed sets (96)
  4 The Bolzano–Weierstrass theorem (99)
  5 Functions (100)
  6 The limit of a function (101)
  7 Continuous functions and compactness (102)
  8 Convex sets (103)
  9 Convex and concave functions (105)
  Bibliographical notes (108)
Chapter 5. Differentials and differentiability (109)
  1 Introduction (109)
  2 Continuity (109)
  3 Differentiability and linear approximation (111)
  4 The differential of a vector function (113)
  5 Uniqueness of the differential (115)
  6 Continuity of differentiable functions (116)
  7 Partial derivatives (117)
  8 The first identification theorem (118)
  9 Existence of the differential, I (119)
  10 Existence of the differential, II (121)
  11 Continuous differentiability (123)
  12 The chain rule (123)
  13 Cauchy invariance (125)
  14 The mean-value theorem for real-valued functions (126)
  15 Matrix functions (127)
  16 Some remarks on notation (129)
  Miscellaneous exercises (130)
  Bibliographical notes (131)
Chapter 6. The second differential (133)
  1 Introduction (133)
  2 Second-order partial derivatives (133)
  3 The Hessian matrix (134)
  4 Twice differentiability and second-order approximation, I (135)
  5 Definition of twice differentiability (136)
  6 The second differential (138)
  7 (Column) symmetry of the Hessian matrix (140)
  8 The second identification theorem (142)
  9 Twice differentiability and second-order approximation, II (143)
  10 Chain rule for Hessian matrices (145)
  11 The analogue for second differentials (146)
  12 Taylor’s theorem for real-valued functions (148)
  13 Higher-order differentials (149)
  14 Matrix functions (149)
  Bibliographical notes (151)
Chapter 7. Static optimization (153)
  1 Introduction (153)
  2 Unconstrained optimization (154)
  3 The existence of absolute extrema (155)
  4 Necessary conditions for a local minimum (157)
  5 Sufficient conditions for a local minimum: first-derivative test (158)
  6 Sufficient conditions for a local minimum: second-derivative test (160)
  7 Characterization of differentiable convex functions (162)
  8 Characterization of twice differentiable convex functions (165)
  9 Sufficient conditions for an absolute minimum (167)
  10 Monotonic transformations (167)
  11 Optimization subject to constraints (168)
  12 Necessary conditions for a local minimum under constraints (169)
  13 Sufficient conditions for a local minimum under constraints (174)
  14 Sufficient conditions for an absolute minimum under constraints (178)
  15 A note on constraints in matrix form (179)
  16 Economic interpretation of Lagrange multipliers (180)
  Appendix: the implicit function theorem (182)
  Bibliographical notes (183)
Part III: Differentials: the practice (185)
Chapter 8. Some important differentials (187)
  1 Introduction (187)
  2 Fundamental rules of differential calculus (187)
  3 The differential of a determinant (189)
  4 The differential of an inverse (191)
  5 Differential of the Moore–Penrose inverse (192)
  6 The differential of the adjoint matrix (195)
  7 On differentiating eigenvalues and eigenvectors (197)
  8 The differential of eigenvalues and eigenvectors: symmetric case (199)
  9 The differential of eigenvalues and eigenvectors: complex case (202)
  10 Two alternative expressions for dλ (205)
  11 Second differential of the eigenvalue function (208)
  12 Multiple eigenvalues (209)
  Miscellaneous exercises (209)
  Bibliographical notes (212)
Chapter 9. First-order differentials and Jacobian matrices (213)
  1 Introduction (213)
  2 Classification (213)
  3 Bad notation (214)
  4 Good notation (216)
  5 Identification of Jacobian matrices (218)
  6 The first identification table (218)
  7 Partitioning of the derivative (219)
  8 Scalar functions of a vector (220)
  9 Scalar functions of a matrix, I: trace (220)
  10 Scalar functions of a matrix, II: determinant (222)
  11 Scalar functions of a matrix, III: eigenvalue (224)
  12 Two examples of vector functions (224)
  13 Matrix functions (225)
  14 Kronecker products (228)
  15 Some other problems (230)
  Bibliographical notes (231)
Chapter 10. Second-order differentials and Hessian matrices (233)
  1 Introduction (233)
  2 The Hessian matrix of a matrix function (233)
  3 Identification of Hessian matrices (234)
  4 The second identification table (235)
  5 An explicit formula for the Hessian matrix (237)
  6 Scalar functions (237)
  7 Vector functions (239)
  8 Matrix functions, I (240)
  9 Matrix functions, II (241)
Part IV: Inequalities (243)
Chapter 11. Inequalities (245)
  1 Introduction (245)
  2 The Cauchy–Schwarz inequality (245)
  3 Matrix analogues of the Cauchy–Schwarz inequality (247)
  4 The theorem of the arithmetic and geometric means (248)
  5 The Rayleigh quotient (250)
  6 Concavity of λ_1, convexity of λ_n (251)
  7 Variational description of eigenvalues (252)
  8 Fischer’s min-max theorem (253)
  9 Monotonicity of the eigenvalues (255)
  10 The Poincaré separation theorem (256)
  11 Two corollaries of Poincaré’s theorem (257)
  12 Further consequences of the Poincaré theorem (258)
  13 Multiplicative version (259)
  14 The maximum of a bilinear form (261)
  15 Hadamard’s inequality (262)
  16 An interlude: Karamata’s inequality (263)
  17 Karamata’s inequality applied to eigenvalues (265)
  18 An inequality concerning positive semidefinite matrices (265)
  19 A representation theorem for (∑ a_i^p)^(1/p) (266)
  20 A representation theorem for (tr A^p)^(1/p) (268)
  21 Hölder’s inequality (269)
  22 Concavity of log|A| (270)
  23 Minkowski’s inequality (272)
  24 Quasilinear representation of |A|^(1/n) (274)
  25 Minkowski’s determinant theorem (276)
  26 Weighted means of order p (276)
  27 Schlömilch’s inequality (279)
  28 Curvature properties of M_p(x, a) (280)
  29 Least squares (281)
  30 Generalized least squares (283)
  31 Restricted least squares (283)
  32 Restricted least squares: matrix version (285)
  Miscellaneous exercises (286)
  Bibliographical notes (290)
Part V: The linear model (293)
Chapter 12. Statistical preliminaries (295)
  1 Introduction (295)
  2 The cumulative distribution function (295)
  3 The joint density function (296)
  4 Expectations (296)
  5 Variance and covariance (297)
  6 Independence of two random variables (299)
  7 Independence of n random variables (301)
  8 Sampling (301)
  9 The one-dimensional normal distribution (301)
  10 The multivariate normal distribution (302)
  11 Estimation (304)
  Miscellaneous exercises (305)
  Bibliographical notes (306)
Chapter 13. The linear regression model (307)
  1 Introduction (307)
  2 Affine minimum-trace unbiased estimation (308)
  3 The Gauss–Markov theorem (309)
  4 The method of least squares (312)
  5 Aitken’s theorem (313)
  6 Multicollinearity (315)
  7 Estimable functions (317)
  8 Linear constraints: the case M(R′) ⊂ M(X′) (319)
  9 Linear constraints: the general case (322)
  10 Linear constraints: the case M(R′) ∩ M(X′) = {0} (325)
  11 A singular variance matrix: the case M(X) ⊂ M(V) (326)
  12 A singular variance matrix: the case r(X′V⁺X) = r(X) (328)
  13 A singular variance matrix: the general case, I (329)
  14 Explicit and implicit linear constraints (330)
  15 The general linear model, I (333)
  16 A singular variance matrix: the general case, II (334)
  17 The general linear model, II (337)
  18 Generalized least squares (338)
  19 Restricted least squares (339)
  Miscellaneous exercises (341)
  Bibliographical notes (342)
Chapter 14. Further topics in the linear model (343)
  1 Introduction (343)
  2 Best quadratic unbiased estimation of σ² (343)
  3 The best quadratic and positive unbiased estimator of σ² (344)
  4 The best quadratic unbiased estimator of σ² (346)
  5 Best quadratic invariant estimation of σ² (349)
  6 The best quadratic and positive invariant estimator of σ² (350)
  7 The best quadratic invariant estimator of σ² (351)
  8 Best quadratic unbiased estimation: multivariate normal case (352)
  9 Bounds for the bias of the least squares estimator of σ², I (355)
  10 Bounds for the bias of the least squares estimator of σ², II (356)
  11 The prediction of disturbances (358)
  12 Best linear unbiased predictors with scalar variance matrix (359)
  13 Best linear unbiased predictors with fixed variance matrix, I (361)
  14 Best linear unbiased predictors with fixed variance matrix, II (364)
  15 Local sensitivity of the posterior mean (365)
  16 Local sensitivity of the posterior precision (367)
  Bibliographical notes (368)
Part VI: Applications to maximum likelihood estimation (369)
Chapter 15. Maximum likelihood estimation (371)
  1 Introduction (371)
  2 The method of maximum likelihood (ML) (371)
  3 ML estimation of the multivariate normal distribution (372)
  4 Symmetry: implicit versus explicit treatment (374)
  5 The treatment of positive definiteness (375)
  6 The information matrix (376)
  7 ML estimation of the multivariate normal distribution: distinct means (377)
  8 The multivariate linear regression model (378)
  9 The errors-in-variables model (381)
  10 The non-linear regression model with normal errors (384)
  11 Special case: functional independence of mean and variance parameters (385)
  12 Generalization of Theorem 6 (386)
  Miscellaneous exercises (388)
  Bibliographical notes (390)
Chapter 16. Simultaneous equations (391)
  1 Introduction (391)
  2 The simultaneous equations model (391)
  3 The identification problem (393)
  4 Identification with linear constraints on B and Γ only (395)
  5 Identification with linear constraints on B, Γ and Σ (395)
  6 Non-linear constraints (397)
  7 Full-information maximum likelihood (FIML): the information matrix (general case) (398)
  8 Full-information maximum likelihood (FIML): the asymptotic variance matrix (special case) (400)
  9 Limited-information maximum likelihood (LIML): the first-order conditions (403)
  10 Limited-information maximum likelihood (LIML): the information matrix (406)
  11 Limited-information maximum likelihood (LIML): the asymptotic variance matrix (408)
  Bibliographical notes (413)
Chapter 17. Topics in psychometrics (415)
  1 Introduction (415)
  2 Population principal components (416)
  3 Optimality of principal components (417)
  4 A related result (418)
  5 Sample principal components (419)
  6 Optimality of sample principal components (421)
  7 Sample analogue of Theorem 3 (421)
  8 One-mode component analysis (421)
  9 One-mode component analysis and sample principal components (424)
  10 Two-mode component analysis (425)
  11 Multimode component analysis (426)
  12 Factor analysis (430)
  13 A zigzag routine (433)
  14 A Newton–Raphson routine (435)
  15 Kaiser’s varimax method (438)
  16 Canonical correlations and variates in the population (441)
  Bibliographical notes (443)
Bibliography (447)
Index of symbols (459)
Subject index (463)