NMST539 | Lab Session 4

Wishart Distribution (applications for confidence bands and statistical tests)

LS 2017/2018 | Monday 12/03/18 | Rmd file (UTF-8 encoding)

The R software is available for download from the website: https://www.r-project.org A user-friendly interface (one of many): RStudio. Manuals and an introduction to R (in Czech or English):
1. Wishart’s Distribution

The Wishart distribution is a multivariate generalization of the univariate \(\chi^2\) distribution and of variance inference in the univariate case. It is named in honor of John Wishart, who formulated this distribution in 1928. The Wishart distribution is an efficient tool for analyzing the variance-covariance matrix of a random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n}\), for some \(n \in \mathbb{N}\), where \(\boldsymbol{X}_{i} = (X_{i 1}, \dots, X_{i p})^\top\) is a \(p\)-dimensional random vector (for instance, \(p\) different covariates/characteristics recorded on each subject). The Wishart distribution generalizes the \(\chi^2\) distribution in the following sense: for a normally distributed random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n}\) drawn from a multivariate distribution \(N_{p}(\boldsymbol{0}, \Sigma)\), with the zero mean vector and \(\Sigma\) being some variance-covariance matrix (symmetric and positive definite), we have that \[ \mathbb{X}^{\top}\mathbb{X} \sim W_{p}(\Sigma, n), \] for the \(n \times p\) data matrix \(\mathbb{X} = (\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n})^{\top}\). Similarly, for a univariate random sample \(X_{1}, \dots, X_{n}\) drawn, for instance, from \(N(0,1)\), we have that \[ \boldsymbol{X}^{\top}\boldsymbol{X} \sim \chi_{n}^2, \] where \(\boldsymbol{X} = (X_{1}, \dots, X_{n})^\top\). Thus, for a sample of size \(n \in \mathbb{N}\) drawn from \(N(0,1)\), the corresponding distribution of \(\mathbb{X}^\top\mathbb{X}\) is \(W_{1}(1, n) \equiv \chi_{n}^2\). The Wishart distribution is therefore a family of probability distributions defined over symmetric, nonnegative-definite matrix-valued random variables (so-called random matrices). 
The corresponding density function of the Wishart distribution takes the form \[ f(\mathcal{X}) = \frac{1}{2^{np/2} |\Sigma|^{n/2} \Gamma_{p}(\frac{n}{2})} \cdot |\mathcal{X}|^{\frac{n - p - 1}{2}} e^{-(1/2) tr(\Sigma^{-1}\mathcal{X})}, \] for \(\mathcal{X}\) being a \(p\times p\) random matrix, where \(\Gamma_{p}(\cdot)\) is a multivariate generalization of the Gamma function \(\Gamma(\cdot)\). In the R software there are various options (packages) for using and applying the Wishart distribution.

To Do by Yourself (Theoretical and Practical)
A simple random sample from the univariate Wishart distribution \(W_{1}(\Sigma = 1, n = 10)\) (equivalently a \(\chi^2\) distribution with \(n = 10\) degrees of freedom) can be obtained, for instance, as
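One way to do this (a sketch using the base R function `rWishart()`; the sample size of 1000 below is illustrative):

```r
## W_1(Sigma = 1, n = 10) coincides with the chi^2 distribution with 10 df
set.seed(1234)
w <- rWishart(1000, df = 10, Sigma = diag(1))  # a 1 x 1 x 1000 array
w <- drop(w)                                   # coerce to a plain numeric vector
mean(w)                                        # should be close to E[chi^2_10] = 10
```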
For a general \(p \in \mathbb{N}\) the same command can be used; however, the variance-covariance matrix \(\Sigma\) must be properly specified as a symmetric, positive definite matrix, for instance:
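For example, with \(p = 2\) and a user-chosen positive definite \(\Sigma\) (the values below are illustrative):

```r
## a 2 x 2 symmetric, positive definite variance-covariance matrix
Sigma <- matrix(c(2.0, 0.5,
                  0.5, 1.0), nrow = 2)
set.seed(1234)
W <- rWishart(500, df = 10, Sigma = Sigma)  # 2 x 2 x 500 array of Wishart matrices
## E[W] = df * Sigma, so the element-wise average should be close to 10 * Sigma
apply(W, c(1, 2), mean)
```

Since \(\mathbb{E}[\mathbb{W}] = n\Sigma\) for \(\mathbb{W} \sim W_p(\Sigma, n)\), averaging the simulated matrices provides a quick sanity check of the sampler.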
We can also use the standard approach for generating a sample from the \(\chi^2\) distribution – the R function
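The function in question is presumably `rchisq()`; a quick comparison of the two sampling approaches (a sketch):

```r
## compare rchisq() samples with univariate Wishart samples
set.seed(1234)
x1 <- rchisq(1000, df = 10)
x2 <- drop(rWishart(1000, df = 10, Sigma = diag(1)))
## the empirical quantiles of the two samples should roughly agree
round(quantile(x1, c(0.25, 0.50, 0.75)), 2)
round(quantile(x2, c(0.25, 0.50, 0.75)), 2)
```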
To Do by Yourself
2. Hotelling’s \(\boldsymbol{T^2}\) Distribution

In a similar manner as the classical \(t\)-distribution is defined in the univariate case (i.e. a standard normal \(N(0,1)\) variable divided by the square root of a \(\chi^2\) variable normalized by its degrees of freedom), we define its multivariate generalization, the Hotelling’s \(T^{2}\) distribution, as \[ n \boldsymbol{Y}^{\top} \mathbb{M}^{-1} \boldsymbol{Y} \sim T^{2}(p, n), \] where \(p \in \mathbb{N}\) is the dimension of the random vector \(\boldsymbol{Y} \sim N_{p}(\boldsymbol{0}, \mathbb{I})\) and \(n \in \mathbb{N}\) is the parameter of the Wishart distribution of the random matrix \(\mathbb{M} \sim W_{p}(\mathbb{I}, n)\). Moreover, the random vector \(\boldsymbol{Y}\) is assumed to be independent of the random matrix \(\mathbb{M}\). The special case \(p = 1\) gives the standard Fisher F distribution with one and \(n\) degrees of freedom (equivalently, the square of the \(t\)-distribution with \(n\) degrees of freedom). More generally, the Hotelling’s \(T^2\) distribution with parameters \(p, n \in \mathbb{N}\) is closely related to the Fisher’s F distribution. It holds that \[ T^{2}(p, n) \equiv \frac{n p}{n - p + 1}F_{p, n - p + 1}. \] Therefore, the standard univariate Fisher’s F distribution can be effectively used to draw critical values also from the Hotelling’s \(T^2\) distribution. The corresponding transformation between the two distributions only depends on the parameters \(n, p \in \mathbb{N}\). The role of different parameter settings in the Hotelling’s \(T^2\) distribution can be (at least partially) visualized, for instance, by utilizing the Fisher’s F distribution and the knowledge of its mean and variance:
Let us recall that for a random variable \(X\) with the standard Fisher’s F distribution \(F_{df_1, df_2}\) we have the following: \[ \mathbb{E} X = \frac{df_2}{df_2 - 2}, \quad \textrm{for}~df_2 > 2, \qquad var(X) = \frac{2\, df_2^2 (df_1 + df_2 - 2)}{df_1 (df_2 - 2)^2 (df_2 - 4)}, \quad \textrm{for}~df_2 > 4. \]
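These moments can be evaluated directly, e.g. to see how the scaling \(\frac{np}{n - p + 1}\) between \(T^2(p, n)\) and \(F_{p, n - p + 1}\) behaves as the sample size grows (a sketch; the values of \(p\) and \(n\) below are illustrative):

```r
## theoretical mean and variance of F_{df1, df2}
f_mean <- function(df2) df2 / (df2 - 2)                       # requires df2 > 2
f_var  <- function(df1, df2) {                                # requires df2 > 4
  2 * df2^2 * (df1 + df2 - 2) / (df1 * (df2 - 2)^2 * (df2 - 4))
}
## T^2(p, n) = (n p / (n - p + 1)) * F_{p, n - p + 1}
p <- 3
n <- c(10, 20, 50, 100)
scale <- n * p / (n - p + 1)
data.frame(n = n, T2_mean = scale * f_mean(n - p + 1))
```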
For a multivariate random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n} \sim N_{p}(\boldsymbol{\mu}, \Sigma)\) with an unknown mean vector \(\boldsymbol{\mu} \in \mathbb{R}^{p}\) and some variance-covariance matrix \(\Sigma\), we have that \[ (n - 1)\Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim T^2(p, n - 1), \] which can be equivalently expressed as \[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim F_{p, n - p}, \] where \(\mathcal{S}\) denotes the maximum likelihood estimate of \(\Sigma\) (i.e. with denominator \(n\)). This can now be used to construct confidence regions for the unknown mean vector \(\boldsymbol{\mu}\) and to test hypotheses about the true value of the parameter vector \(\boldsymbol{\mu} = (\mu_{1}, \dots, \mu_{p})^{\top}\). However, rather than constructing a confidence region for \(\boldsymbol{\mu}\) (which can be impractical for even slightly larger values of \(p\)), one usually focuses on constructing confidence intervals for the elements of \(\boldsymbol{\mu}\) such that the mutual coverage is under control (usually we require a simultaneous coverage of \((1 - \alpha)\times 100~\%\) for some small \(\alpha \in (0,1)\)). For a hypothesis test \[ H_{0}: \boldsymbol{\mu} = \boldsymbol{\mu}_{0} \in \mathbb{R}^{p} \] \[ H_{1}: \boldsymbol{\mu} \neq \boldsymbol{\mu}_{0} \in \mathbb{R}^{p} \] we can use the following test statistic: \[ (n - 1)\Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big), \] which, under the null hypothesis, follows the \(T^2(p, n - 1)\) distribution. 
Equivalently we also have that \[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big) \] follows the Fisher \(F\)-distribution with \(p\) and \(n - p\) degrees of freedom. In the R software we can use the library
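The library name did not survive in this text (a common choice is ICSNP, whose `HotellingsT2()` function implements the test); the statistic can also be computed by hand. A sketch on simulated data, with \(\mathcal{S}\) taken as the MLE (denominator \(n\)), matching the formulas above:

```r
## one-sample Hotelling test computed by hand; mu0 and the data are illustrative
set.seed(1234)
n <- 50; p <- 3
X <- matrix(rnorm(n * p, mean = 1), ncol = p)  # sample with true mean (1, 1, 1)
mu0 <- c(1, 1, 1)                              # hypothesized mean vector
S <- cov(X) * (n - 1) / n                      # MLE of Sigma (denominator n)
## F-form of the statistic: (n - p)/p * (xbar - mu0)' S^{-1} (xbar - mu0)
stat <- (n - p) / p * mahalanobis(colMeans(X), mu0, S)
p.value <- 1 - pf(stat, p, n - p)
c(statistic = stat, p.value = p.value)
```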
Now we can use the corresponding function from this library to perform the test. Using the same approach we can also construct a confidence ellipsoid for \(\boldsymbol{\mu} \in \mathbb{R}^p\) – it holds that \[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim F_{p, n - p}, \] and therefore the set \[ \left\{\boldsymbol{\mu} \in \mathbb{R}^p;~ \Big( \overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big( \overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \leq \frac{p}{n - p} F_{p, n- p}(1 - \alpha) \right\} \] is a confidence region at the confidence level \(1 - \alpha\) for the vector of parameters \(\boldsymbol{\mu} \in \mathbb{R}^p\) – the interior of an iso-distance ellipsoid in \(\mathbb{R}^p\). A brief example of how this works (example from the lecture notes):
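Since the lecture-notes example is not reproduced here, the sketch below uses simulated two-dimensional data; it traces the boundary of the ellipsoidal region defined above via the eigendecomposition of \(\mathcal{S}\):

```r
## 95% confidence ellipse for mu in p = 2 dimensions (simulated, illustrative data)
set.seed(1234)
n <- 40; p <- 2; alpha <- 0.05
X <- matrix(rnorm(n * p), ncol = p)
xbar <- colMeans(X)
S <- cov(X) * (n - 1) / n                    # MLE, as in the region above
r2 <- p / (n - p) * qf(1 - alpha, p, n - p)  # squared "radius" of the region
ev <- eigen(S)
theta <- seq(0, 2 * pi, length.out = 200)
circle <- rbind(cos(theta), sin(theta))
## map the unit circle onto the ellipse boundary centred at xbar
ell <- t(xbar + ev$vectors %*% diag(sqrt(r2 * ev$values)) %*% circle)
plot(ell, type = "l", xlab = "mu_1", ylab = "mu_2")
points(xbar[1], xbar[2], pch = 19)
```

Every boundary point \(\boldsymbol{\mu}\) of the ellipse satisfies \((\overline{\boldsymbol{X}} - \boldsymbol{\mu})^\top \mathcal{S}^{-1} (\overline{\boldsymbol{X}} - \boldsymbol{\mu}) = \frac{p}{n-p} F_{p, n-p}(1 - \alpha)\) by construction.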
(which is the 95% confidence region for the true mean vector)

To Do by Yourself
Difference of two multivariate means with the same variance-covariance matrix

In an analogous way one can also construct a two-sample Hotelling test to compare two population means. We assume a multivariate sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n} \sim N_{p}(\boldsymbol{\mu}_{1}, \Sigma)\) and another sample \(\boldsymbol{Y}_{1}, \dots, \boldsymbol{Y}_{m} \sim N_{p}(\boldsymbol{\mu}_{2}, \Sigma)\), with generally different mean parameters. We are interested in testing the null hypothesis \[ H_{0}: \boldsymbol{\mu}_{1} = \boldsymbol{\mu}_{2} \] against the alternative hypothesis that the null does not hold. Writing \(\boldsymbol{\Delta} = \boldsymbol{\mu}_{1} - \boldsymbol{\mu}_{2}\), the following holds: \[ (\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2}) \sim N_{p}\Bigg(\boldsymbol{\Delta}, \frac{n + m}{n m} \Sigma\Bigg), \] and also \[ n\mathcal{S}_{1} + m\mathcal{S}_{2} \sim W_{p}(\Sigma, n + m - 2), \] where \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are the empirical estimates of the variance-covariance matrix \(\Sigma\) based on the first and the second sample, respectively. The null hypothesis is then rejected if \[ \frac{nm(n + m - p - 1)}{p(n + m)^2}(\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2})^{\top} \mathcal{S}^{-1} (\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2}) \geq F_{p, n + m - p - 1}(1 - \alpha), \] where \(\mathcal{S} = \frac{1}{n + m}(n\mathcal{S}_{1} + m\mathcal{S}_{2})\) is the pooled estimate.

Difference of two multivariate means with unequal variance-covariance matrices

Now we assume a multivariate sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n} \sim N_{p}(\boldsymbol{\mu}_{1}, \Sigma_{1})\) and another sample \(\boldsymbol{Y}_{1}, \dots, \boldsymbol{Y}_{m} \sim N_{p}(\boldsymbol{\mu}_{2}, \Sigma_{2})\), with generally different mean parameters. Again we are interested in testing the null hypothesis \[ H_{0}: \boldsymbol{\mu}_{1} = \boldsymbol{\mu}_{2} \] against the alternative hypothesis that the null does not hold. 
Again, the following holds: \[ (\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2}) \sim N_{p}\Bigg(\boldsymbol{\Delta}, \frac{\Sigma_{1}}{n} + \frac{\Sigma_{2}}{m}\Bigg), \] and therefore, under the null hypothesis, \[ (\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2})^\top \Big(\frac{\Sigma_{1}}{n} + \frac{\Sigma_{2}}{m}\Big)^{-1} (\overline{\boldsymbol{X}}_{1} - \overline{\boldsymbol{X}}_{2}) \sim \chi_{p}^{2}. \]

To Do by Yourself
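As a starting point, the \(\chi^2\)-based comparison above can be sketched as follows, with \(\Sigma_1, \Sigma_2\) treated as known (in practice they are replaced by their empirical counterparts and the result then holds only asymptotically; all values below are illustrative):

```r
## two-sample comparison via the chi-square quadratic form
set.seed(1234)
p <- 2; n <- 60; m <- 80
Sigma1 <- diag(p)                                   # known covariance, sample 1
Sigma2 <- 2 * diag(p)                               # known covariance, sample 2
X <- matrix(rnorm(n * p), ncol = p)                 # N_p(0, Sigma1)
Y <- matrix(rnorm(m * p, sd = sqrt(2)), ncol = p)   # N_p(0, Sigma2)
d <- colMeans(X) - colMeans(Y)
V <- Sigma1 / n + Sigma2 / m
stat <- drop(t(d) %*% solve(V) %*% d)               # ~ chi^2_p under H0
p.value <- 1 - pchisq(stat, df = p)
c(statistic = stat, p.value = p.value)
```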
There is a dataset called
The estimates for the mean vector and variance-covariance matrix can be obtained as
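The dataset name is not preserved in this text; the same commands work for any numeric data matrix, illustrated here with the built-in iris data as a stand-in:

```r
## empirical mean vector and variance-covariance matrix of a numeric data matrix
data <- as.matrix(iris[, 1:4])               # illustrative stand-in dataset
xbar <- colMeans(data)                       # sample mean vector
S <- cov(data)                               # sample covariance (denominator n - 1)
S.mle <- S * (nrow(data) - 1) / nrow(data)   # MLE version (denominator n)
```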
and we can test whether the mean vector of concentrations in the first container equals some given vector \(\boldsymbol{\mu}_{0}\). For a two-sample problem we can use the first covariate in the data to define two populations, and we test whether the corresponding mean vectors are equal or not. The corresponding Hotelling test statistic equals
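A sketch of this statistic computed by hand on simulated data (the lab dataset itself is not reproduced here); \(\mathcal{S}_1, \mathcal{S}_2\) are the MLE covariances and \(\mathcal{S} = (n\mathcal{S}_1 + m\mathcal{S}_2)/(n + m)\) is the pooled estimate, following the rejection region given above:

```r
## two-sample Hotelling statistic, equal covariance matrices assumed
set.seed(1234)
p <- 3; n <- 30; m <- 40
X <- matrix(rnorm(n * p), ncol = p)
Y <- matrix(rnorm(m * p), ncol = p)
S1 <- cov(X) * (n - 1) / n                 # MLE covariance, sample 1
S2 <- cov(Y) * (m - 1) / m                 # MLE covariance, sample 2
S  <- (n * S1 + m * S2) / (n + m)          # pooled estimate
d  <- colMeans(X) - colMeans(Y)
stat <- n * m * (n + m - p - 1) / (p * (n + m)^2) * drop(t(d) %*% solve(S) %*% d)
p.value <- 1 - pf(stat, p, n + m - p - 1)  # reject for large values of stat
c(statistic = stat, p.value = p.value)
```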
and the corresponding test is performed by
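The function used in the lab is not preserved here; one standard option is `HotellingsT2()` from the ICSNP package (guarded below in case the package is not installed; the data are simulated placeholders):

```r
## two-sample Hotelling test via ICSNP, if available
set.seed(1234)
X <- matrix(rnorm(90), ncol = 3)
Y <- matrix(rnorm(120), ncol = 3)
if (requireNamespace("ICSNP", quietly = TRUE)) {
  print(ICSNP::HotellingsT2(X, Y))   # reports the F-form of the statistic
} else {
  message("package 'ICSNP' not installed")
}
```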
Questions
The corresponding simultaneous confidence intervals for all possible linear combinations \(\boldsymbol{a}^\top \boldsymbol{\mu}\) of the mean vector \(\boldsymbol{\mu} \in \mathbb{R}^{p}\) are given by \[ P\Big(\forall \boldsymbol{a} \in \mathbb{R}^{p}:~ \boldsymbol{a}^\top \boldsymbol{\mu} \in \big( \boldsymbol{a}^\top\overline{\boldsymbol{X}} - \sqrt{K_{\alpha} \boldsymbol{a}^\top \mathcal{S} \boldsymbol{a}},~ \boldsymbol{a}^\top\overline{\boldsymbol{X}} + \sqrt{K_{\alpha} \boldsymbol{a}^\top \mathcal{S} \boldsymbol{a}} \big)\Big) = 1 - \alpha, \] where \(K_{\alpha} = \frac{p}{n - p} F_{p, n - p}(1 - \alpha)\) is the corresponding quantile obtained from the Fisher F distribution and \(\mathcal{S}\) is the sample variance-covariance matrix.
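For the coordinate directions \(\boldsymbol{a} = \boldsymbol{e}_j\) these intervals can be computed directly, since \(\boldsymbol{e}_j^\top \mathcal{S}\, \boldsymbol{e}_j = \mathcal{S}_{jj}\). A sketch on simulated data, with \(\mathcal{S}\) the MLE as in the earlier formulas:

```r
## simultaneous confidence intervals for the coordinates mu_j
set.seed(1234)
n <- 50; p <- 3; alpha <- 0.05
X <- matrix(rnorm(n * p, mean = 2), ncol = p)  # true mean (2, 2, 2)
xbar <- colMeans(X)
S <- cov(X) * (n - 1) / n                      # MLE covariance
K <- p / (n - p) * qf(1 - alpha, p, n - p)     # K_alpha from the formula above
half <- sqrt(K * diag(S))                      # half-widths, a = e_j
cbind(lower = xbar - half, upper = xbar + half)
```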
To Do by Yourself
3. Wilks’s Lambda Distribution

The Wilks’s lambda distribution is defined in terms of two independent Wishart-distributed random matrices. It is a multivariate analogue of the univariate Fisher’s F distribution and is used for inference on two variance-covariance matrices (in a similar way as the Fisher’s F distribution is used to perform inference about two variance parameters). For two independent random matrices \[ \mathbb{A} \sim W_{p}(\mathbb{I}, n) \quad \textrm{and} \quad \mathbb{B} \sim W_{p}(\mathbb{I}, m) \] we define the random variable \(\frac{|\mathbb{A}|}{|\mathbb{A} + \mathbb{B}|}\), which is said to follow the Wilks’s lambda distribution \(\Lambda(p, n, m)\). This distribution commonly appears in the context of likelihood-ratio tests, where \(n\) is typically the error degrees of freedom and \(m\) the hypothesis degrees of freedom, so that \(n+m\) is the total degrees of freedom. In a similar way as we perform the analysis of variance for univariate linear regression (ANOVA), we can use a multivariate approach – the so-called multivariate analysis of variance (MANOVA). The corresponding inference in MANOVA (\(p\)-values of the tests) is based on the Wilks’s lambda distribution (see the standard R command).

To Do by Yourself
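As a starting point, the R command referred to above is presumably `manova()`; a minimal illustration on the built-in iris data, with the \(p\)-value based on the Wilks statistic:

```r
## MANOVA: do the four iris measurements differ in mean across species?
fit <- manova(cbind(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width)
              ~ Species, data = iris)
summary(fit, test = "Wilks")   # Wilks's lambda and its F approximation
```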
Homework Assignment (Deadline: fifth lab session / 19.03.2018)

Use the command below to install the
Choose one dataset of your choice from the list of all available datasets in the package:
There are 21 different datasets and you can load each of them by typing and running