NMST539 | Lab Session 4

Wishart Distribution

(application for confidence bands and statistical tests)

LS 2017 | Tuesday 21/03/2017

Rmd file (UTF8 encoding)

The R-software is available for download from the website: https://www.r-project.org

A user-friendly interface (one of many): RStudio.

Manuals and introductions to R (in Czech or English):

  • Bína, V., Komárek, A. and Komárková, L.: Jak na jazyk R. (PDF file)
  • Komárek, A.: Základy práce s R. (PDF file)
  • Kulich, M.: Velmi stručný úvod do R. (PDF file)
  • De Vries, A. and Meys, J.: R for Dummies. (ISBN-13: 978-1119055808)

Wishart distribution

The Wishart distribution is, as an analogy to the \(\chi^2\) distribution and the variance inference in the univariate case, a very useful tool for the analysis of the variance-covariance matrix of a random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n}\), \(n \in \mathbb{N}\), where each \(\boldsymbol{X}_{i}\) in the sample is a \(p\)-dimensional random vector (\(p\) different covariates recorded on each subject). Thus, it is a probability distribution defined over symmetric, nonnegative-definite random matrices.

In general, the Wishart distribution is a multivariate generalization of the \(\chi^2\) distribution: for instance, for a normally distributed random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n}\) drawn from some multivariate distribution \(N_{p}(\boldsymbol{0}, \Sigma)\), with zero mean vector and variance-covariance matrix \(\Sigma\), we have that \[ \mathbb{X}^{\top}\mathbb{X} \sim W_{p}(\Sigma, n), \] for \(\mathbb{X} = (\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n})^{\top}\). Similarly, for a univariate random sample \(X_{1}, \dots, X_{n}\) drawn from \(N(0,1)\) we have that \[ \boldsymbol{X}^{\top}\boldsymbol{X} \sim \chi_{n}^2, \] where \(\boldsymbol{X} = (X_{1}, \dots, X_{n})^\top\). Thus, for a sample of size \(n \in \mathbb{N}\) drawn from \(N(0,1)\) the corresponding distribution of \(\mathbb{X}^\top\mathbb{X}\) is \(W_{1}(1, n) \equiv \chi_{n}^2\).

The corresponding density function of the Wishart distribution takes the form \[ f(\mathcal{X}) = \frac{1}{2^{np/2} |\Sigma|^{n/2} \Gamma_{p}(\frac{n}{2})} \cdot |\mathcal{X}|^{\frac{n - p - 1}{2}} e^{-\frac{1}{2} \mathrm{tr}(\Sigma^{-1}\mathcal{X})}, \] for \(\mathcal{X}\) being a \(p\times p\) positive definite matrix and \(\Gamma_{p}(\cdot)\) being a multivariate generalization of the Gamma function \(\Gamma(\cdot)\). In the R software there are various options (packages) for using and applying the Wishart distribution.

To Do by Yourself (Theoretical and Practical)

  • Let the multivariate random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n}\) come from some general multivariate distribution \(N_{p}(\boldsymbol{\mu}, \Sigma)\), with some mean vector \(\boldsymbol{\mu} \in \mathbb{R}^p\) and a positive definite variance-covariance matrix \(\Sigma\). Show that the random matrix \(\mathbb{X}^\top \mathcal{H}\mathbb{X}\) follows the Wishart distribution \(W_{p}(\Sigma, n - 1)\), for \(\mathcal{H}\) being the idempotent centering matrix \(\mathcal{H} = \mathbb{I}_{n} - \frac{1}{n}\boldsymbol{1}_{n}\boldsymbol{1}_{n}^\top\) (a small empirical check for \(p = 1\) is sketched after this list).
  • use the help session in R to find out more about the standard commands used in R for the Wishart distribution (in particular, check the help page for the command rWishart());
  • some other commands and additional extensions are available in packages that need to be downloaded and installed separately, e.g. the function Wishart() in the library ‘MCMCpack’, or Wishart() in the library ‘mixAK’, or rwishart() in the library ‘dlm’, or dist.Wishart() in the library ‘LaplacesDemon’, among others;
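For \(p = 1\) and \(\Sigma = 1\) the statement of the first exercise reduces to the classical result \(\sum_{i = 1}^{n}(X_{i} - \overline{X})^2 \sim \chi^2_{n - 1}\), which is easy to check empirically. A minimal sketch (the sample size and the number of replications are arbitrary choices):

set.seed(1234)
n <- 10
stat <- replicate(5000, {x <- rnorm(n); sum((x - mean(x))^2)})   # X'HX for p = 1
hist(stat, freq = FALSE, breaks = 30, col = "lightblue", main = "X'HX for p = 1", xlab = "")
curve(dchisq(x, df = n - 1), add = TRUE, col = "red", lwd = 2)   # chi-squared density with n - 1 df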

A simple random sample from the Wishart distribution \(W_{1}(\Sigma = 1, n = 10)\) (equivalently, a \(\chi^2\) distribution with \(n = 10\) degrees of freedom) can be obtained, for instance, as

S <- as.matrix(1)                    # 1 x 1 scale matrix (Sigma = 1)
sample <- rWishart(10, df = 10, S)   # 10 draws, each a 1 x 1 matrix

Using a standard approach for generating a univariate random sample we would use the R function rchisq() in this scenario. Try both and, using histograms, compare whether the two samples correspond. For a large sample size \(n \in \mathbb{N}\) they should be nearly identical.

sampleWishart <- rWishart(5000, df = 10, S)
sampleChiSq <- rchisq(5000, df = 10)

par(mfrow = c(1,2))
hist(sampleWishart, col = "lightblue", main = "Wishart Distribution", xlim = c(0, 40), breaks = 30, freq = FALSE)
lines(density(sampleWishart), col = "red", lwd = 2)
hist(sampleChiSq, col = "lightblue", main = expression(paste(chi^2, " Distribution")), xlim = c(0, 40), breaks = 30, freq = FALSE)
lines(density(sampleChiSq), col = "red", lwd = 2)

To Do by Yourself

  • consider some Wishart distribution for some \(p \in \mathbb{N}\) such that \(p > 1\);
  • generate a sample from the Wishart distribution \(W_{p}\) with some reasonable parameters \(\Sigma\) and \(df\);
  • consider some general matrix \(A \in \mathbb{R}^{p \times q}\) for some \(q \in \mathbb{N}\) such that \(p \neq q\) and verify (e.g., by histograms) that \(A^\top \mathbb{M} A \sim W_{q}(A^\top\Sigma A, n)\), where \(\mathbb{M} \sim W_{p}(\Sigma, n)\) (see the sketch below);
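The last property can be checked empirically, for instance, by comparing the \((1,1)\) element of \(A^\top \mathbb{M} A\) with the \((1,1)\) element of a \(W_{q}(A^\top\Sigma A, n)\) sample, since the \((1,1)\) element of a \(W_{q}(\Omega, n)\) matrix follows \(\Omega_{11}\,\chi^2_{n}\). A minimal sketch with \(p = 3\), \(q = 2\), \(\Sigma = \mathbb{I}_{3}\) and a randomly chosen \(A\) (all arbitrary choices):

set.seed(1234)
p <- 3; q <- 2; n <- 10
Sigma <- diag(p)
A <- matrix(rnorm(p * q), p, q)          # arbitrary p x q matrix
M <- rWishart(5000, df = n, Sigma)       # array of dimension p x p x 5000
AMA11 <- apply(M, 3, function(m) (t(A) %*% m %*% A)[1, 1])
Omega11 <- (t(A) %*% Sigma %*% A)[1, 1]  # (1,1) element of A'Sigma A
par(mfrow = c(1, 2))
hist(AMA11, freq = FALSE, breaks = 30, col = "lightblue", main = "(A'MA)[1, 1]")
hist(Omega11 * rchisq(5000, df = n), freq = FALSE, breaks = 30, col = "lightblue", main = "Omega[1, 1] * chi-squared")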

Hotelling’s \(\boldsymbol{T^2}\) Distribution

In a similar manner as we define the classical \(t\)-distribution in the univariate case (i.e. a standard normal \(N(0,1)\) variable divided by the square root of a \(\chi^2\) variable normalized by its degrees of freedom), we define its multivariate generalization (the Hotelling’s \(T^{2}\) distribution) as \[ n \boldsymbol{Y}^{\top} \mathbb{M}^{-1} \boldsymbol{Y} \sim T^{2}(p, n), \] where \(p \in \mathbb{N}\) is the dimension of \(\boldsymbol{Y} \sim N_{p}(\boldsymbol{0}, \mathbb{I})\) and \(n \in \mathbb{N}\) is the degrees-of-freedom parameter of the Wishart distribution of \(\mathbb{M} \sim W_{p}(\mathbb{I}, n)\), with \(\boldsymbol{Y}\) and \(\mathbb{M}\) independent.

A special case for \(p = 1\) gives the Fisher \(F\)-distribution with one and \(n\) degrees of freedom (equivalently, the square of the \(t\)-distribution with \(n\) degrees of freedom).

In general, the relationship between the Hotelling’s \(T^2\) distribution and the Fisher \(F\)-distribution is given by the expression \[ T^{2}(p, n) \equiv \frac{n p}{n - p + 1}F_{p, n - p + 1}. \]
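This identity can be checked by simulation, comparing a simulated \(T^2(p, n)\) sample (built from its definition above) with a correspondingly scaled \(F\)-sample via a Q-Q plot. A minimal sketch with \(p = 3\) and \(n = 20\) (arbitrary choices):

set.seed(1234)
p <- 3; n <- 20
T2 <- replicate(5000, {
  Y <- rnorm(p)                                # Y ~ N_p(0, I)
  M <- rWishart(1, df = n, diag(p))[, , 1]     # M ~ W_p(I, n)
  n * drop(t(Y) %*% solve(M, Y))               # n Y' M^{-1} Y
})
Fscaled <- (n * p / (n - p + 1)) * rf(5000, df1 = p, df2 = n - p + 1)
qqplot(T2, Fscaled, xlab = "simulated T^2", ylab = "scaled F")
abline(0, 1, col = "red", lwd = 2)             # points should follow the diagonal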

The effect of different parameter settings of the \(F\)-distribution can be visualized, for instance, by the following figure:

# F-distributed samples for various (df1, df2) settings
samples <- NULL
samples <- cbind(samples, rf(n = 1000, df1 = 1, df2 = 1))
samples <- cbind(samples, rf(n = 1000, df1 = 1, df2 = 10))

samples <- cbind(samples, rf(n = 1000, df1 = 10, df2 = 10))
samples <- cbind(samples, rf(n = 1000, df1 = 10, df2 = 100))

samples <- cbind(samples, rf(n = 1000, df1 = 100, df2 = 10))
samples <- cbind(samples, rf(n = 1000, df1 = 100, df2 = 100))

samples <- cbind(samples, rf(n = 1000, df1 = 1000, df2 = 100))
samples <- cbind(samples, rf(n = 1000, df1 = 1000, df2 = 1000))

par(mfrow = c(4, 2))
for (i in 1:ncol(samples)) {
  hist(samples[, i], xlab = paste("Sample no.", i), col = "yellow", main = "", freq = FALSE, breaks = 20)
  lines(density(samples[, i]), col = "red")
}

Moreover, using the relationship between the Hotelling’s \(T^2\) distribution and the Fisher \(F\)-distribution, we can effectively use the \(F\)-distribution to obtain the corresponding critical values needed for statistical tests.

For some multivariate random sample \(\boldsymbol{X}_{1}, \dots, \boldsymbol{X}_{n} \sim N_{p}(\boldsymbol{\mu}, \Sigma)\), with an unknown mean vector \(\boldsymbol{\mu} \in \mathbb{R}^{p}\) and some variance-covariance matrix \(\Sigma\), we have that \[ (n - 1)\Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim T^2(p, n - 1), \] which can also be equivalently expressed as \[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim F_{p, n - p}. \] This can now be used to construct confidence regions for the unknown mean vector \(\boldsymbol{\mu}\) or to test hypotheses about the true value of the vector of parameters \(\boldsymbol{\mu} = (\mu_{1}, \dots, \mu_{p})^{\top}\). However, rather than constructing a confidence region for \(\boldsymbol{\mu}\) (which can be impractical even for slightly larger values of \(p\)), one usually focuses on constructing confidence intervals for the elements of \(\boldsymbol{\mu}\) such that the mutual coverage is under control (usually we require a simultaneous coverage of \((1 - \alpha)\times 100~\%\) for some small \(\alpha \in (0,1)\)).
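One common construction projects the corresponding \(T^2\) confidence ellipsoid onto the coordinate axes (Scheffé-type simultaneous intervals), giving \(\overline{X}_{j} \pm \sqrt{\frac{p}{n - p} F_{p, n - p}(1 - \alpha)\, \mathcal{S}_{jj}}\) for \(j = 1, \dots, p\). A minimal sketch (the simulated data are an arbitrary choice; \(\mathcal{S}\) is taken with the \(1/n\) factor, matching the convention above):

set.seed(1234)
n <- 50; p <- 2; alpha <- 0.05
X <- matrix(rnorm(n * p, mean = 1), n, p)    # sample with true mean vector (1, 1)
xbar <- colMeans(X)
S <- cov(X) * (n - 1) / n                    # variance-covariance matrix with the 1/n factor
c2 <- p / (n - p) * qf(1 - alpha, df1 = p, df2 = n - p)
cbind(lower = xbar - sqrt(c2 * diag(S)),     # simultaneous 95% intervals
      upper = xbar + sqrt(c2 * diag(S)))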

For a hypothesis test

\[ H_{0}: \boldsymbol{\mu} = \boldsymbol{\mu}_{0} \in \mathbb{R}^{p} \]

\[ H_{1}: \boldsymbol{\mu} \neq \boldsymbol{\mu}_{0} \in \mathbb{R}^{p} \]

we can use the following test statistic:

\[ (n - 1)\Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big), \]

which, under the null hypothesis, follows the \(T^2(p, n - 1)\) distribution. Equivalently, we also have that

\[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}_{0}\Big) \]

follows the Fisher \(F\)-distribution with \(p\) and \(n - p\) degrees of freedom.
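A direct computation of the test statistic and its \(F\)-based critical value might look as follows (a minimal sketch on simulated data under \(H_{0}\); \(\mathcal{S}\) again uses the \(1/n\) factor):

set.seed(1234)
n <- 50; p <- 2
X <- matrix(rnorm(n * p), n, p)              # sample from N_2(0, I), so H0: mu = (0, 0) holds
mu0 <- c(0, 0)
xbar <- colMeans(X)
S <- cov(X) * (n - 1) / n                    # variance-covariance matrix with the 1/n factor
Q <- drop(t(xbar - mu0) %*% solve(S) %*% (xbar - mu0))
Fstat <- (n - p) / p * Q                     # ~ F(p, n - p) under H0
Fstat > qf(0.95, df1 = p, df2 = n - p)       # TRUE would mean rejecting H0 at alpha = 0.05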

In the R software we can use the package ‘DescTools’ (installed by the command install.packages("DescTools")). If it is installed in the R environment, we can load the library by the command

library("DescTools")

and we can use the function HotellingsT2Test() to perform the test on the multivariate mean vector (use the help session to find out how the function works).
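A minimal usage sketch on the simulated data X from the sketch above (the mu argument specifies the hypothesized mean vector; check the help page of HotellingsT2Test() for the exact interface):

HotellingsT2Test(X, mu = c(0, 0))   # one-sample test of H0: mu = (0, 0)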

Using the same approach we can also construct the confidence ellipsoid for \(\boldsymbol{\mu} \in \mathbb{R}^p\): it holds that \[ \frac{n - p}{p} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big(\overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \sim F_{p, n - p}, \] and therefore the set \[ \left\{\boldsymbol{\mu} \in \mathbb{R}^p;~ \Big( \overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big)^\top \mathcal{S}^{-1} \Big( \overline{\boldsymbol{X}} - \boldsymbol{\mu}\Big) \leq \frac{p}{n - p} F_{p, n- p}(1 - \alpha) \right\} \] is a confidence region at the confidence level \(1 - \alpha\) for the vector of parameters \(\boldsymbol{\mu} \in \mathbb{R}^p\): the interior of an iso-distance ellipsoid in \(\mathbb{R}^p\).

A brief example on how it works (example from the lecture notes):

s <- matrix(c(1, -0.5, -0.5, 1), 2)   # variance-covariance matrix
x <- seq(-3, 3, by = 0.015)
si <- solve(s)                        # inverse of the variance-covariance matrix
# contours of the quadratic form x' s^{-1} x -- the iso-distance ellipses
contour(x, x, outer(x, x, function(x1, x2)
  si[1, 1] * x1^2 + 2 * si[1, 2] * x1 * x2 + si[2, 2] * x2^2))