# Discriminant Analysis

According to Bayes’ Theorem, the posterior probability that a sample ${x}$ belongs to class ${C_l}$ is given by

$\displaystyle \pi(y=C_l | x) = \frac{\pi(y=C_l) \pi(x|y=C_l)}{\sum _{m=1}^{C}\pi(y=C_m) \pi(x|y=C_m)} \ \ \ \ \ (1)$

where ${\pi(y = C_l)}$ is the prior probability of membership in class ${C_l}$ and ${\pi(x|y=C_l)}$ is the conditional probability of the predictors ${x}$ given that data comes from class ${C_l}$.

The rule that minimizes the total probability of misclassification says to classify ${x}$ into class ${C_k}$ if

$\displaystyle \pi(y = C_k)\pi(x|y=C_k) \ \ \ \ \ (2)$

has the largest value across all of the ${C}$ classes.
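This decision rule is straightforward to express numerically. The sketch below uses hypothetical prior and likelihood values for three classes; the names `priors` and `likelihoods` are illustrative, not from the original text.

```python
import numpy as np

# Hypothetical example: 3 classes, with the priors pi(y = C_l) and the
# likelihoods pi(x | y = C_l) already evaluated at a single sample x.
priors = np.array([0.5, 0.3, 0.2])        # pi(y = C_l)
likelihoods = np.array([0.1, 0.4, 0.05])  # pi(x | y = C_l) at this x

# Eq. (2): unnormalized scores; Eq. (1) divides by their sum.
scores = priors * likelihoods
posteriors = scores / scores.sum()

# Classify x into the class with the largest score (equivalently,
# the largest posterior, since the denominator is common to all classes).
predicted_class = int(np.argmax(scores))
```

Because the denominator in Eq. (1) is the same for every class, maximizing the score of Eq. (2) and maximizing the posterior give the same classification.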

There are different types of Discriminant Analysis, and they usually differ in the assumptions made about the conditional distribution ${\pi(x|y=C_l)}$.

## Linear Discriminant Analysis (LDA)

If we assume that ${\pi(x|y=C_l)}$ in Eq. (1) follows a multivariate Gaussian distribution ${N(\mu_l, \Sigma)}$ with a class-specific mean vector ${\mu_l}$ and a common covariance matrix ${\Sigma}$, then the ${\log}$ of Eq. (2), referred to here as the discriminant function, is given by

$\displaystyle x^T\Sigma^{-1}\mu_l - 0.5\mu_l^T \Sigma ^{-1}\mu_l + \log (\pi(y = C_l)),$

which is a linear function in ${x}$ that defines separating class boundaries, hence the name LDA.

In practice [1], we estimate the prior probability ${\pi(y = C_l)}$, the class-specific mean ${\mu_l}$ and the covariance matrix ${\Sigma}$ by ${\hat{\pi}_l}$, ${\hat{\mu}_l}$ and ${\hat{\Sigma}}$, respectively, where:

- ${\hat{\pi}_l = N_l/N}$, where ${N_l}$ is the number of class ${l}$ observations and ${N}$ is the total number of observations;
- ${\hat{\mu}_l = \sum_{\{i:y_i=l\}} x_i/N_l}$;
- ${\hat{\Sigma} = \sum_{l=1}^{C} \sum_{\{i:y_i=l\}} (x_i - \hat{\mu}_l)(x_i - \hat{\mu}_l)^T/(N-C)}$.
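The estimators above, plugged into the linear discriminant function, can be sketched as follows. The two-class toy data and the helper name `lda_discriminant` are illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes sharing a covariance (the LDA assumption).
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 40)

N, C = len(y), 2
pi_hat = np.array([np.mean(y == l) for l in range(C)])         # N_l / N
mu_hat = np.array([X[y == l].mean(axis=0) for l in range(C)])  # class means

# Pooled covariance: within-class scatter summed over classes, over (N - C).
Sigma_hat = sum(
    (X[y == l] - mu_hat[l]).T @ (X[y == l] - mu_hat[l]) for l in range(C)
) / (N - C)
Sigma_inv = np.linalg.inv(Sigma_hat)

def lda_discriminant(x):
    """x^T Sigma^-1 mu_l - 0.5 mu_l^T Sigma^-1 mu_l + log(pi_l), per class."""
    return np.array([
        x @ Sigma_inv @ mu_hat[l]
        - 0.5 * mu_hat[l] @ Sigma_inv @ mu_hat[l]
        + np.log(pi_hat[l])
        for l in range(C)
    ])

# Classify by the largest discriminant value.
pred = int(np.argmax(lda_discriminant(np.array([2.5, 2.8]))))
```

Because all classes share ${\hat{\Sigma}}$, the quadratic term ${x^T\Sigma^{-1}x}$ cancels when comparing classes, which is why the function is linear in ${x}$.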

## Quadratic Discriminant Analysis (QDA)

If instead we assume that ${\pi(x|y=C_l)}$ in Eq. (1) follows a multivariate Gaussian ${N(\mu_l, \Sigma _l)}$ with class-specific mean vector and covariance matrix, we obtain the quadratic discriminant function

$\displaystyle -0.5 \log |\Sigma _l| - 0.5(x - \mu_l)^T \Sigma _l ^{-1}(x - \mu_l) + \log (\pi(y = C_l)),$

and the decision boundary between each pair of classes ${k}$ and ${l}$ is now described by a quadratic equation.
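The quadratic discriminant function can be sketched directly from its formula. The two-class toy data and the helper name `qda_discriminant` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: each class has its own covariance (the QDA assumption).
X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], size=60)
X1 = rng.multivariate_normal([3.0, 0.0], [[2.0, 0.5], [0.5, 0.5]], size=60)
classes = [X0, X1]
priors = np.array([0.5, 0.5])

def qda_discriminant(x):
    """-0.5 log|S_l| - 0.5 (x - mu_l)^T S_l^-1 (x - mu_l) + log(pi_l)."""
    scores = []
    for l, Xl in enumerate(classes):
        mu = Xl.mean(axis=0)
        S = np.cov(Xl, rowvar=False)  # class-specific covariance estimate
        diff = x - mu
        scores.append(
            -0.5 * np.log(np.linalg.det(S))
            - 0.5 * diff @ np.linalg.inv(S) @ diff
            + np.log(priors[l])
        )
    return np.array(scores)

pred = int(np.argmax(qda_discriminant(np.array([3.0, 0.0]))))
```

Since ${\Sigma_l}$ differs by class, the quadratic term no longer cancels between classes, so pairwise decision boundaries are quadratic in ${x}$.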

Notice that we pay a price for this increased flexibility when compared to LDA. We now have to estimate one covariance matrix for each class, which means a significant increase in the number of parameters to be estimated. This implies that the number of predictors needs to be less than the number of cases within each class to ensure that the class-specific covariance matrix is not singular. In addition, if the majority of the predictors in the data are indicators for discrete categories, QDA will only be able to model these as linear functions, thus limiting the effectiveness of the model [2].
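The growth in parameters is easy to quantify: a symmetric ${p \times p}$ covariance matrix has ${p(p+1)/2}$ free parameters, and QDA needs one per class. A small worked example with hypothetical sizes:

```python
# Hypothetical problem sizes: p predictors, C classes.
p, C = 20, 5

# A symmetric p x p covariance matrix has p(p+1)/2 free parameters.
lda_cov_params = p * (p + 1) // 2      # LDA: one shared covariance
qda_cov_params = C * p * (p + 1) // 2  # QDA: one covariance per class
```

With 20 predictors and 5 classes, QDA must estimate 1050 covariance parameters versus 210 for LDA, and each class-specific estimate is fit from only that class's observations.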

## Regularized Discriminant Analysis

Friedman ([1], [3]) proposed a compromise between LDA and QDA, which allows one to shrink the separate covariances of QDA toward a common covariance as in LDA. The regularized covariance matrices have the form

$\displaystyle \Sigma (\alpha) = \alpha \Sigma _l + (1-\alpha) \Sigma, \ \ \ \ \ (3)$

where ${\Sigma}$ is the common covariance matrix as used in LDA and ${\Sigma _l}$ is the class-specific covariance matrix as used in QDA. ${\alpha}$ is a number between ${0}$ and ${1}$ that can be chosen based on the performance of the model on validation data, or by cross-validation.
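Eq. (3) is a simple convex combination of the two matrices; a minimal sketch (the function name and example matrices are illustrative):

```python
import numpy as np

def regularized_covariance(Sigma_l, Sigma, alpha):
    """Eq. (3): shrink the class covariance toward the pooled covariance."""
    return alpha * Sigma_l + (1.0 - alpha) * Sigma

Sigma_l = np.array([[2.0, 0.5], [0.5, 1.0]])  # class-specific (QDA-style)
Sigma = np.array([[1.0, 0.0], [0.0, 1.0]])    # common, pooled (LDA-style)

qda_like = regularized_covariance(Sigma_l, Sigma, alpha=1.0)  # pure QDA
lda_like = regularized_covariance(Sigma_l, Sigma, alpha=0.0)  # pure LDA
blend = regularized_covariance(Sigma_l, Sigma, alpha=0.5)     # in between
```

At the endpoints ${\alpha = 1}$ recovers QDA and ${\alpha = 0}$ recovers LDA; intermediate values trade flexibility against the number of effective parameters.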

It is also possible to allow ${\Sigma}$ to be shrunk toward the spherical covariance

$\displaystyle \Sigma(\gamma) = \gamma \Sigma + (1 - \gamma) \sigma ^2 I,$

where ${I}$ is the identity matrix. The equation above means that, when ${\gamma = 0}$, the predictors are assumed independent and with common variance ${\sigma ^2}$. Replacing ${\Sigma}$ in Eq. (3) by ${\Sigma(\gamma)}$ leads to a more general family of covariances ${\Sigma(\alpha, \gamma)}$ indexed by a pair of parameters that again can be chosen based on the model performance on validation data.
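Composing the two shrinkage steps gives the two-parameter family ${\Sigma(\alpha, \gamma)}$; a minimal sketch with illustrative function names and matrices:

```python
import numpy as np

def shrunk_common(Sigma, gamma, sigma2):
    """Sigma(gamma) = gamma * Sigma + (1 - gamma) * sigma^2 * I."""
    I = np.eye(Sigma.shape[0])
    return gamma * Sigma + (1.0 - gamma) * sigma2 * I

def rda_covariance(Sigma_l, Sigma, alpha, gamma, sigma2):
    """Sigma(alpha, gamma): Eq. (3) with Sigma replaced by Sigma(gamma)."""
    return alpha * Sigma_l + (1.0 - alpha) * shrunk_common(Sigma, gamma, sigma2)

Sigma_l = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])

# gamma = 0 collapses the common part to the spherical covariance sigma^2 I.
spherical = rda_covariance(Sigma_l, Sigma, alpha=0.0, gamma=0.0, sigma2=1.0)
```

The pair ${(\alpha, \gamma)}$ can then be tuned jointly, e.g. over a grid evaluated on validation data.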

References:

[1] Hastie, T., Tibshirani, R., Friedman, J. (2009). The elements of statistical learning: data mining, inference and prediction. Springer.
[2] Kuhn, M. and Johnson, K. (2013). Applied Predictive Modeling. Springer.
[3] Friedman, J. H. (1989). Regularized discriminant analysis. Journal of the American Statistical Association, 84(405), 165–175.