# Correlation and dependence


In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data.

In the broadest sense, correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related.

Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity consumers are willing to purchase, as depicted in the demand curve.

Correlations are useful because they can indicate a predictive relationship that can be exploited in practice.

For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather.

In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling.

However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation).

## Pearson's product-moment coefficient

Main article: Pearson product-moment correlation coefficient

### Definition

The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient".

Mathematically, it is defined as the covariance of the two variables divided by the product of their standard deviations; equivalently, the covariance is normalized by the square root of the product of the two variances.

The square of the coefficient equals the coefficient of determination of a simple least-squares fit, so it can also be read as a measure of the quality of a linear fit to the data.
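In symbols, writing $\mu_X$, $\mu_Y$ for the means and $\sigma_X$, $\sigma_Y$ for the standard deviations of $X$ and $Y$:

$$\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{\operatorname{E}[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}$$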

Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.

The Pearson product-moment correlation coefficient can be understood as fitting a line of best fit through the data and measuring how far the actual observations lie from that line.

The sign of the coefficient gives the direction of any relationship: it is positive when the variables tend to increase together and negative when one tends to decrease as the other increases.

### Symmetry property

The correlation coefficient is symmetric: $\operatorname{corr}(X, Y) = \operatorname{corr}(Y, X)$, which follows directly from the definition because $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)$.

### Correlation and independence

If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables.

Even though uncorrelated data do not necessarily imply independence, one can check whether random variables are independent by testing whether their mutual information is 0, since mutual information is zero exactly when the variables are independent.
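As a minimal sketch of this idea in Python (using only numpy; the binned plug-in estimator below is a crude, biased estimate of mutual information, chosen purely for illustration), the following shows a relationship that Pearson correlation misses but mutual information picks up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 10_000)
y = x ** 2  # deterministic dependence, but symmetric, so covariance is ~0

# Pearson correlation is near zero despite perfect dependence.
print(np.corrcoef(x, y)[0, 1])

# Crude plug-in estimate of mutual information from a 2-D histogram.
counts, _, _ = np.histogram2d(x, y, bins=20)
p_xy = counts / counts.sum()            # joint distribution estimate
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of x
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of y
nonzero = p_xy > 0
mi = np.sum(p_xy[nonzero] * np.log(p_xy[nonzero] / (p_x * p_y)[nonzero]))
print(mi)  # clearly positive, flagging the dependence
```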

### Sample correlation coefficient
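For paired data $\{(x_i, y_i)\}_{i=1}^{n}$ with sample means $\bar{x}$ and $\bar{y}$, the sample correlation coefficient is

$$r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}.$$

A minimal Python sketch of this formula, checked against numpy's built-in `corrcoef`:

```python
import numpy as np

def sample_correlation(x, y):
    """Pearson sample correlation of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
print(sample_correlation(x, y))  # 0.8
print(np.corrcoef(x, y)[0, 1])   # same value
```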

## Example

Consider the pair of random variables $(X, Y)$, where $X$ takes each of the values $-1$, $0$, and $1$ with probability $1/3$, and $Y = X^2$.

For this joint distribution, the marginal distributions are:

$$\Pr(X = -1) = \Pr(X = 0) = \Pr(X = 1) = \tfrac{1}{3}, \qquad \Pr(Y = 0) = \tfrac{1}{3}, \quad \Pr(Y = 1) = \tfrac{2}{3}.$$

This yields the following expectations and variances:

$$\operatorname{E}(X) = 0, \quad \operatorname{E}(Y) = \tfrac{2}{3}, \quad \sigma_X^2 = \tfrac{2}{3}, \quad \sigma_Y^2 = \tfrac{2}{9}.$$

Therefore:

$$\rho_{X,Y} = \frac{\operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)}{\sigma_X \sigma_Y} = \frac{0 - 0 \cdot \tfrac{2}{3}}{\sigma_X \sigma_Y} = 0.$$

Here $X$ and $Y$ are uncorrelated even though $Y$ is a deterministic function of $X$, confirming that zero correlation does not imply independence.

## Rank correlation coefficients

Main articles: Spearman's rank correlation coefficient and Kendall tau rank correlation coefficient

Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ), measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship.

If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative.

It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions.

However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient.

To illustrate the nature of rank correlation, consider the four pairs of numbers (0, 1), (10, 100), (101, 500), (102, 2000): as we go from each pair to the next, both x and y increase, so the rank correlation is perfect, and both Spearman's and Kendall's coefficients equal 1, whereas the Pearson correlation coefficient is about 0.75, indicating that the points are far from lying on a straight line.
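A quick check of these four pairs (assuming `scipy` is available; `pearsonr`, `spearmanr`, and `kendalltau` are standard scipy.stats functions):

```python
from scipy import stats

x = [0, 10, 101, 102]
y = [1, 100, 500, 2000]

r, _ = stats.pearsonr(x, y)      # ~0.75: points far from a straight line
rho, _ = stats.spearmanr(x, y)   # 1.0: the ranks agree perfectly
tau, _ = stats.kendalltau(x, y)  # 1.0: every pair is concordant
print(r, rho, tau)
```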

## Other measures of dependence among random variables

See also: Pearson product-moment correlation coefficient § Variants

The information given by a correlation coefficient is not enough to define the dependence structure between random variables.

The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution.

In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).

Distance correlation was introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables; zero distance correlation implies independence.
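A minimal numpy sketch of the sample distance correlation (the naive, biased O(n²) version, written here only for illustration), applied to a ring-shaped relationship that Pearson correlation cannot see:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D arrays (naive O(n^2) version)."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)  # pairwise distances within x
    b = np.abs(y - y.T)  # pairwise distances within y
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
x = np.cos(t) + 0.05 * rng.standard_normal(500)
y = np.sin(t) + 0.05 * rng.standard_normal(500)

print(np.corrcoef(x, y)[0, 1])     # near 0: no linear relationship
print(distance_correlation(x, y))  # clearly positive: dependence detected
```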

The Randomized Dependence Coefficient is a computationally efficient, copula-based measure of dependence between multivariate random variables.

RDC is invariant with respect to non-linear scalings of random variables, is capable of discovering a wide range of functional association patterns and takes value zero at independence.

The correlation ratio, entropy-based mutual information, total correlation, dual total correlation, and polychoric correlation are all capable of detecting more general dependencies, as is consideration of the copula between the variables, while the coefficient of determination generalizes the correlation coefficient to multiple regression.

## Sensitivity to the data distribution

Further information: Pearson product-moment correlation coefficient § Sensitivity to the data distribution

Various correlation measures in use may be undefined for certain joint distributions of X and Y.

For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined.

Measures of dependence based on quantiles are always defined.
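As an illustrative sketch of both points (using scipy.stats; the Cauchy distribution has no defined mean or variance, so the population Pearson correlation does not exist, while rank-based measures remain well defined):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.standard_cauchy(1000)
    y = x + rng.standard_cauchy(1000)
    r, _ = stats.pearsonr(x, y)     # dominated by extreme outliers, unstable
    rho, _ = stats.spearmanr(x, y)  # rank-based, remains well behaved
    print(round(r, 3), round(rho, 3))
```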

Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties such as being unbiased, or asymptotically consistent, based on the spatial structure of the population from which the data were sampled.

Sensitivity to the data distribution can be used to advantage.

For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series.

By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed.
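A rough Python sketch of this idea (the segment length `s` and the simple averaging scheme here are assumptions for illustration, not the exact published procedure):

```python
import numpy as np

def scaled_correlation(x, y, s):
    """Average Pearson correlation over non-overlapping segments of length s."""
    rs = []
    for start in range(0, len(x) - s + 1, s):
        xs, ys = x[start:start + s], y[start:start + s]
        if xs.std() > 0 and ys.std() > 0:  # skip degenerate segments
            rs.append(np.corrcoef(xs, ys)[0, 1])
    return float(np.mean(rs))

# Two signals sharing a fast oscillation but with opposing slow trends.
n = 1000
t = np.arange(n)
fast = np.sin(0.5 * t)
x = fast + 0.01 * t  # slow upward trend
y = fast - 0.01 * t  # slow downward trend

print(np.corrcoef(x, y)[0, 1])       # pulled down by the opposing trends
print(scaled_correlation(x, y, 25))  # close to 1: fast components agree
```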

## Correlation matrices

See also: Covariance matrix § Correlation matrix

A correlation matrix appears, for example, in one formula for the coefficient of multiple determination, a measure of goodness of fit in multiple regression.

In statistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them.

For example, in an exchangeable correlation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other.

On the other hand, an autoregressive matrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time.

Other examples include independent, unstructured, M-dependent, and Toeplitz.
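As a small illustrative sketch, the exchangeable and autoregressive structures described above can be constructed directly in numpy (the function names are just for this example):

```python
import numpy as np

def exchangeable(n, rho):
    """Exchangeable structure: one shared off-diagonal correlation."""
    return rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n)

def ar1(n, rho):
    """Autoregressive AR(1) structure: correlation rho**|i - j| decays with lag."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

print(exchangeable(4, 0.3))  # off-diagonal entries all 0.3
print(ar1(4, 0.3))           # 0.3, 0.09, 0.027 as the lag grows
```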

## Common misconceptions

### Correlation and causality

Main article: Correlation does not imply causation

See also: Normally distributed and uncorrelated does not imply independent

The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables.

This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations.

However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists.

Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).

A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so.

Does improved mood lead to improved health, or does good health lead to good mood, or both?

Or does some other factor underlie both?

In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.

### Simple linear correlations

Examples such as Anscombe's quartet, four sets of paired data that share essentially the same correlation coefficient yet look entirely different when plotted, indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data.

The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is not correct.

## Bivariate normal distribution

If a pair $(X, Y)$ of random variables follows a bivariate normal distribution, the conditional mean $\operatorname{E}(X \mid Y)$ is a linear function of $Y$, and the correlation coefficient $\rho_{X,Y}$ determines the slope of this relationship; in this special case, uncorrelatedness is equivalent to independence.


Credit for the contents of this page goes to the authors of the corresponding Wikipedia page: en.wikipedia.org/wiki/Correlation_and_dependence.