What is the expected correlation between residual and dependent variable?


26

In multiple linear regression, I can understand that the correlations between the residuals and the predictors are zero, but what is the expected correlation between the residuals and the criterion variable? Should it be expected to be zero or highly correlated? What does that mean?


4
What is a "criterion variable"?
whuber

2
@whuber I suspect Jfly is referring to the response / outcome / dependent / etc. variable. davidmlane.com/hyperstat/A101702.html It is interesting to see how many names variables go by: en.wikipedia.org/wiki/...
Jeromy Anglim

@Jeromy Thanks! I had suspected that was the meaning, but wasn't sure. That's a new term for me - and, of course, for Wikipedia.
whuber

I would have thought this would equal $E[R^2]$ or something similar, since $R^2 = [\mathrm{corr}(y, \hat{y})]^2$.
probabilityislogic

$y = f(x) + e$, where $f$ is the regression function, $e$ is an error, and $\mathrm{Cov}(f(x), e) = 0$. Then $\mathrm{Corr}(y, e) = SD(e)/SD(y) = \sqrt{1 - R^2}$. That is the sample statistic. The expected value would be similar, but messier.
Ray Koopman

Answers:


20

In the regression model:

$$y_i = x_i'\beta + u_i$$

the usual assumption is that $(y_i, x_i, u_i)$, $i = 1, \dots, n$, is an iid sample. Under the assumptions that $E x_i u_i = 0$ and $E(x_i x_i')$ has full rank, the ordinary least squares estimator:

$$\hat\beta = \left(\sum_{i=1}^n x_i x_i'\right)^{-1} \sum_{i=1}^n x_i y_i$$

is consistent and asymptotically normal. The expected covariance between the error term and the response variable then is:

$$E y_i u_i = E(x_i'\beta + u_i)\, u_i = E u_i^2$$

If we furthermore assume that $E(u_i \mid x_1, \dots, x_n) = 0$ and $E(u_i^2 \mid x_1, \dots, x_n) = \sigma^2$, we can calculate the expected covariance between $y_i$ and its regression residual:

$$E y_i \hat u_i = E y_i (y_i - x_i'\hat\beta) = E(x_i'\beta + u_i)\bigl(u_i - x_i'(\hat\beta - \beta)\bigr) = E(u_i^2)\left(1 - E\, x_i'\Bigl(\sum_{j=1}^n x_j x_j'\Bigr)^{-1} x_i\right)$$

Now to get the correlation we need to calculate $\mathrm{Var}(y_i)$ and $\mathrm{Var}(\hat u_i)$. It turns out that

$$\mathrm{Var}(\hat u_i) = E(y_i \hat u_i),$$

hence

$$\mathrm{Corr}(y_i, \hat u_i) = \sqrt{1 - E\, x_i'\Bigl(\sum_{j=1}^n x_j x_j'\Bigr)^{-1} x_i}$$

Now the term $x_i'\bigl(\sum_{j=1}^n x_j x_j'\bigr)^{-1} x_i$ comes from the diagonal of the hat matrix $H = X(X'X)^{-1}X'$, where $X' = [x_1, \dots, x_N]$. The matrix $H$ is idempotent, hence it satisfies the following property:

$$\mathrm{trace}(H) = \sum_i h_{ii} = \mathrm{rank}(H),$$

where $h_{ii}$ is the $i$-th diagonal element of $H$. Here $\mathrm{rank}(H)$ is the number of linearly independent variables in $x_i$, which is usually the number of regressors; call it $p$. The number of diagonal elements $h_{ii}$ equals the sample size $N$, so we have $N$ nonnegative terms which must sum to $p$. Usually $N$ is much bigger than $p$, hence many of the $h_{ii}$ will be close to zero, meaning that the correlation between the residual and the response variable will be close to 1 for the larger part of the observations.

The term $h_{ii}$ is also used in various regression diagnostics for determining influential observations.
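The claim that $\mathrm{Corr}(y_i, \hat u_i) = \sqrt{1 - h_{ii}}$ is easy to check numerically. Below is a small Monte Carlo sketch (not part of the original answer; the design, sample size, and seed are arbitrary choices of mine) that holds $X$ fixed, redraws the errors many times, and compares the per-observation correlation between $y_i$ and $\hat u_i$ with $\sqrt{1 - h_{ii}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, reps = 50, 3, 20000
X = np.column_stack([np.ones(N), rng.normal(size=(N, p - 1))])  # fixed design
beta = rng.normal(size=p)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
h = np.diag(H)                         # leverages h_ii
M = np.eye(N) - H                      # residual-maker matrix

ys = np.empty((reps, N))
res = np.empty((reps, N))
for r in range(reps):
    u = rng.normal(size=N)             # homoskedastic errors
    y = X @ beta + u
    ys[r] = y
    res[r] = M @ y                     # residuals u_hat = M y

# correlation between y_i and u_hat_i across replications, per observation i
corr = np.array([np.corrcoef(ys[:, i], res[:, i])[0, 1] for i in range(N)])
print(np.max(np.abs(corr - np.sqrt(1 - h))))   # small Monte Carlo error
```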


10
+1 This is exactly the right analysis. But why don't you finish the job and answer the question? The OP asks whether this correlation is "high" and what it might mean.
whuber

So you could say that the correlation is roughly $\sqrt{1 - \frac{p}{N}}$.
probabilityislogic

1
Correlation is different for every observation, but yeah you can say that, provided X does not have outliers.
mpiktas

21

The correlation depends on the $R^2$. If $R^2$ is high, it means that much of the variation in your dependent variable can be attributed to variation in your independent variables, and NOT to your error term.

However, if $R^2$ is low, then much of the variation in your dependent variable is unrelated to variation in your independent variables, and thus must be related to the error term.

Consider the following model:

$Y = X\beta + \varepsilon$, where $Y$ and $X$ are uncorrelated.

Assuming sufficient regularity conditions for the CLT to hold.

$\hat\beta$ will converge to $0$, since $X$ and $Y$ are uncorrelated. Therefore $\hat Y = X\hat\beta$ will always be zero. Thus $\varepsilon := Y - \hat Y = Y - 0 = Y$: $\varepsilon$ and $Y$ are perfectly correlated!

Holding all else fixed, increasing the $R^2$ will decrease the correlation between the error and the dependent variable. A strong correlation is not necessarily cause for alarm; it may simply mean the underlying process is noisy. However, a low $R^2$ (and hence a high correlation between error and dependent variable) may be due to model misspecification.
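To make the point concrete, here is a hedged illustration (my own sketch, with made-up data, not from the original answer): when $X$ carries no information about $Y$, the fitted values are nearly constant and the residuals are almost identical to $Y$, so their correlation is close to 1; with a strong signal the correlation drops.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def corr_y_resid(y):
    # fit OLS and return the sample correlation between y and its residuals
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_hat
    return np.corrcoef(y, resid)[0, 1]

y_noise = rng.normal(size=n)            # Y unrelated to X  -> correlation near 1
y_signal = 5 * x + rng.normal(size=n)   # strong signal     -> much lower correlation
print(corr_y_resid(y_noise), corr_y_resid(y_signal))
```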


I find this answer confusing, in part through its use of "$\varepsilon$" to stand both for the error terms in the model and the residuals $Y - \hat Y$. Another point of confusion is the reference to "converge to" even though there is no sequence of anything at all in evidence to which convergence might apply. The assumption that $X$ and $Y$ are uncorrelated seems special and not illustrative of general circumstances. All this obscures whatever this answer might be trying to say or which claims are generally true.
whuber

17

I find this topic quite interesting and the current answers are unfortunately incomplete or partly misleading, despite the relevance and high popularity of this question.

By definition of the classical OLS framework there should be no relationship between $\hat y$ and $\hat u$, since the residuals are by construction uncorrelated with $\hat y$ when deriving the OLS estimator. The variance-minimizing property under homoskedasticity ensures that the residual errors are randomly spread around the fitted values. This can be formally shown by:

$$\mathrm{Cov}(\hat y, \hat u \mid X) = \mathrm{Cov}(Py, My \mid X) = \mathrm{Cov}(Py, (I - P)y \mid X) = P\,\mathrm{Cov}(y, y)\,(I - P)'$$
$$= P\sigma^2(I - P)' = P\sigma^2 - P\sigma^2 P = P\sigma^2 - P\sigma^2 = 0$$

where $M$ and $P$ are idempotent matrices defined as $P = X(X'X)^{-1}X'$ and $M = I - P$.

This result is based on strict exogeneity and homoskedasticity, and practically holds in large samples. The intuition for their uncorrelatedness is the following: the fitted values $\hat y$ conditional on $X$ are centered around $\hat u$, which are thought of as independently and identically distributed. However, any deviation from the strict exogeneity and homoskedasticity assumptions could cause the explanatory variables to be endogenous and induce a latent correlation between $\hat u$ and $\hat y$.
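A minimal numeric check of this orthogonality (my own sketch with simulated data, not part of the original derivation): the sample covariance between the fitted values $Py$ and the residuals $My$ is zero up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ rng.normal(size=p) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection (hat) matrix
y_hat = P @ y                          # fitted values P y
u_hat = y - y_hat                      # residuals M y
print(np.cov(y_hat, u_hat)[0, 1])      # ~0 (machine precision)
```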

Now the correlation between the residuals $\hat u$ and the "original" $y$ is a completely different story:

$$\mathrm{Cov}(y, \hat u \mid X) = \mathrm{Cov}(y, My \mid X) = \mathrm{Cov}(y, (I - P)y \mid X) = \mathrm{Cov}(y, y)(I - P)' = \sigma^2 M$$

A little checking against the theory shows that this covariance matrix is identical to the covariance matrix of the residuals $\hat u$ themselves (proof omitted). We have:

$$\mathrm{Var}(\hat u) = \sigma^2 M = \mathrm{Cov}(y, \hat u \mid X)$$

If we would like to calculate the (scalar) covariance between $y$ and $\hat u$ as requested by the OP, we obtain:

$$\mathrm{Cov}_{\mathrm{scalar}}(y, \hat u \mid X) = \mathrm{Var}(\hat u \mid X) = \Bigl(\sum_i \hat u_i^2\Bigr)/N$$

(obtained by summing the diagonal entries of the covariance matrix and dividing by $N$)

The above formula indicates an interesting point. If we test the relationship by regressing $y$ on the residuals $\hat u$ (plus a constant), the slope coefficient is $\beta_{\hat u, y} = 1$, which can be easily derived by dividing the above expression by $\mathrm{Var}(\hat u \mid X)$.
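A quick check of the slope claim (an illustrative sketch with invented data, not from the original post): regressing $y$ on $\hat u$ plus a constant returns a slope of 1, since the sample covariance of $y$ and $\hat u$ equals the sample variance of $\hat u$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = 2 + 0.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
u_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

Z = np.column_stack([np.ones(n), u_hat])               # regress y on u_hat + constant
slope = np.linalg.lstsq(Z, y, rcond=None)[0][1]
print(slope)                                           # 1.0 up to rounding
```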

On the other hand, the correlation is the covariance standardized by the respective standard deviations. Now, the variance matrix of the residuals is $\sigma^2 M$, while the variance matrix of $y$ is $\sigma^2 I$. The correlation $\mathrm{Corr}(y, \hat u)$ therefore becomes:

$$\mathrm{Corr}(y, \hat u) = \frac{\mathrm{Var}(\hat u)}{\sqrt{\mathrm{Var}(\hat u)\,\mathrm{Var}(y)}} = \sqrt{\frac{\mathrm{Var}(\hat u)}{\mathrm{Var}(y)}} = \sqrt{\frac{\mathrm{Var}(\hat u)}{\sigma^2}}$$

This is the core result which ought to hold in a linear regression. The intuition is that $\mathrm{Corr}(y, \hat u)$ expresses the discrepancy between the true variance of the error term and a proxy for it based on the residuals. Notice that the variance of $y$ is equal to the variance of $\hat y$ plus the variance of the residuals $\hat u$, so it can be rewritten more intuitively as:

$$\mathrm{Corr}(y, \hat u) = \frac{1}{\sqrt{1 + \frac{\mathrm{Var}(\hat y)}{\mathrm{Var}(\hat u)}}}$$

There are two forces at work here. If the regression line fits well, the correlation is expected to be low because $\mathrm{Var}(\hat u) \to 0$. On the other hand, $\mathrm{Var}(\hat y)$ is a bit of a fudge to rely on, as it is unconditional and refers to a line in parameter space. Comparing an unconditional and a conditional variance within a ratio may not be an appropriate indicator after all. Perhaps that is why it is rarely done in practice.
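Numerically, the relation above can be verified in-sample as $\mathrm{Corr}(y, \hat u) = \sqrt{1 - R^2}$, a point also raised in the comments below. The following sketch uses made-up data of my own choosing and is only meant as an illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=2.0, size=n)

u_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals
r2 = 1 - u_hat.var() / y.var()                         # R^2 via variance decomposition

print(np.corrcoef(y, u_hat)[0, 1], np.sqrt(1 - r2))    # the two values agree
```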

An attempt to conclude the question: the correlation between $y$ and $\hat u$ is positive and relates to the ratio of the variance of the residuals to the variance of the true error term, proxied by the unconditional variance of $y$. Hence it is a bit of a misleading indicator.

Although this exercise may give us some intuition about the workings and inherent theoretical assumptions of an OLS regression, we rarely evaluate the correlation between $y$ and $\hat u$. There are certainly more established tests for checking properties of the true error term. Secondly, keep in mind that the residuals are not the error term, and tests on the residuals $\hat u$ that make predictions about the characteristics of the true error term $u$ are limited; their validity needs to be handled with utmost care.

For example, I would like to point out a statement made by a previous poster here. It is said that,

"If your residuals are correlated with your independent variables, then your model is heteroskedastic..."

I think that may not be entirely valid in this context. Believe it or not, the OLS residuals $\hat u$ are by construction uncorrelated with the independent variables $x_k$. To see this, consider:

$$X'\hat u = X'My = X'(I - P)y = X'y - X'Py$$
$$= X'y - X'X(X'X)^{-1}X'y = X'y - X'y = 0$$
$$X'\hat u = 0 \;\Rightarrow\; \mathrm{Cov}(X, \hat u \mid X) = 0 \;\Rightarrow\; \mathrm{Cov}(x_{ki}, \hat u_i \mid x_{ki}) = 0$$
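The orthogonality $X'\hat u = 0$ can also be confirmed numerically; the sketch below uses simulated data of my own making and is only meant as an illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ rng.normal(size=p) + rng.normal(size=n)

u_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals
print(X.T @ u_hat)   # every entry ~0 up to floating-point error
```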

However, you may have heard claims that an explanatory variable is correlated with the error term. Notice that such claims are based on assumptions about the whole population and a true underlying regression model that we do not observe first hand. Consequently, checking the correlation between $y$ and $\hat u$ seems pointless in a linear OLS framework. However, when testing for heteroskedasticity, we take into account the second conditional moment: for example, we regress the squared residuals on $X$ or a function of $X$, as is often the case with FGLS estimators. This is different from evaluating the plain correlation. I hope this helps to make matters clearer.


1
Note that we have $\mathrm{var}(\hat u)/\mathrm{var}(y) = \mathrm{SSE}/\mathrm{TSS} = 1 - R^2$ (at least roughly anyway). This gives $\mathrm{corr}(y, \hat u) = \sqrt{1 - R^2}$, which is some further intuition about what you mention in later paragraphs.
probabilityislogic

2
What I find interesting about this answer is that the correlation is always positive.
probabilityislogic

You state that $\mathrm{Var}(y)$ is a matrix, yet you divide by it.
mpiktas

@probabilityislogic: Not sure if I can follow your step. It would then be $1 + 1/(1 - R^2)$ under the square root, which is $(2 - R^2)/(1 - R^2)$? Yet what is true is that it remains positive. The intuition is that if you have a line through a scatterplot, and you regress this line on the errors from that line, it should be obvious that as the value $y$ of that line increases, the value of the residuals increases as well. This is because the residuals are positively dependent on $y$ by construction.
Majte

@mpiktas: In this case the matrix becomes a scalar, as we are dealing with $y$ being only one-dimensional.
Majte

6

Adam's answer is wrong. Even with a model that fits the data perfectly, you can still get a high correlation between the residuals and the dependent variable. That is why no regression book asks you to check this correlation. You can find the answer in Dr. Draper's book "Applied Regression Analysis".


3
Even if correct, this is more of an assertion than an answer according to CV's standards, @Jeff. Would you mind elaborating / backing up your claim? Even just a page number & edition of Draper & Smith would suffice.
gung - Reinstate Monica

4

So, the residuals are your unexplained variance, the difference between your model's predictions and the actual outcome you're modeling. In practice, few models produced through linear regression will have all residuals close to zero unless linear regression is being used to analyze a mechanical or fixed process.

Ideally, the residuals from your model should be random, meaning they should not be correlated with either your independent or dependent variables (what you term the criterion variable). In linear regression, your error term is normally distributed, so your residuals should be normally distributed as well. If you have significant outliers, or if your residuals are correlated with either your dependent variable or your independent variables, then you have a problem with your model.

If you have significant outliers and a non-normal distribution of your residuals, then the outliers may be skewing your weights (Betas), and I would suggest calculating DFBETAS to check the influence of your observations on your weights. If your residuals are correlated with your dependent variable, then there is a significantly large amount of unexplained variance that you are not accounting for. You may also see this if you're analyzing repeated observations of the same thing, due to autocorrelation. This can be checked by seeing whether your residuals are correlated with your time or index variable. If your residuals are correlated with your independent variables, then your model is heteroskedastic (see: http://en.wikipedia.org/wiki/Heteroscedasticity). You should check (if you haven't already) whether your input variables are normally distributed, and if not, you should consider scaling or transforming your data (the most common transformations are log and square root) to make it closer to normal.

For both your residuals and your independent variables, you should make a Q-Q plot, as well as perform a Kolmogorov-Smirnov test (this particular implementation is sometimes referred to as the Lilliefors test), to make sure that your values fit a normal distribution.

Three quick things that may be helpful in dealing with this problem: examine the median of your residuals, which should be as close to zero as possible (the mean will almost always be zero as a result of how the error term is fitted in linear regression); run a Durbin-Watson test for autocorrelation in your residuals (especially, as I mentioned before, if you are looking at multiple observations of the same things); and make a partial residual plot to look for heteroscedasticity and outliers.
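Here is a hedged sketch of some of these checks using plain numpy/scipy (the data and the simple model are invented for illustration only; the Durbin-Watson statistic is computed by hand rather than via a dedicated package):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

print("median of residuals:", np.median(resid))        # should be near 0

# Durbin-Watson statistic: values near 2 suggest no first-order autocorrelation
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print("Durbin-Watson:", dw)

# Kolmogorov-Smirnov test of standardized residuals against N(0, 1)
# (see the comment below about the reliability of this test)
z = (resid - resid.mean()) / resid.std()
print("KS test:", stats.kstest(z, "norm"))

# Q-Q correlation: how closely ordered residuals track normal quantiles
(osm, osr), (slope, intercept, r) = stats.probplot(z, dist="norm")
print("Q-Q correlation:", r)
```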


Thank you very much. Your explanation is very helpful to me.
Jfly

1
+1 Nice, comprehensive answer. I'm going to nitpick on 2 points. "If your residuals are correlated with your independent variables, then your model is heteroskedastic"--I would say that if the variance of your residuals depends on the level of an independent variable, then you have heteroscedasticity. Also, I have heard the Kolmogorov-Smirnov/Lilliefors tests described as "notoriously unreliable," and in practice I have certainly found this to be true. Better to make a subjective determination based on a Q-Q plot or a simple histogram.
rolando2

4
The claim that "the residuals from your model... should not be correlated with... your... dependent variable" is not generally true, as explained in other answers on this thread. Would you mind correcting this post?
gung - Reinstate Monica

1
(-1) I think this post is not relevant enough to the question asked. It is good as general advice, but perhaps a case of the "right answer to the wrong question".
probabilityislogic