Since
$$\hat\beta = (X^TX)^{-1}X^TY = (X^TX)^{-1}X^T(X\beta + \varepsilon) = \beta + (X^TX)^{-1}X^T\varepsilon,$$
we know that
$$\hat\beta - \beta \sim N\left(0, \sigma^2 (X^TX)^{-1}\right),$$
and so we see that for each component $k$ of $\hat\beta$,
$$\hat\beta_k - \beta_k \sim N(0, \sigma^2 S_{kk}),$$
where $S_{kk}$ is the $k$th diagonal element of $(X^TX)^{-1}$. Thus,
$$z_k = \frac{\hat\beta_k - \beta_k}{\sqrt{\sigma^2 S_{kk}}} \sim N(0, 1).$$
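As a quick numerical sanity check (not part of the derivation), here is a minimal Python/NumPy sketch that draws one sample from the model and computes $\hat\beta$ and $z_k$ directly from the formulas above; the design matrix, coefficients, and $\sigma$ are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3                       # illustrative sample size and number of regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # design matrix with intercept
beta = np.array([1.0, 2.0, -0.5])   # "true" coefficients, chosen arbitrarily
sigma = 1.5                         # true error standard deviation
y = X @ beta + rng.normal(scale=sigma, size=n)

# OLS estimate: beta_hat = (X^T X)^{-1} X^T y
S = np.linalg.inv(X.T @ X)
beta_hat = S @ X.T @ y

# z_k = (beta_hat_k - beta_k) / sqrt(sigma^2 * S_kk), using the true sigma^2
z = (beta_hat - beta) / np.sqrt(sigma**2 * np.diag(S))
print(beta_hat)  # close to beta
print(z)         # one draw per component from (approximately) N(0, 1)
```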
Note the statement of the Theorem for the Distribution of an Idempotent Quadratic Form in a Standard Normal Vector (Theorem B.8 in Greene):
If $x \sim N(0, I)$ and $A$ is symmetric and idempotent, then $x^T A x$ is distributed $\chi^2_{\nu}$, where $\nu$ is the rank of $A$.
Let $\hat\varepsilon$ denote the regression residual vector and let
$$M = I_n - X(X^TX)^{-1}X^T,$$
which is the residual maker matrix (i.e. $My = \hat\varepsilon$). It is easy to verify that $M$ is symmetric and idempotent.
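Here is a small sketch (again with made-up data, restated so it runs on its own) that checks numerically that $M$ is symmetric, is idempotent, and maps $y$ to the residual vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=1.5, size=n)

# Residual maker matrix M = I_n - X (X^T X)^{-1} X^T
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat            # regression residuals eps_hat

print(np.allclose(M, M.T))          # True: M is symmetric
print(np.allclose(M @ M, M))        # True: M is idempotent
print(np.allclose(M @ y, resid))    # True: M y = eps_hat
```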
Let
$$s^2 = \frac{\hat\varepsilon^T \hat\varepsilon}{n - p}$$
be an estimator for $\sigma^2$.
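A short simulation sketch (same illustrative settings as before, not part of the argument) showing that $s^2$ does behave as an estimator of $\sigma^2$ across repeated samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 100, 3, 1.5           # illustrative values
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])

s2_draws = []
for _ in range(2000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    s2_draws.append(resid @ resid / (n - p))   # s^2 = eps_hat' eps_hat / (n - p)

print(np.mean(s2_draws), sigma**2)  # the average of s^2 is close to sigma^2 = 2.25
```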
We then need to do some linear algebra. Note these three linear algebra properties:
- The rank of an idempotent matrix is its trace.
- $\operatorname{Tr}(A_1 + A_2) = \operatorname{Tr}(A_1) + \operatorname{Tr}(A_2)$
- $\operatorname{Tr}(A_1 A_2) = \operatorname{Tr}(A_2 A_1)$ if $A_1$ is $n_1 \times n_2$ and $A_2$ is $n_2 \times n_1$ (this property is critical for the below to work)
So
$$\operatorname{rank}(M) = \operatorname{Tr}(M) = \operatorname{Tr}\left(I_n - X(X^TX)^{-1}X^T\right) = \operatorname{Tr}(I_n) - \operatorname{Tr}\left(X(X^TX)^{-1}X^T\right) = \operatorname{Tr}(I_n) - \operatorname{Tr}\left((X^TX)^{-1}X^TX\right) = \operatorname{Tr}(I_n) - \operatorname{Tr}(I_p) = n - p.$$
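This rank-equals-trace calculation is easy to confirm numerically; a minimal sketch with an arbitrary design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3                       # illustrative dimensions
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

print(np.linalg.matrix_rank(M))     # 97
print(round(np.trace(M)))           # 97
print(n - p)                        # 97
```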
Then
$$V = \frac{(n - p)s^2}{\sigma^2} = \frac{\hat\varepsilon^T \hat\varepsilon}{\sigma^2} = \left(\frac{\varepsilon}{\sigma}\right)^T M \left(\frac{\varepsilon}{\sigma}\right),$$
where the last equality uses $\hat\varepsilon = M\varepsilon$, so that $\hat\varepsilon^T\hat\varepsilon = \varepsilon^T M^T M \varepsilon = \varepsilon^T M \varepsilon$ because $M$ is symmetric and idempotent.
Applying the Theorem for the Distribution of an Idempotent Quadratic Form in a Standard Normal Vector (stated above), we know that $V \sim \chi^2_{n-p}$.
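To see this empirically, here is a simulation sketch (hypothetical settings, with a smaller $n$ so the shape is visible) comparing draws of $V$ against the $\chi^2_{n-p}$ distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, sigma = 30, 3, 1.5            # illustrative values
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])

V_draws = []
for _ in range(5000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    V_draws.append(resid @ resid / sigma**2)   # V = (n - p) s^2 / sigma^2

print(np.mean(V_draws), n - p)                       # chi^2_{n-p} has mean n - p
print(stats.kstest(V_draws, "chi2", args=(n - p,)))  # KS test should not reject chi^2_{n-p}
```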
Since you assumed that $\varepsilon$ is normally distributed, $\hat\beta$ is independent of $\hat\varepsilon$, and since $s^2$ is a function of $\hat\varepsilon$, $s^2$ is also independent of $\hat\beta$. Thus, $z_k$ and $V$ are independent of each other.
Then,
$$t_k = \frac{z_k}{\sqrt{V/(n - p)}}$$
is the ratio of a standard Normal random variable to the square root of an independent Chi-squared random variable divided by its degrees of freedom (i.e. $n - p$), which is a characterization of the $t$ distribution.
Therefore, the statistic $t_k$ has a $t$ distribution with $n - p$ degrees of freedom.
It can then be algebraically manipulated into a more familiar form.
$$t_k = \frac{\dfrac{\hat\beta_k - \beta_k}{\sqrt{\sigma^2 S_{kk}}}}{\sqrt{\dfrac{(n - p)s^2}{\sigma^2}\Big/(n - p)}} = \frac{\hat\beta_k - \beta_k}{\sqrt{S_{kk}}\sqrt{s^2}} = \frac{\hat\beta_k - \beta_k}{\sqrt{s^2 S_{kk}}} = \frac{\hat\beta_k - \beta_k}{\operatorname{se}(\hat\beta_k)}.$$
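Finally, a simulation sketch (again with arbitrary $X$, $\beta$, and $\sigma$) checking that this familiar form of $t_k$ does follow a $t$ distribution with $n - p$ degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p, sigma, k = 30, 3, 1.5, 1      # illustrative values; k indexes the tested coefficient
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])
S = np.linalg.inv(X.T @ X)

t_draws = []
for _ in range(5000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    beta_hat = S @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)
    se_k = np.sqrt(s2 * S[k, k])                  # se(beta_hat_k) = sqrt(s^2 * S_kk)
    t_draws.append((beta_hat[k] - beta[k]) / se_k)

# The simulated statistic should match a t distribution with n - p degrees of freedom
print(stats.kstest(t_draws, "t", args=(n - p,)))
```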