1. Consider the so-called linear probability model

   y_i = β_0 + β_1 x_i + u_i,

where y_i can be either 0 or 1. In particular, we assume that Pr(y_i = 1 | x_i) = β_0 + β_1 x_i.

(a) Show that E(u_i | x_i) = 0.
(b) Show that var(u_i | x_i) = (β_0 + β_1 x_i)(1 − (β_0 + β_1 x_i)).
(c) Comment on results (a) and (b), and on how the statistical properties of the
OLS estimators for β_0, β_1 could be affected.
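The two properties in (a) and (b) are easy to see numerically. The sketch below (not part of the original exercise; the parameter values β_0 = 0.2, β_1 = 0.5 and the uniform design for x are assumptions chosen so that β_0 + β_1 x stays inside (0, 1)) simulates the linear probability model and checks that, conditional on x, the error has mean zero and the Bernoulli variance p(1 − p):

```python
import numpy as np

rng = np.random.default_rng(0)
b0, b1 = 0.2, 0.5                 # assumed parameter values for illustration
n = 200_000
x = rng.uniform(0.0, 1.0, n)      # keeps p = b0 + b1*x inside (0, 1)
p = b0 + b1 * x
y = rng.binomial(1, p)            # Pr(y = 1 | x) = b0 + b1*x
u = y - p                         # the regression error u_i

# Condition on x near 0.5 and compare with the theoretical moments
mask = (x > 0.45) & (x < 0.55)
p_mid = b0 + b1 * 0.5
print(round(u[mask].mean(), 3))   # should be close to 0
print(round(u[mask].var(), 3))    # should be close to p_mid * (1 - p_mid)
```

Since var(u | x) depends on x, the simulated errors are heteroskedastic, which is exactly the issue part (c) asks about.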

2. Suppose that we are given the return for N assets in January 2004 (r_{i,J04}, i =
1, ..., N) as well as the return for the same assets in December 2004 (r_{i,D04}, i =
1, ..., N).

Assume that when N = 24 the standard deviation of the r_i is equal to 0.5087 for
Jan 04 and equal to 0.3645 for Dec 04. Moreover, the correlation between the
r_{i,J04} and the r_{i,D04} is 0.8753.

(a) Are we able to estimate at least some of the parameters of the linear regression
r_{i,D04} = α + β r_{i,J04} + ε_i, i = 1, ..., N?

(b) Can we test the hypothesis
H_0 : β = 1 against H_1 : β < 1?

(c) Can we derive the R2 of the regression?
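For intuition on what the summary statistics do pin down, recall the textbook identities for simple regression: the OLS slope is β̂ = ρ · s_y / s_x and the coefficient of determination is R² = ρ². The snippet below (an illustrative check, not the official solution) plugs in the figures given above; note the intercept α̂ would additionally require the two sample means, which the exercise does not provide:

```python
s_jan, s_dec = 0.5087, 0.3645    # sample standard deviations from the text
rho = 0.8753                     # sample correlation from the text

beta_hat = rho * s_dec / s_jan   # OLS slope = cov(x, y)/var(x) = rho * s_y / s_x
r_squared = rho ** 2             # in simple regression, R^2 = corr^2
print(round(beta_hat, 4), round(r_squared, 4))
```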

3. Let y_1, ..., y_T, x_1, ..., x_T be a random sample from the model

   y_t = β x_t + ε_t,

where E(ε_t) = 0, var(ε_t) = 1, E(x_t) = µ_x ≠ 0, and var(x_t) = σ_x^2, with x_i and
ε_j independent from each other for any i, j.

Define the estimator

   β̃ = ȳ / x̄,

where x̄ = (1/T) Σ_{t=1}^T x_t and ȳ = (1/T) Σ_{t=1}^T y_t.

(a) Recalling the asymptotic properties of the sample mean of a set of i.i.d.
observations, show that β̃ converges in probability to β. (Hint: g(α̂) is a
consistent estimator of g(α) if g(·) is a continuous function and the vector
α̂ →_p α.)
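The consistency claim can be seen in a quick Monte Carlo sketch (the values β = 2 and µ_x = 3, and the normal distributions for x_t and ε_t, are assumptions for illustration only; the argument itself only needs the stated moments): as T grows, x̄ →_p µ_x ≠ 0 and ȳ →_p β µ_x, so by the continuous mapping theorem the ratio ȳ/x̄ tends to β.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, mu_x = 2.0, 3.0                    # assumed true values for illustration

def beta_tilde(T):
    x = mu_x + rng.normal(0.0, 1.0, T)   # E(x_t) = mu_x != 0
    eps = rng.normal(0.0, 1.0, T)        # E(eps_t) = 0, var(eps_t) = 1
    y = beta * x + eps
    return y.mean() / x.mean()           # the ratio-of-means estimator

# The estimate should settle near beta as T increases
for T in (10, 1_000, 100_000):
    print(T, round(beta_tilde(T), 4))
```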
