Homework 7: Mathematical Statistics (MATH-UA 234)
AUTHOR
essaygo
PUBLISHED ON:
December 30, 2022

This is a mathematical statistics homework assignment from the United States.

Reminder. Remember that the project presentations are on December 14th!

Problem 1. Suppose 𝑋1 , … , 𝑋𝑛 ∼ Ber(𝑝) (with 1 representing heads and zero representing tails) and that we use the prior distribution 𝑝 ∼ Beta(𝛼, 𝛽).

(a) Compute the posterior distribution of 𝑝 | 𝑋1 = 𝑥1, …, 𝑋𝑛 = 𝑥𝑛.

(b) For each of the coins below, find values of 𝛼 and 𝛽 so that your prior distribution represents your belief about the parameter 𝑝 of the coin. Plot and label these 6 prior distributions. Note that the head side is the side marked with the number.

(c) Suppose you flipped coin 0 and got 53 heads and 47 tails. Make a plot showing the prior and posterior densities for 𝑝.

(d) Suppose you flipped coin 4 and got 39 heads and 61 tails. Make a plot showing the prior and the posterior densities for 𝑝.

(e) Suppose you flipped coin 6 and got 0 heads and 100 tails. Make a plot showing the prior and the posterior densities for 𝑝.

(f) For the coin 6 example, is the probability that 𝑝 = 0 under your posterior 100%? Does this make sense? Why or why not?

This image was taken from this site: https://izbicki.me/blog/how-to-create-an-unfair-coin-and-prove-it-with-math.html
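By standard Beta–Bernoulli conjugacy, the posterior in Problem 1 is again a Beta distribution. A minimal sketch of the update and of evaluating the densities for plotting; the Beta(50, 50) prior here is purely illustrative (choosing priors for each coin is part (b)), and the counts are those from part (c):

```python
import numpy as np
from scipy import stats

# Beta-Bernoulli conjugate update: with a Beta(alpha, beta) prior and
# h heads plus t tails observed, the posterior is Beta(alpha + h, beta + t).
def beta_posterior(alpha, beta, heads, tails):
    return alpha + heads, beta + tails

# Illustrative prior Beta(50, 50) updated with part (c)'s 53 heads, 47 tails.
a_post, b_post = beta_posterior(50, 50, 53, 47)  # Beta(103, 97)

# Evaluate prior and posterior densities on a grid of p values for plotting.
p = np.linspace(0.0, 1.0, 501)
prior_pdf = stats.beta.pdf(p, 50, 50)
post_pdf = stats.beta.pdf(p, a_post, b_post)

print(a_post, b_post)
```

The same function covers parts (d) and (e); for coin 6 in part (f), note that the posterior Beta(𝛼, 𝛽 + 100) still puts positive density on every 𝑝 > 0 as long as 𝛼 > 0.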

Problem 2 (Wasserman 11.1). Suppose 𝑋1, …, 𝑋𝑛 ∼ 𝑁(𝜃, 𝜎²), and that we use the prior distribution 𝜃 ∼ 𝑁(𝑎, 𝑏²). Show that 𝜃 | 𝑋1 = 𝑥1, …, 𝑋𝑛 = 𝑥𝑛 ∼ 𝑁(𝜃̄, 𝜏²), where

𝜃̄ = 𝑤 𝑥̄ + (1 − 𝑤)𝑎,  𝑥̄ = (𝑥1 + ⋯ + 𝑥𝑛)/𝑛,
𝑤 = (1/se²) / (1/se² + 1/𝑏²),
𝜏 = 1 / √(1/se² + 1/𝑏²),
se = 𝜎/√𝑛.
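The closed-form posterior can be sanity-checked numerically by evaluating prior × likelihood on a grid. In this sketch the values of 𝜎, 𝑎, 𝑏, 𝑛 and the true mean 5.0 are arbitrary illustrative choices, not given in the problem:

```python
import numpy as np
from scipy import stats

# Illustrative parameter choices (not part of the problem statement).
sigma, a, b, n = 2.0, 1.0, 3.0, 50
rng = np.random.default_rng(0)
x = rng.normal(5.0, sigma, size=n)

# Closed-form posterior mean and sd from the Problem 2 formulas.
se = sigma / np.sqrt(n)
w = (1 / se**2) / (1 / se**2 + 1 / b**2)
theta_bar = w * x.mean() + (1 - w) * a
tau = 1 / np.sqrt(1 / se**2 + 1 / b**2)

# Brute-force posterior on a grid: posterior is proportional to
# prior(theta) * likelihood(theta), then normalized.
theta = np.linspace(0.0, 10.0, 2001)
dtheta = theta[1] - theta[0]
log_post = stats.norm.logpdf(theta, a, b)
log_post += stats.norm.logpdf(x[:, None], theta, sigma).sum(axis=0)
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta

grid_mean = (theta * post).sum() * dtheta
grid_sd = np.sqrt(((theta - grid_mean) ** 2 * post).sum() * dtheta)
print(theta_bar, tau)      # closed-form mean and sd
print(grid_mean, grid_sd)  # grid estimates, should agree closely
```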

Problem 3 (Wasserman 11.2). Let 𝑋1 , … , 𝑋𝑛 ∼ 𝑁(𝜇, 1).

(a) Simulate a dataset (using 𝜇 = 5) consisting of 𝑛 = 100 observations.

(b) Take 𝑓(𝜇) = 1 as the prior density, and find the posterior density given the observed data. Plot this density.
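A sketch of both parts, using the standard fact that under the flat prior 𝑓(𝜇) = 1 the posterior is proportional to the likelihood, which here works out to 𝑁(𝑥̄, 1/𝑛) (the random seed is an arbitrary choice):

```python
import numpy as np
from scipy import stats

# Part (a): simulate n = 100 observations from N(mu = 5, 1).
rng = np.random.default_rng(1)
x = rng.normal(5.0, 1.0, size=100)

# Part (b): with the flat prior f(mu) = 1, the posterior for mu is
# N(xbar, 1/n). Evaluate it on a grid of mu values for plotting.
mu = np.linspace(4.0, 6.0, 401)
post = stats.norm.pdf(mu, x.mean(), 1.0 / np.sqrt(len(x)))

print(x.mean())  # the posterior density is centered at the sample mean
```

Plotting `post` against `mu` (e.g. with matplotlib) gives the density requested in part (b).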

Problem 4. Consider a model of the form 𝑓(𝑥) = 𝛽̂0 + 𝛽̂1𝑥 and, given data (𝑋1, 𝑌1), …, (𝑋𝑛, 𝑌𝑛), define the loss function

𝐿(𝛽̂0, 𝛽̂1) = ∑ᵢ₌₁ⁿ (𝑌𝑖 − 𝑓(𝑋𝑖))².

(a) Compute the partial derivatives 𝜕𝐿(𝛽̂0, 𝛽̂1)/𝜕𝛽̂0 and 𝜕𝐿(𝛽̂0, 𝛽̂1)/𝜕𝛽̂1.

(b) Find the minimizers 𝛽̂0 and 𝛽̂1 of 𝐿(𝛽̂0, 𝛽̂1).

(c) Show that you can write the loss function in the form ‖𝑏⃗ − 𝐴𝑥⃗‖₂², where 𝑏⃗ is a particular vector of length 𝑛, 𝐴 is an 𝑛 × 2 matrix, and 𝑥⃗ is a vector of length 2.
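The closed-form minimizers from part (b) can be cross-checked against the matrix formulation of part (c): with 𝑏⃗ = 𝑌 and 𝐴 the matrix whose columns are all-ones and 𝑋, a least-squares solver should return the same coefficients. A sketch on made-up data (the data values here are illustrative only):

```python
import numpy as np

# Illustrative data, not from the assignment.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form least-squares coefficients:
# beta1 = sum((X - Xbar)(Y - Ybar)) / sum((X - Xbar)^2),
# beta0 = Ybar - beta1 * Xbar.
beta1 = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
beta0 = Y.mean() - beta1 * X.mean()

# Matrix form ||b - A x||_2^2 with b = Y and A = [1 | X].
A = np.column_stack([np.ones_like(X), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

print(beta0, beta1)  # should match coef[0], coef[1]
print(coef)
```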

Problem 5. Consider the following four data sets:

x1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]

y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]

x2 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]

y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

x3 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]

y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]

x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]

y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

(a) Find the sample mean and sample variance of each dataset's 𝑋 and 𝑌 values. Compute the sample correlation between the 𝑋 and 𝑌 values for each dataset.

(b) Find the linear regression line and compute the 𝑅² value for each dataset.

(c) Now, plot the datasets and the linear regression lines. Explain what happened.
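These four datasets are Anscombe's quartet: their summary statistics are nearly identical even though their scatter plots differ sharply. A sketch of the computations for the first dataset (the same function applies to the other three):

```python
import numpy as np

# First of the four datasets from the problem statement.
x1 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

def summary(x, y):
    # Sample statistics (ddof=1) and the least-squares line y = b0 + b1*x.
    b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    b0 = y.mean() - b1 * x.mean()
    r = np.corrcoef(x, y)[0, 1]
    return x.mean(), np.var(x, ddof=1), y.mean(), np.var(y, ddof=1), r, b0, b1

stats1 = summary(x1, y1)
print(stats1)  # all four datasets give roughly y = 3.00 + 0.500x, r = 0.816
```

Repeating this for `(x2, y2)`, `(x3, y3)`, `(x4, y4)` and then plotting each dataset with its fitted line is what makes part (c)'s point: matching summaries, very different data.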
