kartik_newpro
Member
Hello again. This time I have a problem with the mean square error. It starts with biased and unbiased estimators. I am assuming that the purpose of using MLEs and other parameter estimation techniques is to derive a "single value" for the parameter of the concerned distribution.
And the property of unbiasedness precisely gives E[g(X)] = parameter
Then why would an estimator whose expected value is not equal to the true value, even if it has a small spread, be a better estimate?
The ActEd notes say "some estimates are too large and some estimates are too small - but on AVERAGE they give the true value", and the true value is what we want, right?
And MSE is a measure of the spread of the estimates around the true value. Then, in example 10.10, the MSE of the estimator of mu for the Poi(mu) distribution is mu/n. What does this tell me about how far I am from the true value?
And I find the notes too technical through these topics (Bias, MSE, CRLB). I am still trying to understand the purpose behind doing all this and its practical application.
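One way to see what mu/n means in practice is to simulate it. The sketch below (a hypothetical illustration, not from the notes; the sample values mu = 5 and n = 50 are my own choices) repeatedly draws samples from Poi(mu), uses the sample mean as the estimator, and compares the empirical average squared error against the theoretical MSE of mu/n:

```python
import numpy as np

# Illustration: the usual estimator of mu for Poi(mu) from a sample of
# size n is the sample mean. It is unbiased, so its MSE equals its
# variance, mu/n. We check this empirically over many simulated samples.
rng = np.random.default_rng(seed=42)

mu, n, n_trials = 5.0, 50, 200_000  # assumed example values

# Draw n_trials independent samples of size n; compute each sample mean.
samples = rng.poisson(lam=mu, size=(n_trials, n))
estimates = samples.mean(axis=1)

# Empirical MSE = average squared distance of the estimates from true mu.
empirical_mse = np.mean((estimates - mu) ** 2)
theoretical_mse = mu / n

print(f"empirical MSE:    {empirical_mse:.4f}")
print(f"theoretical mu/n: {theoretical_mse:.4f}")
```

The two numbers come out very close, which is the practical reading of mu/n: on a typical sample of size n, your estimate sits about sqrt(mu/n) away from the true mu, and that distance shrinks as n grows.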