
Hypothesis Testing

  • Thread starter StevieG4captain

StevieG4captain

Member
Think I'm being dumb, but would really appreciate it if anyone could clear up the following for me:

Chapter 12 - Hypothesis Testing

2 Classical testing, significance and p-values

2.1 'Best' tests

Regarding the Neyman-Pearson Lemma, I'm not sure how the following criterion gives the test statistics for the mean mu and the variance sigma^2:

[max(Likelihood under H0)]/[max(Likelihood under H0+H1)] < critical value

In the case of the mean mu:
I make the max(Likelihood under H0): mu0hat=Xbar
and the max(Likelihood under H0+H1): muhat=Xbar

How this method leads to the test statistics:

[Xbar-mu0]/[S/n^0.5]~t{n-1} under H0: mu=mu0

or [(n-1)S^2]/[sigma0^2]~chisqr{n-1} under H0: sigma^2=sigma0^2

I do not know :confused:

Maybe I just need some sleep, but if anybody knows it would save me a great deal of time!

Thanks (in anticipation)
 
I think this may be one of those calculations everyone has to suffer for themselves, but see if this helps...

0) For the max(Likelihood under H0) you need to find the maximum of the likelihood for values of the parameters allowable under H0: Say mu_0 and sigma_0 maximise here.

1) For the max(Likelihood under H0+H1) any value of mu is allowed. Say mu_1 and sigma_1 maximise here.

Divide L(mu_0,sigma_0,x) by L(mu_1, sigma_1,x), set <=c and rearrange, and you should get the sort of condition you want on x-bar.
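In case it helps, here's that rearrangement written out for the mean test. This is just a sketch, not the Core Reading wording, and it assumes an i.i.d. N(mu, sigma^2) sample with sigma^2 unknown:

```latex
% Profiling out sigma^2 at any fixed mu:
\max_{\sigma^2} L(\mu,\sigma^2;\mathbf{x})
  = \bigl(2\pi\hat\sigma^2(\mu)\bigr)^{-n/2}e^{-n/2},
\qquad
\hat\sigma^2(\mu)=\tfrac{1}{n}\textstyle\sum_i (x_i-\mu)^2 .
% The likelihood ratio then depends only on the two profiled variances:
\Lambda
  = \frac{\max_{\sigma^2} L(\mu_0,\sigma^2;\mathbf{x})}
         {\max_{\mu,\sigma^2} L(\mu,\sigma^2;\mathbf{x})}
  = \left(\frac{\hat\sigma^2(\bar x)}{\hat\sigma^2(\mu_0)}\right)^{n/2}.
% Since \hat\sigma^2(\mu_0)=\hat\sigma^2(\bar x)+(\bar x-\mu_0)^2, this is
\Lambda
  = \left(1+\frac{(\bar x-\mu_0)^2}{\hat\sigma^2(\bar x)}\right)^{-n/2}
  = \left(1+\frac{t^2}{n-1}\right)^{-n/2},
\qquad
t=\frac{\bar x-\mu_0}{s/\sqrt{n}} .
% So \Lambda \le c iff |t| \ge k: "small likelihood ratio" is exactly the t-test.
```

The key point is that Lambda is a decreasing function of t^2, so rejecting for small Lambda is the same as rejecting for large |t|.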

If H0 is that mu=mu0, then in 0) you are evaluating the likelihood function at mu=mu0 but maximising over sigma. So mu_0=mu0, and maximising over sigma gives sigma_0^2 = (1/n)*sum[(x_i-mu0)^2] (note this is not s^2 unless mu0 happens to equal x-bar). In 1) any value of mu is allowed, so mu_1=x-bar and sigma_1^2 = (1/n)*sum[(x_i-x-bar)^2].
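If it helps to see it numerically, here's a quick check (my own sketch, not ActEd material; it assumes an i.i.d. normal sample) that the profiled likelihood ratio really does reduce to a function of the t statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
mu0 = 5.0
x = rng.normal(5.5, 2.0, size=n)  # sample whose true mean differs from mu0
xbar = x.mean()

# Restricted MLE of sigma^2 under H0: mu = mu0 (divisor n)
sig0_hat2 = np.mean((x - mu0) ** 2)
# Unrestricted MLE of sigma^2 at mu-hat = x-bar (divisor n, not n-1)
sig1_hat2 = np.mean((x - xbar) ** 2)

# Likelihood ratio: max L under H0 divided by max L under H0+H1
lam = (sig1_hat2 / sig0_hat2) ** (n / 2)

# The usual t statistic, built from the sample variance s^2 (divisor n-1)
s2 = x.var(ddof=1)
t = (xbar - mu0) / np.sqrt(s2 / n)

# Identity: lambda = (1 + t^2/(n-1))^(-n/2), so small lambda <=> large |t|
assert np.isclose(lam, (1 + t**2 / (n - 1)) ** (-n / 2))
```

So the ratio is never literally 1 unless x-bar = mu0; it shrinks as x-bar moves away from mu0, which is where the "reject for large |t|" rule comes from.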

Any help? Or did a good night's sleep fix it anyway? :)
 
Not sure yet

I get max(Likelihood under H0) = Xbar (i.e. mu0hat=Xbar)

and max(Likelihood under H0+H1) = Xbar also (i.e. muhat=Xbar)

so the ratio of these is 1, and I can't see how rearranging 1 < critical value gives what you suggested.

I think I'm either missing a fundamental step or two or I'm not working out the max likelihoods correctly.

Should I be calculating the max likelihood for mu and sigma simultaneously? I've been treating them separately. (i.e. does the H0 in max(Likelihood under H0) stand for H0: mu=mu0 and sigma=sigma0?) I don't think I've ever come across a null hypothesis with 2 parameters; is this possible? I would have thought H0 could only test either mu or sigma, not both at the same time.

Thanks for the reply though, think I'm a bit closer to understanding what's going on with this Neyman-Pearson stuff.
 
Okay, give me one more try at this:

It's not the value of mu (x-bar, say) which maximises the likelihood that you need in the quotient, but the maximised value of the likelihood function itself at that point. For H0+H1:
max L(mu, sigma^2, x_i) = L(x-bar, sigmahat^2, x_i), where sigmahat^2 = (1/n)*sum[(x_i - x-bar)^2].

And x-bar is not an allowable value for the parameter mu under H0 if the assumption under H0 is that mu=mu0: the only allowable value for the parameter mu is mu0, so you have to evaluate the likelihood function at that point. So for H0: mu=mu0, we get
max L(mu0, sigma^2, x_i) = L(mu0, sigmahat_0^2, x_i), where sigmahat_0^2 = (1/n)*sum[(x_i - mu0)^2].

In both cases I'm assuming the variance is unknown, so both under H0 and H0+H1 you choose the maximum value of L over all possible (i.e. all) sigma^2.
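For completeness, the same profiling works for the variance test, just with the roles of the parameters swapped (again only a sketch, assuming an i.i.d. normal sample with mu unknown):

```latex
% H0: sigma^2 = sigma_0^2, mu unrestricted. Maximising over mu gives
% mu-hat = xbar, so with \hat\sigma^2=\tfrac1n\sum_i(x_i-\bar x)^2:
\max_{\mu} L(\mu,\sigma_0^2;\mathbf{x})
  = (2\pi\sigma_0^2)^{-n/2}\exp\!\left(-\frac{n\hat\sigma^2}{2\sigma_0^2}\right),
\qquad
\max_{\mu,\sigma^2} L(\mu,\sigma^2;\mathbf{x})
  = (2\pi\hat\sigma^2)^{-n/2}e^{-n/2}.
% Writing w = n\hat\sigma^2/\sigma_0^2 = (n-1)s^2/\sigma_0^2, the ratio is
\Lambda = \left(\frac{w}{n}\right)^{n/2} e^{(n-w)/2},
% which is maximised at w = n and falls away on either side, so
% \Lambda \le c iff w = (n-1)s^2/\sigma_0^2 is far from n.
% Under H0, w has the chi-square distribution with n-1 degrees of freedom.
```

That's where the (n-1)S^2/sigma0^2 ~ chi-squared(n-1) statistic in the original post comes from: small Lambda corresponds to w being extreme in either tail.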

Null hypotheses can theoretically have more than one parameter. The test would end up rather messy I'd think. I've never come across a circumstance where you'd want to do this though...
Or would you class chi-squared goodness of fit tests as having more than one parameter?
 
Thanks

I'll give it a go.

I think the first bit is where I was going wrong, hopefully should work out now. Thanks!
 