
Help! Maximum likelihood estimation - Gamma

Entact30
Hi

I'm trying to figure out the algorithm that is performed behind the scenes in the Emblem software. I've read through "A Practitioner's Guide to GLMs" (http://www.casact.org/library/studynotes/anderson9.pdf) but it doesn't give an example of how parameters are estimated for a Gamma severity model with a log link. It also doesn't go through the Newton-Raphson method.


Also, the pdf formula given in Appendix E is different to other formulas for the pdf of a gamma distribution I have seen elsewhere, like the Actuarial Tables. I'm really trying to understand how this all works, but it's frustrating that there are no concrete examples given on the internet or in any of the references I've seen. I know there are texts out there but I'm not prepared to fork out €80 just yet.

Basically, I'm looking for a very descriptive explanation of the algorithm so that I might be able to replicate it in SAS or Excel myself.

Any help would be much appreciated.
 
GLMs are generally fitted using the algorithm known as "Iteratively (Re)weighted Least Squares." A quick Google search will return plenty of results. For a gamma-log model, let \(\mathbf{X}\) be the design matrix. Start off with an initial estimate of your vector of model parameters, \(\beta\). Then let \(\mathbf{z}\) be a vector such that\[z_i=\log(\mu_i)+\frac{y_i-\mu_i}{\mu_i}\]Note that \(\mathbf{z}\) is a function of \(\beta\). Then the next iterate of \(\beta\) is given by:\[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{z}\](for the gamma-log combination the IRLS weights all equal 1, which is why no weight matrix appears here). Now calculate a new vector \(\mathbf{z}\) and use the above formula to get another iterate of \(\beta\). Continue iterating until the values of \(\beta\) converge.
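If it helps to see the update written out concretely, here is a minimal sketch in Python on a made-up four-row dataset. The data, the factor coding and the stopping rule are illustrative assumptions only, not a description of Emblem's internals.

```python
# Minimal IRLS sketch for a gamma GLM with a log link, following the
# update described above.  Dataset and tolerances are made up.
import numpy as np

# Hypothetical severity data: an intercept and one two-level rating factor.
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [1.0, 1.0],
              [1.0, 1.0]])                      # design matrix
y = np.array([200.0, 300.0, 550.0, 450.0])      # observed claim severities

beta = np.zeros(X.shape[1])                     # current parameter estimates
mu = y.copy()                                   # common starting point: mu_i = y_i

for _ in range(25):
    # Working response z_i = log(mu_i) + (y_i - mu_i) / mu_i
    z = np.log(mu) + (y - mu) / mu
    # The weights are all 1 for gamma-log, so the weighted least-squares
    # step reduces to solving the ordinary normal equations (X'X) beta = X'z.
    beta_new = np.linalg.solve(X.T @ X, X.T @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10: # stop once beta has converged
        beta = beta_new
        break
    beta = beta_new
    mu = np.exp(X @ beta)                       # refreshed fitted means (log link)

print("beta:", beta)                 # parameters on the log scale
print("multipliers:", np.exp(beta))  # multiplicative relativities
```

With starting values \(\mu_i=y_i\) the loop converges in a handful of iterations; for this toy factor model the fitted means settle at the group averages (250 and 500), i.e. \(\beta\approx(\log 250,\ \log 2)\).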
 
The Newton-Raphson formula is in the Tables.

Have you had a look at the Wikipedia article? There are two common forms for parameterising the Gamma, which would account for the different formulas you see. The standard form in the Tables also needs to be re-parameterised before taking likelihoods as otherwise the algebra falls over.
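For reference, and only as a generic sketch rather than anything Emblem-specific: the scalar Newton-Raphson step in the Tables, applied to the score function \(\ell'(\theta)\) of a log-likelihood \(\ell\), gives the usual maximum likelihood iteration, and the multi-parameter version (with the Hessian replaced by the expected information) is the Fisher scoring step that IRLS implements:\[\theta_{n+1}=\theta_n-\frac{\ell'(\theta_n)}{\ell''(\theta_n)},\qquad\boldsymbol{\beta}_{n+1}=\boldsymbol{\beta}_n-\mathbf{H}(\boldsymbol{\beta}_n)^{-1}\,\mathbf{U}(\boldsymbol{\beta}_n),\]where \(\mathbf{U}\) is the score vector and \(\mathbf{H}\) the Hessian of the log-likelihood.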
 

I have had a look at the article, but neither of the pdfs matches the one in the paper linked below.

It would be great if there was a numerical example using a tiny dataset of a couple of factors and a step-by-step explanation of exactly what is done in the algorithm. I find it difficult to really understand a formula unless I see it applied to an example.

The ST8 notes go through a very simplified example, as does the paper below, but there seems to be a massive jump to what's actually done in Emblem, and that is what I'm trying to figure out.

Thanks for your comments and feedback though - I appreciate it.
 

Hi td290, thanks for your feedback. I was just replying to Calum explaining that I do understand bits and pieces of what you are saying, but it would be very helpful if you knew of a good resource that actually goes through the steps carried out when you run a severity model in Emblem.
 
Not sure I can help. The steps I've given you are the steps used by Emblem. I think you'll find that if you seek out such specialised knowledge it will be assumed that you are happy reading specialised concise notation.

I think you'll find that the pdf given by Anderson et al is simply a reparameterisation of that on Wikipedia and other sources. It should be fairly straightforward to derive the change of variables by equating parts of the formulae, e.g. the argument of the gamma function must be the same in both cases.
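For example (using a common mean-shape parameterisation rather than quoting the paper's appendix verbatim), substituting \(\alpha=\nu\) and \(\lambda=\nu/\mu\) into the shape-rate density in the Tables,\[f(y)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}\,y^{\alpha-1}e^{-\lambda y},\qquad y>0,\]gives the mean-shape form often used in GLM texts,\[f(y)=\frac{1}{\Gamma(\nu)}\left(\frac{\nu}{\mu}\right)^{\nu}y^{\nu-1}e^{-\nu y/\mu},\]with \(\mathrm{E}[Y]=\mu\) and \(\mathrm{Var}[Y]=\mu^{2}/\nu\), which is the shape you need before writing down the gamma GLM likelihood.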
 

I take your point, but I'm keen to understand this area in more depth. I had no trouble with ST8, but I find there is quite a technical jump from ST8 to some of the academic papers out there on these subjects. Perhaps a textbook on the subject would be a good start.

Thanks for your help though, I'll get there eventually.
 