echo20
Member
This question asks for reasons why the cost of guarantees calculated with a stochastic model will differ from a Black-Scholes calculation. I'm a bit surprised that neither the examiners' report nor ASET refers to the fact that Black-Scholes assumes lognormally distributed returns, rather than a distribution with fatter tails. Isn't this a significant cause of under-estimating the cost of guarantees? Would the stochastic asset models actually used in practice adopt a fatter-tailed distribution, or would they generally also assume lognormal returns?
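To illustrate the point being asked about, here is a minimal Monte Carlo sketch (all parameters are hypothetical, chosen only for illustration). It prices a deep out-of-the-money put, a rough proxy for a maturity guarantee that bites only in poor scenarios, three ways: the Black-Scholes formula, simulation under the matching lognormal model, and simulation where the normal innovations are replaced by Student-t(4) innovations rescaled to the same variance. The fat-tailed model produces a noticeably higher guarantee cost even though the return variance is identical.

```python
import math
import numpy as np

def bs_put(s0, k, r, sigma, t):
    """Black-Scholes price of a European put."""
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return k * math.exp(-r * t) * phi(-d2) - s0 * phi(-d1)

# Illustrative parameters (hypothetical): guarantee at 70% of current fund value
s0, k, r, sigma, t, n = 100.0, 70.0, 0.02, 0.2, 1.0, 500_000
rng = np.random.default_rng(0)
disc = math.exp(-r * t)
drift = (r - 0.5 * sigma**2) * t

# Lognormal (Black-Scholes) terminal fund values
z = rng.standard_normal(n)
s_lognormal = s0 * np.exp(drift + sigma * math.sqrt(t) * z)

# Fat-tailed alternative: Student-t(4) innovations, rescaled to unit variance
df = 4
tz = rng.standard_t(df, n) / math.sqrt(df / (df - 2))
s_fat = s0 * np.exp(drift + sigma * math.sqrt(t) * tz)

put_bs = bs_put(s0, k, r, sigma, t)
put_lognormal = disc * np.maximum(k - s_lognormal, 0).mean()
put_fat = disc * np.maximum(k - s_fat, 0).mean()

print(f"Black-Scholes formula:      {put_bs:.3f}")
print(f"MC, lognormal returns:      {put_lognormal:.3f}")
print(f"MC, Student-t(4) returns:   {put_fat:.3f}")
```

The lognormal simulation reproduces the Black-Scholes price (up to sampling error), while the t-distributed run puts much more probability on severe falls in the fund value, so the guarantee cost comes out higher. Whether a real-world stochastic asset model shows this effect depends on its calibration, which this toy example does not attempt to represent.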