This question is in regard to Chapter 20: Risk models 2, section 3.3, example question (ii), page 25.
I understand what the R code is trying to do, but when I create a vector of 10,000 lambda values outside the for loop, I get a slightly different mean and standard deviation from what is produced when a single lambda is drawn inside the first for loop on each iteration.
To demonstrate what I mean, here is my version with lambda generated outside the loop:
Code:
set.seed(123)
sims = 10000    # number of simulations
policy = 100    # number of policies per simulation
R = matrix(0, nrow = sims, ncol = policy)

# draw all 10,000 lambda values in one go, outside the loop
lambda = sample(c(0.1, 0.3), sims, prob = c(0.5, 0.5), replace = TRUE)

for(i in 1:sims){
  # number of claims on each of the 100 policies, using simulation i's lambda
  N = rpois(policy, lambda[i])
  S = numeric(policy)
  for(j in 1:policy){
    # aggregate claims on policy j: sum of N[j] Gamma(750, 0.25) amounts
    S[j] = sum(rgamma(N[j], 750, 0.25))
  }
  R[i,] = S
}

mean(rowSums(R)); sd(rowSums(R))
The results I get for the sample mean and standard deviation are 60189.14 and 32835.25 respectively.
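For comparison, this is what I mean by creating a single lambda each time in the first for loop. It's my sketch of that approach rather than the course's exact code; the only change is that sample() is called once per iteration instead of once up front:
Code:
set.seed(123)
sims = 10000
policy = 100
R = matrix(0, nrow = sims, ncol = policy)

for(i in 1:sims){
  # draw one lambda per iteration, interleaved with the other random draws
  lambda = sample(c(0.1, 0.3), 1, prob = c(0.5, 0.5))
  N = rpois(policy, lambda)
  S = numeric(policy)
  for(j in 1:policy){
    S[j] = sum(rgamma(N[j], 750, 0.25))
  }
  R[i,] = S
}

mean(rowSums(R)); sd(rowSums(R))
With the same seed, this version gives slightly different values for the mean and standard deviation.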
I'm guessing this is because generating all 10,000 lambda values in one go consumes the random number stream in a different order from drawing a single lambda inside the loop 10,000 times, so even with the same seed the subsequent rpois() and rgamma() calls see different random numbers.
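As a minimal illustration of that point (I've used runif() here just to make the difference visible; the exact values don't matter), the result of a later random call depends on how many values the earlier calls have already consumed from the stream:
Code:
set.seed(123)
sample(c(0.1, 0.3), 2, replace = TRUE)  # consumes stream values for two draws
runif(1)                                # next value in the stream

set.seed(123)
sample(c(0.1, 0.3), 1)                  # consumes stream values for one draw only
runif(1)                                # same seed, but a different value
So the two approaches end up simulating from the same model but with effectively different random inputs.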
But would such an answer be considered incorrect in an exam setting?