
SA3 Pass Rate Sept


DeliAli

Member
Wow, the pass list was only 72 out of 274, which equates to just 26.3%. This is very disappointing :(
 
Wow, that's harsh! How is this even possible, though? It just doesn't feel right, if you ask me.
 
I am one of the 73.7%, which is quite frustrating. I am not sure how they made the cut, but it seems that you have to score 55 to pass an exam. I have not noticed lower pass scores in any of the ST or SA exams in this exam cohort. It might hold true for earlier cohorts, but I am not sure.
 
Yes, apologies, I meant the recent sitting. There seem to be serious issues with the marking quality for SA3 too. The quality of marking appears to differ markedly from marker 1 to marker 2, so I would be highly concerned about the quality of the processes involved, or the ability of certain SA3 markers to perform their duties. It would be fine if the same two markers were used for all scripts, but I would be concerned that you might be unlucky enough to get a marker who performs their duties with less rigour than others.

How are the markers chosen for these exams? Are they just volunteers who are not able to perform their duties adequately? I also note the lower pass rate for many SA subjects this sitting; I wonder if this is to do with persuading individuals that the Associate level is of more value. Surely they need to improve their marking processes in the future. SA3 seems to be the worst exam for discrepancies between markers, so there is clearly something strange going on with how markers apply the same marking scheme to each script.
 
I did email the IFoA to ask for their comments about these large discrepancies. Although the automated reply says they will respond within 2 working days, I have heard nothing so far. I am not sure whether this is because of the festive season and they are short-staffed, or whether they simply ignored my email.

I am currently considering emailing them again as a reminder, though I am not sure whether I should do this in the new year, when most of them should be back in the office, or whether it is just a waste of time (can I ask for any ideas or suggestions here?).
 
I would wait until the 3rd of January and then email them again. Probably your email was seen, yet the answer needs input from more people who are most likely out of the office.
 
Thank you Uroš, :)
 
I got a response from a Senior Assessment Executive, as follows:

Dear xxx,

Thank you for your email.

The SA3 Examiner Report details that the performance for this paper was lower than has been seen for a number of years and that “candidates appeared underprepared”.

More information about the aims of this subject, how it is marked and the student performance for the September 2018 exam is detailed in the SA3 Examiner Report, which can be found on the website.

I hope this provides you with the information you require.

Kind regards,

xxx

Senior Assessment Executive

I did some calculations to convince myself how "underprepared" this cohort is:
[Attached image: upload_2018-12-31_15-12-20.png]
The data runs from April 2007 onwards and was taken from
http://www.actuarial-lookup.co.uk/exams/26

The above assumes a normal distribution (for simplicity), with the quantile calculated using the total error. So our performance puts us at roughly the lower 1%, whether or not this sitting is included in the fit.

Even with a better-fitting distribution, we would probably still be at quite a low quantile :'(

I am really having a difficult time rationalising their argument. :'(
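For anyone who wants to reproduce this kind of check, here is a minimal sketch of the quantile calculation under the same normal-distribution assumption. The historical pass rates below are placeholder values, NOT the real actuarial-lookup data, so only the method (not the exact numbers) carries over:

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Normal CDF evaluated via the error function (no scipy needed)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical historical SA3 pass rates (fractions) -- placeholders for
# illustration; substitute the real series from actuarial-lookup.
historical = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.39, 0.41]

mu = sum(historical) / len(historical)
var = sum((r - mu) ** 2 for r in historical) / (len(historical) - 1)
sigma = math.sqrt(var)

this_sitting = 72 / 274  # the 26.3% pass rate quoted above

# Quantile of this sitting under the fitted normal distribution
quantile = normal_cdf(this_sitting, mu, sigma)
print(f"mean={mu:.3f}, sd={sigma:.3f}, quantile={quantile:.4f}")
```

With any plausible historical series this sitting lands deep in the left tail, which is the "lower 1%" point made above; whether the normal assumption is appropriate for a short series of pass rates is, of course, debatable.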
 
To be honest, I am not sure how one can assess the unpreparedness of the students based on the September 2018 exam.
By quickly skimming the examiners' report, it seems that the majority of students went down the wrong route in question 4.
It might be nice to hear from someone who requested an exam review. After reading the report, I am very surprised we managed to miss so many marks :). I suppose the markers write off many points that are broadly fine but written down a bit vaguely.
 
The difference is significant, whether measured in absolute terms, in percentage terms or statistically. Either we are an extremely 'bad' or small cohort, or the examination process, including e.g. how the questions are set and how marks are awarded, has serious issues :(

Given that we have 274 students, the sample is likely to be reasonably spread and, on average, roughly similar to past cohorts. It seems more likely to be due to other factors outside the students' control. :(
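The "statistically significant" claim can be checked with a simple one-sided test of 72 passes out of 274 against a hypothesised long-run pass rate. The 40% rate below is a placeholder assumption, not the actual historical average; this sketch uses a normal approximation to the binomial rather than an exact test:

```python
import math

def binomial_ztest(k: int, n: int, p0: float) -> float:
    """One-sided z-test of k successes out of n against rate p0.
    Returns the approximate p-value P(observed rate <= k/n | true rate p0),
    using the normal approximation to the binomial."""
    p_hat = k / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    z = (p_hat - p0) / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lower-tail probability

# 72 passes out of 274; 0.40 is a hypothetical long-run pass rate
# (placeholder -- substitute the real historical average).
p_value = binomial_ztest(72, 274, 0.40)
print(f"p-value ~ {p_value:.2e}")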
 
How many of those 274 students used ActEd tuition, then...?
 