Control frequency and sample size

Hi Milan,
The maximum error depends on the size of the population. If you test e.g. 15 samples, the maximum error is different for a population of 150 than for one of 300.
Everything here is statistical. You have to consider:
- the error rate of the population (e.g. p = 5%),
- the likelihood with which we can assert that the error rate of the population is at most 5% (e.g. 5% risk, i.e. 95% confidence), and
- the size of the population (e.g. N = 150).
The theorem then gives you n (the testing sample size); in this case n = 46.
Note: the choice of 'p' and the +/- percentage is up to you.
The result: if you find no error within the 46 samples, you can assert with 95% likelihood that the control works well, i.e. that the error rate is not over 5%.
Maybe it helps, maybe not. I hope it is understandable; I am not sure about my English.
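The calculation in the post above can be reproduced with a small script. This is my own illustration of the "zero errors found" hypergeometric model the post describes; rounding the population error count to a whole number of items is my assumption, not something stated in the post:

```python
from math import comb

def min_sample_size(N, p, confidence=0.95):
    """Smallest clean sample size n such that, if the population of N items
    really contained round(p * N) errors, the chance of drawing n items and
    seeing zero errors would be at most 1 - confidence."""
    K = round(p * N)          # assumed number of errors in the population
    alpha = 1 - confidence    # acceptable risk of a misleading clean sample
    for n in range(1, N + 1):
        # hypergeometric probability of zero errors in a sample of n
        p_zero = comb(N - K, n) / comb(N, n)
        if p_zero <= alpha:
            return n
    return N

print(min_sample_size(150, 0.05))  # 46, matching the example in the post
```

With N = 150 and p = 5%, this returns 46, which is the figure in the post: finding no errors in 46 items supports the 95% assertion.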

Hi Ricker,
Exactly what I needed. Your reply was helpful, and the example made it easy to understand and implement in the SOX Controls Test Plans (in Excel).
Thanks,
Milan

Please keep in mind that statistical sampling always gives a much higher sample size, which can be a drain on resources.
The Big 4 auditors have streamlined their original sample sizes, retest sizes, expansion sizes and rollforward sizes. These sizes are much lower than the statistical sizes. We were advised specifically by our external auditors that our statistically derived sample sizes were too high. Therefore, in 2005, we abandoned what Ricker suggested (regression model) and determined our original, retest, expansion and rollforward sample sizes based on our auditors' expectations, our understanding of how they audit us, and my extensive Big 4 background.
If you want, I can share these guidelines with you.

Hi Arif,
Of course…thanks for your input too…404cpa_at_gmail.com
Milan

Hi Chhaava
ricker_at_centrum.cz

Let's say you have 30 sites that give you coverage of 65-70%; however, one of these sites is substantially larger than all the others. Statistically, your population is skewed by the one site, yet everyone says to test it 30 times if it is a daily transaction. Why wouldn't you determine the population size across all the sites and obtain a random sample, which would inherently require you to test more at the larger location than at some of the smaller ones? The statistics don't work for me based on an arbitrary number X when the population may be very small or quite large.
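One way to act on this idea is to size a single sample across all sites and allocate it in proportion to each site's transaction volume, so the dominant site naturally gets more coverage. A minimal sketch; the site names and volumes are invented for illustration:

```python
def allocate_sample(site_volumes, total_sample):
    """Split a total sample across sites in proportion to transaction volume,
    using largest-remainder rounding so the allocations sum exactly."""
    total = sum(site_volumes.values())
    raw = {site: total_sample * v / total for site, v in site_volumes.items()}
    alloc = {site: int(r) for site, r in raw.items()}
    leftover = total_sample - sum(alloc.values())
    # hand leftover units to the sites with the largest fractional parts
    for site in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftover]:
        alloc[site] += 1
    return alloc

# one dominant site and two small ones: the big site absorbs most of the sample
print(allocate_sample({"HQ": 800, "SiteA": 100, "SiteB": 100}, 30))
# {'HQ': 24, 'SiteA': 3, 'SiteB': 3}
```

This is stratification by volume only; whether a single pooled population is even permissible depends on the scoping point raised in the next reply.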

Hi,
it looks like you are trying to test a control once rather than in each site. You need to make sure that this rationale is valid in the first place. It is only allowable if the different locations are all part of one significant location/entity (in which case you can take all samples from the larger location).
Otherwise, you will need to test your controls separately in each location. In that case, if the total number of transactions is less than the minimum sample size, you need to test all transactions that occurred during the period.

EMM,
Thanks for your reply and the reasoning behind it. Although the earlier question was answered previously, the new question recently posted and your feedback are helpful now.
One learns something new every day, and it is always insightful to read the posts and insights of other SOX professionals.
Kind Regards,
Milan

Manual / periodic controls:
Annual = 1
Quarterly = 2
Monthly = 3
Weekly = 5
Daily = 25
I’m curious, what do these numbers represent?

Sample sizes.
:-)

Going back to the discussion around 'As needed': how about using 'On Trigger' for controls that are executed on a trigger event with an irregular frequency, e.g. when an employee is hired?
I would also use 'Transactional' as shorthand for multiple times per day.

It has been a long time since I was in the audit world, so this is probably a basic question: When we do interim testing in Q3, do we test the full sample size as prescribed based on the control frequency? Or can we ‘save’ some samples for the later rollforward testing?

You can save some.

We do the majority of our testing in Q3 and then small samples in Q4 for rollforward testing. Low-risk areas have all samples tested in Q3, with rollforward testing consisting of an inquiry as to whether the control is still operating as it was when tested.

It would also be a valid approach to do, say, all your testing in three quarters and then roll forward based on inquiry, in particular looking for a trigger event that would suggest the process needs to be looked at again, e.g. a change in process, people or systems; evidence of errors; fraud or other irregularity; etc.
I prefer to split testing evenly over four quarters, as I think this is more conducive to embedding controls.

Going back to the discussion on 'sample sizes': does anyone know what the Big 4 sample sizes mean in terms of:
- Sampling error
- Population size
- Confidence level
- Expected error rate
Sample Sizes:
Annual = 1
Quarterly = 2
Monthly = 2 to 5
Weekly = 5, 10, 15
Daily = 20, 30, 40
Many times per day = 25, 30, 45, 60
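One rough way to answer the question: for a large population and a clean sample, a binomial model gives the confidence a sample size "buys" at a given tolerable error rate. This is my own back-of-the-envelope sketch, not a description of any firm's actual methodology:

```python
def implied_confidence(n, p=0.05):
    """Confidence that the true error rate is below p when a sample of n
    items shows zero errors, assuming a large population (binomial model)."""
    return 1 - (1 - p) ** n

# the common 'daily = 25' size gives roughly 72% confidence at a
# 5% tolerable error rate; around 59 items are needed to reach 95%
print(round(implied_confidence(25), 4))
```

In other words, under these assumptions the streamlined Big 4 sizes above correspond to lower confidence levels (or higher tolerable error rates) than the 95%/5% statistical benchmark discussed earlier in the thread.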

What do you see as the frequency of the control that JEs are reviewed? Theoretically, JEs can be made and reviewed multiple times a day, but can we make an argument that holistically, all JEs within a month are for the F/S for that month, and hence the activity (JEs being booked and reviewed) is a monthly activity?

We look at JEs as monthly activities and test using sample sizes for monthly controls. While recorded throughout the month, they really are not transactional in nature and I don’t feel they should be tested as such.

Once you consider JEs as monthly activities, you will need to test the whole month (all journal entries), so as not to take a sample from another sample. I would prefer to treat them as multiple times a day (as we have more than a thousand entries every year).

I stand corrected. We do treat them as multiple times per day and select our sample size on that basis. We have also started to include only the most significant entries in our test pool, as we could never have a material misstatement come from small-dollar entries.
Our multiple times per day sample size is 35. We test across all periods up to the test date.