Sample Sizes and "cycles"

  • The external auditor has recommended increasing sample sizes significantly. In addition, they believe in failing a test if you are unable to obtain the full sample. For example, if the test script requires 50 and there were only 15 available, then it fails.
    Does this make sense? Their justification is that there weren’t enough cycles. I’ve worked on other projects where you performed the test with the sample available and then documented that you obtained 100% of the sample available even though it fell short.
    Has anyone run into this problem? Suggestions?

  • Sadly, auditors often don’t know a hole in the wall from their, well, you get the picture.
    Anyway, my recommendation is to escalate this issue inside your company and with the audit firm. The job of the audit firm is to attest to whether there is room for ‘shenanigans’ given the controls that are in place and how closely you monitor them. If, over a defined period of time, you don’t have enough occurrences of a control to reach the sample test size, you do a 100% test, which will show definitively whether there were any control deficiencies. The caveat to this is where the auditor wants to see results from before the control existed. Say you implemented a control that the auditor wants to see 90 days’ worth of testing on, but you only implemented it 30 days before the audit; I can see them throwing out an exception there.
    Some of the auditors I am hearing about are taking a very academic stance (and I use the term academic loosely) on their audit responsibilities. They feel that their job is to strictly adhere to some standard that has been provided to them, even when it doesn’t make sense to, because they aren’t really given the latitude to make any interpretations. But they are working against standards that are one-size-fits-all and do not account for many possibilities such as this.
    Anyway, that’s my opinion on the matter.

  • We have the following test guidelines internally:
    3 items for controls performed quarterly
    5 items for controls performed monthly
    15 items for controls performed weekly
    30 items for controls performed daily
    40 items for controls performed multiple times per day
    If, over the period of time being tested, there are fewer transactions or executions of the control and we do a 100% test, there is nothing more that can be done to assess the effectiveness of the control. Your auditor appears to be way off base. You should use them for guidance, but they cannot tell you how your management team needs to do its testing. All they can do is form their own opinion of your management’s assessment, as well as their own assessment of controls.
    Out of curiosity, what were their recommendations on this control, since the number of times the control was executed was less than the suggested sample size? Look for compensating controls? There is certainly nothing to remediate if 100% of the control executions were consistent with the design of the control. A little more background may help us give you better guidance on how to address this with your auditors.

  • @kymike - agree with your logic 100%. Your sample sizes look a touch on the large side, though.

  • to provide a bit more background: the external auditor provided initial sample size requirements that seemed reasonable. Then, after we completed our testing for 2004, they changed their requirements and bumped the sample sizes way up. they came on site and made a declaration that there were not enough ‘cycles’
    this external auditor now wants to see the following sample sizes based on the frequency:
    daily = 50
    weekly = 10
    quarterly = 2
    annual = 1
    they have also stated they want additional steps added to the test scripts that seem more aligned with best practice than with sox compliance. we can’t push back too much because they can retaliate against us

  • The daily sample size seems excessive, but depending on the number of key controls you are testing (you are only testing ‘key’ controls, right?) this may or may not add a lot to your test load.
    Since this is in the IT forum, I am assuming that you are referring primarily to manual controls within the IT function (adding new user security, etc.). The idea behind sampling is to test a sample size that will give you a statistically accurate depiction of the population as a whole. You may need to revisit the items that you are testing to see what your annual volume of control instances is, and refine your test size to an appropriate sample. The idea is to test a representative sample of the control executions, not test all of them. Maybe your auditor needs a refresher in statistical sampling :roll:
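    The statistical-sampling point can be made concrete. A common zero-expected-error attribute-sampling formula picks the smallest n such that, if deviations really occurred at the tolerable rate, a clean sample would be unlikely. A minimal sketch (my own illustration, not any firm's actual methodology):

    ```python
    import math

    def attribute_sample_size(confidence: float, tolerable_rate: float) -> int:
        """Smallest n with P(zero exceptions in n items) <= 1 - confidence
        when the true deviation rate equals the tolerable rate.

        Zero-expected-error model: (1 - p)^n <= 1 - c  =>  n >= ln(1-c)/ln(1-p).
        """
        return math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_rate))

    # 95% confidence that the deviation rate is no worse than 10%:
    print(attribute_sample_size(0.95, 0.10))  # 29
    ```

    Note that 29 lands close to the 25-30 sample sizes quoted elsewhere in this thread, which is no coincidence: those guideline tables are rough stand-ins for exactly this kind of calculation.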

    agree with you.
    yes, we are testing only key general computing controls. there are 59 key controls identified; i think that is too high, but that is what we have.
    regardless of the total population we have in IT, the external auditor still wants the large sample sizes. This is a significant increase in the effort required to test the controls.
    this is a medium-sized company. it does seem like overkill.

  • Hello,
    I am an IT auditor with a Big Four firm.
    Our sample size selection is:
    Daily or more often (population 365+): we sample 25
    Weekly (population 52 to 364): we sample 15
    Monthly: we sample 5
    Quarterly: we sample 2
    Annually: we sample 1
    I should also state that we aim for a 95% confidence rate. This means that if we sample a control, pull 25, and find one exception, we would still pass the control. If we found 2 exceptions out of the 25, we would pull another 25 and see if we got a clean selection in the next sampling. If we didn’t get any more exceptions, we would pass the control. Therefore 2 exceptions out of 50 would be allowable, but three exceptions would fail the control.
    I would also agree that many IT auditors are not alert to the realities of IT. I know that exceptions will happen, or you may have an SDLC that you only use once or twice a year for in-scope systems, so I would then say: OK, prove to me you only used it twice. If your proof is reasonable, I say OK, that falls within 1 to 4 times a year, so I will test two.
    I think this makes practical sense.
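    The two-stage pass/fail rule described in the post above is mechanical enough to write out explicitly. This is just my reading of the described procedure, sketched in Python:

    ```python
    from typing import Optional

    def control_passes(first_pull: int, second_pull: Optional[int] = None) -> bool:
        """Two-stage rule (my reading of the post above): pull 25 items;
        0 or 1 exceptions passes outright. Exactly 2 exceptions triggers a
        second pull of 25, and the control passes only if that pull is clean
        (2 total out of 50 is allowable; 3 or more fails)."""
        if first_pull <= 1:
            return True              # clean enough on the first 25
        if first_pull == 2:
            if second_pull is None:
                raise ValueError("2 exceptions: a second pull of 25 is required")
            return second_pull == 0  # any further exception makes 3+ total
        return False                 # 3+ exceptions in the first 25 fails

    print(control_passes(1))     # True
    print(control_passes(2, 0))  # True  (2 out of 50)
    print(control_passes(2, 1))  # False (3 out of 50)
    ```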

  • SOX_Monster, I am guessing you are working for E&Y. They are the only Big 4 I have worked with so far (the others being KPMG and PWC) that cap the sample size at 25.
    These are the guidelines PWC and KPMG use:
    annual - 1
    quarterly - 2
    monthly - 2 to 5
    weekly - 5 to 15
    daily - 20 to 40
    multiple times a day - 25 to 60
    annual - 1
    quarterly - 2 to 3
    monthly - 2 to 4
    weekly - 5 to 10
    daily - 15 to 30
    multiple times a day - 30 to 60
    They break it down by risk. For instance, a control recurring several times a day that is high risk would get a sample size of 60; low risk, a sample size of 30.
    After a while, this becomes absurd. Pulling 60 screenshots becomes a tedious exercise in excessiveness.
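    If you wanted to keep guideline tables like the two above in a testing tool, a simple lookup would do. The risk mapping below (high risk takes the top of the quoted range, low risk the bottom) is my interpretation of the 60-vs-30 example; the thread doesn't say which list belongs to which firm, so they are labeled generically:

    ```python
    # (low, high) sample-size ranges per control frequency, as quoted above.
    GUIDELINE_SETS = {
        "set_a": {
            "annual": (1, 1), "quarterly": (2, 2), "monthly": (2, 5),
            "weekly": (5, 15), "daily": (20, 40), "multiple_daily": (25, 60),
        },
        "set_b": {
            "annual": (1, 1), "quarterly": (2, 3), "monthly": (2, 4),
            "weekly": (5, 10), "daily": (15, 30), "multiple_daily": (30, 60),
        },
    }

    def sample_size(frequency: str, high_risk: bool, guideline: str = "set_a") -> int:
        """High-risk controls take the top of the quoted range, low-risk the
        bottom (my reading of the 60-vs-30 example in the post above)."""
        low, high = GUIDELINE_SETS[guideline][frequency]
        return high if high_risk else low

    print(sample_size("multiple_daily", high_risk=True, guideline="set_b"))   # 60
    print(sample_size("multiple_daily", high_risk=False, guideline="set_b"))  # 30
    ```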

  • :lol:
    I agree, pulling 60 samples and getting screenshots is tedious.
    I have never had to pull that many so I guess I am lucky.
    I also didn’t say we cap the sample at 25, since if we find two exceptions we pull another 25.
    I usually get someone else to pull my screen shots for me and do the documentation on them as well :lol:
    In the two SOX jobs I was on last year I was the manager, so if I knew there were screen shots from you know where coming up, I got the first years to do that test, and then I just reviewed their work.
    Of course my work was then reviewed by the senior manager, and his work was reviewed by a partner, and then the national risk partner…

  • why do you need to pull screenshots? that’s more than is required to evidence testing.

  • SOX_Monster, I am guessing you are working for E&Y. They are the only Big 4 I have worked with so far (the others being KPMG and PWC) that cap the sample size at 25.
    My company uses D&T, which has also given us a cap at sample size 25.

  • As far as I am concerned you don’t need to pull 60 screen shots.
    But you do need evidence.
    Off the top of my head, I cannot think of any time that we actually pulled 25 screenshots for one test.
    I usually tell my people to document the testing steps, pick your selection, get a screenshot or two of a couple of the tests, and then document that you observed the rest of the selections.
    We take the attitude that if it can be reproduced at a later date, i.e. a screenshot of an access log, then one screenshot, if any, is all that is required.
    Screenshots of 25 different computers showing that they had viable, up-to-date anti-virus at the time of the test may be required, though. (I actually conducted that test myself and just attested to the up-to-date AV.)

  • One thing that should be considered as well is whether it is an automated control or a manual control.

  • for a daily process, we were given a sample size of 50-60, and yes, they do expect screenshots or some kind of evidence for all of them. an important clarification we just got is that the sample size is on an annual basis. this is important for us because we are doing sox testing now and will do it again in the 4th quarter. this means we would pull 25-30 now and then another 25-30 for the later round of testing.

  • Has anyone heard of a ‘rule of thumb’ for sample sizes where, if you know the total population, you pull 30% for your sample (even when the external auditor or test script asks for a larger amount)?

  • Hi ugogirl,
    This sounds like it would lead you to perform a lot more testing than is necessary for the more frequent controls. For example, if a control were exercised on a daily basis, the population would be 365 over a year; 365 x 30% = ~110 samples. Not to mention some controls may be exercised 3 times per day or more… 8O
    On the other hand, for a monthly control, this would lead you to a sample size of 4, which is in line with the Big 4’s. I think applying the 30% rule to anything beyond a weekly control frequency would create unnecessary burden. Maybe 30% for controls exercised weekly or less… 20% for annual controls… 15% for controls exercised more than twice a day…? Someone better with statistics could probably prepare the ‘assurance’ curve for this one… 😮
    Good point about the control testing being performed on an annual basis - I know that some companies performing quarterly testing have not taken this into consideration and have ended up doing a lot more testing than would otherwise be necessary.
    Cheers,
    lordkukuface
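    For what it's worth, the arithmetic behind a flat percentage-of-population rule is easy to check; a quick sketch:

    ```python
    import math

    def pct_rule_sample(population: int, pct: float = 0.30) -> int:
        """Sample size under a flat percentage-of-population rule,
        rounded up to a whole item."""
        return math.ceil(population * pct)

    print(pct_rule_sample(365))  # 110 -- daily control: clearly excessive
    print(pct_rule_sample(52))   # 16  -- weekly control
    print(pct_rule_sample(12))   # 4   -- monthly control: in line with Big 4 ranges
    ```

    The takeaway matches the point above: a fixed percentage only behaves sensibly for low-frequency controls, which is why the firms' guideline tables flatten out at the high-frequency end instead of scaling linearly.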

  • for a daily process, we were given a sample size of 50-60. and yes they do expect screen shots or some kind of evidence for all of them. an important clarification we just got is that the sample size is on an annual basis. this is important for us because we are doing sox testing now and we will do it again in 4th quarter. this means we would pull 25-30 now and then another 25-30 for the later round of testing.
    If the evidence of the control is a screenshot then this suggests automated control to me. If you are looking at automated controls then you can go down a GCC test of one route.
    Can’t imagine anything more pointless or soul destroying than pulling 50 screenshots for one control 8O

  • actually, the screenshots are to show the approvals from user management and IT management (the external auditor wants to see: approval to start the project, approval of test results, and approval to migrate to production). these approvals are done via a software product that handles help desk tickets, workflow, and change management. the only way to get the evidence is screen prints, unfortunately.

  • You don’t need to keep any document as testing evidence that you can easily reproduce. Just ensure that your testing write-up covers what you tested and includes enough information to reproduce that testing. Usually, a test matrix with the attributes tested, the results of the tests and a conclusion as to the effectiveness of the controls is adequate.
    If you think that it is easier to keep the documents than reproduce later, that is a decision that you will have to make.
