Can't a designer of a control also be an evaluator?

  • As we have designed/implemented internal controls, draining our already limited resources, we are told that management's evaluation of the effectiveness of internal controls should be undertaken by someone who was NOT involved in designing/implementing those controls. :roll:
    I understand that someone who executes a control cannot be an evaluator of that control, but does 'segregation of duties' also apply here, between design and evaluation?
    What if the person who designed controls is transferred to a division responsible for the evaluation?
    After all, everybody works for the same ‘management’. How can we segregate the two functions?
    I would appreciate any input. Thank you.

  • Yes, the designer of a control should not be allowed to test the operating effectiveness of that control. Because he/she was involved in designing the control, he/she may miss some aspects of risk that a third person, who was not involved, would consider when testing it.
    To give a simple example, suppose there is a glass half filled with water. If you were all involved in designing the control, you must all have agreed that the glass is half full. So if somebody from your team tests the control, he will assume the glass is half full and test it that way.
    But if you ask a third person to look at the same glass, he may have two views: the glass is half full, or the glass is half empty…
    So the designer of a control should not be its tester.

  • Hi Kate and welcome to the forums 🙂
    Probably, this need is more pertinent to a truly 'non-biased' assessment (independence, rather than SOD as nilu suggests).
    One approach might be for the SOX compliance leader or team to design the controls, and for Internal Audit to later evaluate the effectiveness of the design.

  • I am actually also confused about the need to segregate the designer and tester of a control. Any design flaws on controls should be identified at the beginning of the process, way before any testing takes place, because it will be too late otherwise. I wouldn’t want to depend on the tester to find flaws in the design of the control.
    Of course, in an ideal world, all functions should be separated, but I do not see the real harm in having the designer test the control, given that all control designs are reviewed by a separate person (e.g. the SOX coordinator) at the beginning of the process — unless the designer also happens to be the process owner.

  • I am actually also confused about the need to segregate the designer and tester of a control.
    Hi - While my background is more in IT, I don't think it is absolutely mandatory for designers and testers to be different (esp. in a smaller company where resources might not be available).
    My comments were related to an evaluation of the overall effectiveness of the controls (on which internal auditors or SOX compliance auditors might render an opinion, to ensure all bases were covered).

  • Thank you, harrywaldron, nilu, and Hoiya.
    In the guidance entitled 'Internal Control over Financial Reporting - Guidance for Smaller Public Companies - Volume I: Executive Summary', issued by COSO (the Treadway Commission) and dated June 2006,
    I've found the following sentence:
    'Management is often directly involved in performing control procedures, and for those procedures there may be only minimal documentation because management can determine that controls are functioning effectively through direct observation.'

    • This is regarding the documentation that 'also provides evidence to support reporting on internal control effectiveness'.
      This, to me, seems to suggest that one person, whether he/she is management or not, can perform a control procedure, directly observe its working, and then say 'the control is effective'.
      Or am I reading too much into it for my own convenience?

  • Hi Kate,
    It sounds like the guidance was referring to the performance of the control itself, which is part of management’s/the company’s normal operations. SOX 404 then requires management to test these controls to ensure that they are operating effectively. In essence, using what you quoted, here is an example:
    There is a control where Person A browses an exception report daily to see if anyone placed a PO and also received the goods. The documentation may be simple because the action itself would be: 1. look at report for any SOD conflicts; 2. investigate the conflicts. Also, in this example, the ‘directly observing’ part would be the observation/review of the report.
    Then the SOX tester (internal employee or contractor, not the external auditors) performs testing on this control. He will ask for proof that Person A looked at the reports. He will be the one to assess whether the control is effective, based on the result of his testing.
    That’s my take. I still don’t think there is a conflict between designing the control and testing it, unless that person is also the one performing it.

  • Thanks, Hoiya.
    Let me see whether I understand it correctly.
    As for the ONGOING assessment, management can determine that the control is effective because he/she is the one who performed the control. But, for the purpose of the annual assessment mandated by SOX 404, we need DIFFERENT management from the person who performed the control to test it. Is that right?
    I understand that those who designed controls may not be able to detect a design deficiency in those controls. I also understand that those who were involved in the design may be biased in designing/planning the evaluation project.
    I just still don’t see any reason why a designer, who was not involved in performing the control, is not allowed to be the tester of that control, as Hoiya wrote.

  • I think I’ve found the answer.:lol:
    In the SEC’s Interpretive Guidance Regarding Management’s Report on ICFR,
    it says, ‘Evidence about the effective operation of controls may be obtained from direct testing and on-going monitoring activities.’ and 'The qualitative characteristic of the evidence include nature of the evaluation procedures performed, the period of time to which the evidence relates, the objectivities of the evaluating controls , ’ and so on.
    Then, in the footnote, it goes on to say, ‘In determining the objectivity of those evaluating controls, management is not required to make an absolute conclusion regarding objectivity, but rather should recognize that personnel will have varying degree of objectivity based on, among other things, their job function, their relationship to the control being evaluated, and their level of authority and responsibility with in the organization. Personnel whose core function involves permanently serving as a testing or compliance authority at the company , such as internal auditors, normally are expected to be the most objective. However, the degree of objectivity of other personnel may be such that the evaluation of controls performed by them would provide sufficient evidence. Management judgments about whether the degree of objectivity is adequate to provide sufficient evidence should take into account the ICFR risk.’
    So, you’re all right. We will see to it that the most critical controls are evaluated by the internal auditors, and keep in mind that those who were deeply involved are not considered very objective.
    Thank you very much.
