Testing design of controls



  • Thanks Krish. That’s perfectly right.



  • Of course, as important as evaluating design effectiveness is, we also have to evaluate operating effectiveness, which means full-on controls testing.



  • Hi there,
    I live in the Netherlands and will be involved in SOX work within the next year, so I'm very interested in what you all think of the following.
    I could imagine the following work being performed during a test of design:
    Step 1 (as mentioned before):

    • identify control objectives and controls satisfying those objectives;
    • assess whether the identified controls, when operating effectively, would prevent/detect material errors or fraud;
      Typically, I would say that a walkthrough provides good documentation of how the accuracy, and especially the completeness, of the identified objectives and related controls has been determined (e.g. have all potentially material financial statement risks been addressed in a control objective?).
      Example: during a walkthrough of the order-to-cash cycle of a manufacturer, you come across the fact that the company occasionally enters into consignment contracts, which is not the case for most customers (and which has not yet been captured in your process documentation). This could mean that a specific control objective should be recorded for the accuracy, existence and completeness of consignment stock and the related revenue/cost of sales, when considered significant. If the consignment contracts are not considered significant, you could document that in your walkthrough documentation as well, including the considerations and underlying documentation.
      Step 2 (actual design):
      You could consider the following in determining the adequacy of design (including documentation thereof):
    • assess whether documentation used by staff members is in accordance with the identified objectives and controls (e.g. the handbook, detailed process documentation and instructions drawn up by the department manager, specific forms used in the process);
    • document interviews with staff performing key controls; e.g. does the staff member have a good understanding of what to look at when inspecting and authorizing an invoice in accordance with the identified control? Are there any exceptions to the regular process steps? Have any errors been identified during the past period, and how were they followed up?
    • trace transactions through the information system (take screenshots of important steps, covering data input and error messages);
    • observe certain process activities, such as cycle counts for inventories or access control for entrance tickets. You could consider being present at one or more instances of such controls.
      I guess these steps could well be part of a walkthrough.
      As mentioned before, I'm very interested in any comments on the above.
      Regards, JR


  • The test of design (TOD) will be a workpaper for each process, or some people prefer to make a workpaper for each control. The auditors are looking for you to create these documents for them so that they can review the design of the control. Some companies use narratives instead of tests of design. These serve a few purposes for the auditors. One is that they can familiarize themselves with the controls and how the process works; also, if walkthrough evidence is kept, it serves as an example for the test of effectiveness (TOE). Second, if a control is not designed properly then it cannot be tested, so the TOD is important in saving time and money on the TOE side. The auditors will usually perform their own test of design, but if yours is done well and documented, they may agree to rely on it and you can save on audit fees.



  • Hey,
    I think TOD has been covered pretty well before me, but I thought I would share one point here.
    During our SOX TOD, we do one thing that the process/control owners don't like, but that's our strong point, and it was well appreciated by everyone at year end.
    We take a control and come up with all the exceptions that can arise which could lead to a deficiency or weaken the control, and we test the design against those. We take the documentation, do a thorough analysis and come up with these exception cases. If the control addresses all of them, we give it a go-ahead. If not, we document observations where there is a potential weakness. If the control does not address certain exceptions, then it's a deficiency.
    Our exception cases are based on the history of past events that we analyze, such as tickets from the last quarter or two. Often the organization structure also results in certain weaknesses, for example a request for access that must be approved by a line manager and then by a system owner. What if the system owner is also the line manager for a request? What would the control activity be then?
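    As a rough illustration of that last exception case, here is a minimal sketch (the field names and the export format are hypothetical, not from any specific tool) of how one might flag access requests where the second approval adds no real segregation of duties:

      # Minimal sketch (hypothetical field names): flag access requests where the
      # approving system owner is also the requester's line manager, so the second
      # approval adds no real segregation of duties.
      from dataclasses import dataclass

      @dataclass
      class AccessRequest:
          requester: str
          line_manager: str   # first approver
          system_owner: str   # second approver

      def segregation_exceptions(requests):
          """Return requests where the line manager and system owner are the same person."""
          return [r for r in requests if r.line_manager == r.system_owner]

      sample = [
          AccessRequest("a.jones", "b.smith", "c.lee"),
          AccessRequest("d.kaur", "e.wong", "e.wong"),  # exception case
      ]
      for r in segregation_exceptions(sample):
          print(f"Review needed: {r.requester} approved twice by {r.system_owner}")

    Running something like this over last quarter's access tickets is one way to turn the exception case into evidence for the TOD.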
    Coming up with such cases makes the TOD stronger.
    Any comments on the above?
    Regards
    Hari.



  • Hey,
    We take a control and come up with all the exceptions that can arise which could lead to a deficiency or weaken the control, and we test the design against those. We take the documentation, do a thorough analysis and come up with these exception cases. If the control addresses all of them, we give it a go-ahead. If not, we document observations where there is a potential weakness. If the control does not address certain exceptions, then it's a deficiency.

    That's a good approach to implementing the TOD, but it's basically saying the same thing in different words. A good test of design will in any case look at all the possible scenarios to evaluate whether the design of the control and the related process covers all the risks or not.
    Walkthroughs are an important part of the TOD for us as far as key controls are concerned.
    A lot of the time there is insufficient documentation related to a process or control. A TOD based solely on inquiry/observation may lead to a wrong conclusion about design, especially when it is done by an inexperienced person.
    I have myself seen some companies use single-sample testing for their TOD, mostly because initially nearly all controls pass the TOD, but you then see most of them failing in the TOE.



  • A small addition to the extensive discussion above is that of internal quality assurance review. Simply put, once the SOX person has done their part (documenting risks, control activities, and the relationship to the financial statements and COSO assertions, performing a walkthrough, and assessing adequacy of design), a knowledgeable and preferably senior person needs to review the resultant work.
    In a sense, this is testing the work done on design. Likewise, a more senior person would (I assume) review the work done while testing effectiveness. To extend the thought, the QA review is the monitoring control and the external auditor testing is the independent assurance.
    It might be semantics, but quality control is crucial to establishing and maintaining credibility, both internally within a client and externally with the auditor. It also helps keep everyone on the same page methodology-wise, including the production of clean, consistent output that is easily reviewed.

