Testing design of controls



  • This is a very interesting part of the audit process. Control design is closely related to control objectives, and there is a need to assess design effectiveness: the control should be properly constructed to achieve its related control objectives. You need to document the owner of the control, a description of the process flow, whether the control is properly designed (i.e. if the control is used as directed, will it accomplish its objective?), and, if the control is deemed deficient, what its specific shortcomings are.
    After evaluating the control design, the auditor is expected to classify the design deficiencies as material, significant, insignificant, etc.
    Thereafter you need to look for missing controls and develop a remediation plan, which may involve strengthening the controls, overhauling the controls, etc.
    If you need further assistance, feel free to call me on 732-688-3802.
    Krish



  • Krish is right about that…



  • Krish is right, but that is only part of the story.
    In evaluating control design you need to:

    1. Determine whether the controls you have documented for the process meet ALL of the control objectives/financial statement assertions
    2. Determine whether there is an appropriate balance of prevent and detect controls
    3. Determine whether there is an appropriate balance of manual and automated controls
    4. Determine whether individual controls meet the requirements of SOX, i.e. whether they are sufficiently well evidenced
      I would also recommend that you do a walkthrough test at this stage to determine that what you’ve been told while documenting the process corresponds with reality.
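The first three checks above can be sketched as a simple tally over a control inventory. This is only an illustration; the tags and field names (`objectives`, `timing`, `nature`) are my own, not from any standard or tool.

```python
from collections import Counter

# Hypothetical control inventory: each control is tagged with the
# assertions it addresses, its timing (prevent/detect) and its
# nature (manual/automated). All names here are illustrative.
controls = [
    {"id": "C1", "objectives": {"completeness"}, "timing": "prevent", "nature": "automated"},
    {"id": "C2", "objectives": {"accuracy", "existence"}, "timing": "detect", "nature": "manual"},
    {"id": "C3", "objectives": {"completeness", "accuracy"}, "timing": "prevent", "nature": "manual"},
]
required_objectives = {"completeness", "accuracy", "existence", "valuation"}

# 1. Which control objectives/assertions are met by no control at all?
covered = set().union(*(c["objectives"] for c in controls))
gaps = required_objectives - covered

# 2./3. What is the mix of prevent vs detect and manual vs automated?
timing_mix = Counter(c["timing"] for c in controls)
nature_mix = Counter(c["nature"] for c in controls)

print("uncovered objectives:", sorted(gaps))   # → ['valuation']
print("prevent/detect mix:", dict(timing_mix))
print("manual/automated mix:", dict(nature_mix))
```

Whether the resulting mix is "appropriate" remains a judgment call; the tally only makes the gaps and the balance visible for that judgment.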


  • Dennis and Krish, I agree with what you are saying, but our external auditors are taking it a step further, saying we have to document evidence that we tested the design. Translated, that means we have to test that we have the right balance between prevent and detect controls, test that we have the right balance between manual and system controls, and test that our controls are designed to meet our assertions. This is counter-intuitive, as these are subjective evaluations that cannot really be tested.
    I agree with the walkthrough, though; even though it’s not required, I think it’s good practice.
    Bruce



  • Bruce, you need to show what the outcome of your tests has been. Therefore you need to document the test in terms of how and what you tested and the result. That enables, for example, the external auditor or any other person to redo the test and verify it.



  • Thanks Holger, but how do you test the design of controls?



  • It is actually a ‘play’ on words.
    You need to ‘evaluate’ the design of controls, i.e. assess their design effectiveness, by seeing whether they are properly constructed to achieve the related control objective. Produce the following five pieces of evidence for the external auditor to demonstrate design effectiveness:
    Owner of the control: identify the person responsible for executing the control.
    Description of the process flow: a detailed explanation of how the control operates.
    Properly designed (is the control built correctly?): if the control is used as directed, will it accomplish the objective?
    Details of the internal control deficiency: if the control is deemed deficient, what are its specific shortcomings?
    Remediation plan: how will the faulty design be corrected?
    The specific methods you use for evaluating the design of controls depend upon:
    • The types of control activities you are evaluating, including whether they are manual or programmed
    • The competence of the individuals who perform the relevant control activities
    • The period of intended reliance
    • The use of a service organization
    • Regulatory and governmental requirements
    Thanks… Krish
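The five evidence items above can be thought of as a record with a completeness rule: a control judged deficient must also carry its shortcomings and a remediation plan. A minimal sketch, with field names of my own invention:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record of the five evidence items; not a standard format.
@dataclass
class ControlDesignDoc:
    owner: str                    # person responsible for executing the control
    process_flow: str             # how the control operates
    properly_designed: bool       # if used as directed, will it meet its objective?
    deficiency_details: Optional[str] = None  # required only if not properly designed
    remediation_plan: Optional[str] = None    # required only if not properly designed

    def is_complete(self) -> bool:
        """A deficient control must also document its shortcomings and a fix."""
        if self.properly_designed:
            return bool(self.owner and self.process_flow)
        return all([self.owner, self.process_flow,
                    self.deficiency_details, self.remediation_plan])

doc = ControlDesignDoc(owner="A. Smith",
                       process_flow="Invoices matched to PO before payment",
                       properly_designed=False,
                       deficiency_details="No match tolerance defined")
print(doc.is_complete())  # → False: the remediation plan is still missing
```

A check like this only verifies that the documentation is complete, not that the judgments recorded in it are sound.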



  • Thanks Krish. That’s perfectly right.



  • Of course, just as important as evaluating design effectiveness, we also have to evaluate operating effectiveness, which means full-on controls testing.



  • Hi there,
    I live in the Netherlands and will be involved in SOX work within the next year. Therefore I’m very interested in how you guys think about the following:
    I could imagine the following work to be performed during a test of design:
    Step 1 (as mentioned before):

    • identify control objectives and controls satisfying those objectives;
    • assess whether the identified controls, when operating effectively, would prevent/detect material errors or fraud;
      Typically, I would say that a walkthrough would provide you with good documentation of how the accuracy, and especially the completeness, of the identified objectives and related controls has been determined (e.g. have all potentially material financial statement risks been addressed by a control objective?).
      Example: during a walkthrough of the order-to-cash cycle of a manufacturer, you come across the fact that your company occasionally enters into consignment contracts, whereas this is not the case for most customers (and this has not yet been documented in your process documentation). This could mean that a specific control objective should be recorded for the accuracy, existence and completeness of consignment stock and related revenue/cost of sales, when considered significant. If the consignment contracts are not considered significant, you could also document that in your walkthrough documentation, including the considerations and underlying documentation.
      Step 2 (actual design):
      You could consider the following in determining the adequacy of design (including documentation thereof):
    • Assess whether the documentation used by staff members is in accordance with the identified objectives and controls (e.g. a handbook, detailed process documentation and instructions drawn up by the department manager, specific forms used in the process)
    • Document interviews with staff performing key controls; e.g. does the staff member have a good understanding of what to look at when inspecting and authorizing an invoice in accordance with the identified control? Are there any exceptions to the regular process steps? Have any errors been identified during the past period, and how were they followed up?
    • Trace transactions through the information system (take screenshots of important steps, considering input of data and error messages)
    • Observe certain process activities, such as cycle counts for inventories or access control for entrance tickets. You could consider being present at one or more instances of such controls.
      I guess these steps could well be part of a walkthrough.
      As mentioned before, I’m very interested in any comments on the abovementioned.
      Regards, JR


  • The test of design (TOD) will be on a workpaper for each process, or some people prefer to make a workpaper for each control. The auditors are looking for you to create these documents so that they can review the design of the control. Some companies use narratives instead of tests of design. These serve a few purposes for the auditors. First, they can familiarize themselves with the controls and how the process works; also, if walkthrough evidence is kept, it serves as an example for the test of effectiveness (TOE). Second, if a control is not designed properly then it cannot be tested, so the TOD is important in saving time and money on the TOE side. The auditors will usually perform their own test of design, but if yours is done well and documented they may agree to rely on it, and you can save on audit fees.



  • Hey,
    I think TOD has been covered pretty well before me, but I thought I would share one point here.
    During our SOX TOD, we do one thing that the process/control owners don’t like, but it’s our strong point, and it was well appreciated by everyone at year end.
    We take a control, come up with all the exceptions that could arise that would lead to a deficiency or weaken the control, and test the design against those. We take the documentation, do a thorough analysis and come up with these exception cases. If the control handles all of them, then we give a go-ahead. If not, we document observations where there is a potential weakness. If the control does not address certain exceptions, then it’s a deficiency.
    Our exception cases are based on a history of past events that we analyze, such as tickets from the last quarter or two. Often the organization structure also results in certain weaknesses, like a request for access having to be approved by a line manager and then a system owner. What if the system owner is the line manager for a request; what would the control activity be?
    Coming up with such cases makes the TOD stronger.
    Any comments on the above?
    Regards
    Hari.
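Hari's line-manager/system-owner case is the kind of exception that can be checked mechanically once written down. A sketch of such an exception test, with hypothetical field names:

```python
# One of the exception cases above: an access request is supposed to be
# approved by the requester's line manager AND the system owner. The
# design breaks down if both roles collapse into the same person.
# The request fields and names below are purely illustrative.
def approval_chain_exceptions(request):
    exceptions = []
    if request["line_manager"] == request["system_owner"]:
        exceptions.append("system owner is also the requester's line manager: "
                          "two-step approval collapses to one person")
    if request["requester"] in (request["line_manager"], request["system_owner"]):
        exceptions.append("requester would approve their own request")
    return exceptions

ok = {"requester": "bob", "line_manager": "carol", "system_owner": "dave"}
bad = {"requester": "bob", "line_manager": "dave", "system_owner": "dave"}
print(approval_chain_exceptions(ok))   # → []
print(approval_chain_exceptions(bad))  # flags the collapsed approval chain
```

Each documented exception case becomes one condition; a control whose design triggers any of them is a candidate deficiency, exactly as described above.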



  • Hey,
    We take a control, come up with all the exceptions that could arise that would lead to a deficiency or weaken the control, and test the design against those. We take the documentation, do a thorough analysis and come up with these exception cases. If the control handles all of them, then we give a go-ahead. If not, we document observations where there is a potential weakness. If the control does not address certain exceptions, then it’s a deficiency.

    It’s a good approach to implementing TOD, but it’s basically saying the same thing in different words. A good test of design will in any case look at all possible scenarios to evaluate whether the design of the control and the related process covers all the risks or not.
    Walkthroughs are an important part of TOD for us as far as key controls are concerned.
    A lot of the time there is insufficient documentation for a process or control. A TOD based solely on inquiry/observation may lead to a wrong conclusion about design, especially when it is done by an inexperienced person.
    I have seen some companies using single-sample testing for their TOD; as a result, initially nearly all controls pass the TOD, but you see most of them failing in the TOE.



  • A small addition to the extensive discussion above is that of internal quality assurance review. Simply put, once the SOXer has done their part, documenting risks, control activities, the relationship to the financial statements and COSO assertions, doing a walkthrough and assessing adequacy of design, a knowledgeable, preferably senior, person needs to review the resulting work.
    In a sense, this is testing the work done on design. Likewise, a more senior person would (I assume) review the work done while testing effectiveness. To extend the thought, the QA review is the monitoring control and the external auditor’s testing is the independent assurance.
    It might be semantics, but quality control is crucial to establishing and maintaining credibility, both internally within a client and externally with the auditor. It also helps keep everyone on the same page, methodology-wise, including the production of clean, consistent output that’s easily reviewed.

