Fundamental Segregation of Duties
vpertsovskiy last edited by
I am a consultant for a large global bank, but my group is only 5 people. We are being hammered to comply with this act with no regard for how thin our staff is. The basic problem is isolating development, QA, UAT, and production.
The auditors are claiming that developers cannot have access to QA, UAT, or production. They are also demanding some kind of separation of source control such that migrating source from dev to QA requires physically moving the source code somewhere. I guess their belief is that if the source physically resides somewhere else, then somehow that will make it safe.
The remaining request is to enforce that all developers, when requesting source code, send some kind of request to senior management. The manager will then have to perform some action to release the code in question to the developer. All of this, I am sure, sounds wonderful on paper and in a PowerPoint presentation, but in the trenches this will create an incredible nightmare. The other problem is that it is very hard to fight an external auditor who reports to your CEO.
My final note is that too much power is being given to the auditors. The Act is very vague and, from what I gather, does not deal with IT at all, but sort of mentions it in passing. And yet this act is being used to push forward all kinds of potentially dangerous changes.
mikec last edited by
I completely understand your issue; we have come across precisely the same problem.
The trick is to (a) consider the risks and (b) consider accountability.
The risks you need to counter are fraud and business disruption. Audit’s approach to this is that if the developer can’t hack the code in production then all is safe. Obviously this is not the case.
Taking each point in turn:
Segregation of Environments
What you need to ensure is that if a developer makes an unauthorised change in the production/qa/uat environment then someone knows about it and does something about it.
Also, if they need access to those environments they must get explicit authorisation, preferably each time. It’s also a good idea to record their sessions using Keon or some other tool.
Essentially this recognises the fact that for practical reasons your staff need access to those environments; however, you have compensatory controls in place to mitigate the risks.
Promotion of Source Control
Another one I’ve come across. Audit, as I understand it, want to ensure that the ‘source’ that is in production is immutable and instead of stating the requirement they have requested a solution.
With most source control systems you can freeze a release i.e. lock down the code that made a release. If you can demonstrate this then the auditor should be satisfied.
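If your source control system can’t lock a release directly, the same immutability evidence can be produced with a hash manifest recorded at release time. This is a minimal sketch, not any specific SCM’s feature; the directory paths and file names are illustrative:

```python
import hashlib
import json
import os

def hash_file(path, chunk_size=65536):
    """Return the SHA-256 hex digest of one file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(release_dir):
    """Map every file's path (relative to release_dir) to its SHA-256 digest."""
    manifest = {}
    for root, _dirs, files in os.walk(release_dir):
        for name in files:
            full_path = os.path.join(root, name)
            rel_path = os.path.relpath(full_path, release_dir)
            manifest[rel_path] = hash_file(full_path)
    return manifest

# At release time, write the manifest somewhere developers cannot alter it;
# re-running build_manifest against production later proves (or disproves)
# that the deployed code still matches the frozen release. For example:
# json.dump(build_manifest("/releases/v1.2"), open("v1.2.manifest.json", "w"))
```

The point for the auditor is not the tooling but the evidence: a digest recorded at release time, held outside the developers’ reach, that production can be checked against.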
Access to source code
You need to ask the auditor what the risk is here. Is the code that sensitive that you need authorisation to even look at it?
Auditor reporting to CEO
At the end of the day it is the CEO’s decision what controls he wants to put in place - it’s his signature on the 404. It’s the auditor’s job to point out the risks and weaknesses and make recommendations (the latter in my experience can become ‘requirements’). It’s your job to let the CEO know the realistic options from which he can choose.
Hope that helps
I respectfully disagree with the earlier gentleman about allowing developers direct access into production. I would agree with you if it were a small ma and pop shop and you were ma or pa and you were the only ones potentially impacted. In a large shop this could be quite dangerous: accountability drops to zero, mass amnesia or finger-pointing strikes, and what could be a 10-minute recovery blossoms into hours. Segregation of duties is a fundamental control. Yes, it’s slower. Yes, it opens you up to bureaucracy (but let’s face it, we have to tackle that nightmare anyway). And yes, it can be a monumental pain. But to just say I should have complete access from beginning to end is scary. And can be a huge risk. Just because something hasn’t happened to you yet does not mean it couldn’t or won’t.
If nothing else, separation of powers helps ensure an extra set of eyes is involved and exposes someone, whether they are sneaking something in with the best of intentions or not, to being caught by another person. I.e., they are taking more risk by acting in the open and interacting with third parties who could catch them.
The DBA should not write the application code. The developers should not have direct access to the database. Production support should not be writing the code but should be examining it (dev may be tempted to bypass everything, so watch for back doors), signing off on it, and handing it over. These groups should be watching over each other. They are a balance to each other. Yes, there is still access and there are risks involved, such as with the sysadmin, but if proper controls are in place you can hopefully keep this to a minimum and be somewhat prepared, at least, if something bad should happen. You may have a small enough shop with two people in it who are the most ethical and wise in the world, but to assume everyone is this way or to allow unnecessary risk is foolish. If the impact is small then hack away, but if the impact is big and others are involved you have a duty to try to ensure your environment is properly protected.
I’m not crazy about it either, but a higher awareness for us all is a good thing. This process can help you too… suppose you are on call when one of your 60 developers was moving around doing something and accidentally broke something. He or she could tell you… maybe. If you had some kind of enforceable process you might have a little more information available to you, so you can identify what happened and fix it before the business rails on you. Let’s be honest, too: usually you have more time to develop something than you do to fix a live process (or one that was once live and that the business would like back again). Businesses tend to be a little pushier about the second part.
P.S. I found the environmental-differences comment someone made as a justification for direct access highly entertaining. If you can’t control the testbed or mirror your environments, and are not looking for differences between the environments prior to entering production, all the more reason to bar the gates. Many times that is how environments get out of whack in the first place. Also, let’s be honest… it’s one little change, and it would be quicker and easier to just put it in instead of messing with the test environment we haven’t kept in sync anyway, because we have 60 developers playing around and the business wants it now, or I want to leave early for tennis, or whatever… so…
On relocation of source code to QA…
If only one person has access to the code and passes it to QA, I don’t see how that makes a difference, but I don’t know for sure.
If a shop is large enough to have 60 developers, it’s not a good idea to let final tested code hang around. Once you are satisfied it’s good, hand it over to another party to be locked down. It’s possible those people could tamper with it, but if their skin in the game is ensuring the integrity of the final package, they are far less likely to play with it than development. They may also be familiar enough with the ‘that’s not what I turned over’ story to have a process in place to prove the original file has not been altered.
If the company is large enough to pay for 60 developers, then it should be able to buy or lease one server for code control, and it probably has enough in the game to warrant doing so.
I’ve looked at the segregation-of-duties matrix, and can I ask a question, please?
It appears to be mainly driven by financial controls, and yes, I understand SOX is mainly financial, but does it cover conflict of interest in general? Or do ‘we’ only care when a financial application is involved?
This is a great conversation, I hope it continues.
What I am looking for are compensating controls for when segregation of duties has been violated. On occasion, we have a need to have the developer also install the code into production. Obviously this creates an issue, and I need a compensating control to mitigate the impact of this activity. No, I can’t prevent it… already tried that avenue and failed.
Does anyone know of sufficient compensating controls to smooth this over with the auditors? I realize this will be a finding, I’m trying to make it so it doesn’t become a material weakness.
MrsM last edited by
We had a similar situation: a small team on a system that is to be replaced soon, so, due to the size of the team, we need developers to be able to install. Our compensating control is a detective control which captures when changes have been installed in the live system.
In our case, the system sends an e-mail notification when changes are promoted to live (although a secure report tracking changes should also be fine). This notification is reviewed by an independent person (i.e. IT person without access to promote changes on this system). They confirm the changes are valid and sign-off.
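A detective control of this kind boils down to comparing a baseline snapshot of the live system against its current state and handing the differences to the independent reviewer. A minimal sketch, assuming each snapshot is a mapping of file paths to SHA-256 digests (the snapshot shape and field names here are illustrative, not from any particular product):

```python
def diff_manifests(baseline, current):
    """Compare two {path: sha256} snapshots and report what drifted."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(path for path in set(baseline) & set(current)
                     if baseline[path] != current[path])
    return {"added": added, "removed": removed, "changed": changed}

def notification_body(diff):
    """Render the drift report as plain text for the reviewer's sign-off."""
    lines = []
    for kind in ("added", "removed", "changed"):
        for path in diff[kind]:
            lines.append(f"{kind.upper():8} {path}")
    return "\n".join(lines) if lines else "No changes detected."
```

Whether the report goes out by email or lands in a secure tracking system matters less than the review step: someone without promote access confirms each reported change against an authorised request.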
I’m actually already monitoring the system for changes such as these, and that’s how I’m able to identify that the developer is the same person that is installing the code.
My concern is that just identifying it is not an adequate control. What if the developer added some rogue code that no one else knows about? Since they developed it and deployed it, there is no control. How do I mitigate that so it’s not a material weakness?
MrsM last edited by
On occasion, we have a need to have the developer also install the code into production.
What change sign-off documentation do you require your developers to complete as standard?
If you use a standard doc that already has two signatures (peer review is fine), then an independent review of your detective report to confirm all changes agree to signed change docs is sufficient to control the risk around unauthorised changes. This assumes the control cannot be circumvented (i.e. all changes show on your report).
As standard, we have an SDLC process that requires the developer to get the required approvals and QA testing before code is installed into production. Those are not typically an issue for me; the issue arises when we have a small ‘emergency’ install that needs to be done right away. We have a policy/process for emergencies that requires the installer to send a change notification that the install was done. I have an independent group that monitors the production system for any changes and matches the changes with the change notices.
My issue is that for ‘emergencies’ there is no approval or peer review. It’s coded, installed, and an after-the-fact notification - that’s it. I wouldn’t have an issue if someone other than the developer installed. With this current ‘process’ I have no separation of coder/installer.
KnightX last edited by
It sounds like your detective controls are pretty good. The only additional thing I could see doing to mitigate the risk is a post-implementation review of the code, testing it in the test environment to ensure that the coder did not make any intentional or unintentional mistakes that could affect the financial statements. That should cover your bases from a financial-risk standpoint, anyway.
What kind of emergency coding would not allow a coder to even email or call someone to notify them of something being moved into production?
Denis, I do agree with you. I have a query, though:
What about the case of acquired (off-the-shelf) applications where we don’t have access to the code?
In this case we still make a lot of changes in the applications, or database changes. Would we still be bound by segregation of duties between development and production?
kymike last edited by
I think that the simple answer is yes. There should still be change controls in place to limit who can make configuration changes, and those making the configuration changes generally should not also be doing processing in the production environment.
But in my case the system administrator and some consultants responsible for development have access to the production environment. The users of the application do not have access to production.
In this case, how would I address the issue?
Sorry, I meant the users of the application do not have access to development.
I advise that any emergency be highly visible, and allow some bureaucracy to slow the process a little to ensure only true emergencies are coming in and are under the microscope. Even if the developers come in screaming with blood seeping out of their eyes. If they come in via stretcher, take their pulse. Some people act better than they code. The process should be a little painful. Not insurmountable, but painful.
On a completely different note, I wasn’t talking about conflict over access into production. I was talking more along the lines of reporting structure. I don’t think anyone is asking that basic question: who do you report to? And examining whether that relationship makes sense or not. Are there square pegs in round holes?
In today’s global economy people need to open their eyes past financials. IT is not a garment factory.
emobley last edited by
Some compensating controls to consider when encountering SOD issues:
- Mentioned in several prior posts: Reconcile production code (libraries, executables, scripts, etc.) to approved released versions in the source code repository. This tells us that what’s in production went through the change management process with proper testing, approvals, etc. This test should be part of management’s SOX testing anyway (SOD issues aside) to validate completeness of the population within the source code repository.
- If the population of the source code repository is validated as per the prior step (nobody circumvented the process) and released code has undergone appropriate testing and is approved, this would indicate that only authorized changes made it to production. In the case of emergency changes, proper approvals were obtained after the fact.
Financial controls (partial list)
- Look for and test financial reconciliations that would detect anomalies introduced by unauthorized changes to production.
- A/R aging - if an unauthorized change exaggerates revenue, customers will not pay their bills and/or will complain.
- Use audit tools like ACL, Approva, etc. to find things like a vendor that has the same bank account number as an employee.
- Check that all employees (or at the very least, people with sensitive IT access) undergo a background and reference check. Generally, you don’t hire people who have committed fraud in the past.
- Consider a mandatory vacation policy as this complicates fraudulent activity.
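The vendor/employee bank-account test mentioned above doesn’t strictly require a commercial tool; the core check is a join on account number. A hypothetical sketch (the record fields `name` and `bank_account` are made up for illustration, not from ACL or Approva):

```python
def flag_shared_bank_accounts(vendors, employees):
    """Flag vendors whose bank account number also appears on the payroll file,
    a classic indicator of a fictitious-vendor fraud scheme."""
    # Index employee accounts for constant-time lookup.
    employee_accounts = {e["bank_account"]: e["name"] for e in employees}
    hits = []
    for vendor in vendors:
        account = vendor["bank_account"]
        if account in employee_accounts:
            hits.append((vendor["name"], employee_accounts[account], account))
    return hits
```

In practice the matching usually needs normalisation (stripping spaces, leading zeros, routing-number formats) before accounts will join cleanly; this sketch assumes the numbers are already in one canonical form.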
There are other controls that others could add to this list I’m sure.
NC last edited by
Why all this fuss and confusion? There are so many tools (big ones and modest ones too) that take care of so many SOX-related issues, mainly SOD. You have tools like Virsa, Approva, Foxt, SecurInfo, etc. bending over backwards so that we buy those tools, implement them, and relax (though a few may prefer manual processes over these tools; all said and done, manual methods are flexible, you see).
It looks like our fellow SOXers have lots of time to design their own control methodologies.
I am of the view that these tools will help us big time when it comes to sustaining our SOX compliance efforts.
After all, SOX came about to ensure the survival of all these corporations, right?
lolo56390 last edited by
Hello, and thanks for the post. I have a situation where one person is the developer/administrator/manager of an in-house-built financial system on an AS/400 subject to SOX regulation. Since it is not possible to segregate the functions with additional staff, the only way to provide compensating controls is to implement what was suggested, i.e. monitoring of the changes made, and an unalterable audit trail.
The problem is that the administrator has the ability to edit the audit trail files, delete them, etc.
Does anyone know how to protect the audit trail without changing the logic of the application?
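Not an AS/400-specific answer, but one general technique for making an audit trail tamper-evident even to an administrator is a hash chain: each entry stores a hash of the previous entry, so editing or deleting any record invalidates every hash after it. A minimal sketch in Python (the record fields are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, record):
    """Append a record to a hash-chained log. Each entry carries the hash of
    the entry before it, so any later edit or deletion breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Return True only if every entry still links correctly to its predecessor."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

For this to hold against the administrator, the latest chain hash (or each entry) must also be shipped somewhere the administrator cannot reach, such as a separate log server or a periodic email to the reviewer, so a quietly rewritten chain can be detected by comparison.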