I’m feeling a little… insecure
This is an operator’s nightmare: a security breach leads to information theft, which leads to direct or indirect damage. Someone hacks into the information systems and steals credit card numbers or other sensitive personal data; alternatively, someone hacks in and deletes vital records.
Whereas the latter has a high chance of being revealed rather quickly, the former may be done in stealth over a long period, causing undetected damage to revenue, reputation, or even to personal security and privacy.
What a mess!
It therefore only makes sense that operators invest considerable amounts in security systems such as firewalls, malware detection, and backups.
However, this is only part of the story.
OWASP, the Open Web Application Security Project, releases from time to time the OWASP TOP TEN – the ten most critical web application security risks. (It formerly listed the most serious security vulnerabilities, and changed to reflect risk.)
Many of these risks are invisible to the security systems installed on-site, because they involve malicious usage of the service from the outside. If, for example, your online service’s software does not protect against unauthorized login, your firewall and malware detection are likely to miss such an attack, because it looks like legitimate use of the system.
What are the 2010 OWASP TOP TEN? Listed at the end of this post, they are simple, sometimes incredibly simple, ways of using the service to break into it.
OK, you might think that, worst case, you will run security tests rigorously from time to time, say once every major release, and fix any vulnerabilities found with patches. Right?
Errrr, yes, but say hello to the zero-day attack: culprits look for such vulnerabilities and exploit the window between detection and patch deployment.
Say you develop online banking software. You discover such a vulnerability, and note in your release notes that it is fixed in version X.Y. The attacker now races to develop a procedure that exploits the vulnerability and attacks your customers before the patch is installed, and hasta la vista: attacker: 1, vendor: 0.
So, what can be done? And how is agile related to this anyway?
A lot, quite frankly!
I will now explain, using some basic practices, how agility can help you reduce and even eliminate many of the security threats:
Starting with TDD: by testing input validation in your test harness, you can easily eliminate most injection vulnerabilities. In your tests, you try to break the input to your service or class by injecting SQL, JavaScript, or whatever is relevant to your technology, and then let the tests fail until all vulnerabilities are fixed.
The more experience you gain with TDD, the more injection types you detect and prevent before the code reaches other types of testing.
In addition to Injection (A1), this practice will also reduce XSS (A2), Insecure direct object references (A4), CSRF (A5), Failure to restrict URL access (A8), Insufficient transport layer protection (A9), and Unvalidated redirects and forwards (A10).
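As a minimal sketch of such a test (Python with the standard-library sqlite3; the `find_user` helper and the `users` table are made up for illustration):

```python
import sqlite3

def find_user(conn, username):
    # Hypothetical data-access helper: a parameterized query, so user
    # input is never concatenated into the SQL text.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# The test harness tries to break the input, TDD-style.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Legitimate use still works...
assert find_user(conn, "alice") == [(1, "alice")]
# ...while a classic injection payload returns nothing instead of every row.
assert find_user(conn, "' OR '1'='1") == []
```

If `find_user` had built the query by string concatenation, the second assertion would fail with every row returned – which is exactly the red test you want before you fix it.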
Moving on to BDD: by including security aspects in the edge cases you specify, you will dramatically increase awareness among developers and testers in their day-to-day work.
On top of all the security risks mentioned above, BDD is a practice useful for preventing Broken authentication and session management (A3) and Insecure cryptographic storage (A7).
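To make this concrete, here is a sketch of a BDD-style scenario for session management (A3), written as a plain Python test with Given/When/Then comments; the `SessionStore` class is a toy stand-in for illustration, not a real framework:

```python
import secrets

class SessionStore:
    # Toy in-memory session store, for illustration only.
    def __init__(self):
        self._sessions = {}

    def login(self, user):
        token = secrets.token_hex(16)  # unguessable session ID
        self._sessions[token] = user
        return token

    def logout(self, token):
        self._sessions.pop(token, None)

    def user_for(self, token):
        return self._sessions.get(token)

def test_stolen_session_id_is_useless_after_logout():
    # Given a logged-in user with an active session
    store = SessionStore()
    token = store.login("alice")
    # When the user logs out
    store.logout(token)
    # Then the old session ID no longer grants access
    assert store.user_for(token) is None

test_stolen_session_id_is_useless_after_logout()
```

The scenario reads almost like the specification sentence it came from, which is the point: the security edge case lives in the executable spec, not in a separate document.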
Both TDD and BDD make software development more fun. In the context of security testing, both are a type of gamification of the process:
> write the security test
> make the code pass
> write harder tests – get to the next game level
> repeat forever
All this in itself is great. But the next practice will make it more fun and more rigorous.
Pair programming, or pair testing in this case, introduces cross-pollination to the process, and gets the brains to work much harder.
Put to the test (pun intended), one team member writes the tests, and the other makes them pass. Then they swap: the person who wrote the tests now tries to make them pass, and vice versa.
Each adds new ideas with every such iteration.
Ultimately, both parties of the pair will soon have security awareness embedded into their daily life.
Make it visible:
Take ten sheets of A4 paper (or Letter, depending on where you read this post from 😉) and print one security risk on each.
If you can laminate them – even better.
Hang the ten sheets in the team room, and provide green round stickers to the team members.
Every time someone prevents a risk before the software leaves the team room (so to speak), they stick a green sticker on the matching sheet.
Then, if you have a security testing team, provide them red round stickers.
Every time the testers find a security defect after the team releases, they stick a red sticker on the matching sheet.
At the end of each iteration – the retrospective could be a perfect time for this – count the different types of stickers, and put the totals on a chart.
This will give you the cumulative count of security threats detected, and at what stage (kinda like a security CFD).
Now you have a healthy competition: your team wants to have more green stickers; security experts want to find cracks in the system.
With time, one should expect the green stickers to ‘eat out’ the red ones.
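If you want to turn the sticker counts into that chart, the bookkeeping is trivial; a sketch with made-up iteration data:

```python
# Hypothetical sticker counts per iteration: green = risks caught in the
# team room, red = defects found by the security testers after release.
iterations = [
    {"green": 3, "red": 2},
    {"green": 5, "red": 1},
    {"green": 6, "red": 0},
]

cum_green = cum_red = 0
chart = []  # one cumulative (green, red) point per iteration
for it in iterations:
    cum_green += it["green"]
    cum_red += it["red"]
    chart.append((cum_green, cum_red))

assert chart == [(3, 2), (8, 3), (14, 3)]
```

Plot the two cumulative lines over time and you have your security CFD: the green line pulling away from a flattening red line is the picture you are after.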
During specification workshops, make room specifically for security tests.
This can be kick-started using several practices, each with its own advantages and challenges, so choose what fits your team best. Here are a couple of examples:
– Have a security expert in the workshop. While this will improve the acceptance criteria, it puts you at risk that the security aspects are ‘deposited’ with this person, allowing the rest of the team to rely on them instead of learning.
– Pick a security practice of the sprint. This can be done in rotation, or according to the most exposed areas of the system, based on your visibility exercise or your static code analysis tool. While this will help you close gaps, it might leave other areas neglected until the entire team is up to scratch.
– Create your own practice for introducing security aspects into the acceptance criteria. Your ideas in your context will help you take ownership. So let the team decide and change the practice periodically.
Scrum sprint review:
During sprint reviews, present one or two penetration tests. This will increase stakeholders’ trust, and make room for discussion about additional security aspects that can be added in subsequent stories.
Deploy a static code analysis tool. Some of these tools specialize in security aspects of the code, and will help you identify where flaws exist in your existing codebase. By reviewing the reports at regular intervals, you will be able to determine the pace at which you are closing gaps in your legacy code, or whether the gap is widening and it is time to invest more in swarming on closing it.
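You can get a feel for how such tools work with a toy check of your own. This sketch flags string-concatenated SQL; the single rule and the `scan` helper are illustrative only – real security-focused analyzers (Bandit for Python, SonarQube, and the like) apply hundreds of far more sophisticated rules:

```python
import re

# Toy rule: flag SQL built by string concatenation inside execute(...).
RISKY_SQL = re.compile(r"""execute\(\s*["'].*["']\s*\+""")

def scan(source: str) -> list[int]:
    """Return the line numbers where the risky pattern appears."""
    return [lineno
            for lineno, line in enumerate(source.splitlines(), start=1)
            if RISKY_SQL.search(line)]

# String-concatenated SQL is flagged...
assert scan('cur.execute("SELECT name FROM users WHERE id = " + user_id)') == [1]
# ...while a parameterized query passes clean.
assert scan('cur.execute("SELECT name FROM users WHERE id = ?", (user_id,))') == []
```

Running even a crude check like this over the whole codebase at every iteration gives you the trend line the paragraph above talks about: is the count of findings shrinking or growing?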
Appendix: OWASP TOP 10 for 2010
- Injection: Including code in a seemingly legitimate use of the system, in an attempt either to break in or to break the system
- Cross site scripting (XSS): An attacker gets a malicious script to run in the victim’s browser in the context of a trusted site, gaining access to the victim’s session on that site
- Broken authentication and session management: An attacker gains partial or full access to a service by stealing a session ID
- Insecure direct object references: An attacker tampers with a reference (e.g. in a URL) to gain access to other users’ data
- Cross site request forgery (CSRF): An attacker gains control on the client side and uses automatic authentication (e.g. ‘remember password’) to perform illegitimate actions on behalf of the victim
- Security misconfiguration: For example, using known security vulnerabilities in the server (IIS, WAS, WL), such as missing patches, to hack into the infrastructure
- Insecure cryptographic storage: Getting hold of sensitive information that is stored with weak encryption, or left completely unencrypted
- Failure to restrict URL access: Bypassing authentication and authorisation by browsing directly to a URL
- Insufficient transport layer protection: Gaining access to the system by sniffing unencrypted or insufficiently encrypted messages
- Unvalidated redirects and forwards: Redirecting users to a site via the attacker’s fake site, or using a vulnerable unauthenticated page to gain access to another, authenticated page