Adding Appsec to Agile: Security Stories, Evil User Stories and Abuse(r) Stories
Because Agile development teams work from a backlog of stories, one way to inject application security into software development is to write up application security risks and activities as stories, making them explicit and adding them to the backlog so that security work can be managed, estimated, prioritized and done like everything else the team has to do.
Security Stories
SAFECode has tried to do this by writing a set of common, non-functional Security Stories following the well-known template:
“As a [type of user] I want {something} so that {reason}”
These stories are not customer- or user-focused: not the kind that a Product Owner would understand or care about. Instead, they are meant for the development team (architects, developers and testers). Example:
As a(n) architect/developer, I want to ensure AND as QA, I want to verify that sensitive data is kept restricted to actors authorized to access it.
There are stories to prevent/check for the common security vulnerabilities in applications: XSS, path traversal, remote execution, CSRF, OS command injection, SQL injection and password brute forcing; checks for information exposure through error messages, proper use of encryption, authentication and session management, transport layer security, restricted uploads and URL redirection to untrusted sites; and basic code quality issues: NULL pointer checking, boundary checking, numeric conversion, initialization, thread/process synchronization, exception handling, and use of unsafe/restricted functions.
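To show how one of these stories cashes out in code: the SQL injection story comes down to never building queries out of raw user input. Here's a minimal sketch in Python, using the standard library's sqlite3 module (the users table and column names are made up for illustration):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # UNSAFE (shown for contrast): concatenating user input into the SQL
    # means input like "' OR '1'='1" changes the meaning of the query.
    #   conn.execute("SELECT id, name FROM users WHERE name = '" + username + "'")

    # SAFE: a parameterized query keeps the input as data, never as SQL.
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```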
SAFECode also includes a list of secure development practices (operational tasks) for the team: making sure that you’re using the latest compiler, patching the run-time and libraries, static analysis, vulnerability scanning, code reviews of high-risk code, and tracking and fixing security bugs; plus more advanced practices that require help from security experts, like fuzzing, threat modeling, pen tests and environmental hardening.
Altogether this is a good list of problems that need to be watched out for and things that should be done on most projects. But although SAFECode’s stories look like stories, they can’t be used as stories by the team.
These Security Stories are non-functional requirements (NFRs) and technical constraints that (like requirements for scalability, maintainability and supportability) need to be considered in the design of the system, and may need to be included as part of the definition of done and the conditions of acceptance for every user story that the team works on.
Security Stories can’t be pulled from the backlog, delivered, and then removed like other stories, because they are never “done”. The team has to keep worrying about them throughout the life of the project and of the system.
As Rohit Sethi points out, asking developers to juggle long lists of technical constraints like this is not practical:
If you start adding in other NFR constraints, such as accessibility, the list of constraints can quickly grow overwhelming to developers. Once the list grows unwieldy, our experience is that developers tend to ignore the list entirely. They instead rely on their own memories to apply NFR constraints. Since the number of NFRs continues to grow in increasingly specialized domains such as application security, the cognitive burden on developers’ memories is substantial.
OWASP Evil User Stories – Hacking the Backlog
Someone at OWASP has suggested an alternative, much smaller set of non-functional Evil User Stories that can be “hacked” into the backlog:
A way for a security guy to get security on the agenda of the development team is by “hacking the backlog”. The way to do this is by crafting Evil User Stories, a few general negative cases that the team needs to consider when they implement other stories.
Example #1. “As a hacker, I can send bad data in URLs, so I can access data and functions for which I’m not authorized.”
Example #2. “As a hacker, I can send bad data in the content of requests, so I can access data and functions for which I’m not authorized.”
Example #3. “As a hacker, I can send bad data in HTTP headers, so I can access data and functions for which I’m not authorized.”
Example #4. “As a hacker, I can read and even modify all data that is input and output by your application.”
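All four of these Evil User Stories point at the same defensive rule: treat everything that crosses a trust boundary as hostile until it has been validated on the server. A minimal sketch of whitelist validation in Python (the parameter names and patterns here are invented for illustration):

```python
import re

# Whitelist of expected request parameters and the shapes they may take.
# Anything not on the list, or not matching its pattern, is rejected.
EXPECTED_PARAMS = {
    "account_id": re.compile(r"\d{1,10}"),
    "page": re.compile(r"[a-z_]{1,30}"),
}

def validate_params(params: dict) -> dict:
    """Reject any parameter that isn't expected and well-formed,
    whether it arrived in the URL, the request body or a header."""
    clean = {}
    for name, value in params.items():
        pattern = EXPECTED_PARAMS.get(name)
        if pattern is None or not pattern.fullmatch(value):
            raise ValueError(f"rejected parameter: {name!r}")
        clean[name] = value
    return clean
```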
Thinking like a Bad Guy – Abuse Cases and Abuser Stories
Another way to beef up security in software development is to get the team to carefully look at the system they are building from the bad guy’s perspective.
In “Misuse and Abuse Cases: Getting Past the Positive”, Dr. Gary McGraw at Cigital talks about the importance of anticipating things going wrong, and thinking about behaviour that the system needs to prevent. Assume that the customer/user is not going to behave, or is actively out to attack the application. Question all of the assumptions in the design (the can’ts and won’ts), especially trust conditions – what if the bad guy can be anywhere along the path of an action (for example, using an attack proxy between the client and the server)?
Abuse Cases are created by security experts working with the team as part of a critical review – either of the design or of an existing application. The goal of a review like this is to understand how the system behaves under attack/failure conditions, and document any weaknesses or gaps that need to be addressed.
At Agile 2013 Judy Neher presented a hands-on workshop on how to write Abuser Stories, a lighter-weight, Agile practice which makes “thinking like a bad guy” part of the team’s job of defining and refining user requirements.
Take a story, and as part of elaborating the story and listing the scenarios, step back and look at it through a security lens. Don’t just think about what the user wants to do and can do – think about what they don’t want to do and can’t do. Get the same people who are working on the story to “put their black hats on” and think evil for a little while, brainstorming negative cases.
As {some kind of bad guy} I want to {do some bad thing}…
The {bad guy} doesn’t have to be a hacker. They could be an insider with a grudge or a selfish customer who is willing to take advantage of other users, or an admin user who needs to be protected from making expensive mistakes, or an external system that may not always function correctly.
Ask questions like: How do I know who the user is and that I can trust them? Who is allowed to do what, and where are the authorization checks applied? Look for holes in multi-step workflows – what happens if somebody bypasses a check or tries to skip a step or do something out of sequence? What happens if an action or a check times out or blocks or fails – what access should be allowed, what kind of information should be shown, and what kind shouldn’t be? Are we interacting with children? Are we dealing with money? With dangerous command-and-control/admin functions? With confidential or private data?
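One way to turn the “who is allowed to do what” questions into something checkable is to make every step in a workflow enforce its own authorization, so that skipping a step or calling a function out of sequence fails closed. A minimal sketch in Python, with a made-up permission table and function names:

```python
from functools import wraps

# Hypothetical permission table: role -> actions that role may perform.
PERMISSIONS = {
    "customer": {"view_own_account"},
    "admin": {"view_own_account", "view_any_account", "close_account"},
}

def requires(action):
    """Check authorization at every step, so a bypassed or out-of-sequence
    step is stopped here instead of being trusted."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if action not in PERMISSIONS.get(user["role"], set()):
                raise PermissionError(f"{user['name']} may not {action}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("close_account")
def close_account(user, account_id):
    ...  # the sensitive operation itself
```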
Look closer at the data. Where is it coming from? Can I trust it? Is the source authenticated? Where is it validated – do I have to check it myself? Where is it stored (does it have to be stored)? If it has to be stored, should it be encrypted or masked (including in log files)? Who should be able to see it? Who shouldn’t be able to see it? Who can change it, and do the changes need to be audited? Do we need to make sure the data hasn’t been tampered with (checksum, HMAC, digital signature)?
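For the tampering question, a keyed hash (HMAC) is often the simplest answer: compute a tag when the data is written or sent, and verify it when it’s read. A minimal sketch using Python’s standard hmac and hashlib modules (key management is hand-waved here):

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag to store or send alongside the data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(data, key), tag)
```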
Use this exercise to come up with refutation criteria (the user can do this, but can’t do that; they can see this, but they can’t see that), instead of, or as part of, the conditions of acceptance for the story. Prioritize these cases based on risk, and add the cases that you agree need to be taken care of as scenarios to the current story, or as new stories to the backlog if they are big enough.
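Refutation criteria translate naturally into negative tests: assert not just what the user can do, but what they can’t. A minimal sketch in pytest style, against a hypothetical app fixture and get_account call:

```python
import pytest

def test_user_can_see_own_account(app):
    # Acceptance: Alice can see her own account.
    assert app.get_account(user="alice", account_id="alice-1") is not None

def test_user_cannot_see_another_users_account(app):
    # Refutation: the same call against Bob's account must be refused.
    with pytest.raises(PermissionError):
        app.get_account(user="alice", account_id="bob-1")
```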
“Thinking like a bad guy” as you are working on a story seems more useful and practical than other story-based approaches.
It doesn’t take a lot of time, and it’s not expensive. You don’t need to write Abuser Stories for every user story, and the more Abuser Stories you write, the easier it will get – you’ll get better at it, and you’ll keep running into the same kinds of problems that can be solved with the same patterns.
You end up with something concrete, functional and actionable: work that has to be done and can be tested. Concrete, actionable cases like this are easier for the team to understand and appreciate – including the Product Owner, which is critical in Scrum, because the Product Owner decides what is important and what gets done. And because Abuser Stories are done in-phase, by the people who are already working on the stories (rather than as a separate activity that needs to be set up and scheduled), they are more likely to get done.
Simple, quick, informal threat modeling like this isn’t enough to make a system secure – the team won’t be able to find and plug all of the security holes in the system this way, even if the developers are well-trained in secure software development and take their work seriously. Abuser Stories are good for identifying business logic vulnerabilities, reviewing security features (authentication, access control, auditing, password management, licensing), improving error handling and basic validation, and keeping onside of privacy regulations.
Effective software security involves a lot more work than this: choosing a good framework and using it properly, watching out for changes to the system’s attack surface, carefully reviewing high-risk code for design and coding errors, writing good defensive code as much as possible, using static analysis to catch common coding mistakes, and regular security testing (pen testing and dynamic analysis). But getting developers and testers to think like a bad guy as they build a system should go a long way to improving the security and robustness of your app.