The vast majority of application security teams are under-resourced, if they have any dedicated resources at all. Application security (AppSec) teams should scale with development teams, but this rarely happens. Given that disadvantage, how can you keep your applications safe and make application security effective?
The only way application security scales with limited resources is to shift responsibility back to the developers. Developers should ultimately own that responsibility anyway, even with infinite resources: it is their code, and they should ensure it is free of vulnerabilities the same way they would with any other bug. Even with no resources, shift responsibility to developers as soon as possible, even while you wait for that impossible-to-find application security engineer. The longer you wait, the harder it will be to get your head above water.
How do you shift responsibility back to the developers? There are four main ways:
- Develop & Enforce Secure Coding Standards
- Implement Secure Development Training
- Develop Security Requirements
- Deploy and Enforce Usage of Security Tools
And here are two additional ways (Bonus):
- Security Champions
- Defense in Depth
Develop & Enforce Secure Coding Standards
The first step is to create rules that developers need to follow. This helps developers take responsibility for security. Secure coding standards should make it clear, for each programming language in use, what must be done and what must never be done.
For example, in Node, do not use eval(), and never pass strings to setInterval() or setTimeout(): a string argument is treated as code to evaluate, which makes those calls eval() in disguise. There are several free resources for creating Secure Coding Standards, including the Open Web Application Security Project (OWASP) and the SEI CERT secure coding standards from Carnegie Mellon's Software Engineering Institute.
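To make the guidance concrete, here is a minimal sketch of the unsafe string form versus the safe function form. The `doWork`/`greet` names are illustrative, not from any real codebase:

```javascript
// In browsers, a string passed to setTimeout()/setInterval() is compiled
// and run like eval(); with attacker-influenced input, that is code
// injection. (Recent Node versions reject string callbacks with a TypeError.)
//
//   setTimeout("doWork('" + userInput + "')", 100); // DANGEROUS: string form
//
// Safe pattern: always pass a function and keep untrusted input as data.
function greet(name) {
  return "Hi " + name; // input stays data; it is never evaluated as code
}

setTimeout(() => console.log(greet("Alice")), 100); // function form: safe
```

With the function form, untrusted input can only ever influence the data the function receives, never the code that runs.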
The Secure Coding Standards should also include:
- the policy of what open source libraries to use
- the process to gain approval for adding libraries to the list
- a process for ensuring that those libraries are monitored and updated in a timely manner
Vulnerabilities in open source code become public knowledge when they are disclosed, and attackers begin exploiting them very soon after.
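One low-effort way to keep dependencies monitored is a small gate that fails the build when a dependency scan reports serious issues. The report shape below is an illustrative assumption, loosely modeled on `npm audit --json`-style output; adapt the field names to whatever scanner you use:

```javascript
// Decide whether a dependency-scan report should block a merge.
// The report format here is a hypothetical sketch, not any specific
// scanner's actual output schema.
function hasBlockingVulns(report, failLevels = ["high", "critical"]) {
  return Object.values(report.vulnerabilities || {}).some((v) =>
    failLevels.includes(v.severity)
  );
}

// Example report (hypothetical data):
const report = {
  vulnerabilities: {
    "old-string-lib": { severity: "low" },
    "legacy-parser":  { severity: "high" },
  },
};

if (hasBlockingVulns(report)) {
  console.error("Blocking merge: high/critical vulnerabilities found.");
  // In real CI you would fail the pipeline here: process.exitCode = 1;
}
```

Running a check like this on every merge turns the library policy from a document into an enforced control.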
Putting Secure Coding Standards in writing and distributing them to the team is a great first step. However, unless the team is held accountable, the standards are unlikely to ever be fully followed. Enforce them by running automated checks on all code before it is merged into master, adding rules to tools like SonarQube (a community edition is available) or other code scanners, and/or doing manual code reviews.
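One cheap way to automate enforcement is a linter configuration that fails the build on banned constructs. The sketch below uses ESLint, whose core rules `no-eval`, `no-implied-eval`, and `no-new-func` cover the eval-style dangers discussed above (`no-implied-eval` also flags string arguments to setTimeout/setInterval):

```javascript
// .eslintrc.js (sketch) — lint rules that back up the secure coding
// standard so violations fail the build instead of relying on memory.
const secureRules = {
  rules: {
    "no-eval": "error",         // reject eval() outright
    "no-implied-eval": "error", // reject string callbacks to timers
    "no-new-func": "error",     // new Function("...") is eval in disguise
  },
};

// Export for ESLint when loaded as a CommonJS config file.
if (typeof module !== "undefined") {
  module.exports = secureRules;
}
```

Wiring this into the pre-merge pipeline means the standard is checked on every change, with no reviewer effort required.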
Implement Secure Development Training
If you are going to shift responsibility to developers, you need to give them the resources to be successful. How do you expect a developer to protect against an XML External Entity (XXE) vulnerability if they don't even know what it is? Arm your developers with the tools they need to stop vulnerabilities at the earliest stage of the Software Development Lifecycle (SDLC). Make the training engaging so it is as effective as possible: developers generally do not want to be forced through training, but if it is interesting and leverages their problem-solving skills, it will stick.
Develop Security Requirements
If no time is dedicated to thinking about security, no one will think about it. So, as part of the planning process for any new feature, set aside time to discuss and develop its security requirements. Without thinking through the use cases, the abuse cases, and how to stop the latter, functionality will be built without security in mind, leaving large gaps that are difficult and expensive to fix later. When developing security requirements, be sure to consider confidentiality, integrity, and availability for all data and systems. Both the National Institute of Standards and Technology (NIST) and US-CERT publish guidelines for developing security requirements.
Deploy and Enforce Usage of Security Tools
Deploy tools that help find vulnerabilities. Run both static and dynamic analysis tools; the automated tools add minimal overhead. There are many open source options (such as SonarQube, OWASP ZAP, and w3af); however, some of the most effective tools are commercial and expensive. Having any tool in place is better than nothing, and some of the open source tools let you write custom rules to make them more effective. These tools should be run by every developer before merging code into master and pushing to production.
Developers should be required to triage the issues before merging code; this helps them take responsibility for their code. Running dynamic and static tools early and often keeps cost low, because the work stays with the developer who is most familiar with the code. To enforce usage, integrate the tools into the Continuous Integration/Continuous Deployment (CI/CD) pipeline; technical controls that enforce usage help ensure compliance.
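A CI gate can enforce the triage requirement itself: block the merge whenever a scanner finding has no recorded decision. This is a sketch with hypothetical field names and triage states, not any particular scanner's format:

```javascript
// Sketch of a CI triage gate: every scanner finding must carry a triage
// decision before code can merge. Field names and states are illustrative
// assumptions; map them to your actual scanner's output.
function untriagedFindings(findings) {
  const resolved = ["fixed", "accepted-risk", "false-positive"];
  return findings.filter((f) => !resolved.includes(f.triage));
}

// Hypothetical scan results:
const findings = [
  { id: "SQLI-1", triage: "fixed" },
  { id: "XSS-2" },                    // no triage decision recorded yet
];

const open = untriagedFindings(findings);
if (open.length > 0) {
  console.error(`Merge blocked: ${open.length} untriaged finding(s).`);
  // In real CI, fail the pipeline: process.exitCode = 1;
}
```

Because the gate runs on every merge, triage happens while the author still has full context on the code.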
Bonus: Security Champions
Some companies have had success making one or two developers on each team "Security Champions". This works best when it is voluntary rather than forced; often there are already developers who are excited about security. Being a Security Champion means taking on additional responsibility: these developers think about security, keep the Standards up to date and enforced, research new security requirements that may be needed, and evaluate security tools.
Security Champions are the developers who will think about security when others do not. Their success depends on the size of the team and the culture of the company, and it is not worth forcing the role on an organization where it will not work. Forcing it can backfire, with security functions and requirements falling through the cracks.
Smaller companies earlier in their lifecycles typically have more success with Security Champions because less culture change is required. Having a Security Champion from the start also makes it easier to scale as the team grows. Most organizations do have developers who would be interested, so it is worth exploring.
Bonus: Defense in Depth
Other practices around the Secure Development Lifecycle can reduce the risk of vulnerabilities in your applications without requiring many resources. Additional security controls include adding logging, setting up a Web Application Firewall (WAF) or Runtime Application Self-Protection (RASP), isolating environments (development from production, and critical data from non-critical data), and monitoring for unusual behavior. With limited resources, adding risk reduction without adding people is effective; WAFs and RASPs can be expensive, but they can be deployed without application security engineers.
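Logging is often the cheapest of these controls to start with. Here is a minimal sketch of a security-logging middleware in the Express-style `(req, res, next)` shape (Express itself is an assumption; any framework with that middleware convention works), recording request details so unusual behavior can be spotted later:

```javascript
// Minimal defense-in-depth logging sketch: record security-relevant
// request details as structured JSON for later anomaly review.
// The field choices here are illustrative, not a standard schema.
function securityLogger(log = console) {
  return (req, res, next) => {
    log.info(JSON.stringify({
      time: new Date().toISOString(),
      method: req.method,
      path: req.url,
      ip: req.socket && req.socket.remoteAddress,
    }));
    next(); // hand off to the next middleware/handler
  };
}

// Usage with Express (assumed): app.use(securityLogger());
```

Feeding these structured log lines into whatever log aggregation you already run gives you a baseline for spotting unusual behavior without hiring anyone.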
Shifting responsibility to developers is the only way to "win" with limited resources, and developers should ultimately be responsible anyway since it is their code. The best place to start is ensuring that you Develop & Enforce Secure Coding Standards, Implement Secure Development Training, Develop Security Requirements, and Deploy and Enforce Usage of Security Tools. Also consider Security Champions and adding Defense in Depth.
The earlier you can shift the responsibility to developers, the better. Everyone hates change, especially developers, so shift responsibility early so that fewer folks need to change in the future.
In addition, get buy-in from engineering. Until you have buy-in from the CTO, the battle has only just begun, sometimes even when the mandate comes from the CEO. In reality, putting these minimal technologies and processes in place speeds up development, because there are fewer vulnerabilities to fix later in the SDLC, where fixes are more expensive and time consuming. And if developers won't take responsibility for the security of their code, that signals a broader lack of ownership: expect more non-security bugs as well, and production code that looks more like R&D code.