As a continuation of Part 1 of Software Design Testing, I’ll share what I would bring up during the design phase for security and tooling.

Security

For security, I’d typically explore trust boundaries, data flows, data entry and exit points, and threats and vulnerabilities. You can refer to the OWASP guide when defining the security threats for your application. Another good reference is from Microsoft; although it’s no longer maintained, it contains timeless, valuable content and even includes a template for threat modeling. For simplicity, here’s a table to list your threats and to ensure you cover all aspects of STRIDE.

| Security threat type (STRIDE) | Security threat | Mitigations | Result | Recovery |
|---|---|---|---|---|
| Spoofing | Illegally access and use another user’s credentials, such as username and password. E.g. admin credentials are compromised | | | E.g. alerts will be sent out; audit tools will detect and log these requests |
| Tampering | Maliciously change/modify persistent data | | | |
| Repudiation | Perform illegal operations in a system that lacks the ability to trace the prohibited operations | | | |
| Information disclosure | Read a file that one was not granted access to, or read data in transit | | | |
| Denial-of-Service | Deny access to valid users | | | |
| Elevation of privilege | Gain privileged access to resources in order to obtain unauthorized access to information or to compromise a system | | | |

A good book to read is Threat Modeling: Designing for Security by Adam Shostack. The end goal for security should be that no unmitigated high-priority threats leak into production. Tools such as Checkmarx or Fortify (static code analysis), OWASP dependency scanning, AppScan (runtime analysis), and threat modeling should help you achieve this goal. This brings us to the next topic: tooling.
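To make that goal measurable, mitigations can be pinned down as automated tests. Here is a minimal sketch, assuming TestNG and REST Assured (both appear in the tooling table below) and a hypothetical pre-production endpoint, that verifies the spoofing mitigation from the table above:

```java
import static io.restassured.RestAssured.given;

import org.testng.annotations.Test;

// Minimal sketch: verify the spoofing mitigation by asserting that invalid
// credentials are rejected. The endpoint URL and credentials are hypothetical.
public class SpoofingMitigationTest {

    @Test
    public void invalidCredentialsAreRejected() {
        given()
            .auth().preemptive().basic("admin", "wrong-password") // deliberately bad credentials
        .when()
            .get("https://pre-prod.example.com/api/accounts")
        .then()
            .statusCode(401); // the service must not authenticate a spoofed user
    }
}
```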

Tooling

“Only fools use no tools”

It helps to explore tooling early so your team thinks about testing early and familiarizes itself with the means at its disposal to measure how it is doing as a team, even before the first line of code is checked in. Some questions you can ask:

  • What are the static and dynamic analysis tools we can use?
  • What will be used for unit testing and code coverage?
  • What do we use for component testing, performance, and resiliency testing?
  • Do we need to minimize the time to create new test suites?
  • Do we need to replicate/generate/enrich/version existing data for testing?
| Purpose | Tools to consider |
|---|---|
| Static analysis for code quality | E.g. CheckStyle, FindBugs, Checkmarx, mutation testing |
| Dynamic analysis for code quality | E.g. DriftDetector or Hystrix Network Auditor Agent |
| Code reviews | E.g. Git pull requests (PRs) |
| Unit testing | E.g. JMockit, TestNG |
| Component testing (including dependency stubbing and resiliency testing) | E.g. RestAssured, WireMock |
| Performance | E.g. Gatling, JMeter |
| Data generation/enrichment/replication/versioning | E.g. Delphix |
| Minimizing time to create new tests | E.g. RestAssured CLI |
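As an illustration of the component-testing row, here is a minimal sketch that stubs a dependency with WireMock and drives the service with REST Assured. The service under test, its ports, its endpoints, and its fallback behavior are assumptions for the example:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.okJson;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static io.restassured.RestAssured.given;

import com.github.tomakehurst.wiremock.WireMockServer;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

// Component test sketch: the service under test (assumed at localhost:8080,
// with its pricing dependency pointed at localhost:8089) is exercised with
// REST Assured while WireMock stubs the dependency.
public class ItemServiceComponentTest {

    private WireMockServer pricingStub;

    @BeforeClass
    public void startStub() {
        pricingStub = new WireMockServer(8089);
        pricingStub.start();
        // Happy path: the dependency answers normally.
        pricingStub.stubFor(get(urlEqualTo("/prices/sku-1"))
                .willReturn(okJson("{\"price\": 9.99}")));
        // Resiliency: a 5s delay should trip the service's timeout and fallback.
        pricingStub.stubFor(get(urlEqualTo("/prices/sku-slow"))
                .willReturn(okJson("{\"price\": 9.99}").withFixedDelay(5000)));
    }

    @AfterClass
    public void stopStub() {
        pricingStub.stop();
    }

    @Test
    public void returnsPriceFromDependency() {
        given().when().get("http://localhost:8080/items/sku-1")
               .then().statusCode(200);
    }

    @Test
    public void servesFallbackWhenDependencyIsSlow() {
        // Assumes the service degrades gracefully instead of propagating the timeout.
        given().when().get("http://localhost:8080/items/sku-slow")
               .then().statusCode(200);
    }
}
```

The same stub setup doubles as a resiliency test: by making the dependency slow or broken on demand, you exercise fallback paths that are hard to hit against a real downstream system.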

Two additional areas that are part of building tools and capabilities are testability and productivity enablement. While testability is owned by the scrum team, ownership of productivity enablement can be shared among scrum teams, DevOps, and a center of excellence.

Testability

The ability to unit test is a form of testability. Sometimes you have to build in capabilities, such as special HTTP headers or messages, to trigger application behaviors that are otherwise hard to simulate; these capabilities should be enabled only in pre-production (see the sketch after the questions below). To shed some light on how testable your application is, explore the following questions.

  • Is my code testable?
  • Can all tests be automated?
  • Are there any scenarios that are difficult to test?
  • How can we simulate other services we depend on?
  • How can we enable other teams to exercise our services reliably in pre-production?
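As an example of such a pre-production-only capability, here is a sketch of a servlet filter (assuming the Servlet 4.0 API) that recognizes a hypothetical X-Test-Scenario header and flags the request so the application can simulate a hard-to-reproduce failure; the APP_ENV variable and header name are assumptions, not a standard:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical pre-production-only test hook: an X-Test-Scenario header lets
// callers trigger behaviors (here, a simulated downstream outage) that are
// otherwise hard to reproduce.
public class TestScenarioFilter implements Filter {

    private final boolean preProd = "pre-prod".equals(System.getenv("APP_ENV"));

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (preProd) {
            String scenario = ((HttpServletRequest) req).getHeader("X-Test-Scenario");
            if ("downstream-outage".equals(scenario)) {
                // Downstream clients in this application check this attribute
                // and short-circuit, exercising their fallback paths.
                req.setAttribute("simulate.downstream.outage", Boolean.TRUE);
            }
        }
        chain.doFilter(req, res);
    }
}
```

Because the check is gated on an environment flag, the hook is inert in production while still giving other teams a reliable way to exercise failure paths in pre-production.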

Productivity enablement

With the transition of SCM/Ops into a DevOps role, DevOps now develops capabilities that enable all scrum teams to deliver releases fast. Scrum teams are empowered to own production deployment, feature toggles, monitoring, and alerts (a minimal feature-toggle sketch follows the table below). This may vary by company; however, you would still want the following questions answered to achieve low-risk, automated, and frequent releases.

| Capability | Is a plan in place to meet the SLA on failure? | Supported in pre-prod? | Supported in production? |
|---|---|---|---|
| Blue-green deployments | | | |
| System health monitors and alerts | | | |
| Logging monitors and alerts | | | |
| Performance and Hystrix dashboards and alerts | | | |
| Production failover clusters & availability zones | | | |
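To ground the feature-toggle capability mentioned above, here is a minimal sketch; real setups usually rely on a toggle library or a central configuration service, and the environment-variable scheme and flag name here are assumptions:

```java
// Minimal feature-toggle sketch owned by the scrum team.
public final class FeatureToggles {

    private FeatureToggles() {
    }

    // Reads e.g. FEATURE_NEW_CHECKOUT=true from the environment.
    public static boolean isEnabled(String flag) {
        return Boolean.parseBoolean(System.getenv("FEATURE_" + flag));
    }
}
```

A call site can branch on FeatureToggles.isEnabled("NEW_CHECKOUT") so unfinished features ship dark and are enabled per environment, which is what keeps releases frequent and low-risk.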

Conclusion

This concludes the Software Design Testing Part I & II blogs. I hope you find them useful. Please add any feedback or comments you may have.