Secure by Design: Part Two

By Larry Skelly on June 9, 2017



Part two of a three-part blog on building secure solutions using Online’s Secure Solution Delivery Life Cycle (SSDLC).

Click here for Part 1

Click here for Part 3

 

As attacks move up the stack from the operating system to the applications themselves, a shift in solution delivery life cycle philosophy is emerging, one that recognizes that security activities must move earlier in the life cycle to build truly secure solutions.

As organizations are pressured by the rapid promotion cycles of DevOps, the focus often falls on secure coding, static code analysis tools, and earlier security testing. But these activities alone begin too late to be effective; to build inherently secure applications, security activities must span the entire life cycle.

In Part One of this blog, we looked at the reasons why we need to build secure applications from the start. Next, we will look at the fundamental design principles you need to apply to build solutions that are secure by design.

What is Design?

Design is any activity where you transform “what the system must do” into “how it does it.” This includes making decisions about components, their scope and behaviour, and the relationships between them. If you are partitioning functionality at any level, allocating features or data to logical or physical components (sites, servers, systems, subsystems, processes, executables, compilation units, procedures, classes, methods, etc.) then you are doing design.

How do we design secure solutions? What makes one design more secure than another? Let’s take a look.

Secure Design Principles

Your solution delivery life cycle needs to employ secure design techniques that apply a number of important security principles. The objective is to design solutions with the ability to anticipate, withstand, recover from, and adapt to adverse conditions such as a cyberattack; this is known as “cyber resiliency.” These principles are applied in a variety of ways and at multiple stages in our Secure Solution Delivery Life Cycle (SSDLC). Notice that we say “solution delivery” and not “software development.” This is an important distinction! Many solutions we implement contain little or no custom code, yet they may pose a security risk as great as, or greater than, those that do. All solutions are designed, and our SSDLC treats all delivery approaches equally.

Consider the following 15 secure design principles:

  1. Focus on Critical Assets
    As resources are not infinite, efforts should be focused on protecting the most critical assets of an organization and the most critical components of a solution. This might be a component that houses information with the highest information classification (risk), or a widely used common component, a breach of which would expose multiple solutions that, in aggregate, present a greater risk. Common components are highly desirable for achieving reuse, but they are also high-value targets for attackers and must therefore be secure. In fact, securing them is a good investment that increases the security of every solution that uses them.

  2. Design for Adaptability
    Unforeseen changes in the threat landscape will inevitably occur over the solution’s lifetime. The defences built into a solution design must not be static; they must be designed to be maintainable so they can evolve to address changes in solution trustworthiness (e.g. a deprecated cipher suite) and emerging threats. The objective is to minimize the programmatic impact of changes in the threat model by having components that can be disabled, reorganized, or reconfigured as required. Tactics for achieving this include modular design with loose coupling and well-defined interfaces, quick replacement of individual components (e.g. a cipher suite), the externalization of configuration data, and the externalization of the business rules a component applies.
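
The externalization tactic can be sketched in a few lines of Python. This is an illustrative fragment only, not part of the SSDLC itself; the configuration format and cipher suite names are assumptions for the example:

```python
import json

# Hypothetical externalized policy: the allowed cipher suites live in
# configuration data, not code, so a newly deprecated suite can be
# removed without rebuilding or redeploying the component logic.
CONFIG = json.loads("""
{
  "allowed_cipher_suites": ["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256"]
}
""")

def negotiate_cipher(client_offered):
    """Select the first client-offered suite that policy still allows."""
    for suite in client_offered:
        if suite in CONFIG["allowed_cipher_suites"]:
            return suite
    return None  # fail closed: no mutually acceptable suite

# A legacy suite offered by an old client is simply never selected.
print(negotiate_cipher(["TLS_RSA_WITH_RC4_128_SHA", "TLS_AES_128_GCM_SHA256"]))
```

Retiring a suite then becomes a configuration change rather than a code change, which keeps the defence maintainable as the threat model evolves.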

  3. Least Privilege
    Subjects (users, systems, computers, processes, etc.) should be granted only the privileges required to perform their functions, and no more. All other privileges should be explicitly denied. For instance, a Docker container should be configured to execute with only the individual privileges required by the function it contains, with the remainder explicitly denied. Placing a high-privilege process on a platform that hosts low-privilege processes may compromise the entire platform (don’t mix high- and low-privilege applications within a single zone of trust). Never combine data and control instructions in a single entity (e.g. a message). Finally, user privileges should be reviewed periodically to prevent “privilege creep.”
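
As a minimal sketch of the explicit-grant model (the subject and privilege names here are invented for illustration):

```python
# Each subject holds only the privileges explicitly granted to it.
# Anything absent from its grant set is denied by default.
GRANTS = {
    "report-service": {"read:reports"},                  # just what it needs
    "admin-console":  {"read:reports", "write:config"},
}

def is_permitted(subject, privilege):
    # An unknown subject or ungranted privilege is an explicit deny.
    return privilege in GRANTS.get(subject, set())

print(is_permitted("report-service", "read:reports"))  # True
print(is_permitted("report-service", "write:config"))  # False: not granted
```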

  4. Need to Know
    Subjects are granted access to only what they need to know (the data required) to complete their assigned job functions. This principle applies to any subject that accesses data – users, processes, etc.

  5. Fail-Safe Defaults
    The default behaviour should be to deny a requested action unless the user is explicitly authorized. By extension, a component that fails should fail into a secure state that denies all access. A failure should never compromise security, nor should it expose sensitive information in its diagnostics.
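
A default-deny check can be as simple as the following sketch (the access-control list and names are hypothetical):

```python
# Hypothetical access-control list: only explicitly allowed pairs pass.
ACL = {("alice", "invoice-42"): "allow"}

def authorize(user, resource):
    """Default deny: access is granted only on an explicit 'allow' entry."""
    try:
        return ACL.get((user, resource)) == "allow"
    except Exception:
        # If the control itself fails, it fails secure (deny) and leaks
        # no diagnostic detail to the caller.
        return False

print(authorize("alice", "invoice-42"))    # True: explicitly allowed
print(authorize("mallory", "invoice-42"))  # False: no entry, so denied
```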

  6. Never Assume Trust
    Earn or give, but never assume trust. Assume all clients are untrusted and all data compromised until proven otherwise. Assume that any component running on any system whose integrity cannot be attested is untrustworthy. Assume all clients will be compromised and client data exposed. Design the solution to minimize the impact/risk of the compromise of each individual component.
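
One small illustration of the principle (the catalogue and field names are assumptions for the example): a server treats every client-supplied field as untrusted and recomputes sensitive values itself.

```python
# Hypothetical server-side catalogue; the client's copy is untrusted.
CATALOG = {"sku-100": 49.99}

def checkout(client_request):
    """Treat every client-supplied field as untrusted until validated."""
    sku = client_request.get("sku")
    if sku not in CATALOG:
        raise ValueError("unknown item")
    # Recompute the price server-side; silently discard any price the
    # (possibly compromised) client sent along.
    return {"sku": sku, "price": CATALOG[sku]}

# A tampered client-side price has no effect on the result.
print(checkout({"sku": "sku-100", "price": 0.01}))
```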

  7. Economy of Mechanism (a.k.a. KISS)
    All things being equal, complex software will be less secure than simpler software. Keep the design only as complex as the stated requirements demand, and no more. The resulting code will be smaller as a consequence, and it too should be kept as simple as possible.

  8. Complete Mediation
    Every access to every protected action or object by a subject should be validated, and validated as close to the object as possible. Consider the following example of authentication bypass: a system has an elaborate login function (a feature), and after authenticating, a user views a folder of documents. A document is retrieved via a URL with an obfuscated document identifier, without revalidating that the user is authenticated and authorized to access that particular document (this is security by obscurity; never rely on secrecy to enforce security). The URL can be reverse-engineered and document identifiers can be generated until a valid one is found. Complete mediation requires the action of viewing the document to revalidate that the user is authenticated and authorized to view that particular document, as close to the document itself as possible, so that an attacker cannot hijack the processing stream post-validation.
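
The document-retrieval example above can be sketched as follows. The session, ACL, and document stores are hypothetical stand-ins for whatever the real solution uses:

```python
# Hypothetical session, ACL, and document stores for the example above.
SESSIONS = {"token-abc": "alice"}
DOCUMENT_ACL = {"doc-7f3a": {"alice"}}
DOCUMENTS = {"doc-7f3a": "Q3 financials"}

def fetch_document(session_token, doc_id):
    """Revalidate authentication and authorization at the point of access,
    not only at login; an obfuscated identifier is not a control."""
    user = SESSIONS.get(session_token)
    if user is None:
        raise PermissionError("not authenticated")
    if user not in DOCUMENT_ACL.get(doc_id, set()):
        raise PermissionError("not authorized for this document")
    return DOCUMENTS[doc_id]

print(fetch_document("token-abc", "doc-7f3a"))  # authorized access succeeds
```

Because the check sits in the retrieval function itself, guessing a valid document identifier no longer bypasses authorization.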

  9. Open Design
    Assume that secrets are not safe. Open Design suggests that designs should not be secret, or rely on secrecy or obscurity. Attackers’ toolkits include decompilers, disassemblers, and similar tools. Open designs invite examination and testing by a wider audience, and therefore greater security. For example, modern cryptographic systems open their algorithms up to public scrutiny to improve their security.

  10. Separation of Privilege
    Require multiple conditions to be satisfied to permit access to an operation. Thus a compromise of one condition does not expose the operation. Examples of this are multi-factor authentication, split knowledge, and M of N control for key escrow. A user-level analogy is separation of duties, where sensitive functions are separated into two or more tasks completed by two or more users.
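
The M of N control mentioned above can be sketched as a simple quorum check (the custodian names and thresholds are invented for the example):

```python
# Sketch of M-of-N control: a sensitive operation (e.g. key recovery)
# requires approval from at least M of N designated custodians, so no
# single compromised approver can authorize it alone.
CUSTODIANS = {"alice", "bob", "carol", "dave", "erin"}  # N = 5
REQUIRED_APPROVALS = 3                                   # M = 3

def may_recover_key(approvals):
    valid = set(approvals) & CUSTODIANS  # ignore unknown approvers
    return len(valid) >= REQUIRED_APPROVALS

print(may_recover_key({"alice", "bob"}))           # False: only 2 of 3
print(may_recover_key({"alice", "bob", "carol"}))  # True: quorum reached
```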

  11. Diversity
    Diversity (typically redundancy) is a well-understood technique for increasing the resilience of a solution to achieve Availability goals. Build resilient architectures and security mechanisms that will survive a single failure without compromising security (and “fail safe” as described above). Implement Disaster Recovery and Business Continuity solutions to prevent catastrophic business disruptions. For cyber resilience, diversity can reduce risk, as the compromise of a single component (e.g. one server in a web server farm) does not compromise all services; alternative components continue to provide the same services to other clients or users.

  12. Minimize Common Mechanisms
    Minimize use of shared resources. For example, do not mix internal users and external customers in a single LDAP directory. Minimize exposure through shared, common data (“common coupling” was always an indicator of bad code design, even in the early days of Structured Design).

  13. Psychological Acceptability
    The security that makes a system too difficult to use will be bypassed (e.g. passwords that are too complex to remember will be written down). Perhaps even worse, it gives a false sense of security.

  14. Defense in Depth
    Implement multiple levels of access controls to provide layered security: a layered defence places multiple safeguards in series, so an attacker must breach every one of them. Safeguards placed only in parallel will not survive a single breach.
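
The safeguards-in-series idea can be sketched as a chain of checks that must all pass (the three layers and field names here are illustrative assumptions):

```python
# Three illustrative safeguards applied in series: a request must pass
# the network filter, then authentication, then authorization.
def network_allowed(request):
    return request.get("source_ip", "").startswith("10.")  # internal only

def authenticated(request):
    return request.get("session") == "valid"

def authorized(request):
    return request.get("role") == "admin"

LAYERS = [network_allowed, authenticated, authorized]

def admit(request):
    """Layered defence: every safeguard must pass; one failure denies."""
    return all(layer(request) for layer in LAYERS)

print(admit({"source_ip": "10.0.0.5", "session": "valid", "role": "admin"}))  # True
print(admit({"source_ip": "10.0.0.5", "session": "valid", "role": "guest"}))  # False
```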

  15. Consider Cost and Work Factor
    Compare the cost/time required to circumvent a security mechanism, known as the “work factor,” with the resources of a potential attacker. A design that reduces the duration for which an Attack Surface is accessible increases the Work Factor and makes an attack more difficult to execute. When considering protection mechanisms, compare the cost of implementing a security mechanism to the expected loss if it is compromised.

Now that we have covered some of the principles of secure design, in Part Three of this blog we will look at how we can apply these principles, using secure design techniques, across the complete solution delivery life cycle.

About Larry Skelly

Larry is a TOGAF and ITIL-certified Enterprise Architect in our Risk, Security and Privacy practice, with more than ten years of experience writing and teaching design methodologies and thirty years of experience with architecture and hands-on development. Larry understands the need to infuse security into everything that we do. He was also a primary contributor to Online’s Security Integration Framework (SIF) and Secure Solution Delivery Life Cycle (SSDLC), an approach for implementing secure IT solutions. To continue the conversation, feel free to leave a comment below. To learn more about Online's Risk, Security and Privacy practice, click here.

