Secure by Design
Practice 3

Perform secure design review and threat modeling

 

Our computer systems operate in a hostile threat landscape, with well-funded, technically capable adversaries continually probing for and exploiting vulnerabilities. Flaws in security design can become vulnerabilities that attackers exploit. Threat modeling, and security design reviews of the resulting threat models, identify potential threats so that designers can mitigate them early in the development process, producing systems that are Secure by Design. Addressing security during design is much cheaper and less risky than retrofitting it after the fact. To design secure systems, you must shift your focus from how products should work to how products might be abused.

Threat modeling lets you pause and consider the system from a security and privacy standpoint, answering questions like:

  • “What if this feature were abused by an attacker instead of being used as intended?”
  • “What happens to assets and users if attackers compromise the system?”
  • “What happens to the system if individual components I rely on become unavailable?”

Threat modeling is a structured approach to analyze the security design of the system, while “thinking like an attacker”. The threats that are identified can then be mitigated before the new product or feature ships.

[Figure: threat model diagram]

3.1 Identify use cases, scenarios, and assets - An essential part of threat modeling and reviewing threat models is understanding what business functions or “use cases” the system has. Scenarios describing the sequence of steps for typical interactions with the system illustrate its intended purposes and workflows. They also describe the different roles of users and external systems that connect to it. The first step in threat modeling is to document the business functions the system performs and how users or other systems interact with it. Supplement textual descriptions of use cases and scenarios with flowcharts and UML sequence diagrams as needed. Whether you use formal documentation such as use cases and scenarios, or take a more informal approach, ensure you provide context that answers these questions:

  • What are the business functions of the system?
  • What are the roles of the actors that interact with it?
  • What kind of data does the system process and store?
  • Are there special business or legal requirements that impact security?
  • How many users and how much data is the system expected to handle?
  • What are the real-world consequences if the system fails to provide confidentiality, integrity, or availability of the data and services it handles?
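
One lightweight way to capture the answers to these questions is as structured data that lives alongside the code and can be reviewed and diffed like any other artifact. The sketch below is one possible shape, using a hypothetical online-payments use case; the class name, fields, and values are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        """One business function of the system plus the context reviewers need."""
        name: str
        actors: list[str]        # roles of users and external systems involved
        data_handled: list[str]  # kinds of data processed and stored
        compliance: list[str]    # special business or legal requirements
        expected_scale: str      # rough user and data volume expectations
        failure_impact: str      # consequence of losing confidentiality, integrity, or availability

    # Hypothetical example for illustration only.
    checkout = UseCase(
        name="Submit payment for an order",
        actors=["Customer", "Payment gateway (external)", "Fraud-scoring service"],
        data_handled=["Credit card data", "Order history", "Email address"],
        compliance=["PCI DSS"],
        expected_scale="Roughly 50,000 transactions per day",
        failure_impact="Fraudulent charges, loss of customer trust, regulatory penalties",
    )
    print(checkout)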

An asset is something of value in a system that needs to be protected. Some assets are obvious, such as money in financial transactions or secrets like passwords and cryptographic keys. Others are more intangible, like privacy, reputation, or system availability. Once you properly identify the system’s assets, it is easier to identify threats against them. Document the list of system assets; a minimal inventory sketch follows the examples below.

Examples of tangible assets:

  • Personal photos and contacts stored on a smartphone
  • Compute resources in a cloud environment
  • Software supply chain integrity in a development environment
  • Medical imaging and diagnostic data
  • Financial accounting data
  • Biometric authentication data
  • Proprietary formulae and manufacturing processes
  • Military and government secrets
  • Machine learning models and training data
  • Credit card data

Examples of intangible assets:

  • Customer trust
  • User privacy
  • System availability
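
Keeping the asset list in a reviewable, structured form makes it easier to check later that every asset has at least one threat and mitigation considered. A minimal sketch, with hypothetical entries drawn from the examples above:

    from dataclasses import dataclass
    from enum import Enum

    class AssetKind(Enum):
        TANGIBLE = "tangible"
        INTANGIBLE = "intangible"

    @dataclass
    class Asset:
        name: str
        kind: AssetKind
        why_valuable: str  # what an attacker gains, or what the business loses

    # Illustrative entries only; a real inventory comes from the analysis in 3.1.
    assets = [
        Asset("Credit card data", AssetKind.TANGIBLE, "Directly monetizable; in PCI DSS scope"),
        Asset("ML models and training data", AssetKind.TANGIBLE, "Competitive advantage; poisoning risk"),
        Asset("Customer trust", AssetKind.INTANGIBLE, "Hard to regain after a public breach"),
        Asset("System availability", AssetKind.INTANGIBLE, "Outages halt business operations"),
    ]

    for asset in assets:
        print(f"{asset.kind.value:>10}  {asset.name}: {asset.why_valuable}")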

3.2 Create an architecture overview - In addition to documenting business functionality and assets, create diagrams and tables that depict the architecture of your application, including subsystems, trust boundaries, and data flows between actors and components. At minimum, a Data Flow Diagram (DFD) is recommended; supplementary diagrams such as UML sequence diagrams may also help illustrate complex flows. A DFD is a high-level way to visualize data flows between the major components of a system. It is a simplified view that omits most detail, showing just enough to understand the security properties and attack surface of the system. It is a recommended practice to label each arrow in your DFD with the following (a small machine-readable sketch follows this list):

  • Data types (business function)
  • Data transfer protocols
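
Some teams also keep a small machine-readable companion to the drawn DFD so that flows, their labels, and trust boundaries stay in sync with review notes. Dedicated threat-modeling tools (for example, the Microsoft Threat Modeling Tool or OWASP pytm) cover this ground as well; the sketch below just shows the idea, with hypothetical element names, protocols, and boundaries.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Element:
        name: str
        boundary: str  # trust boundary the element lives in

    @dataclass(frozen=True)
    class DataFlow:
        source: Element
        dest: Element
        data: str      # data type / business function carried by the flow
        protocol: str  # data transfer protocol

        def crosses_boundary(self) -> bool:
            return self.source.boundary != self.dest.boundary

    # Hypothetical elements and flows for illustration.
    browser = Element("Customer browser", boundary="Public Internet")
    api = Element("Orders API", boundary="Private network")
    db = Element("Orders database", boundary="Private network")

    flows = [
        DataFlow(browser, api, data="Order submission", protocol="HTTPS"),
        DataFlow(api, db, data="Order record", protocol="TLS + SQL"),
    ]

    for flow in flows:
        note = "  <-- crosses a trust boundary" if flow.crosses_boundary() else ""
        print(f"{flow.source.name} -> {flow.dest.name} [{flow.data} over {flow.protocol}]{note}")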

A trust boundary is a logical construct used to demarcate areas of the system with different levels of trust such as:

  • Public Internet vs. private network
  • Service APIs
  • Separate operating system processes
  • Virtual machines
  • User vs. kernel memory space
  • Containers

Trust boundaries are important to consider when threat modeling because calls that cross them often need to be authenticated and authorized. Data that crosses a trust boundary may need to be treated as untrusted and validated or blocked from flowing altogether. There may be business/regulatory rules related to trust boundaries. For example, sovereign clouds must ensure that data is stored only within their trust boundaries, and in some cases, HIPAA Protected Health Information (PHI) shouldn’t cross trust boundaries. Trust boundaries are often illustrated in DFDs with dashed red lines.
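
As a concrete illustration of treating boundary-crossing data as untrusted, the sketch below validates a field that arrives from the public Internet before it is used internally. The field name and format are hypothetical; the point is that anything that does not match the expected shape is rejected at the boundary.

    import re

    # Hypothetical order ID format, e.g. "AB-123456".
    ORDER_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{6}$")

    def parse_order_id(raw: str) -> str:
        """Validate an order ID that crossed the trust boundary from the client."""
        candidate = raw.strip()
        if not ORDER_ID_PATTERN.fullmatch(candidate):
            # Untrusted input that fails validation is rejected, not passed along.
            raise ValueError("invalid order id")
        return candidate

    print(parse_order_id("AB-123456"))                  # accepted
    try:
        parse_order_id("AB-123456; DROP TABLE orders")  # rejected
    except ValueError as exc:
        print("rejected:", exc)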

[Figure: Contoso cast diagram]

 

3.3 Identify the threats - Threat modeling is most effective when performed by a group of people familiar with the system architecture and business functions who are prepared to think like attackers. Here are some tips for scheduling an effective threat modeling session:

  • Prepare the list of use cases/scenarios and, if possible, a first draft of DFDs in advance.
  • Limit the scope of your threat modeling activity primarily to the system or features that you are developing or directly interface with.
  • Designate an official notetaker to capture and record threats, mitigations and action items during the meeting.
  • Reserve at least 2 hours for the meeting. The first hour will usually be spent on getting a common understanding of system architecture and what scenarios you are modeling so you can spend the second hour identifying threats and mitigations.
  • If you can’t cover it all in two hours, decompose the system into smaller chunks and threat model them separately.
  • Invite people of varied backgrounds, including:
    • Engineers who are developing and testing the system
    • Product owners who can weigh security risk against business goals
    • Security analysts/engineers
    • People who are proficient at software testing (violating system assumptions, testing boundary conditions, generating invalid input, etc.)

STRIDE is a common methodology for enumerating potential security threats that fall into these categories:

  • Spoofing – Making false identity claims
  • Tampering – Unauthorized data modification
  • Repudiation – Performing actions and then denying that you did them
  • Information Disclosure – Leaking sensitive data to unauthorized parties
  • Denial of Service – Crashing or overloading a system to impact its availability
  • Elevation of Privilege – Manipulating a system to gain unauthorized privileges

People and threat modeling tools apply this methodology by considering all the elements in a dataflow diagram and asking if threats in any of the STRIDE categories apply to them. STRIDE is useful for novice threat modelers who have not been exposed to all these threat categories and so might miss some important threats. However, STRIDE is not a substitute for thinking like an attacker. STRIDE may miss important design flaws that only thinking like an attacker will catch.
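
Whether done by a tool or by hand, the underlying mechanics are simple: for each element or flow in the DFD, ask whether each STRIDE category could apply and record a candidate threat for every plausible pairing. Below is a minimal sketch of that enumeration, with hypothetical element names; deciding which pairings are real threats still requires people thinking like attackers.

    from itertools import product

    STRIDE = [
        "Spoofing",
        "Tampering",
        "Repudiation",
        "Information Disclosure",
        "Denial of Service",
        "Elevation of Privilege",
    ]

    # Hypothetical DFD elements; in practice these come from your architecture overview.
    elements = ["Customer browser", "Orders API", "Orders database", "browser -> API dataflow"]

    # Generate candidate questions for the threat modeling session to evaluate.
    for element, category in product(elements, STRIDE):
        print(f"Could '{element}' be affected by {category}?")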

Thinking like an attacker is the most important and most difficult part of threat modeling. Once you and your team understand the system architecture, usage scenarios, and assets you need to protect, you must imagine what could go wrong if a motivated, capable attacker attempts to compromise the system. Thinking like an attacker is not as simple as applying a methodology like STRIDE to enumerate threats. You must also challenge the security assumptions in your design and consider what-if scenarios in which some or all of your security controls fail while attackers actively try to compromise your assets. Examples of security assumptions:

  • We assume our open-source dependencies don’t have malicious code.
  • We assume that cloud computing services are inherently trustworthy.
  • We assume that app users will not root their mobile devices.
  • We assume that all authenticated users have benign intent.

Validate that your security assumptions are correct and consider what happens if they are not. Determine if any assumptions are invalid based on the threat landscape and the value of the assets you need to protect. It helps to study historical incidents to gain insight into the attacker mindset.

Many threats and mitigations are highly technical, and you are unlikely to think of them all on your own. Educate yourself on threats and associated mitigation techniques that apply to the domain you are working in. For instance, every web developer should be aware of attacks like cross-site scripting (XSS), cross-site request forgery (CSRF), and command injection. Study resources like the CWE Top 25 Most Dangerous Software Weaknesses and the OWASP Top Ten to learn more.
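
For example, command injection typically arises when untrusted input is spliced into a shell command string; passing arguments as a list (and avoiding the shell entirely) removes that path. A minimal sketch, assuming a hypothetical feature that pings a user-supplied host:

    import subprocess

    def ping_unsafe(host: str) -> None:
        # VULNERABLE: the shell interprets the whole string, so input like
        # "example.com; rm -rf /" smuggles in a second command.
        subprocess.run(f"ping -c 1 {host}", shell=True, check=False)

    def ping_safer(host: str) -> None:
        # Safer: no shell is involved and host is a single argument, so it cannot
        # introduce new commands. Validating host against an allowlist is better still.
        subprocess.run(["ping", "-c", "1", host], check=False)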

Record the threats you identify in your engineering team's work tracking system and rate their security severity so they can be prioritized accordingly. Document each threat with enough detail that someone reading it later can understand it. Well-written threats clearly describe the following (a minimal record sketch follows this list):

  • The threat actor who exploits the vulnerability
  • Any preconditions required for exploitation
  • What the threat actor does
  • The consequences for affected assets and users
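
One way to keep these fields consistent across work items is a small record template. The sketch below is a hypothetical shape; the severity scale, field names, and example threat are assumptions used to illustrate the idea, not a mandated format.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        CRITICAL = 1
        IMPORTANT = 2
        MODERATE = 3
        LOW = 4

    @dataclass
    class Threat:
        actor: str          # the threat actor who exploits the vulnerability
        preconditions: str  # what must already be true for exploitation
        action: str         # what the threat actor does
        impact: str         # consequences for affected assets and users
        severity: Severity
        mitigation: str = ""  # planned defense or linked work item

    # Illustrative example only.
    replay = Threat(
        actor="Unauthenticated Internet user",
        preconditions="Orders API reachable from the public Internet",
        action="Replays a captured order-submission request",
        impact="Duplicate charges against the customer's stored card",
        severity=Severity.IMPORTANT,
        mitigation="Add a per-request nonce and a server-side idempotency check",
    )
    print(replay)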

3.4 Identify and track mitigations - Secure Design Philosophy: When identifying mitigations, keep in mind that security is not “all or nothing”. A partial mitigation that raises the cost for an attacker, slows them down to give defenders time to detect them, or limits the scope of damage is much better than no mitigation at all. Think in terms of layered defenses. Attackers don’t just exploit a single vulnerability and stop there; they chain multiple vulnerabilities together, pivoting from one target system to the next until they achieve their objective (or get caught). Each layer of defense increases the likelihood that attackers will be blocked or detected. Also assume that the security controls in other layers will be bypassed or disabled. This is the essence of the Assume Breach philosophy, which produces a resilient set of layered defenses rather than relying solely on external defenses that, if bypassed, result in a major breach.

Recommended Secure Design Practices (two of these are illustrated in the sketch after the list):

  • Design and Threat Model as a Team
  • Prefer Platform Security to Custom Code
  • Secure Configuration is the Default
  • Never Trust Data from the Client
  • Assume Breach
  • Enforce Least Privilege
  • Minimize Blast Radius
  • Minimize Attack Surface
  • Consider Abuse Cases
  • Monitor and Alert on Security Events
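
To make a couple of these concrete, “Secure Configuration is the Default” and “Enforce Least Privilege” often show up in code as configuration objects that start locked down and grant nothing until a narrow, explicit scope is requested. A hypothetical sketch; the option names and scopes are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class StorageClientConfig:
        # Secure by default: callers must explicitly (and rarely) opt out.
        require_tls: bool = True
        verify_certificates: bool = True
        allow_public_read: bool = False
        # Least privilege: no permissions until specific scopes are granted.
        granted_scopes: frozenset[str] = frozenset()

    def make_report_reader() -> StorageClientConfig:
        # Grants exactly one narrow scope instead of a broad administrative role.
        return StorageClientConfig(granted_scopes=frozenset({"reports:read"}))

    print(make_report_reader())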

Threat modeling is not complete until you create work items to track your threat findings and the related development and testing tasks to mitigate them. Consider tagging the work items and writing queries so they are easy to find. A threat model provides the seeds for a good security test plan. Be sure to test that your mitigations work as intended and use automated testing when possible.
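
Automated checks help pin mitigations in place so regressions are caught as the code evolves. Below is a hedged sketch, reusing the hypothetical parse_order_id validator from the trust-boundary example in section 3.2; the test asserts that the mitigation keeps rejecting injection-style input.

    import re
    import unittest

    ORDER_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{6}$")

    def parse_order_id(raw: str) -> str:
        """Hypothetical validator from the trust-boundary sketch in section 3.2."""
        candidate = raw.strip()
        if not ORDER_ID_PATTERN.fullmatch(candidate):
            raise ValueError("invalid order id")
        return candidate

    class OrderIdMitigationTest(unittest.TestCase):
        def test_accepts_well_formed_id(self):
            self.assertEqual(parse_order_id(" AB-123456 "), "AB-123456")

        def test_rejects_injection_style_input(self):
            # The mitigation must keep rejecting inputs that try to smuggle extra commands.
            with self.assertRaises(ValueError):
                parse_order_id("AB-123456; DROP TABLE orders")

    if __name__ == "__main__":
        unittest.main()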

3.5 Communicate threat models to key stakeholders - Share your threat models and their findings with key stakeholders, such as product owners, the engineers who build and operate the system, and security teams, so they understand the threats that were identified, the mitigations planned, and any residual risk.

3.6 Threat modeling resources