A lot of teams think threat modeling is a document people write at the start of a project. In my experience, that’s how you end up with a “nice list” of threats that doesn’t change anything. The fix is to move from a threat list to actual attack paths—the real routes an attacker follows to reach a goal. When you connect STRIDE (types of threats) to attack trees (how attacks unfold), your model starts to predict what breaks in the real world.
A "threat modeling deep dive" sounds heavy, so here's the direct answer: build your system diagram, use STRIDE to spot weaknesses in each part, then use attack trees to chain those weaknesses into step-by-step attacker paths. This turns "Tampering is bad" into "An attacker can change X by abusing Y, then use Z to get admin access."
Below is a practical workflow you can use as of 2026, plus examples you can copy. I’ll also point out common mistakes I’ve seen during app reviews and incident postmortems.
What “attack paths” mean (and what most teams get wrong)
Attack paths are the ordered steps an attacker uses to go from an initial foothold to a specific outcome like data theft, account takeover, or code execution. They’re not just “threats.” They’re the chain of conditions and actions needed to succeed.
What most teams get wrong is focusing on categories instead of sequences. STRIDE is great for spotting risk types, but it won’t automatically tell you the exact chain that an attacker uses. Attack trees fix that by showing “AND/OR” relationships—what must all happen vs what choices the attacker has.
Another common miss: teams model only the app code. Real attackers also target identity, logging, backups, admin workflows, CI/CD, and “boring” glue services like message queues and API gateways.
Quick definitions: STRIDE vs attack trees
STRIDE is a threat classification method. It stands for:
- Spoofing identity
- Tampering with data or actions
- Repudiation (someone denies actions)
- Information disclosure
- Denial of service
- Elevation of privilege
STRIDE describes types of problems. Attack trees describe how an attacker accomplishes a goal.
Attack trees are a structured way to break a goal into sub-goals. Each node is an attacker objective. OR means “any one of these works.” AND means “all of these must be true at the same time.”
In plain terms: STRIDE helps you find ingredients. Attack trees help you cook the recipe.
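The AND/OR structure of an attack tree is small enough to capture in code. Here's a minimal sketch (the node names and `possible` flags are invented for illustration) showing how an AND node collapses when any one leaf is infeasible:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One attacker objective. Leaves carry a 'possible' flag you set
    from evidence; inner nodes combine children with AND or OR."""
    name: str
    op: str = "LEAF"          # "AND", "OR", or "LEAF"
    possible: bool = False    # only meaningful for leaves
    children: list = field(default_factory=list)

def feasible(node: Node) -> bool:
    # OR: any child path works. AND: every child must hold at once.
    if node.op == "LEAF":
        return node.possible
    results = [feasible(c) for c in node.children]
    return all(results) if node.op == "AND" else any(results)

# Hypothetical example: this goal needs BOTH a reachable endpoint
# AND a trusted role field; either leaf failing kills the path.
goal = Node("Become admin", "AND", children=[
    Node("Reach create-user endpoint", possible=True),
    Node("Role taken from request body", possible=False),
])
print(feasible(goal))  # False: the AND collapses if any leaf fails
```

The payoff is that fixing one leaf provably breaks every path that ANDs through it, which is exactly the prioritization argument the rest of this post builds on.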
Step-by-step workflow: STRIDE-first, then attack trees

If you want a repeatable process, use this sequence every time. It keeps teams aligned and it prevents the “we listed threats, now what?” problem.
1) Build the system boundaries and trust zones
Start with what’s inside your app and what’s outside it. I like to draw “trust zones” (for example: public internet, user device, internal network, partner system, admin environment). Trust zones are just groups of components where you assume the same level of safety.
For a typical web app, your boundary might include: browser, API gateway, web service, database, identity provider (IdP), background jobs, and third-party services. If you’re using a tool like AWS, GCP, or Azure, be explicit about which services run in which environment.
Practical tip: Write down your key assets on day one. Assets are what the attacker wants (tokens, PII, admin access, billing changes, source code). If you don’t list assets, your attack trees turn into vague “attack goal” nodes.
2) Map data flows and actions (not just components)
Data flow tells you where information moves. Action flow tells you where permissions and state changes happen. Both matter.
Example: a “change email” feature has at least two flows. First, it reads the account and current email. Second, it writes the new email and sends a verification message. If verification is weak, spoofing and elevation paths appear.
If you have multiple entry points (API, web app, mobile app, admin portal), treat each as its own flow. In 2026, attackers often pick the easiest endpoint, not the one you tested most.
3) Apply STRIDE to each part of the flow
Now you label risk types for each flow step. STRIDE works best when you apply it to the action, not the whole system.
Here’s a simple worksheet you can use. Pick one step, then answer the six STRIDE questions in order:
- S: Can someone pretend to be another user or service?
- T: Can someone change data or actions in a way they shouldn’t?
- R: Can someone do something and later deny it?
- I: Can someone learn secrets or private data?
- D: Can someone make the system slow or unavailable?
- E: Can someone gain extra rights?
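To keep the worksheet honest, record it as data so every step answers all six questions and skipped categories are visible. A small sketch (the flow step and findings are hypothetical):

```python
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

def stride_worksheet(step: str, findings: dict) -> dict:
    """Return an answer for every STRIDE category on one flow step;
    categories the team skipped become 'not assessed'."""
    return {cat: findings.get(cat, "not assessed") for cat in STRIDE}

# Hypothetical findings for the "change email" write step
row = stride_worksheet("change email: write new address", {
    "Spoofing": "no: session bound to MFA",
    "Tampering": "yes: email taken from request body unchecked",
})
unassessed = [cat for cat, v in row.items() if v == "not assessed"]
print(unassessed)  # the four categories the team skipped
```

This makes the "teams stop after I and E" failure mode show up as concrete `not assessed` entries instead of silent gaps.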
What I’ve learned the hard way: Teams often stop after “I” and “E” because they sound dramatic. But “R” and “T” are where real attacker wins hide—especially when logs are incomplete or audit events are easy to bypass.
4) Convert STRIDE findings into attack tree goals
This is the core move. Take a STRIDE issue and ask: what attacker goal does this enable?
Then build an attack tree starting from the goal you care about most. Good goal examples:
- “Steal customer PII from the database.”
- “Become admin user.”
- “Change payment method without detection.”
- “Stop the service during a high-traffic event.”
Each STRIDE item becomes either a leaf node (a specific step) or an intermediate node (a sub-goal). You’ll use AND/OR to show requirements.
5) Add prerequisites, constraints, and detection gaps
Attack trees don’t end at “what’s possible.” You should add what makes it hard or easy. Example constraints include:
- Need an auth token with a certain role
- Need a race window (timing requirement)
- Need access to an admin-only endpoint
- Need the victim to click a link
- Need a misconfigured setting in production
This turns your tree into something engineering can actually act on. It also helps you pick controls that reduce risk, not just controls that sound good.
Example 1: “Become admin” using STRIDE + attack trees
This example shows how STRIDE categories turn into a real sequence. I've used a close variant of this model in reviews of internal tools.
System slice
Imagine a web admin tool. Admins log in with an IdP (like Okta or Azure AD). After login, the app checks a JWT claim “role=admin.” There’s also a “create user” endpoint for admins.
Apply STRIDE to the critical steps
- S (Spoofing): Can an attacker forge or swap JWTs? Are tokens validated correctly?
- T (Tampering): Can an attacker change role values in the app without proper checks?
- R (Repudiation): Are admin actions fully audited with user id + request id?
- I (Information disclosure): Can an attacker see admin-only API responses?
- D (DoS): Can an attacker flood auth endpoints to force fallback behavior?
- E (Elevation): Can an attacker reach the “create user” endpoint and gain admin rights?
Most teams would stop at “E” and “S.” The deeper work is to see which “ingredients” are missing.
Build the attack tree
Goal node: “Attacker becomes admin.”
Top-level split (OR):
- Path A: Forge a token / bypass role check
- Path B: Abuse user creation and assign admin role
- Path C: Exploit an authorization bug in admin endpoints
Let’s pick Path B as an illustration.
Path B node: “Attacker creates a user with admin role.”
Path B splits (AND):
- Step 1: Attacker can call the “create user” endpoint
- Step 2: Endpoint allows role assignment from request body
- Step 3: Backend authorization checks are missing or incorrect
Now you translate STRIDE leaf risks into specific leaves:
- S leaf: Attacker spoofs a session cookie or obtains a token from another flow
- T leaf: Role field is not server-side validated and trusts input
- R leaf: Audit logs don't record the original requestor for role changes
- E leaf: Authorization middleware only checks “authenticated” not “admin” for that route
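The E leaf is easy to reproduce in miniature. A framework-free sketch (route names and roles are hypothetical): a guard that only checks "authenticated" lets any logged-in user hit the admin route, while the same guard bound to a role breaks the path.

```python
def require(role=None):
    """Hypothetical per-route guard: reject anonymous callers, and if
    a role is named, reject callers who lack it."""
    def guard(handler):
        def wrapped(user):
            if user is None:
                return 401                       # not authenticated
            if role is not None and role not in user.get("roles", ()):
                return 403                       # authenticated, wrong role
            return handler(user)
        return wrapped
    return guard

# Buggy route: checks only "authenticated" — the E leaf in the tree.
@require()
def create_user_buggy(user):
    return 200

# Fixed route: same guard, bound to the admin role.
@require(role="admin")
def create_user_fixed(user):
    return 200

attacker = {"roles": ["member"]}
print(create_user_buggy(attacker))  # 200: any authenticated user wins
print(create_user_fixed(attacker))  # 403: role enforced server-side
```

Note the fix is per route: applying the guard globally but with `role=None` everywhere reproduces the original bug, which is why the tree leaf names the specific route.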
What I do differently: I always add one detection-focused branch as a sibling node: "Attacker's activity is detected too late." That sounds like a quality issue, but it changes how you prioritize controls: if detection is slow, even a weaker exploit becomes a real admin path.
Example 2: Data theft through an API + misconfigured object access
This one maps cleanly to STRIDE because information disclosure is obvious—but the real attack path usually comes from tampering or elevation sneaking in.
System slice
Users upload documents. Each document is stored in object storage (like S3, GCS, or Azure Blob). The app lists documents for a user by querying a table, then fetches objects.
Suppose the storage bucket policy is "private," but the app generates signed URLs. Those signed URLs are time-limited and scoped to a specific object.
STRIDE findings
- I: Can an attacker access other users’ documents via signed URLs?
- T: Can they tamper with object keys or bucket name in requests?
- E: Can they gain access by forcing the app to generate a URL for objects they don’t own?
- R: Can they later deny the access because audit logs omit the document id?
Attack tree for “read another user’s document”
Goal node: “Attacker reads document content of victim.”
Top split (OR):
- Path A: Steal a valid signed URL for victim’s object
- Path B: Make the app generate a signed URL for victim’s object
Path B is where STRIDE-to-tree works best.
Path B node: “App issues signed URL for an object not owned by attacker.”
Path B split (AND):
- Attacker can submit an object key (or document id) to the URL generator endpoint
- Ownership check is missing or bypassable
- Signed URL doesn’t bind to user identity in a way you can verify later
Concrete control ideas: Server-side enforce ownership with a database check before generating the signed URL. Then log the tuple: {requesting user id, object key, document id}. If your logs don’t include all three, you can’t prove the access pattern after an incident.
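Here's what that control looks like in miniature. This is a sketch, not production code: the in-memory ownership table, the HMAC "signer," and the audit list are stand-ins for your database, your cloud provider's signing API, and your log pipeline.

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"       # stand-in for your real signer key
OWNERS = {"doc-1": "alice"}        # stand-in for the ownership table
AUDIT = []                         # stand-in for your log pipeline

def issue_signed_url(user_id: str, object_key: str, doc_id: str):
    """Enforce ownership server-side BEFORE signing, then log the full
    tuple so the access can be reconstructed after an incident."""
    if OWNERS.get(doc_id) != user_id:
        AUDIT.append({"user": user_id, "key": object_key,
                      "doc": doc_id, "outcome": "denied"})
        return None
    expires = int(time.time()) + 300   # 5-minute validity window
    sig = hmac.new(SECRET, f"{object_key}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    AUDIT.append({"user": user_id, "key": object_key,
                  "doc": doc_id, "outcome": "issued"})
    return f"https://storage.example/{object_key}?exp={expires}&sig={sig}"

print(issue_signed_url("mallory", "obj-abc", "doc-1"))  # None: not owner
print(issue_signed_url("alice", "obj-abc", "doc-1") is not None)  # True
```

The ordering matters: the ownership check happens before any signature exists, so Path B's AND chain breaks at its second step, and the denied attempt still produces the three-field audit tuple.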
Going deeper: turn STRIDE into "attack ingredients"
Here’s the part most tutorials skip. STRIDE is useful, but only if you map each category to the kind of evidence you’ll need to validate the risk.
I use an “ingredient table” mindset. For each STRIDE category, I ask what technical ingredient is usually missing.
Ingredient table (quick reference)
| STRIDE | What’s usually wrong | What to check (hands-on) | Example leaf nodes |
|---|---|---|---|
| S: Spoofing | Auth tokens not validated, or identity checks happen too late | JWT validation (issuer, audience, signature), session cookie flags, auth middleware per route | Use token with wrong audience; bypass SSO callback checks |
| T: Tampering | User input trusted for security fields | Server-side role checks, input validation for object keys, rate limits on state-changing endpoints | Send role=admin; swap object key in request |
| R: Repudiation | Audit logs incomplete or can be turned off | Audit event contains actor id, target id, and request correlation id | No audit record for role change; logs don’t persist |
| I: Information disclosure | Access control missing in data paths, not just UI | Check object-level authorization, error message leaks, debug endpoints | Access other tenant’s export; verbose errors reveal ids |
| D: Denial of service | No guardrails on expensive operations | Rate limiting, job queue backpressure, cache controls | Trigger heavy report generation repeatedly |
| E: Elevation | Authorization gaps between routes, services, or background jobs | Per-action permission checks, least privilege on service accounts | Admin-only endpoint callable by any authenticated user |
This table helps you write better leaves in your attack trees. Leaves are where engineers can confirm “yes/no” quickly.
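As one concrete example of the S row, here's what "validate issuer, audience, and signature" means at the byte level. This is a stdlib-only sketch for HS256 tokens with invented names and keys; in a real service use a maintained JWT library with proper key management and expiry checks.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def verify_hs256(token: str, key: bytes, issuer: str, audience: str):
    """Reject a token unless signature, iss, AND aud all check out.
    The S-row failure mode is skipping any one of these three."""
    head, payload, sig = token.encode().split(b".")
    expected = b64url(hmac.new(key, head + b"." + payload,
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    claims = json.loads(b64url_decode(payload))
    if claims.get("iss") != issuer or claims.get("aud") != audience:
        return None  # valid signature, but wrong issuer or audience
    return claims

# Build a demo token signed with the RIGHT key but the WRONG audience.
key = b"demo-key"
head = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
body = b64url(json.dumps({"iss": "https://idp.example",
                          "aud": "other-app", "sub": "alice"}).encode())
sig = b64url(hmac.new(key, head + b"." + body, hashlib.sha256).digest())
token = b".".join([head, body, sig]).decode()
print(verify_hs256(token, key, "https://idp.example", "admin-tool"))  # None
```

The wrong-audience case is the leaf from the table ("use token with wrong audience"): the signature is genuine, so a check that stops at signature verification would accept it.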
Where attack trees shine: “AND” requirements and hidden prerequisites
The biggest win of attack trees is forcing yourself to write down prerequisites. AND nodes show the “it only works if…” part.
Example: “Remote code execution” doesn’t happen with one bug. The AND chain might be:
- Inject controlled data into a template engine (tampering)
- Template engine is configured with dangerous options (misconfiguration)
- Execution is reachable in a production route (exposure)
- Output is not sanitized and ends in a file write or shell call
If any one condition fails, the full path collapses. That gives you a clear place to apply controls and measure improvement.
Also, attack trees help you avoid a big mistake: treating separate vulnerabilities as a guaranteed exploit chain. Chaining often fails in practice because a prerequisite is missing, not because the individual vulnerabilities aren't serious.
People Also Ask: STRIDE and attack trees
How do you create attack trees from STRIDE threats?
You start with STRIDE findings as candidate building blocks, then pick one concrete attacker goal and build outward. For each STRIDE category you found, ask “what action would an attacker take to use this weakness?” Those actions become leaf nodes. Then connect leaves with AND/OR based on requirements you can justify from system behavior.
A practical tip: limit the scope of one tree. Don’t build one tree for “break the whole company.” Build trees for one asset and one goal at a time, like “steal invoice PDFs” or “gain admin access.”
Is STRIDE enough for threat modeling?
No. STRIDE is a strong checklist for threat types, but it doesn’t show sequences, prerequisites, or attacker choices. It also won’t automatically tell you what to prioritize when multiple threats are present. In my work, STRIDE alone usually results in a “coverage report,” not a risk-driving model.
You need at least one method to show chaining. Attack trees are one of the most practical options because they’re simple to reason about and easy to review with engineers.
What tools can help with threat modeling and attack trees?
There are several approaches teams use in 2026. Some people use spreadsheets and draw.io diagrams for the workflow. Others use threat modeling tools that support attack patterns and structured outputs.
Common practical tool choices include:
- Microsoft Threat Modeling Tool: older, but still useful for structured thinking about data flows.
- OWASP resources and checklists: handy for validating authorization and input handling.
- Graph/diagram tools: use for attack tree visuals and review sessions.
My bias: use whatever tool your team will actually keep updated. The value comes from the attack path reasoning, not the diagram format.
How do you validate an attack tree without “hacking” your production systems?
Validate using safe, non-destructive tests. For example, test whether authorization is enforced by calling endpoints in a staging environment with different roles. For object storage, verify object-level permissions using a test account and a fake object key. For audit issues, confirm logs include the exact fields you rely on.
If you can’t test safely, validate by code review and config review. Show evidence in the model: “authorization middleware is applied here,” or “bucket policy denies public access.” That evidence is what makes the model trustworthy.
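One way to make that validation repeatable is an expectation matrix: write down what status each role should get on each endpoint, then check it on every run. A minimal sketch where `call_staging` is a local stand-in you'd swap for a real HTTP client pointed at staging (routes and roles are hypothetical):

```python
# Expected status per (role, method, route) — the model's authz claims,
# written down so drift is caught on every run.
EXPECTATIONS = [
    ("anonymous", "POST", "/admin/users", 401),
    ("member",    "POST", "/admin/users", 403),
    ("admin",     "POST", "/admin/users", 200),
]

def call_staging(role, method, route):
    """Stand-in for an HTTP call to staging with a token for `role`.
    Replace the body with your HTTP client for a real run."""
    policy = {"/admin/users": "admin"}   # simulated route policy
    if role == "anonymous":
        return 401
    return 200 if role == policy.get(route) else 403

def validate():
    """Return every (role, method, route, got) that contradicts the model."""
    failures = [(role, method, route, got)
                for role, method, route, want in EXPECTATIONS
                if (got := call_staging(role, method, route)) != want]
    return failures

print(validate())  # [] means the model's authz claims held
```

Each row is non-destructive (a rejected request changes no state), so this doubles as a regression test: a refactor that drops the role check turns up as a failure tuple instead of an incident.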
Common mistakes that break STRIDE + attack tree threat models
Here are the issues I see most. Fixing them makes your threat model far more useful for engineering work.
Mistake 1: Too many goals, too little detail
Teams often create one huge attack tree with dozens of top-level nodes. It becomes unreadable fast. Pick one asset and one goal per tree, and only expand the branches that matter for your release.
Mistake 2: Leaves that are too vague
A leaf like “attacker exploits a bug” is useless. A good leaf is testable. Example: “Authorization middleware checks role only for GET but not for POST.” That’s a clear fix point.
Mistake 3: Mixing threat categories and control names
STRIDE nodes are threats, not controls. Don’t label a leaf as “use MFA.” That’s a control, not a weakness an attacker uses. Instead, write the weakness: “MFA can be bypassed for certain flows.” Then your control can be “require MFA for that flow.”
Mistake 4: Forgetting background jobs and internal services
In real systems, attackers often aim for the “internal gap.” A background worker might process a queue message and apply elevated permissions by default. If queue messages can be forged, you get a clean elevation path that STRIDE will reveal under “E” and attack trees will show as a chain.
How to turn attack trees into a real security backlog

A threat model should end with action items. Otherwise it becomes a slide deck no one updates.
Use a scoring shortcut: feasibility + impact + detectability
You don’t need fancy math. I score each top goal path on three questions:
- Feasibility: How hard is it to pull off in a real environment?
- Impact: What’s the worst damage if it works?
- Detectability: How quickly do you catch it in logs/alerts?
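The three questions can be a tiny scoring function. This is a sketch assuming a 1-5 scale per axis; the weighting and the example paths are invented, so tune both to your environment:

```python
def path_score(feasibility: int, impact: int, detect_delay: int) -> int:
    """Higher is worse. Each axis is 1 (good) to 5 (bad): how feasible
    the path is, how bad the impact is, and how slowly you'd detect it."""
    for axis in (feasibility, impact, detect_delay):
        if not 1 <= axis <= 5:
            raise ValueError("each axis must be in 1-5")
    # Invented heuristic: impact-weighted feasibility, detection delay
    # as a tiebreaker so quiet paths outrank noisy ones.
    return feasibility * impact + detect_delay

# Hypothetical paths from the "become admin" tree
paths = {
    "forge token":         path_score(2, 5, 3),
    "abuse user creation": path_score(4, 5, 4),
    "authz bug on route":  path_score(3, 4, 2),
}
ranked = sorted(paths, key=paths.get, reverse=True)
print(ranked[0])  # the path to fix first
```

The exact formula matters less than the ritual: scoring every top-level path with the same three questions makes the backlog ordering explainable in review.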
Then prioritize control work that improves the path directly. If the path requires missing authorization checks, fix authorization. If it requires stolen tokens, strengthen token validation and session handling.
Map each control to tree leaves
Don’t just list “add rate limiting” and call it done. Tie the control to the specific leaf node it breaks.
Example mapping:
- Leaf: “App issues signed URL for non-owned object” → Control: enforce object-level ownership check before signing.
- Leaf: “Authorization middleware not applied for POST /admin/users” → Control: apply middleware per route and add tests.
- Leaf: “No audit record for role changes” → Control: add audit log event with actor+target+timestamp and ship to SIEM.
If you want a deeper view into audit design and logging quality, you'll like our post on security audit logging best practices; it pairs well with STRIDE's R category.
Integrate with your SDLC (so the model stays alive)
The model shouldn’t be a one-time event. In 2026, teams that do threat modeling well update it when code changes touch trust boundaries.
When to rerun STRIDE + attack trees
Rerun threat modeling for these moments:
- New authentication flows (SSO changes, token format changes)
- New authorization rules or new roles
- Data model changes that touch PII or tenant boundaries
- New background jobs that process user input
- New integrations with external services or webhooks
If you follow that schedule, the attack trees stay relevant even as your system grows.
Make it part of threat intel and vulnerability reviews
Threat modeling should also pull in what’s happening in the wild. When new advisories drop, scan whether the affected components match your attack tree leaves. Then update your backlog.
For example, if you track how we triage security advisories in 2026, you’ll already have a workflow for deciding what matters to your system. Plug that into attack tree updates.
Conclusion: your next action
If you take one thing from this threat modeling deep dive, make it this: STRIDE helps you find the kinds of weaknesses in your system, but attack trees help you prove the attacker’s path through prerequisites and choices. That’s what turns threat modeling into a tool that guides real engineering fixes.
Action you can do this week: Pick one high-value asset (like admin access or customer documents), create one attack tree for a single goal, and then trace each branch back to STRIDE findings. When you finish, you should be able to point at 3–10 specific leaves and say, “We can break this path by changing these exact code/config checks.”
If you want to go one level deeper after this, review our authorization bypass case study—it shows how “E” and “T” issues chain together in ways teams don’t notice from a checklist alone.
