Modern ransomware doesn’t just steal data. It tries to make your backups useless before you even notice. That’s why “we have backups” is no longer a safe answer—attackers now plan around backup windows, backup credentials, and recovery steps.
In 2026, the most common pattern I see in real incidents is simple: ransomware operators get a foothold, move laterally to the systems that can touch backups, then destroy, encrypt, or poison what you’d use to recover. This guide is a deep dive into how those intrusions work and what to do instead—so you can recover fast, clean, and with proof.
How modern ransomware bypasses backups: the quick answer
Ransomware bypasses backups when attackers either touch the backup data, steal backup credentials, or break the recovery process. “Backups exist” doesn’t matter if you can’t restore reliably and quickly.
Backup bypass is usually a chain of small actions, not one big movie moment. In several engagements I’ve supported (and incidents I’ve reviewed), the attackers focused on just enough access to cause maximum recovery pain.
Attack path breakdown: from initial access to backup destruction
The attack path is designed around one goal: make restoration fail. Let’s map the usual steps to where backups get hit.
Step 1: Initial access often targets the same identity systems used for backups
Attackers get in via phishing, stolen passwords, exposed RDP/VPN, or server-side exploits. Then they look for where credentials are shared—especially Active Directory service accounts that can read backup shares.
A plain example: a domain user gets phished. The attacker dumps creds from a workstation, then finds a backup service account. Once they have that, they can delete or encrypt the backup store.
Step 2: Lateral movement focuses on backup servers and storage
It’s not random scanning. In many intrusions, ransomware crews move directly toward backup machines, file servers, SAN controllers, and virtualization hosts. They want the box that writes snapshots, the NAS share, or the hypervisor storage.
What most people miss: backup systems often have broad access. They can read production files, manage tapes or cloud exports, and reach storage networks that normal users can’t touch.
Step 3: Backup data gets encrypted, deleted, or “rolled forward” so it can’t recover
Once attackers reach the backup location, they do one of three things:
- Encrypt backup archives so they look like data loss.
- Delete snapshots, retention points, or backup directories.
- Poison recovery by altering restore scripts, wiping configuration, or corrupting metadata.
Some groups also play the waiting game. They let backups run, then delete afterward so the only points you're left with are the "bad" ones. I've seen cases where the attacker waited 1–2 hours after compromise so the next scheduled backup could finish, then removed it.
Step 4: Ransomware gangs also target recovery logistics
Even if the data survives, they may break the path to restore. That includes:
- Deleting or encrypting the backup software database (the catalog).
- Stopping services on the backup server so restores can’t start.
- Changing network routes, DNS, or firewall rules used during recovery.
- Abusing Windows services so restores fail with access errors.
In plain terms: it’s not just “do you have backups,” it’s “can you actually restore them under stress, with clean systems, fast enough to matter.”
Common backup failure modes ransomware uses (and how to test against them)

Here are the backup bypass modes that show up again and again. For each, I’ll tell you what to check and how to test.
Failure mode #1: Backups are reachable from the same network the attacker controls
If the attacker can reach your backup share from their foothold network, they can likely delete or encrypt it. The fix is strict separation.
Test: from an isolated incident-test environment, try to access the backup target using the same path a compromised server would use (same VPN, same routes, same DNS). If you can reach it, assume the attacker can too.
Practical step: implement network segmentation and restrict backup traffic to only the backup server and required storage endpoints. Don’t rely on firewall “maybe rules”—verify them.
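If you want a repeatable version of that test, here's a minimal Python sketch that probes backup hosts and ports from whatever box you're using to simulate the attacker's foothold. The host names and ports are placeholders; swap in your real backup targets.

```python
# Minimal sketch: from a host that simulates an attacker's foothold, check whether
# common backup storage ports are reachable. Hosts and ports below are assumptions.
import socket

BACKUP_TARGETS = {
    "backup-nas.corp.example": [445, 2049],    # SMB, NFS
    "backup-server.corp.example": [9392],      # example backup console port
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, ports in BACKUP_TARGETS.items():
    for port in ports:
        status = "REACHABLE (segmentation gap?)" if is_reachable(host, port) else "blocked"
        print(f"{host}:{port} -> {status}")
```

Run it from the foothold network, then from the backup network. If the results look the same, your segmentation isn't doing its job.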
Failure mode #2: Backup accounts have too many permissions
Backup software often runs with powerful rights: read production, write to storage, manage catalogs, and sometimes delete old backups to enforce retention.
If attackers steal that account, you get the worst-case scenario: they can both write and erase.
Test: review what rights backup service accounts have. If a backup account can modify production files or delete unrelated data, that’s a red flag.
Rule I follow in incident reviews: the backup account should be able to reach only what it must touch, for only what it must do.
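If your backups live in AWS S3-compatible object storage, one way to run that review without clicking through consoles is IAM policy simulation. This is a sketch under that assumption; the role ARN and bucket name are placeholders.

```python
# Minimal sketch, assuming AWS and boto3: check whether the backup service role is
# allowed to delete backup objects or weaken retention. ARNs below are placeholders.
import boto3

iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/backup-service",  # assumed role
    ActionNames=["s3:DeleteObject", "s3:PutObjectRetention", "s3:BypassGovernanceRetention"],
    ResourceArns=["arn:aws:s3:::backup-bucket/*"],                    # assumed bucket
)

for result in response["EvaluationResults"]:
    action = result["EvalActionName"]
    decision = result["EvalDecision"]  # "allowed", "explicitDeny", or "implicitDeny"
    flag = "  <-- red flag" if decision == "allowed" else ""
    print(f"{action}: {decision}{flag}")
```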
Failure mode #3: “Offline” backups aren’t really offline
This one hurts. Some organizations call snapshots “offline,” but the storage is mounted all day or the credentials can access it from normal networks.
As of 2026, best practice is to have immutability (data can't be changed in place) or air-gapped behavior (not reachable from the attacker's path), ideally both.
Test: attempt to delete or overwrite a backup object using a low-privileged account. If the account can delete data, you don’t have real protection.
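Here's what that test can look like against an S3-compatible bucket with versioning and Object Lock. The bucket, key, and version ID are placeholders, and the credentials should belong to the low-privileged test account, not an admin.

```python
# Minimal sketch, assuming an S3-compatible bucket with versioning and Object Lock.
# Try to permanently delete a backup object version; real protection means the call
# is denied during the retention window. Bucket, key, and version ID are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # credentials for the LOW-privileged test account

try:
    s3.delete_object(
        Bucket="backup-bucket",
        Key="daily/2026-01-15/app-db.bak",
        VersionId="example-version-id",  # Object Lock protects specific versions
    )
    print("Delete call succeeded -- backups are NOT protected from this account")
except ClientError as err:
    code = err.response["Error"]["Code"]
    print(f"Delete denied ({code}) -- protection is holding for this account")
```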
Failure mode #4: The backup catalogs are the single point of failure
Many backup systems keep a catalog database that tells them what’s in each backup. Ransomware can target that catalog so the data becomes “unlisted.”
Test: do a restore from the last known good backup point after simulating catalog corruption (in a test lab). If your restore process can’t find data, you’ll learn the gap before an emergency.
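One cheap hedge against catalog loss is keeping your own inventory of backup objects outside the backup software. A minimal sketch, assuming an S3-compatible backup bucket (names and prefixes are placeholders):

```python
# Minimal sketch: record an inventory of backup objects independently of the backup
# software's catalog, so a corrupted catalog doesn't leave you blind.
import json
import boto3

s3 = boto3.client("s3")
inventory = []

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="backup-bucket", Prefix="daily/"):
    for obj in page.get("Contents", []):
        inventory.append({
            "key": obj["Key"],
            "size": obj["Size"],
            "last_modified": obj["LastModified"].isoformat(),
            "etag": obj["ETag"],
        })

# Store the inventory somewhere the backup server (and its credentials) can't overwrite it.
with open("backup-inventory.json", "w") as f:
    json.dump(inventory, f, indent=2)

print(f"Recorded {len(inventory)} backup objects outside the catalog")
```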
Failure mode #5: Restores work until you need them most
Plenty of teams run a “backup success” check but never practice a full restore under realistic conditions. When the real disaster hits, you learn the restore takes 18 hours, requires manual steps, or needs extra keys you can’t find.
Test: time-box your restores. If you can restore a small service in under 60–90 minutes, that’s good. If it takes half a day, plan to cut the steps.
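A simple harness makes the time-boxing honest. This sketch assumes you already have a scripted restore; the restore-app.sh script and the 90-minute target are just placeholders.

```python
# Minimal sketch: time a scripted restore and compare it to the target RTO.
# The restore command and RTO value are assumptions -- swap in your own tooling.
import subprocess
import time

RTO_MINUTES = 90
RESTORE_CMD = ["./restore-app.sh", "--target", "test-environment"]  # hypothetical script

start = time.monotonic()
result = subprocess.run(RESTORE_CMD, capture_output=True, text=True)
elapsed_min = (time.monotonic() - start) / 60

if result.returncode != 0:
    print(f"Restore FAILED after {elapsed_min:.1f} minutes:\n{result.stderr}")
elif elapsed_min > RTO_MINUTES:
    print(f"Restore finished in {elapsed_min:.1f} min -- over the {RTO_MINUTES} min RTO")
else:
    print(f"Restore finished in {elapsed_min:.1f} min -- within the {RTO_MINUTES} min RTO")
```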
Real-world examples of backup bypass (patterns I keep seeing)
I’ll share common patterns from incidents reported in the last couple of years and from cases I worked through. I’m not naming clients here, but the technical shapes are consistent.
Example: Encrypting backup shares after credential theft
Attackers often steal domain credentials, then jump to the backup server. From there, they enumerate network shares and locate backup folders. They don’t need to “break encryption” if they can just encrypt the backup files directly.
In one case I reviewed, the attackers waited for the nightly backup run to finish and then encrypted the share. The organization saw new backup timestamps, assumed everything was fine, and only later realized those points were already damaged.
Example: Deleting snapshot retention points to force older recovery
Some ransomware groups aim for maximum downtime. They delete recent restore points so the only available recovery is from weeks ago, increasing data loss and pressure.
That’s why immutability matters. If the attacker can change retention settings or delete snapshots, your “RPO” (how much data you can afford to lose) jumps to an unacceptable level.
Example: Poisoning restore scripts and tooling
Even when backups exist, attackers change tools used for restoration. They may replace scripts, update credentials in configuration files, or alter restore parameters.
This is why “we have backups” must include “we have tested restore automation that we trust.” I recommend treating restore tooling like production code: version it, review it, and monitor changes.
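One lightweight way to monitor those changes is a hash manifest over your restore tooling, checked on a schedule from a system the backup server can't write to. A minimal sketch; the manifest path and tooling directory are assumptions.

```python
# Minimal sketch: detect tampering with restore tooling by comparing current file
# hashes to a manifest captured when the scripts were last reviewed.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("restore-tooling.manifest.json")  # {"relative/path": "sha256hex", ...}
TOOLING_DIR = Path("/opt/recovery/scripts")       # assumed location of restore scripts

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

known = json.loads(MANIFEST.read_text())
for rel_path, expected in known.items():
    target = TOOLING_DIR / rel_path
    if not target.exists():
        print(f"MISSING: {rel_path}")
    elif sha256_of(target) != expected:
        print(f"MODIFIED: {rel_path} -- review before trusting a restore")
```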
What to do instead: a backup strategy ransomware can’t game

The goal isn’t to buy “more backups.” The goal is to make restoration resilient even when attackers reach your environment.
Here’s my practical, step-by-step approach for 2026 that I’ve seen work better than simple backup upgrades.
1) Separate backup networks and lock down access
Do this first because it stops the easy bypass. Backup targets must not be reachable from the same paths attackers use after compromise.
- Use dedicated VLANs/subnets for backup traffic.
- Restrict storage access to backup servers only.
- Apply least privilege on backup accounts.
If you can, require VPN with strong device identity for backup operators—then limit network rules by identity, not just IP ranges.
2) Use immutable backups and retention that attackers can’t change
Immutable backups mean the backup object can’t be altered or deleted until the retention window ends. Attackers love “delete the last point,” because it’s fast and breaks recovery.
Pick an approach that matches your stack:
| Approach | What it stops | Trade-off |
|---|---|---|
| Immutable object storage (WORM-style) | Deletion and in-place modification | Costs can be higher; plan retention carefully |
| Air-gapped copies (offline media or isolated export) | Direct access during intrusion | Restore is slower; you must plan procedures |
| Long-term archive + periodic restore drills | Catastrophic total loss scenarios | May not meet low RTO/RPO for all apps |
Best practice in 2026 is to combine methods, not bet everything on one.
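For the object-storage route, here's roughly what turning on default retention looks like with boto3, assuming an S3-compatible store with Object Lock support and versioning enabled. The bucket name and retention period are placeholders.

```python
# Minimal sketch, assuming an S3-compatible store that supports Object Lock: apply a
# default compliance-mode retention so new backup objects can't be deleted or rewritten
# during the window. Note: the bucket needs versioning and Object Lock support enabled.
import boto3

s3 = boto3.client("s3")

s3.put_object_lock_configuration(
    Bucket="backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
print("Default 30-day compliance retention applied to new backup objects")
```

Size the retention window to your recovery needs, not just storage cost: compliance-mode retention can't be shortened once it's set on an object.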
3) Protect backup credentials like you protect domain admin
Your backup access tokens and service accounts are high-value targets. If they leak, the attacker doesn’t need to “hack” the backup system—they just act like the backup operator.
Practical steps:
- Store secrets in a vault (not in plain files or shared drives).
- Rotate credentials on a schedule and after any suspected exposure.
- Use separate accounts for reading backups versus deleting them or enforcing retention.
- Turn on strong logging for all backup actions.
Simple but effective: alert when backup accounts change large numbers of objects, delete large sets, or access backup endpoints outside expected hours.
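Here's a rough sketch of that alerting logic. The audit log format (JSON lines with account, action, and timestamp fields) is an assumption; adapt it to whatever your backup platform or cloud audit trail actually emits.

```python
# Minimal sketch: flag suspicious backup-account activity from an audit log.
# Log format, thresholds, and the backup window below are all assumptions.
import json
from collections import Counter
from datetime import datetime

DELETE_THRESHOLD = 50          # deletes per account per log file before alerting
EXPECTED_HOURS = range(1, 5)   # assumed backup window: 01:00-05:00

deletes = Counter()
with open("backup-audit.jsonl") as log:
    for line in log:
        event = json.loads(line)
        ts = datetime.fromisoformat(event["timestamp"])
        if event["action"].lower().startswith("delete"):
            deletes[event["account"]] += 1
        if ts.hour not in EXPECTED_HOURS:
            print(f"Off-hours backup action: {event['account']} {event['action']} at {ts}")

for account, count in deletes.items():
    if count > DELETE_THRESHOLD:
        print(f"ALERT: {account} deleted {count} objects -- possible retention tampering")
```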
4) Encrypt backups correctly and manage keys separately
Encryption at rest is good. But encryption doesn’t help if the keys are also available to the attacker who can access the backup system.
I recommend key separation: backup encryption keys should be stored where ransomware crews can’t reach them easily. This might mean separate key management systems and strict access policies for key retrieval.
If you use customer-managed keys in cloud backups, make sure key permissions require multi-step approvals or strong role separation.
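If you're on AWS KMS, one way to audit that separation is to review grants on the backup key and flag any that hand decrypt rights to the same role that writes backups. This is a sketch under that assumption; the key ARN and role name are placeholders.

```python
# Minimal sketch, assuming AWS KMS: list grants on the backup encryption key and flag
# any that let the backup-writer role also decrypt. Key ARN and role name are placeholders.
import boto3

kms = boto3.client("kms")
BACKUP_ROLE = "backup-service"
KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/example-key-id"

grants = kms.list_grants(KeyId=KEY_ID)["Grants"]
for grant in grants:
    grantee = grant.get("GranteePrincipal", "")
    ops = grant.get("Operations", [])
    if BACKUP_ROLE in grantee and "Decrypt" in ops:
        print(f"Review grant {grant['GrantId']}: {grantee} can {ops}")
```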
5) Do “restore-first” testing, not “backup success” reports
Backup success only tells you the job ran. It doesn’t prove the restore works, the application comes up clean, or the data is usable.
My rule: every critical system gets a restore test at least monthly, and after any major change (backup software updates, OS upgrades, storage migrations).
For high-value apps, do quarterly full-stack restore drills where you include DNS, app config, and identity sync. If those steps fail, the backup data won’t save you.
6) Maintain a clean recovery build: “known good” rebuilds
Ransomware often spreads and leaves malware behind. If you restore into compromised infrastructure, you're basically restoring straight back into the fire.
Build a known-good restore environment. That means:
- Gold images for key servers.
- Hardened baselines with patching and security settings.
- Clean local admin workflows and tested toolchains.
This is also where you should test your incident response playbooks for restoring AD-related systems and service accounts.
People Also Ask: bypassing backups with ransomware
Can ransomware encrypt cloud backups too?
Yes, if the attacker has access to the credentials or API permissions that control backups. Even if the storage is in the cloud, the attacker can delete objects, change retention, or create new encryption states if they’re allowed to act.
To prevent this, use immutable retention features and lock down who can delete or modify backup objects. Also, monitor for unusual API calls from backup-related roles.
What is the fastest way to recover from ransomware if backups are compromised?
The fastest path is usually: rebuild clean systems first, then restore from the last trusted immutable or air-gapped backups. If you try to restore onto already infected hosts, you waste time.
In practical terms, you should have a pre-built “recovery stack” (network rules, firewall profiles, domain rebuild steps) and a list of which backup points are trusted.
Why do “backup success” alerts fail during ransomware incidents?
Because alerts only measure job completion, not restore usability. Attackers can encrypt or delete the backup set after the job finishes, or they can corrupt the catalog so restores fail later.
Fix this with restore testing and tamper-detection alerts for backup storage and catalog changes.
Should we trust snapshots as a ransomware defense?
Snapshots are helpful, but they’re not automatically ransomware-safe. If the attacker can reach the snapshot store and has permissions to delete or revert it, snapshots can become part of the problem.
Snapshots work best when protected by immutability, strict access controls, and network separation.
Action plan you can start this week (no fancy stuff)
If you want results fast, focus on the highest-impact actions first. Here’s a simple checklist I’d use for a mid-size organization (around 200–2,000 employees) preparing for ransomware in 2026.
Week 1: Find and close the “easy bypass” doors
- List every backup target: NAS shares, object buckets, snapshot repositories, tape systems.
- Map which servers and accounts can access each target.
- Restrict access so only the backup service accounts can reach backup storage.
- Turn on alerts for backup storage deletes and catalog changes.
Also check your backup software consoles. If attackers can log in and change policies, you need tighter role separation.
Week 2: Prove you can restore under pressure
- Pick one critical app and do a full restore into a test environment.
- Measure total time: from “start restore” to “app is usable.”
- Document manual steps and remove any steps you can’t repeat.
- Validate data integrity with basic checks like record counts and file hashes for key datasets (see the sketch below).
If the restore takes longer than your target RTO (recovery time objective), reduce steps or automate them.
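For the record-count side of that validation, here's a minimal sketch against a restored SQLite database. The database path, table names, and expected counts are placeholders; adapt the query to your actual engine.

```python
# Minimal sketch: after a test restore, compare row counts in the restored database
# against counts recorded from production at backup time. Paths and counts are assumptions.
import sqlite3

EXPECTED_COUNTS = {"orders": 182_340, "customers": 45_112}   # recorded at backup time
conn = sqlite3.connect("/mnt/restore-test/app.db")           # restored copy

for table, expected in EXPECTED_COUNTS.items():
    (actual,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    status = "OK" if actual == expected else f"MISMATCH (expected {expected})"
    print(f"{table}: {actual} rows -- {status}")

conn.close()
```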
Week 3: Add immutability or air-gapped copies for the top tiers
- Choose immutable retention for backup objects that hold identity and key business data.
- Make an air-gapped monthly export of the same tier.
- Test restore from the export, not only from the live backup system.
This is the part many teams skip because it costs money and needs process changes. It’s also the part that saves you when ransomware gets inside.
Week 4: Harden response and recovery ownership
- Assign clear owners for backup systems, restore steps, and verification.
- Run a tabletop exercise: “backup is encrypted and delete actions started—what now?”
- Update your incident playbook with the restore order and decision rules.
Include a step for checking whether the environment is clean before restoring production. That one change often saves days.
Where this fits in our security coverage (and related topics)
This backup angle connects directly to other work your team should be doing. If you’re trying to stop initial compromise, our ransomware incident response playbook breaks down practical steps in plain language. For the “how do we detect this early” side, see our ransomware attack techniques roundup (2026 updates included). And if your issue starts with an exposed remote-access entry point, our guide on exposed RDP/VPN risks helps you close that gap quickly.
What most people get wrong about ransomware + backups
Here are the big mistakes I see again and again:
- Mistake: Assuming backups are safe because they exist. Reality: Attackers target backup shares, credentials, catalogs, and retention.
- Mistake: Testing only that backup jobs run. Reality: You need restore tests and time measurements.
- Mistake: Using one admin-like service account for everything. Reality: Split duties and lock down deletion rights.
- Mistake: Restoring onto systems that may still be infected. Reality: Rebuild clean first, then restore.
My blunt take: the most “secure” backup system in the world is useless if nobody can restore it in time with confidence.
Conclusion: make ransomware bypass harder than your recovery plan
In 2026, modern ransomware intrusions bypass backups by stealing backup credentials, reaching backup storage over the wrong network paths, deleting retention points, and poisoning restore steps like catalogs and scripts. The fix isn’t one product. It’s a plan that assumes attackers will try.
Your takeaway is simple and actionable: separate backup access, use immutability or air-gapped copies for critical data, protect backup credentials, and prove recovery works with regular restore drills. When you can restore fast from trusted points—even after compromise—you stop ransomware from controlling the timeline.
Featured image alt text: Diagram showing how ransomware bypasses backup storage and how immutable recovery points prevent data loss
