One thing I’ve learned doing incident response for real teams: the hardest part of a breach isn’t the first alarm. It’s the “second wave” — when attackers quietly move from one machine to the next, often without triggering the same kind of loud malware alerts. That’s why a deep dive into detecting lateral movement with endpoint and network telemetry matters. When you can see how a threat walks across devices, you can stop it before it turns into full domain compromise.
Here’s the direct answer up front: you detect lateral movement by linking who changed what on endpoints (process, logon, file, admin actions) with who talked to what on the network (sessions, DNS, SMB/RDP/WinRM connections, unusual routing). The best results come from correlating timestamps within minutes, not by trusting a single alert.
I’m going to show you a practical 2026 workflow you can run with common tools (Microsoft Defender for Endpoint, Sysmon/Evtx, Zeek/Suricata, ELK/Splunk, and SOAR playbooks). You’ll get detection ideas, false-positive traps, and a checklist you can use during an active incident.
What lateral movement looks like (and why endpoints alone miss it)
Lateral movement is attacker activity that helps them reach new systems after the first foothold. In plain terms, they don’t just want one server. They want access to more machines where they can steal data, escalate privileges, and keep persistence.
On endpoints, lateral movement often shows up as new services, suspicious remote logons, admin tool use, or odd process trees. On the network, it shows up as repeated connections between specific hosts, odd ports, new DNS patterns, or authentication traffic that doesn’t match your normal behavior.
The problem: endpoint alerts can be late or incomplete. If the attacker uses a “living off the land” technique (meaning they use tools already on the system), you’ll see little or no malware. Meanwhile the network tells a clearer story: who connected to whom, from where, and how often.
What most people get wrong is this: they treat alerts like evidence. Alerts are just “signals.” Real evidence is the link between signals: a logon on Server A at 10:14, followed by an SMB session from Server A to Server B at 10:16, followed by a new admin share or a remote service creation on Server B shortly after.
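That linking step can be checked mechanically. Here is a minimal sketch, assuming a list of normalized event dicts with hand-made field names (not any specific SIEM schema), that tests whether a set of signals forms a tight time-ordered chain:

```python
from datetime import datetime, timedelta

# Hypothetical normalized events from endpoint and network sources.
# In a real pipeline these rows would come out of your SIEM, keyed on event time.
events = [
    {"time": "10:14", "source": "endpoint", "host": "server-a", "kind": "remote_logon"},
    {"time": "10:16", "source": "network",  "host": "server-a", "kind": "smb_session", "dest": "server-b"},
    {"time": "10:18", "source": "endpoint", "host": "server-b", "kind": "service_created"},
]

def parse(t):
    return datetime.strptime(t, "%H:%M")

def chained(events, max_gap_minutes=5):
    """Return True when each signal follows the previous one within the gap,
    turning isolated alerts into a linked evidence chain."""
    times = sorted(parse(e["time"]) for e in events)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return all(g <= timedelta(minutes=max_gap_minutes) for g in gaps)

print(chained(events))  # every hop lands within 5 minutes of the last
```

The point of the sketch is the ordering logic, not the schema: three alerts that fire within minutes of each other in the right sequence are evidence; the same three alerts hours apart are probably noise.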
Telemetry you need: endpoint events and network events (mapped to a timeline)
A good incident response deep dive starts with a timeline you can trust. That means you need consistent clock sync and telemetry sources that cover both host actions and network conversations.
Endpoint telemetry: the minimum fields that matter
When I set up investigations for lateral movement, I want endpoint events that answer these questions: what host, what user, what process, what command line, what parent process, and what remote target.
In Microsoft environments, this typically includes:
- Authentication/logon events (Windows Security logs, including successful and failed remote logons)
- Process creation (process name, command line, hashes if available, parent process, integrity level)
- Service creation/start events (because many lateral tools create a remote service)
- Scheduled task changes (common for persistence on a new host)
- File creation/modification for suspicious drops on admin shares or temp directories
If you’re not on Microsoft, the same ideas apply. Linux endpoint telemetry can come from auditd, Sysmon equivalents, or EDR events. The goal is still the same: capture process lineage and identity context.
Network telemetry: the minimum events that matter
Network telemetry should answer: what source IP talked to what destination IP, on which port, for how long, and whether it looked like authentication traffic.
From network sensors, I look for:
- DNS queries (especially newly used hostnames, internal names, and suspicious domain-to-IP patterns)
- SMB sessions (TCP 445 activity, and in rich logs, SMB command details)
- RDP sessions (TCP 3389 with timing and session duration)
- WinRM/HTTP(S) remoting (common in some attacker toolchains)
- Kerberos/NTLM-related traffic indicators (if your sensors support it)
- NetFlow or metadata sessions as a fallback when you lack deep protocol logs
Here’s a small but important rule I use: when endpoints say “a remote tool ran,” network should show “a session happened.” If only one side shows up, I assume incomplete visibility and I widen the time window for the correlation.
Time sync and why it decides wins and losses
In real incidents, time drift causes wrong conclusions. I’ve seen investigations miss a lateral jump because one log source was off by 12 minutes after an NTP change.
For 2026 setups, I recommend:
- Verify NTP/chrony sync across domain controllers and endpoints.
- Confirm your log pipeline preserves event time (don’t rely only on ingestion time).
- Use a correlation window of 2–5 minutes to start, then expand to 15 minutes when you see uncertainty.
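One way to sanity-check the pipeline before you correlate anything is to compare event time against ingestion time per source and flag sources that lag badly. A sketch with made-up source names and timestamps:

```python
from datetime import datetime

# Hypothetical records: each log source reports when the event happened
# (event_time) and when the pipeline received it (ingest_time).
records = [
    {"source": "dc01-security", "event_time": "10:00:05", "ingest_time": "10:00:20"},
    {"source": "zeek-sensor",   "event_time": "09:48:10", "ingest_time": "10:00:22"},
]

def drift_seconds(rec):
    fmt = "%H:%M:%S"
    delta = datetime.strptime(rec["ingest_time"], fmt) - datetime.strptime(rec["event_time"], fmt)
    return delta.total_seconds()

def suspect_sources(records, max_lag_seconds=300):
    # Anything lagging more than 5 minutes deserves an NTP/chrony check
    # before you trust cross-source correlation windows.
    return [r["source"] for r in records if drift_seconds(r) > max_lag_seconds]

print(suspect_sources(records))  # the 12-minute laggard stands out
```

A large, consistent lag on one source often means either a slow forwarder (harmless for correlation if you use event time) or real clock drift (which silently breaks every window you set).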
Correlation patterns that catch lateral movement fast

Detection isn’t one rule. It’s a set of linked patterns. Below are correlation patterns I’ve used in practice, plus how to reduce false positives.
Pattern 1: Remote logon + admin tool process + follow-up network session
This is one of the strongest patterns because it ties identity to behavior and then to the network.
- Endpoint: A remote logon event occurs on Host B (e.g., a user authenticates remotely).
- Endpoint: On Host A (the source), a process runs that suggests admin tooling (examples: remote execution utilities, service control tools, or PowerShell with remoting).
- Network: Around the same time, Host A opens a session to Host B on a typical remote management port (SMB 445, RDP 3389, WinRM ports).
Correlation tip: don’t only look for “remote logon success.” Include failed logons too. Many attacker runs start with failures while they probe credentials.
False positive reduction: filter out known admin jump servers and your approved management accounts. If your SOC already has “good” admin activity baselines, use them.
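Pattern 1 can be sketched as a join between logon events and network sessions, with the allowlist filtering applied up front. All field names, hostnames, and the allowlists below are hypothetical placeholders for your own baselines:

```python
from datetime import datetime, timedelta

# Illustrative allowlists; replace with your environment's baselines.
ADMIN_JUMP_HOSTS = {"jump01"}          # approved management sources
APPROVED_ADMINS = {"svc-patching"}     # baselined admin accounts

def pattern1_hits(logons, net_sessions, window_min=5):
    """Pair a remote logon on the target with a management-port session
    from the same source host inside the window, minus known-good admin paths."""
    hits = []
    for lg in logons:
        if lg["src_host"] in ADMIN_JUMP_HOSTS or lg["user"] in APPROVED_ADMINS:
            continue  # false-positive reduction: approved admin activity
        for s in net_sessions:
            same_pair = s["src"] == lg["src_host"] and s["dst"] == lg["target_host"]
            close = abs(s["time"] - lg["time"]) <= timedelta(minutes=window_min)
            if same_pair and close and s["dport"] in (445, 3389, 5985, 5986):
                hits.append((lg["src_host"], lg["target_host"], lg["user"]))
    return hits

t0 = datetime(2026, 1, 10, 10, 14)
logons = [{"src_host": "ws-17", "target_host": "srv-files", "user": "j.doe", "time": t0}]
sessions = [{"src": "ws-17", "dst": "srv-files", "dport": 445,
             "time": t0 + timedelta(minutes=2)}]
print(pattern1_hits(logons, sessions))
```

To include failed logons as the correlation tip suggests, feed both success and failure events into `logons`; the pairing logic stays identical.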
Pattern 2: Service creation bursts on a new host after a first connection
Attackers often create a service on the target host to run code remotely. That shows up well on endpoints, and it tends to follow network access.
- Network: Host A makes a new SMB session to Host B.
- Endpoint (Host B): A new service is created within 1–5 minutes.
- Endpoint (Host B): The service starts and spawns a child process (often a command shell, script host, or a renamed binary).
What I like about this pattern is that it doesn’t depend on malware hashes. Even if the attacker uses built-in tools, service creation is hard to hide.
Common miss: teams alert on “service created” but ignore context. If the service name matches your software deployment tools (for example, standard naming used by your monitoring agent), it’s likely benign.
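Pattern 2, including the benign-name context filter, might look like this in code. The prefix list and every field name are illustrative assumptions, not a vendor schema:

```python
from datetime import datetime, timedelta

# Illustrative prefixes; substitute your deployment/monitoring tool names.
BENIGN_SERVICE_PREFIXES = ("MonAgent", "SCCM")

def suspicious_services(smb_sessions, service_events, max_gap_min=5):
    """Flag services created on the SMB target within minutes of a new session,
    skipping names that match known deployment tooling (the context filter)."""
    flagged = []
    for s in smb_sessions:
        for svc in service_events:
            if svc["host"] != s["dst"]:
                continue
            if svc["name"].startswith(BENIGN_SERVICE_PREFIXES):
                continue  # likely software deployment, not an attacker
            gap_min = (svc["time"] - s["time"]).total_seconds() / 60
            if 0 <= gap_min <= max_gap_min:
                flagged.append((s["src"], svc["host"], svc["name"]))
    return flagged

t0 = datetime(2026, 1, 10, 10, 16)
sessions = [{"src": "ws-17", "dst": "srv-files", "time": t0}]
services = [
    {"host": "srv-files", "name": "MonAgentCore", "time": t0 + timedelta(minutes=2)},
    {"host": "srv-files", "name": "updatesvc1",   "time": t0 + timedelta(minutes=3)},
]
print(suspicious_services(sessions, services))  # only the unknown name survives
```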
Pattern 3: DNS lookups for internal hosts followed by SMB or WinRM traffic
DNS is often the first “quiet move” you can catch. Attackers need to resolve targets, especially if they don’t already know IPs.
- Network: Host A suddenly performs a series of DNS queries for internal server names.
- Network: Shortly after, Host A connects to the resolved IPs on 445/3389/WinRM-related ports.
- Endpoint: A script or admin tool runs on Host A that matches the timing (PowerShell, WMI, remote copy utilities, etc.).
One insight from my own investigations: DNS “spikes” are more useful when compared to that one host’s normal behavior, not the whole network’s. One workstation doing 20 internal lookups at 2 a.m. is far more suspicious than the network doing 20,000 queries overall.
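Per-host baselining is simple to implement: compare the current count against that host’s own history rather than a global threshold. A minimal sketch with invented counts:

```python
from statistics import mean, stdev

# Hypothetical per-hour counts of internal DNS lookups for ONE workstation.
# The baseline is that host's own history, never the network-wide total.
history = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2]   # typical hours
current_hour = 20                           # the 2 a.m. burst

def is_spike(history, current, min_sigmas=3.0):
    mu, sigma = mean(history), stdev(history)
    sigma = max(sigma, 1.0)  # floor so quiet hosts don't divide by ~0
    return (current - mu) / sigma >= min_sigmas

print(is_spike(history, current_hour))
```

The sigma floor matters in practice: a workstation that averages one lookup per hour has a near-zero standard deviation, and without the floor a single extra query would alert.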
Step-by-step: an endpoint + network lateral movement hunt playbook

Here’s a hunt you can run during an incident or right after you close the first alert. I’m going to keep it practical and time-boxed.
Step 1: Pick the “suspect pivot” host from the first alert
Start with the machine that triggered the initial incident (malicious email execution, initial suspicious remote management, credential theft signal, or EDR alert). That host is your pivot.
In the first 10 minutes, confirm:
- Has the pivot host shown unusual logons to other systems?
- Has the pivot host made new outbound connections to internal server ranges?
- Did the pivot host run any remote tooling shortly before the first alert?
Step 2: Build a 30-minute timeline around the pivot host
Use the event time, not ingestion time. Create a simple table with three columns: endpoint events, network sessions, and identity context.
I usually start with a 30-minute window because many lateral steps cluster tightly. If you see clear proof, expand to 2–6 hours to check whether the attacker already did this earlier.
Step 3: Score “target likelihood” for every destination the pivot talked to
You don’t need to analyze every connection. You need triage.
Use a quick score (0–3) for each destination:
- 3: destination is a file server, domain controller, or known admin server
- 2: destination runs remote management services (RDP/WinRM) or has admin shares exposed
- 1: destination is a typical workstation or app server
- 0: destination is a printer, utility, or other low-value system
Then add a second layer score for behavior (also 0–3): number of sessions, unusual times (like 2 a.m.), and whether sessions match a known attacker pattern (SMB bursts, RDP right after DNS, etc.).
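The two-layer score from Steps 3 reduces to a few lines. Role names, thresholds, and hostnames here are illustrative choices, not fixed values:

```python
# Illustrative asset-value map implementing the 0-3 role score above.
ASSET_SCORE = {"domain_controller": 3, "file_server": 3, "admin_server": 3,
               "mgmt_exposed": 2, "workstation": 1, "app_server": 1, "printer": 0}

def behavior_score(sessions, odd_hours, known_pattern):
    # 0-3: one point each for session volume, odd timing, and pattern match
    return min(3, (sessions >= 5) + odd_hours + known_pattern)

def triage(destinations):
    """Rank destinations by asset value plus behavior; investigate the top few."""
    scored = [(ASSET_SCORE[d["role"]]
               + behavior_score(d["sessions"], d["odd_hours"], d["pattern"]),
               d["host"])
              for d in destinations]
    return sorted(scored, reverse=True)

dests = [
    {"host": "srv-files", "role": "file_server", "sessions": 9,
     "odd_hours": True, "pattern": True},
    {"host": "prn-02", "role": "printer", "sessions": 1,
     "odd_hours": False, "pattern": False},
]
print(triage(dests))  # file server with suspicious behavior ranks first
```

The output is a sorted worklist; in Step 4 you take the top 3 to 5 entries and go verify them on the endpoint.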
Step 4: Confirm lateral execution on the target using endpoint evidence
Pick the top 3–5 destination hosts from your score. Then verify host-side evidence around the session time:
- Remote logon events on the target host (Host B)
- Service creation, scheduled task creation, or WMI subscription events on Host B
- New process execution on Host B where the parent process suggests remote execution
- Any admin share file writes or script drops
With full visibility, you should see a tight chain. If you don’t, that doesn’t mean “safe.” It means “visibility gap” — maybe your endpoint logs didn’t capture process creation.
Step 5: Validate credential paths (what accounts are traveling)
Lateral movement is usually credential-driven. The same user or service account showing up in multiple authentication events is a strong sign.
Check if the identity used from pivot host also appears on target host logons. If you see only new service accounts created on Host B, that’s still lateral movement — it just means the attacker may have generated or stolen new credentials earlier.
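Checking which identities are traveling is a grouping problem: collect the hosts each account authenticated on, then keep accounts that touched the pivot plus at least one other host. A sketch with hypothetical logon rows:

```python
from collections import defaultdict

def traveling_accounts(logon_events, pivot_host):
    """Return accounts seen on the pivot host AND on other hosts,
    with the sorted list of hosts each one reached."""
    hosts_by_user = defaultdict(set)
    for e in logon_events:
        hosts_by_user[e["user"]].add(e["host"])
    return {u: sorted(h) for u, h in hosts_by_user.items()
            if pivot_host in h and len(h) > 1}

events = [
    {"user": "j.doe", "host": "ws-17"},
    {"user": "j.doe", "host": "srv-files"},
    {"user": "svc-backup", "host": "srv-files"},
]
print(traveling_accounts(events, "ws-17"))
```

Accounts that only appear on the target (like the service account above) don’t show up here, which matches the caveat in the text: absence from this list is not proof of safety if the attacker minted new credentials.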
Tools and examples: how this shows up in common stacks
Different tools label similar events differently. If you map them to the same timeline, your detection quality improves a lot.
Microsoft Defender for Endpoint + Windows Security logs
In Microsoft Defender for Endpoint, I usually start with the alert’s device timeline and then pivot into:
- Process creation events (EDR events or Sysmon)
- Remote logon events in Windows Security (Event IDs depend on your auditing settings)
- Service creation and scheduled task events from Windows logs
When you see an attacker using PowerShell, check the command line for remoting keywords and for base64-encoded payloads. For lateral movement, the real clue is pairing that with network sessions to a new destination.
Sysmon (process lineage) + Zeek (network sessions)
If you run Sysmon on endpoints and Zeek in the network, you can do a strong correlation workflow:
- Sysmon gives you parent/child process chains and command lines.
- Zeek gives you connection metadata and protocol hints.
A practical method I like: export Sysmon process creation for the pivot host for the 30-minute window, then join it with Zeek connection logs where the pivot’s IP is the source. Look for tight time alignment and “new destination” patterns.
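The join itself can be sketched as follows, assuming heavily simplified rows standing in for Sysmon Event ID 1 (process creation) on the pivot and Zeek conn.log entries where the pivot IP is the source; the IP and window are made up:

```python
from datetime import datetime, timedelta

PIVOT_IP = "10.0.5.17"  # hypothetical pivot host address

def join_process_to_conns(procs, conns, window_sec=120):
    """Pair each process on the pivot with connections it plausibly caused,
    using tight time alignment as the join key."""
    pairs = []
    for p in procs:
        for c in conns:
            if c["src"] != PIVOT_IP:
                continue
            if abs((c["ts"] - p["ts"]).total_seconds()) <= window_sec:
                pairs.append((p["image"], c["dst"], c["dport"]))
    return pairs

t0 = datetime(2026, 1, 10, 10, 15)
procs = [{"ts": t0, "image": "powershell.exe"}]
conns = [{"ts": t0 + timedelta(seconds=45), "src": PIVOT_IP,
          "dst": "10.0.8.40", "dport": 445}]
print(join_process_to_conns(procs, conns))
```

In practice you would load these rows from exported Sysmon events and Zeek conn.log, and then filter the pairs for “new destination” hosts the pivot has never contacted before.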
Limitations: if NAT exists, ensure Zeek logs include translated IPs or you’ll mis-link hosts. In networks with heavy NAT, map internal-to-edge addresses carefully.
Suricata rules for protocol hints (SMB/RDP/WinRM)
Suricata is great for spotting “this traffic looked like X” events. Use it as a clue, not the only evidence.
In my experience, Suricata alerts work best when you then verify the endpoint story: did the target host create a service or spawn a remote tool around the Suricata alert time?
People Also Ask: common lateral movement questions answered
How do you detect lateral movement without malware?
You detect it by behavior, not binaries. Focus on remote logons, new service/task creation, unusual admin tool runs, and repeated network sessions between hosts that aren’t normally talking.
A rule of thumb: if endpoint events show “remote execution behavior” and network events show “a matching session,” you’ve got a strong case even when there’s no malware file on disk.
If you’re building detections, create alerts for the chain: remote logon on Host B + admin tool on Host A + network session from Host A to Host B. Many teams only alert on one piece.
What is endpoint telemetry in incident response?
Endpoint telemetry is the security data generated by a device (like a server or laptop) that describes user actions and system changes. This includes process creation, logons, file activity, registry changes, and service/task changes.
In incident response, endpoint telemetry helps you answer: “What did this device do, and who did it?” Network telemetry helps you answer: “Who did it talk to?” Together they tell the story.
Which network telemetry is best for spotting lateral movement?
The best network telemetry is the kind that shows connections and protocol intent. Practically, that means DNS logs plus session-level connection metadata (NetFlow, Zeek, or similar) and protocol-aware logs for SMB/RDP/WinRM when you can get them.
DNS helps you find target discovery. SMB and RDP help you find access attempts. Even basic connection metadata is useful when you pair it with endpoint evidence.
What’s the fastest way to investigate suspected lateral movement?
Use a 30-minute timeline around the pivot host, then verify the top destination hosts on the endpoint. Don’t chase everything at once.
My fast path is:
- Identify the pivot host from the first alert.
- List destinations contacted by the pivot within the 30-minute window.
- Pick the highest-risk destinations (servers, domain controllers, management boxes).
- Check endpoint events on each destination for service/task creation and remote logons.
If you find one confirmed lateral hop, expand the timeline backward and forward to catch earlier or later hops.
Build detections: rules and thresholds that work (and what to avoid)
Detections fail most often because thresholds are wrong or because correlations are missing. Here are my preferred detection styles for lateral movement.
Detection rule style: “rare event on a high-value target”
Examples of rare events:
- Remote logon to a server by a user that never logs onto that server
- New SMB connections to a file server outside maintenance windows
- A new service created on a workstation that usually never gets services from remote tools
High-value targets include domain controllers, identity systems, backup servers, and admin jump boxes.
Detection rule style: “two signals within five minutes”
This is where endpoint + network wins. Alert when you see two things close together:
- Endpoint shows remote execution behavior on Host B
- Network shows a corresponding connection from pivot host to Host B within the same time window
Five minutes sounds small, but it’s realistic. In most environments, remote session setup and service creation happen that quickly, and the tight window keeps false pairings down.
What to avoid: “one alert = stop the attacker” thinking
I’ve watched teams get stuck because they “closed” the ticket after the first alert. Lateral movement means more than one action. Your job is to figure out whether the attacker moved to new hosts and whether that expands access.
So, after any endpoint alert that suggests remote execution, you should check network sessions from that endpoint right away.
Response actions: contain, eradicate, and verify the lateral graph
Once you confirm lateral movement, you don’t stop at blocking one host. You verify the lateral movement graph: which systems were reached, which accounts were used, and whether persistence exists on new targets.
Containment that doesn’t break your investigation
In 2026, I still recommend you collect evidence first, before you lock down too hard. If you cut network access instantly, you lose session context that helps you understand the attacker path.
Practical actions (in order):
- Isolate the pivot host only if you confirm it’s the active attacker machine.
- Temporarily block outbound connections from pivot host to the top destination IPs you found.
- Preserve endpoint evidence (EDR timeline exports, memory capture if your playbook supports it, and raw logs).
- Then isolate targets where you confirm remote execution occurred.
Eradication: hunt for the tool and the persistence method
When you confirm service creation or scheduled tasks, hunt for:
- New services on target hosts (and whether they run at boot)
- Scheduled tasks that call scripts or remote commands
- New local admin users or changes to group membership
- Credential storage artifacts (for example, saved credentials files or tampered credential providers)
If you use common EDR platforms, export “persistence” views if available. If not, use the endpoint timeline plus your Windows event queries to list the changes made in the session window.
Verification: check for additional hops
Verification means answering: “Did lateral movement continue after the first confirmed hop?”
Do this:
- Use the confirmed target host as a new pivot and check whether it made similar outbound connections shortly after compromise.
- Search for the same account (or same process pattern) across the environment.
- Look for “fan-out” behavior: one host contacting many new destinations in a short time.
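The fan-out check is easy to automate over connection metadata. A sketch, with the window size and destination threshold as illustrative tuning knobs:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def fan_out_hosts(sessions, window_min=10, min_new_dests=5):
    """Flag hosts that contact many distinct destinations inside a short window,
    a common shape for post-compromise scanning or mass lateral movement."""
    by_src = defaultdict(list)
    for s in sessions:
        by_src[s["src"]].append(s)
    flagged = []
    for src, rows in by_src.items():
        rows.sort(key=lambda r: r["ts"])
        window = timedelta(minutes=window_min)
        for i, first in enumerate(rows):
            dests = {r["dst"] for r in rows[i:] if r["ts"] - first["ts"] <= window}
            if len(dests) >= min_new_dests:
                flagged.append(src)
                break  # one qualifying window per host is enough
    return flagged

t0 = datetime(2026, 1, 10, 2, 0)
burst = [{"src": "ws-17", "dst": f"10.0.8.{i}", "ts": t0 + timedelta(minutes=i)}
         for i in range(5)]
print(fan_out_hosts(burst))  # one workstation, five new destinations at 2 a.m.
```

Tune `min_new_dests` against your environment: backup servers and vulnerability scanners legitimately fan out, so they belong on an allowlist, not in the alert queue.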
Case-style walkthrough: what I look for during a real investigation
Here’s a realistic chain I’ve seen, with the exact kinds of signals you should line up.
A workstation receives a suspicious email. The initial alert flags a PowerShell process spawning another script. On the endpoint, I see a command line that includes remote management keywords. Then I check network telemetry.
Two minutes later, the same workstation (pivot) does a burst of internal DNS lookups for server names in the same AD site. Then I see SMB sessions to one file server and one admin host within the next five minutes.
On the file server, I check endpoint logs. I find a new service created and started shortly after the SMB session time. The service launches a command shell that runs a script from a temporary directory. The parent process chain points back to the remote execution activity initiated on the workstation.
That’s lateral movement, even if the attacker never dropped a “malware” file. Their tool was the behavior itself: remote execution via built-in systems, paired with normal network protocols.
My takeaway: the win is in the timeline. The win isn’t in any one log source.
Internal links (related reading)
- How to Write Better Detection Rules for Credential Theft Using Endpoint Logs
- Threat Intelligence for Incident Response: Turning Indicators Into Actionable Queries
- Incident Response Playbook Basics: From Triage to Containment
Conclusion: your goal is a lateral movement graph you can stop
Your clear takeaway: detect lateral movement by correlating endpoint actions with network sessions, then build a lateral movement graph you can act on. Don’t hunt in circles. Use a pivot host, a short timeline, and destination scoring. Confirm with endpoint evidence on the targets, not just a network hit.
If you do this consistently, you’ll catch the “quiet” part of attacks — the moment an intruder turns one foothold into many. And in incident response, that’s the moment that really changes the outcome.