Every action an attacker takes inside your environment leaves a trace somewhere. A failed login attempt writes to a security log. A new service installation triggers a system event. A lateral movement hop generates a DNS query, a firewall allow rule, and an authentication event — all at roughly the same timestamp. Log analysis is the discipline of finding those traces, understanding their meaning in context, and assembling them into a coherent picture of what actually happened.

The challenge is not a shortage of data. Modern environments generate millions of log events per day across Windows endpoints, Linux servers, network devices, cloud APIs, and SaaS platforms. The challenge is knowing which logs to prioritize, what patterns within those logs indicate malicious activity versus noise, and how to correlate events across sources that use different timestamps, different schemas, and different levels of fidelity.

This guide is a practitioner reference for log analysis during active incident response. It covers the key log sources across Windows, Linux, network infrastructure, and cloud environments; the specific event IDs and patterns that matter most; and the analytical techniques that turn raw log data into actionable intelligence.

Log Analysis in the IR Context

Incident response operates under time pressure. The analyst is simultaneously trying to contain an active threat, preserve evidence, and understand scope — often with incomplete information and organizational pressure to reach conclusions faster than the evidence allows. Log analysis within this context is not an academic exercise; it is a triage operation.

The first priority is always to establish a timeline. Before you can understand what the attacker did, you need to know when things happened and in what order. Logs are the primary mechanism for building that timeline, but only if you account for clock skew, timezone inconsistencies, and the difference between event generation time and event ingestion time in your SIEM. A 30-second offset between a domain controller and a workstation can make an authentication event appear to precede the exploit that caused it — a confusion that has derailed many investigations.
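The ordering hazard described above can be sketched in a few lines. This is a minimal illustration, assuming per-host clock offsets have already been measured against a reference clock; the host names, offsets, and events are hypothetical:

```python
from datetime import datetime, timedelta

# Measured clock offsets per source, relative to a reference clock
# (hypothetical values for illustration).
CLOCK_SKEW = {"dc01": timedelta(seconds=0), "wks42": timedelta(seconds=-30)}

def corrected_time(host, ts):
    """Shift a host's timestamp by its measured skew before sorting."""
    return ts + CLOCK_SKEW.get(host, timedelta(0))

events = [
    ("wks42", datetime(2024, 5, 1, 12, 0, 10), "exploit process created"),
    ("dc01",  datetime(2024, 5, 1, 12, 0, 5),  "authentication event"),
]
# A naive sort on raw timestamps puts the authentication before the
# exploit; skew-corrected order shows the exploit actually came first.
timeline = sorted(events, key=lambda e: corrected_time(e[0], e[1]))
```

The same correction should be applied once, at timeline-assembly time, rather than ad hoc during each query.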

The second priority is scope. Logs tell you which systems were touched, which accounts were used, and which data paths were traversed. Scope determination drives containment decisions: which systems to isolate, which credentials to rotate, which network segments to restrict. Rushing to containment before scope is understood often means containing the wrong systems or missing the attacker's persistence mechanisms entirely.

With those principles in mind, the following log sources should be prioritized in roughly this order during initial triage: domain controller security logs, endpoint security and process creation logs, network perimeter logs, and cloud control plane logs. Secondary sources — application logs, DHCP, DNS, proxy — are queried as the investigation narrows.

Windows Event Logs

Windows Event Logs are the most information-dense source available on Windows endpoints and servers. They are structured, indexed, and queryable — but the default configuration logs far less than what incident responders need. Before you can rely on Windows event logs as an investigative source, audit policy must be tuned to capture process creation, command-line arguments, and PowerShell script block logging.

The primary logs relevant to incident response live under %SystemRoot%\System32\winevt\Logs. The channels that matter most are Security (authentication, account management, privilege use), System (service installations, driver loads), Microsoft-Windows-PowerShell/Operational (script block logging), and Microsoft-Windows-Sysmon/Operational where Sysmon is deployed.

Key Event IDs

The following Event IDs are the ones responders query first. They map to the most common attacker actions and cover authentication, execution, persistence, and privilege escalation: 4624 (successful logon, with the logon type distinguishing interactive, network, and remote sessions), 4625 (failed logon), 4672 (special privileges assigned to a new logon), 4688 (process creation, including command-line arguments when enabled), 4720 (user account created), 4728 and 4732 (member added to a privileged group), 4698 (scheduled task created), 7045 (service installed, recorded in the System log), and 1102 (security audit log cleared).

PowerShell and Sysmon

Script block logging (Event ID 4104) captures the de-obfuscated content of PowerShell scripts as they execute. This is one of the most powerful visibility mechanisms available on Windows, because it defeats simple base64 encoding and string substitution obfuscation. Look for IEX (Invoke-Expression), DownloadString, WebClient, and EncodedCommand patterns in 4104 logs. The presence of Mimikatz function names, Cobalt Strike artifact patterns, or known offensive framework strings in script block logs is definitive evidence of attacker tooling execution.
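A first-pass triage of 4104 script block content can be automated with simple pattern matching. The sketch below uses the indicators named above; a production ruleset would be far broader, and the sample script block is synthetic:

```python
import re

# Substrings commonly seen in malicious PowerShell; illustrative only.
SUSPICIOUS = [r"IEX\b", r"Invoke-Expression", r"DownloadString",
              r"Net\.WebClient", r"-EncodedCommand", r"-enc\b"]
PATTERN = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

def triage_scriptblock(text):
    """Return the suspicious patterns found in a 4104 script block."""
    return sorted(set(m.group(0) for m in PATTERN.finditer(text)))

block = "IEX (New-Object Net.WebClient).DownloadString('http://...')"
hits = triage_scriptblock(block)
```

Matching on de-obfuscated 4104 content is what makes this effective; the same patterns run against the encoded command line would miss most samples.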

Sysmon Event ID 1 (Process Create) captures the same process creation data as 4688 but includes the process hash and parent image hash, dramatically accelerating threat intelligence lookups. Sysmon Event ID 3 (Network Connection) logs every outbound TCP/UDP connection with the source process, destination IP, and destination port — a capability that Windows does not natively provide at this granularity. Sysmon Event ID 11 (FileCreate) and Event ID 13 (RegistryValue Set) are valuable for tracking file drops and persistence registry writes respectively.

Linux Logs

Linux log architecture varies by distribution and logging daemon, but the core sources remain consistent across most enterprise Linux deployments. Unlike Windows, Linux logging is not centrally managed by default; responders must know where to look on each distribution.

The primary log files for incident response on Debian/Ubuntu systems are /var/log/auth.log (authentication, sudo, and SSH activity), /var/log/syslog (general system messages), /var/log/kern.log (kernel messages), and /var/log/audit/audit.log where auditd is installed. On internet-facing hosts, web server logs under /var/log/apache2 or /var/log/nginx are equally important.

On systemd-based systems, journalctl is often the most practical interface for log queries. Key patterns to query during incident response include failed and successful SSH authentications ("Failed password", "Accepted password", "Accepted publickey"), sudo invocations and su sessions, newly installed or modified systemd units, and user session creation. Scoping queries by unit (journalctl -u ssh) and by time window (--since and --until) keeps result sets manageable.

On RHEL/CentOS systems, the audit framework is often already enabled by default. On Ubuntu systems, you may need to verify auditd is installed and that rules cover the relevant system calls before relying on audit.log as an investigative source.
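The auth.log triage described above reduces to a few string checks per line. A minimal sketch, using synthetic log lines in the standard sshd/sudo format:

```python
def triage_auth_line(line):
    """Classify one auth.log line into a coarse event type, or None."""
    if "Failed password" in line:
        return "ssh_failure"
    if "Accepted password" in line or "Accepted publickey" in line:
        return "ssh_success"
    if "sudo:" in line and "COMMAND=" in line:
        return "sudo_command"
    return None

lines = [
    "May  1 12:00:01 host sshd[101]: Failed password for root from 203.0.113.9 port 4242 ssh2",
    "May  1 12:00:09 host sshd[102]: Accepted publickey for deploy from 10.0.0.5 port 5151 ssh2",
    "May  1 12:00:30 host sudo: deploy : TTY=pts/0 ; PWD=/home/deploy ; COMMAND=/bin/bash",
]
summary = [triage_auth_line(l) for l in lines]
```

In practice you would also extract the username and source IP from each line; the coarse classification here is the first-pass filter that makes those extractions worth doing.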

Network Logs

Network logs capture attacker activity between systems — the lateral movement, the command-and-control beaconing, the data staging and exfiltration — that host-based logs only see from one side. A compromised endpoint generates security events on that endpoint, but the network logs see every connection that endpoint makes, regardless of what logging the attacker has disabled or cleared on the host itself.

Firewall Logs

Firewall logs record permitted and denied connection attempts at the network perimeter and between internal segments. During IR, the most valuable queries against firewall logs are: denied outbound connections from internal hosts (often failed C2 callbacks), allowed connections to known-bad or newly observed external IPs, internal-to-internal connections on administrative ports (RDP 3389, SMB 445, WinRM 5985 and 5986, SSH 22), and unusually large outbound byte counts from hosts that do not normally send data externally.

Proxy and DNS Logs

Proxy logs capture HTTP and HTTPS metadata including the requested URL, user-agent string, and response code. Even when traffic is encrypted, proxy logs reveal the destination domain, the volume of data transferred, and the timing pattern of connections. C2 beaconing produces distinctive timing regularity — connections to the same host every 60 seconds, every 5 minutes, or on some other fixed interval — that stands out in proxy log timing analysis.

User-agent strings in proxy logs are often overlooked but frequently revealing. Many offensive frameworks use default user-agent strings (Cobalt Strike's default Malleable C2 profile uses a recognizable pattern), and tools like curl and python-requests appearing in user-agent strings from workstations that should be running browsers are anomalous. Long user-agent strings with unusual formatting are sometimes used for C2 data encoding.

DNS logs are among the most valuable network sources because nearly all C2 communication requires DNS resolution. Queries that indicate attacker activity include lookups for newly registered or low-reputation domains, high-entropy names characteristic of domain generation algorithms (DGAs), unusually long subdomain labels that can carry encoded data, high volumes of TXT-record queries to a single domain (a common DNS tunneling signature), and bursts of NXDOMAIN responses from one host cycling through candidate C2 domains.

NetFlow

NetFlow data (or its equivalents IPFIX and sFlow) provides connection-level metadata without full packet capture overhead. Flows record the source and destination IP, port, protocol, byte count, and packet count for each connection. During IR, NetFlow enables host-centric queries: "show me every connection this compromised host made in the past 72 hours" — a scope determination query that is not feasible with firewall logs alone at high traffic volumes.
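The host-centric scoping query reads naturally as a filter-and-aggregate over flow records. A minimal sketch over already-parsed flows; the record fields and sample data are illustrative:

```python
from datetime import datetime, timedelta
from collections import defaultdict

def host_flows(flows, host, start, window=timedelta(hours=72)):
    """Summarize outbound bytes per (destination, port) for one host in a
    time window: the 'every connection this host made' scoping query."""
    totals = defaultdict(int)
    for f in flows:
        if f["src"] == host and start <= f["ts"] < start + window:
            totals[(f["dst"], f["dport"])] += f["bytes"]
    return dict(totals)

flows = [
    {"src": "10.0.0.7", "dst": "198.51.100.2", "dport": 443,
     "bytes": 900_000, "ts": datetime(2024, 5, 1, 9, 0)},
    {"src": "10.0.0.7", "dst": "10.0.0.20", "dport": 445,
     "bytes": 4_000, "ts": datetime(2024, 5, 1, 9, 5)},
    {"src": "10.0.0.9", "dst": "198.51.100.2", "dport": 443,
     "bytes": 1_000, "ts": datetime(2024, 5, 1, 9, 6)},
]
scope = host_flows(flows, "10.0.0.7", datetime(2024, 5, 1))
```

Each destination this query surfaces becomes a new pivot term for the firewall, proxy, and DNS sources described above.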

Cloud Logs

Cloud environments shift the attack surface to API calls, identity and access management (IAM) configurations, and control plane operations. Traditional network perimeter logs provide limited visibility into what happens inside a cloud account; the primary investigative source is the cloud provider's audit trail.

AWS CloudTrail

CloudTrail records every API call made to AWS services, including the caller identity, source IP, timestamp, service and action, and request parameters. For incident response, the highest-value queries are: ConsoleLogin events, especially without MFA or from unfamiliar source IPs; CreateUser, CreateAccessKey, and CreateLoginProfile (new credential creation); AttachUserPolicy, PutUserPolicy, and AddUserToGroup (privilege escalation); AssumeRole calls from unexpected principals; and StopLogging, DeleteTrail, or PutEventSelectors (tampering with CloudTrail itself). Note that object-level S3 reads appear only if data events are enabled; management events alone will not show bulk data access.
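A triage pass over exported CloudTrail records can be sketched as follows. The record shapes here are simplified (userIdentity is flattened to a string for illustration); real records come from the CloudTrail console, Athena, or the LookupEvents API:

```python
# API calls worth surfacing immediately during IR (not exhaustive).
SENSITIVE = {"CreateUser", "CreateAccessKey", "AttachUserPolicy",
             "PutUserPolicy", "StopLogging", "DeleteTrail"}

def triage_cloudtrail(records):
    """Surface sensitive API calls and console logins without MFA."""
    findings = []
    for r in records:
        if r["eventName"] in SENSITIVE:
            findings.append((r["eventName"], r["userIdentity"],
                             r["sourceIPAddress"]))
        if (r["eventName"] == "ConsoleLogin"
                and r.get("additionalEventData", {}).get("MFAUsed") == "No"):
            findings.append(("ConsoleLogin-noMFA", r["userIdentity"],
                             r["sourceIPAddress"]))
    return findings

records = [
    {"eventName": "ConsoleLogin", "userIdentity": "dev1",
     "sourceIPAddress": "203.0.113.9",
     "additionalEventData": {"MFAUsed": "No"}},
    {"eventName": "CreateAccessKey", "userIdentity": "dev1",
     "sourceIPAddress": "203.0.113.9"},
    {"eventName": "DescribeInstances", "userIdentity": "ops",
     "sourceIPAddress": "10.0.0.1"},
]
findings = triage_cloudtrail(records)
```

A no-MFA console login followed minutes later by CreateAccessKey from the same source IP, as in the sample data, is a classic credential-compromise sequence.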

Azure Activity Log and Azure AD Sign-In Logs

Azure Activity Logs capture control plane operations against Azure resources (analogous to CloudTrail). Azure AD Sign-In Logs record authentication events against Azure AD, including MFA status, conditional access policy results, and the application being accessed.

Key patterns in Azure AD logs during IR: sign-ins from impossible travel locations (authenticated from Belgium and then from Southeast Asia 20 minutes later), sign-ins with legacy authentication protocols (Basic Auth bypasses MFA), and a spike in failed MFA attempts followed by successful authentication (MFA fatigue attack pattern). The AuthenticationRequirement field in sign-in logs distinguishes single-factor from multi-factor authentications.

Microsoft Sentinel or the Azure Monitor Logs interface (Kusto Query Language) is the practical interface for querying at scale. A useful starting query for IR triage: identify all sign-in events for an account of interest in a specific time window, grouped by location and client application, to quickly map the scope of credential compromise.
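The shape of that triage query, grouping one account's sign-ins by location and client application, can be mirrored in a few lines of Python over exported sign-in events. Field names and sample data here are illustrative:

```python
from collections import Counter

def signin_scope(events, account):
    """Group one account's sign-ins by (location, client app), mirroring
    the KQL triage query described above."""
    return Counter((e["location"], e["app"])
                   for e in events if e["user"] == account)

events = [
    {"user": "jdoe", "location": "BE", "app": "Outlook"},
    {"user": "jdoe", "location": "SG", "app": "Azure PowerShell"},
    {"user": "jdoe", "location": "SG", "app": "Azure PowerShell"},
    {"user": "asmith", "location": "BE", "app": "Teams"},
]
scope = signin_scope(events, "jdoe")
```

An unfamiliar location paired with a scripting client such as Azure PowerShell, rather than the user's usual browser-based applications, is exactly the combination this grouping is meant to surface.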

GCP Cloud Audit Logs

GCP Cloud Audit Logs split into Admin Activity logs (always-on, capturing resource configuration changes) and Data Access logs (disabled by default, must be enabled per service to capture read/write operations on data). For IR in GCP environments, confirm whether Data Access logs are enabled before relying on them as an investigative source; their absence is a significant gap. Admin Activity logs capture IAM changes, service account key creation, and compute operations using the same investigative patterns as CloudTrail and Azure Activity Logs.

Correlation Techniques

Individual log sources tell partial stories. The analytical power emerges from correlating events across sources — connecting an authentication event on a domain controller to a process creation event on a workstation to a network connection in firewall logs, all within a coherent timeline.

Timeline Building

Timeline building is the foundation of log correlation. The goal is a single, unified chronological sequence of events from all relevant sources, with a consistent timestamp reference (UTC, always). The process: normalize every timestamp to UTC as it is collected; measure and correct known clock skew per source; merge events from all sources into one sequence sorted by corrected time; and annotate each entry with its source, the system involved, and a one-line interpretation. Where both are available, record event generation time and ingestion time separately, and sort on generation time.
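The normalization-and-merge step can be sketched as a per-source adapter that emits a common tuple. The record schemas and sample data below are illustrative, not actual log formats:

```python
from datetime import datetime, timezone

def normalize(source, raw):
    """Map a source-specific record to (utc_time, source, host, summary)."""
    if source == "windows":
        # Illustrative: a timestamp carrying a local UTC offset.
        ts = datetime.fromisoformat(raw["TimeCreated"]).astimezone(timezone.utc)
        return (ts, source, raw["Computer"], f"EID {raw['EventID']}")
    if source == "proxy":
        ts = datetime.fromtimestamp(raw["epoch"], tz=timezone.utc)
        return (ts, source, raw["client"], raw["url"])
    raise ValueError(f"unknown source: {source}")

records = [
    ("proxy", {"epoch": 1714557610, "client": "10.0.0.7",
               "url": "http://198.51.100.2/a"}),
    ("windows", {"TimeCreated": "2024-05-01T04:00:05-05:00",
                 "Computer": "WKS42", "EventID": 4688}),
]
# Sorting the normalized tuples yields the unified UTC timeline.
timeline = sorted(normalize(s, r) for s, r in records)
```

Adding a new source means adding one adapter branch; the merge and sort stay unchanged.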

Pivot Analysis

Pivot analysis uses a known artifact — an IP address, a username, a filename, a hash — as the starting point for expanding investigation scope. A compromised account name found in a suspicious logon event becomes the query term for every authentication log source. An attacker IP found in firewall logs becomes the search term in proxy logs, DNS logs, and cloud audit trails. Each pivot either confirms scope or uncovers new systems and accounts to investigate.

The most productive pivots during IR are typically: compromised usernames (to find every system they authenticated to), source IP addresses (to find every connection from attacker infrastructure), process hashes (to find every system where the same malicious binary executed), and scheduled task names or service names (to find every system where the same persistence mechanism was installed).
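The mechanics of a pivot are simple enough to script: search every source for the artifact, then harvest new indicators from the hits. A minimal sketch over stringified records with hypothetical sample data:

```python
def pivot(artifact, sources):
    """Search every log source for an artifact; return (source, record) hits.
    `sources` maps a source name to a list of stringified records."""
    hits = []
    for name, records in sources.items():
        for rec in records:
            if artifact in rec:
                hits.append((name, rec))
    return hits

sources = {
    "firewall": ["ALLOW 10.0.0.7 -> 198.51.100.2:443"],
    "proxy":    ["10.0.0.9 GET http://198.51.100.2/beacon"],
    "dns":      ["10.0.0.7 A? update.example.net"],
}
# Pivoting on attacker IP 198.51.100.2 surfaces a second internal host
# (10.0.0.9) that the original investigation had not yet scoped.
hits = pivot("198.51.100.2", sources)
new_hosts = {tok for _, rec in hits
             for tok in rec.split() if tok.startswith("10.0.")}
```

Each host in `new_hosts` then becomes its own pivot term, which is what drives the iterative widening of scope.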

Cross-Source Enrichment

Raw log events gain investigative value when enriched with context from other sources. An IP address in a firewall log enriched with geolocation, ASN, and threat intelligence reputation becomes a C2 indicator or a VPN exit node. A process hash enriched with VirusTotal results becomes confirmed malware. A username enriched with HR data becomes a terminated employee logging in with credentials that should have been disabled. Enrichment is most effectively operationalized in a SIEM with automated feed integrations, but manual enrichment during IR is straightforward with threat intelligence platforms and reference databases.
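Manual enrichment amounts to a lookup join against local reference tables. A minimal sketch; the tables, IP addresses, and the "FrameworkX" label are hypothetical placeholders for real threat intel feeds, GeoIP databases, and asset inventories:

```python
# Hypothetical local enrichment tables.
THREAT_INTEL = {"198.51.100.2": "known C2 (FrameworkX)"}
GEO = {"198.51.100.2": ("NL", "AS64500")}

def enrich(event):
    """Attach reputation, geolocation, and ASN context to an event dict."""
    ip = event["dst"]
    out = dict(event)
    out["reputation"] = THREAT_INTEL.get(ip, "unknown")
    out["geo"], out["asn"] = GEO.get(ip, ("?", "?"))
    return out
```

In a SIEM this join runs at ingestion; during manual IR, running it once over the working event set is usually sufficient.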

Common Attack Patterns in Logs

Recognizing attack patterns across log sources accelerates investigation significantly. The following patterns are the most commonly encountered in enterprise IR engagements.

Brute Force and Credential Stuffing

Pattern: A high volume of Event ID 4625 (Windows) or Failed password entries (Linux/SSH) from a single source IP or small range of IPs, targeting multiple accounts, followed in many cases by a 4624 success event. In cloud environments, this appears as multiple ConsoleLogin failures in CloudTrail or Sign-in activity failures in Azure AD before a successful authentication. The defining characteristic is volume and breadth: a genuine user who forgot their password generates a handful of failures against their own account; brute force generates hundreds of failures across many accounts.
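The volume-and-breadth characteristic translates directly into a detection: group failures by source IP and require both many attempts and many distinct accounts. A minimal sketch with synthetic data and illustrative thresholds:

```python
from collections import defaultdict

def brute_force_sources(failures, min_attempts=50, min_accounts=5):
    """Flag source IPs with high-volume failures across many accounts.
    `failures` is a list of (source_ip, account) pairs drawn from 4625
    events or Failed-password auth.log entries."""
    by_ip = defaultdict(list)
    for ip, account in failures:
        by_ip[ip].append(account)
    return {ip for ip, accts in by_ip.items()
            if len(accts) >= min_attempts and len(set(accts)) >= min_accounts}

# 300 failures across 20 accounts from one IP: brute force.
failures = [("203.0.113.9", f"user{i % 20}") for i in range(300)]
# 4 failures against one account: a forgotten password, not an attack.
failures += [("10.0.0.5", "jsmith")] * 4
suspects = brute_force_sources(failures)
```

Requiring both thresholds is what separates brute force from the single-account failure bursts that legitimate users generate.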

Privilege Escalation

Pattern: A standard user account authenticating normally (4624, Type 2 or 3), followed by Event ID 4672 or group membership changes (4732, 4728), followed by activity that only a privileged account could perform (accessing sensitive shares, installing services, modifying audit policy). On Linux, look for the sequence: SSH login as a non-privileged user, a sudo command in auth.log, and then subsequent commands executed as root. In AWS, look for a low-privilege IAM user calling AttachUserPolicy or AssumeRole to a higher-privilege role immediately after initial access.

Data Staging and Exfiltration

Pattern: Large volumes of file access events (audit.log or Windows Object Access events if enabled) preceding large outbound transfers in network or proxy logs. On Windows, look for xcopy, robocopy, 7z.exe, or rar.exe in process creation logs (4688/Sysmon Event ID 1) before the transfer. Archive creation in user-writable directories followed by an outbound HTTPS upload to a cloud storage provider (OneDrive, Dropbox, Google Drive, or AWS S3) is the most common modern exfiltration pattern. Detecting it requires correlating process logs with proxy logs and potentially DLP alerts.
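The process-to-proxy correlation described above can be sketched as a windowed join: archiver executions paired with large outbound transfers from the same host shortly afterwards. Field names, sample data, and thresholds are illustrative:

```python
from datetime import datetime, timedelta

ARCHIVERS = {"7z.exe", "rar.exe", "robocopy.exe", "xcopy.exe"}

def staging_then_upload(procs, transfers, window=timedelta(hours=6),
                        min_bytes=100_000_000):
    """Pair archiver executions (4688 / Sysmon EID 1) with large outbound
    transfers from the same host within a following time window."""
    pairs = []
    for p in procs:
        if p["image"].lower() in ARCHIVERS:
            for t in transfers:
                gap = (t["ts"] - p["ts"]).total_seconds()
                if (t["host"] == p["host"] and 0 <= gap <= window.total_seconds()
                        and t["bytes"] >= min_bytes):
                    pairs.append((p["image"], t["dst"], t["bytes"]))
    return pairs

procs = [
    {"host": "WKS42", "image": "7z.exe", "ts": datetime(2024, 5, 1, 22, 0)},
    {"host": "WKS42", "image": "explorer.exe", "ts": datetime(2024, 5, 1, 21, 0)},
]
transfers = [
    {"host": "WKS42", "dst": "drop.example.net", "bytes": 2_500_000_000,
     "ts": datetime(2024, 5, 2, 1, 30)},
]
pairs = staging_then_upload(procs, transfers)
```

The overnight timing in the sample data is deliberate: staging and exfiltration commonly run outside business hours, when large transfers are least likely to be noticed.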

C2 Beaconing

Pattern: Regular, periodic outbound connections to a consistent destination at fixed intervals. Unlike human-driven web browsing (irregular timing, many different destinations), C2 beaconing produces machine-regular intervals that stand out in proxy or firewall connection timing analysis. Jitter is commonly added by modern C2 frameworks to defeat naive interval detection, but even jittered beaconing produces a statistical distribution that differs from legitimate traffic. Look for: a single destination receiving connections every N seconds with low variance, consistent byte sizes for the outbound request (heartbeat traffic), and absence of typical browsing behavior context (no other connections from that process at other times).
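The low-variance property lends itself to a simple statistical test: the coefficient of variation of inter-connection intervals. A minimal sketch with synthetic timestamps (in seconds) and an illustrative threshold:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-connection intervals; values
    near 0 indicate machine-regular (beacon-like) timing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return pstdev(gaps) / m if m else float("inf")

regular = [0, 60, 120, 181, 240, 299, 360]   # ~60s interval, slight jitter
human = [0, 4, 9, 300, 302, 1800, 1803]      # bursty, irregular browsing
is_beacon = beacon_score(regular) < 0.1 < beacon_score(human)
```

Heavily jittered beacons push the score upward, so in practice this is paired with the other signals named above, such as consistent request sizes and the absence of surrounding browsing behavior.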

Building a Log Analysis Workflow

Effective log analysis during IR requires a repeatable workflow that accounts for the speed-versus-thoroughness tension inherent in the IR context. A structure that has proven practical across a range of incident types: define the incident time window and the initial known indicators; query the prioritized sources (domain controllers, endpoints, network perimeter, cloud control plane) for those indicators within that window; build a unified UTC timeline from the hits; pivot on every new artifact the timeline surfaces; and widen the window and the source list as scope grows, documenting each query as you go.

Log analysis is iterative. The workflow above describes a linear progression, but in practice each step generates new pivot terms that send you back to earlier steps with new queries. Maintain contemporaneous notes throughout — document every query, every finding, and every hypothesis tested and rejected. In complex incidents, the investigation notes are themselves an essential artifact that supports post-incident reporting and any subsequent legal or regulatory process.

For complementary host-based evidence that fills gaps in log coverage, the Windows Forensic Artifacts cheatsheet covers execution artifacts, registry evidence, and file system traces on Windows systems. The Linux Forensic Artifacts guide provides the equivalent reference for Linux host investigation, including persistence locations, shell history, and package manager logs that complement the syslog sources described here.

Strengthen Your Detection and Response

Effective log analysis requires the right tools, processes, and expertise. Learn how ForgeWork helps organizations build comprehensive security monitoring.
