AWS CloudTrail Management Events

Cloud & SaaS, Cloud Infrastructure, AWS, CloudTrail, Cloud Control Plane, SIEM / Log Aggregator

Location

AWS CloudTrail > Event history (last 90 days) or trail delivery in S3 / CloudWatch Logs

Description

AWS control-plane audit records for management events, including console activity, API calls, IAM changes, role assumptions, service configuration updates, and destructive actions. Event history covers only the most recent 90 days of management events; a trail is required for durable delivery to S3 or CloudWatch Logs.

Forensic Value

CloudTrail is the primary source for reconstructing attacker activity across AWS accounts. It identifies the calling principal, source IP, user agent, request parameters, and affected resources for changes to IAM, EC2, EKS, ECR, S3, and the logging configuration itself. It also reveals anti-forensics such as trail deletion (DeleteTrail), suspended logging (StopLogging), narrowed region coverage (UpdateTrail), or tampering with guardrail services.
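Once records are collected, the anti-forensic activity described above surfaces as a small set of management event names. A minimal sketch, in Python against synthetic records (the event-name watchlist here is illustrative, not exhaustive):

```python
# Event names commonly associated with CloudTrail tampering.
# Illustrative watchlist -- extend for your environment.
ANTI_FORENSIC_EVENTS = {
    "StopLogging",       # suspends delivery for a trail
    "DeleteTrail",       # removes a trail entirely
    "UpdateTrail",       # can narrow region coverage or change the sink
    "PutEventSelectors", # can drop event categories from capture
}

def flag_tampering(records):
    """Return records whose eventName matches the watchlist."""
    return [r for r in records
            if r.get("eventSource") == "cloudtrail.amazonaws.com"
            and r.get("eventName") in ANTI_FORENSIC_EVENTS]

# Synthetic sample records for demonstration.
sample = [
    {"eventSource": "cloudtrail.amazonaws.com", "eventName": "StopLogging",
     "sourceIPAddress": "203.0.113.7", "userIdentity": {"type": "IAMUser"}},
    {"eventSource": "ec2.amazonaws.com", "eventName": "RunInstances"},
]

for hit in flag_tampering(sample):
    print(hit["eventName"], hit.get("sourceIPAddress"))
```

A hit on any of these events warrants pivoting to the calling principal's full activity before and after the tampering timestamp.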

Tools Required

AWS Console, AWS CLI, Athena, CloudWatch Logs Insights, SIEM

Collection Commands

AWS CLI

aws cloudtrail lookup-events --start-time 2026-03-01T00:00:00Z --end-time 2026-03-07T23:59:59Z --output json > cloudtrail_event_history.json
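One gotcha when post-processing this output: `lookup-events` returns an `Events` array in which each entry's full record is a JSON-encoded *string* under the `CloudTrailEvent` key, so it must be decoded a second time. A minimal sketch against a synthetic sample of that shape:

```python
import json

def parse_lookup_output(raw):
    """Flatten lookup-events output: each entry's full record is a
    JSON string under the CloudTrailEvent key and needs a second decode."""
    out = []
    for event in json.loads(raw).get("Events", []):
        record = json.loads(event["CloudTrailEvent"])  # nested JSON string
        out.append({
            "time": record.get("eventTime"),
            "name": record.get("eventName"),
            "ip": record.get("sourceIPAddress"),
            "agent": record.get("userAgent"),
        })
    return out

# Synthetic sample mirroring the CLI output shape.
raw = json.dumps({"Events": [{
    "EventName": "ConsoleLogin",
    "CloudTrailEvent": json.dumps({
        "eventTime": "2026-03-02T10:15:00Z",
        "eventName": "ConsoleLogin",
        "sourceIPAddress": "198.51.100.4",
        "userAgent": "Mozilla/5.0",
    }),
}]})

for row in parse_lookup_output(raw):
    print(row["time"], row["name"], row["ip"])
```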

AWS CLI

aws s3 cp s3://<trail-bucket>/AWSLogs/<account-id>/CloudTrail/<region>/ ./cloudtrail/ --recursive
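The objects copied down are gzip-compressed JSON files, each containing a top-level `Records` array. A sketch for merging them into one time-ordered timeline (the glob pattern assumes the `./cloudtrail/` destination used above):

```python
import glob
import gzip
import json

def load_trail_files(path_glob="./cloudtrail/**/*.json.gz"):
    """Merge delivered CloudTrail files (gzip-compressed JSON, one
    top-level 'Records' array each) into one timeline sorted by eventTime."""
    records = []
    for path in glob.glob(path_glob, recursive=True):
        with gzip.open(path, "rt") as fh:
            records.extend(json.load(fh).get("Records", []))
    records.sort(key=lambda r: r.get("eventTime", ""))
    return records

timeline = load_trail_files()
for r in timeline[:20]:  # preview the earliest events
    print(r.get("eventTime"), r.get("eventName"),
          r.get("userIdentity", {}).get("arn"))
```

For large trails, loading everything into memory does not scale; point Athena at the S3 prefix instead and query in place.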

CloudWatch Logs Insights

fields @timestamp, eventSource, eventName, userIdentity.type, sourceIPAddress, userAgent | sort @timestamp desc | limit 200

Collection Constraints

  • Event history alone is short-lived (90 days), per-region, and limited to management events; durable investigations require a retained trail or exported sink data.
  • Data events and organization-wide visibility depend on pre-incident CloudTrail configuration across the accounts and regions in scope.

MITRE ATT&CK Techniques

T1098 (Account Manipulation), T1078.004 (Valid Accounts: Cloud Accounts), T1578 (Modify Cloud Compute Infrastructure), T1562 (Impair Defenses)

Related Blockers

Critical Logs Rotated/Overwritten Before Collection

Key log files (Security EVTX, web server access logs, syslog) have been rotated out or overwritten due to aggressive retention settings, high volume, or attacker manipulation. The evidence window for those sources is now closed.

SIEM Not Ingesting Relevant Log Sources

The SIEM does not ingest logs from the affected systems, applications, or network segments. Correlation, alerting, and historical search capabilities are unavailable for the evidence sources most relevant to this incident.

Legal Requesting Preservation Conflicts with Containment

Legal counsel has issued a preservation hold requiring that certain systems, mailboxes, or data stores remain untouched. This directly conflicts with containment actions like reimaging hosts, resetting accounts, or blocking network segments.

Attacker Used Timestomping, Log Clearing, or Other Anti-Forensics

Evidence of deliberate anti-forensic activity has been found: timestamps modified, event logs cleared, prefetch/shimcache wiped, or tools designed to defeat forensic analysis were executed. Standard timeline analysis may be unreliable.

Cloud or Container Logging Coverage Missing

The investigation depends on cloud-control-plane or container telemetry that was never enabled, was retained too briefly, or was routed to an unavailable destination. This creates blind spots around identity misuse, cluster administration, and workload behavior.

Host Wiped Before Forensic Acquisition

The compromised host has been zeroed or securely wiped (DBAN, `dd if=/dev/zero`, `sdelete`, `shred`) before forensic imaging could begin. Traditional filesystem-carving techniques recover limited content; the investigation must pivot to peer-host artifacts, network telemetry, and cloud/identity records that survived the wipe.

Evidence Chain of Custody Compromised

Evidence handling has gaps or integrity issues (missing hash verification, broken custody log, unauthorized access to evidence storage, transfers without documented handoffs). Evidence may still be technically useful but legal admissibility is compromised; pivot to secondary preservation and early legal assessment.

Law Enforcement Requested Investigation Pause

A law-enforcement agency (FBI, Secret Service, Europol, national police cybercrime unit) has requested that the organization pause or slow-walk active investigation, containment, or notification steps while they pursue their own investigation. This creates tension between legal obligations to customers/regulators and cooperation with LEA.

Deep Anti-Forensics: Timestomping, Rootkits, Secure Delete

The attacker has employed anti-forensic techniques: timestomping ($MFT/$STANDARD_INFORMATION manipulation), log clearing (Security.evtx wiped, journalctl truncated), NTFS alternate data stream hiding, rootkits, file-attribute masking, or secure-delete of specific indicators. Standard forensic analysis produces incomplete or misleading results.

Investigation Requires Air-Gapped Network Access

The affected systems are on an isolated network segment with no connectivity to standard IR tooling (EDR management plane, SIEM, evidence-transfer channels). Acquisition and analysis must happen via physical media or through carefully controlled trusted-transfer workflows that do not breach the air gap.

Evidence Spans Multiple Jurisdictions with Conflicting Laws

Affected systems or data span multiple countries with differing data-protection, breach-notification, and cross-border transfer laws (GDPR, data-residency rules, PIPL, LGPD, state-level US laws). Acquisition and analysis that is lawful in one jurisdiction may be unlawful in another. Engage legal counsel early and plan in-region processing.

SaaS Audit Logging Not Enabled or Not Licensed

The investigation depends on SaaS audit evidence that was never enabled, is unavailable under the current subscription tier, or requires a higher-privilege admin role than the response team currently has. This creates blind spots for identity abuse, collaboration-platform misuse, and source-code access.

SaaS Audit Retention Expired Before Collection

The response started after the native retention window for Google Workspace, Okta, Slack, GitHub, or similar SaaS evidence had already passed. The necessary events are no longer available in the vendor UI or API even though the underlying accounts and content may still exist.

Attack Delivered via Legitimately Signed Update

The malicious artifact carries a valid signature from the vendor's real signing key, so traditional allow-by-signature controls (Authenticode policy, Cosign verification, macOS notarization) do not flag it. Detection must pivot to behavioral indicators, reputation, and anomaly-based signals.

Compromised Vendor Artifact Provenance Lost

The compromised software was distributed through a legitimate channel (update server, package registry) but the vendor cannot or will not produce the exact pre-compromise build artifacts, build manifests, or signing-chain evidence needed to validate provenance. Without that baseline, it is difficult to definitively identify what was malicious versus legitimate in the distributed artifact.

Mining Incident Treated as Low Priority by Stakeholders

Stakeholders frame unauthorized mining as "just a resource cost" and push for immediate process-kill and closure rather than a full investigation. This under-scoping routinely leaves the entry vector open and misses secondary compromise (webshells, backdoors, credential theft) the attacker installed alongside the miner.

Serverless Workload Cannot Host EDR Agent

The compromised workload is serverless (AWS Lambda, GCP Cloud Functions, Azure Functions, Cloudflare Workers) and cannot host a traditional EDR agent. Execution environments are ephemeral and container-isolated; evidence must come from cloud-provider execution logs, function code/config, trigger/event sources, and attached IAM role activity.

Evidence Spans Multiple Clouds and On-Premises

The incident crosses two or more cloud providers (AWS, Azure, GCP) and/or on-premises infrastructure. Each environment has different evidence formats, retention policies, and access patterns. Investigation time is lost to evidence-normalization and timeline-alignment rather than analysis.