Threat intelligence has become a standard line item in security budgets without becoming a standard part of security operations. Organizations pay for premium feeds, license threat intelligence platforms, and subscribe to sector-specific ISACs — then watch their analysts continue to respond to incidents the same way they always have, without integrating any of it. The gap is not a tooling problem. It is a fundamentals problem.
Incident responders are ideally positioned to be both consumers and producers of actionable intelligence. You are closer to the raw evidence than anyone else in the organization. You see attacker behavior firsthand. You collect artifacts that intelligence analysts turn into finished reporting. But that potential is realized only if you understand how the intelligence lifecycle works and where your role sits within it.
This article covers the fundamentals that every incident responder should understand: what threat intelligence actually is at each level, how the intelligence lifecycle applies during an active response, which types of indicators age well and which expire within hours, how to consume feeds without drowning in noise, and how to build tactical intelligence on the fly when no prior reporting exists for the threat you are facing.
What Threat Intelligence Actually Is
Threat intelligence is often conflated with threat data. A list of malicious IP addresses is data. Intelligence is data that has been processed, contextualized, and analyzed to support a specific decision. The distinction matters because raw data without context frequently generates more work than it saves — analysts chasing false positives from stale feeds, SIEM rules triggering on IPs that have been reassigned, and blocked domains that turned out to be shared hosting infrastructure containing both legitimate and malicious sites.
The intelligence community uses a four-tier model that maps well to security operations:
- Strategic intelligence addresses the highest-level questions: which threat actors target our sector, what their motivations are, and how the threat landscape is likely to evolve. Strategic intelligence is consumed by executives and security leadership to justify investment decisions and risk acceptance. It answers questions like "are nation-state actors actively targeting organizations in our industry?" and "what does the ransomware ecosystem look like after recent law enforcement actions?" Responders rarely produce strategic intelligence directly, but their incident findings feed the analytical process that generates it.
- Operational intelligence focuses on specific campaigns, threat actors, and their intended targets. It describes ongoing operations — which groups are currently active, what industries they are hitting, and what initial access vectors they are exploiting right now. Operational intelligence informs decisions about defensive priorities and hunting campaigns. During an active incident, operational intelligence helps you understand whether you are dealing with a targeted attack or opportunistic intrusion, and whether the threat actor has been observed elsewhere recently.
- Tactical intelligence describes tactics, techniques, and procedures (TTPs): how threat actors actually operate, step by step. This is the layer most directly relevant to incident response. Knowing that a particular group habitually uses WMI for lateral movement and stores tooling in C:\Windows\Temp before pivoting tells you exactly where to look and what to look for. Tactical intelligence has a longer shelf life than technical indicators because TTPs are expensive for adversaries to change.
- Technical intelligence consists of specific, machine-readable indicators: file hashes, IP addresses, domain names, URLs, and registry keys associated with known-malicious activity. This is the layer most commonly delivered by commercial feeds and most commonly misunderstood. Technical indicators are immediately actionable for detection and blocking, but they decay rapidly — often within hours for IP addresses and days for domains. Their value is inversely proportional to how much effort the adversary has put into making them disposable.
An effective threat intelligence program operates at all four levels simultaneously. An effective incident responder understands which level of intelligence they are working with and what decisions it can and cannot support.
The Intelligence Lifecycle Applied to Incident Response
The intelligence lifecycle is a continuous process that begins with a question and ends with a decision. It applies whether you are running a full intelligence program or simply trying to understand an artifact you just pulled off a compromised host.
The five phases translate directly into IR practice:
- Requirements. Before collecting anything, define what you need to know. In an IR context, the initial requirements are typically: What actor or campaign is responsible? What was the initial access vector? What tools and techniques are in use? What is the intended outcome? Is this a targeted or opportunistic attack? These questions drive your collection priorities. Without explicit requirements, analysis becomes unfocused and artifacts get collected without a clear purpose.
- Collection. Gather raw data relevant to your requirements. During an incident, this means forensic artifact collection — memory images, log exports, network captures, malware samples — but also external collection: reviewing OSINT sources, querying threat intelligence platforms, and pulling relevant reporting from vendor blogs and sharing communities. The quality of your collection directly limits the quality of your eventual analysis.
- Processing. Raw data requires processing before it is useful. This means parsing log formats, extracting indicators from artifacts, enriching hashes and IPs with threat intelligence lookups, normalizing timestamps, and deconflicting overlapping sources. This is often the most time-consuming phase and the phase most likely to be skipped under pressure. Skipping it produces conclusions built on misunderstood data.
- Analysis. Apply analytical rigor to your processed data. Identify patterns, build timelines, map observed behavior to known frameworks, assess confidence levels, and generate hypotheses. This is where the intelligence work actually happens. Analysis requires structured thinking — distinguishing what you know from what you are inferring, documenting your reasoning, and being explicit about gaps and uncertainty.
- Dissemination. Finished intelligence is only valuable if it reaches the people who need it, in a format they can use, with enough context to act on it. During an incident, this means regular updates to stakeholders at the appropriate level of detail: technical findings for other responders, operational context for security leadership, and strategic implications for executives. After the incident, dissemination means contributing your findings back to sharing communities so that others benefit from what you learned.
The lifecycle is iterative. Analysis generates new questions that feed back into requirements, driving additional collection. During a complex incident, you may cycle through the loop several times before you have a complete picture.
IOC Types and Their Shelf Life
Not all indicators are equally valuable, and understanding why is essential for using them effectively. David Bianco's Pyramid of Pain provides the most useful mental model. The pyramid orders indicator types from easiest for defenders to use (at the base) to most painful for adversaries when those indicators are detected and blocked (at the apex).
From the base up:
- Hash values sit at the base. A file hash is trivially easy for an attacker to change — modifying a single byte of the malware binary produces a completely different hash. Hash-based detection is still worth doing because it catches reuse within a campaign, but a fresh hash block has a very short effective lifespan. Prefer SHA-256 over MD5, which is vulnerable to collision attacks. Hash intelligence is most valuable for correlating samples across an investigation, not for blocking future intrusions.
- IP addresses are slightly more painful to change, but not much. Adversaries rent infrastructure by the hour, use bulletproof hosting, chain through VPNs and proxies, or rotate through a pool of addresses. An IP address associated with active C2 may be abandoned within 24 hours of discovery. IP intelligence is valuable for incident enrichment — confirming that a suspicious connection goes to known-malicious infrastructure — but blocking IPs as a primary defense is largely ineffective against sophisticated actors.
- Domain names require slightly more effort to replace but remain relatively cheap. Domain generation algorithms (DGAs) make static domain blocklists nearly useless against actors who use them. However, domain intelligence is highly valuable for pattern analysis: newly registered domains, lookalike domains mimicking legitimate brands, and domains registered with certain privacy-protected registrars cluster in patterns that are useful for hunting even when specific domains have been rotated.
- Network and host artifacts occupy the middle of the pyramid. URL patterns, HTTP request headers, user-agent strings, registry persistence keys, mutex names, and scheduled task naming conventions require meaningfully more effort to change and are more consistently reused across campaigns. A mutex name like Global\{A4F4C1E3-7A1B-4B2D-9C3F-1D2E4F5A6B7C} seen in one sample that shows up in another six months later tells you something real about the threat actor's practices.
- Tools represent a significant investment. Commercial post-exploitation frameworks like Cobalt Strike and Brute Ratel, or custom implants, take months to develop and are not casually discarded. When you identify the specific toolset an adversary is using, you gain predictive power: you know what capabilities they have, what artifacts to look for, and often what techniques they prefer. Detection based on tool-specific behaviors has substantially longer value than indicator-based detection.
- TTPs sit at the apex. How an actor establishes persistence, moves laterally, escalates privileges, and exfiltrates data represents deeply embedded operational habits that are genuinely costly to change. A threat actor that has been using a specific technique for years does not simply adopt a different approach because one defender detected it. TTP-based detection — behavioral rules that identify technique patterns rather than specific artifacts — is the most durable investment in detection engineering. It is also the most difficult to build, which is why most organizations under-invest in it relative to indicator-based blocking.
When consuming intelligence feeds, evaluate each indicator type against this framework. High-volume technical indicator feeds provide volume that feels like coverage but often delivers minimal durable value. Finished reporting that documents TTPs and behavioral patterns, even if it covers fewer actors, generally provides more actionable intelligence for building long-term detection capability.
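To make the base of the pyramid concrete: hashes earn their keep by correlating samples within an investigation, which starts with computing them consistently. A streaming SHA-256 helper (a minimal sketch; the function name is ours) avoids loading large samples into memory:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large malware samples
    never have to be read into memory all at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing these digests across collected samples is how campaign-internal reuse surfaces, even when the same hash never appears in any external feed.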
Consuming Threat Feeds
The mechanics of threat feed consumption involve three main formats and a constant signal-to-noise challenge.
STIX and TAXII
STIX (Structured Threat Information eXpression) is the standard data format for machine-readable threat intelligence. STIX 2.1 is the current version and represents a significant evolution from STIX 1.x, which was XML-based and notoriously verbose. STIX 2.1 uses JSON and introduces a richer object model: Indicators, Observables, Threat Actors, Attack Patterns, Campaigns, Malware, Tools, Relationships, and Sightings are all distinct object types with typed relationships between them.
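For illustration, a minimal STIX 2.1 Indicator can be assembled as plain JSON. The helper below is a sketch — the domain in the example pattern is a placeholder, and in production the `stix2` Python library's typed classes are the usual choice:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern: str, name: str) -> dict:
    """Build a minimal STIX 2.1 Indicator object as a plain dict
    (required fields only; illustrative, not a full implementation)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX IDs are type--UUID
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,          # STIX patterning language expression
        "pattern_type": "stix",
        "valid_from": now,
    }

# Placeholder domain for illustration only.
ind = make_indicator("[domain-name:value = 'c2.example.com']", "Suspected C2 domain")
print(json.dumps(ind, indent=2))
```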
TAXII (Trusted Automated eXchange of Intelligence Information) is the transport protocol for sharing STIX bundles. A TAXII 2.1 server exposes collections that clients can poll or subscribe to. Most commercial threat intelligence platforms and several open-source platforms expose TAXII endpoints, allowing automated ingestion into SIEM, SOAR, or threat intelligence platform (TIP) workflows.
In practice, most organizations ingest STIX/TAXII feeds through a platform layer (MISP, OpenCTI, or a commercial TIP) that deduplicates, enriches, and normalizes incoming indicators before pushing them to detection tooling. Direct integration without a platform layer tends to produce duplicate detections, stale indicators that are never retired, and no mechanism for tracking where an indicator came from or how confident the source is.
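A minimal poll of a TAXII 2.1 collection looks like the sketch below. The server URL, collection ID, and bearer-token auth are assumptions (many TAXII servers use HTTP Basic instead), and only the envelope parsing is shown as a pure function:

```python
import json
import urllib.request

TAXII_MEDIA_TYPE = "application/taxii+json;version=2.1"

def objects_url(api_root: str, collection_id: str) -> str:
    """Endpoint for pulling objects from a TAXII 2.1 collection."""
    return f"{api_root.rstrip('/')}/collections/{collection_id}/objects/"

def fetch_envelope(url: str, token: str) -> dict:
    """Poll one page of a collection. Network call; the bearer-token
    scheme here is an assumption — check your server's auth."""
    req = urllib.request.Request(url, headers={
        "Accept": TAXII_MEDIA_TYPE,
        "Authorization": f"Bearer {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_patterns(envelope: dict) -> list[str]:
    """Pull STIX patterns out of the indicator objects in a TAXII envelope."""
    return [o["pattern"] for o in envelope.get("objects", [])
            if o.get("type") == "indicator" and "pattern" in o]
```

Even this small sketch shows why the platform layer matters: nothing here deduplicates, scores, or retires what comes back — that work has to live somewhere.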
Open-Source Feeds
Several high-quality open-source feeds exist for organizations without commercial intelligence budgets:
- AlienVault OTX (Open Threat Exchange) aggregates community-contributed indicator data with reasonable attribution and context. Quality varies significantly by contributor, so treat it as a starting point for enrichment rather than a source of ground truth.
- abuse.ch operates several focused feeds: MalwareBazaar for malware samples and hashes, URLhaus for malicious URLs, ThreatFox for IOCs, and Feodo Tracker for botnet C2 servers. These feeds have high precision for their specific domains and are actively maintained.
- CIRCL MISP feeds provide curated MISP-format feeds from the Computer Incident Response Center Luxembourg, including feeds specifically tailored to European organizations and sectors.
- Emerging Threats (Proofpoint) publishes Snort/Suricata rule sets and IP reputation lists for network-level detection. The open ruleset is free; the professional ruleset requires a subscription.
- CISA Alerts and Advisories provide sector-specific threat intelligence from the US government's cybersecurity agency, often with actionable IOCs and MITRE ATT&CK mappings for significant campaigns.
The primary challenge with open-source feeds is signal-to-noise. A feed that pushes thousands of indicators per day creates a triage problem: analysts cannot review every new indicator, automated blocking on unvetted indicators causes false positives and business disruption, and the indicators that matter get lost in the volume. Effective feed consumption requires defined processes for indicator scoring, automated triage on confidence levels, and regular feed hygiene to expire stale indicators.
Managing Signal-to-Noise
Several practices significantly improve the signal-to-noise ratio when consuming threat intelligence at scale:
- Source scoring. Assign confidence scores to your intelligence sources based on their historical accuracy in your environment. A feed that has triggered confirmed detections five times and false positives once gets more weight than a feed with the inverse ratio. Platform tools like MISP and OpenCTI support source-level confidence scoring natively.
- Indicator scoring and decay. Not all indicators from a given source are equally reliable. Implement automated decay policies that reduce indicator confidence scores over time unless resighted. An IP address that has not been sighted in 30 days should be automatically deprioritized or retired rather than continuing to generate alerts indefinitely.
- Relevance filtering. Filter feeds to indicators relevant to your environment. If your organization does not use a particular sector's infrastructure or operate in a particular geography, indicators highly specific to those contexts add noise without value. Sector-specific ISAC memberships provide pre-filtered intelligence that is inherently more relevant than general feeds.
- Relationship analysis. A single IP address means little. That same IP address connected to a domain registered two days ago, associated with a specific threat actor known to target your sector, and observed communicating with a port used by a specific malware family means a great deal. Platforms that support relationship modeling between intelligence objects help analysts see these connections rather than treating each indicator in isolation.
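The decay policy described above reduces to a simple scoring function. This sketch uses an exponential half-life and a hard retirement cutoff; both numbers are illustrative tuning parameters, not standards:

```python
from datetime import datetime

def decayed_score(base_score: float, last_sighted: datetime, now: datetime,
                  half_life_days: float = 15.0, retire_after_days: int = 30) -> float:
    """Exponentially decay an indicator's confidence since its last sighting;
    retire (score 0) anything unseen past the cutoff. A resighting resets
    last_sighted and restores the score."""
    age_days = (now - last_sighted).days
    if age_days >= retire_after_days:
        return 0.0
    return base_score * 0.5 ** (age_days / half_life_days)
```

Wiring this into a nightly job that demotes or expires low-scoring indicators keeps the alert pipeline from filling with infrastructure the adversary abandoned weeks ago.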
Building Tactical Intelligence During Incidents
When you arrive at an active incident and no prior reporting covers the threat actor or campaign you are facing, you must build tactical intelligence from scratch. This is the core analytical skill for incident responders: taking raw artifacts and converting them into actionable context that guides the investigation.
The workflow follows a consistent pattern: artifact collection, enrichment, contextualization, and action.
Artifact to Enrichment
Every artifact collected during response is a potential intelligence source. A file hash, an IP address, a domain name, a registry key, a command-line argument, or a behavioral pattern observed in endpoint telemetry can be enriched against external sources to determine whether it is known malicious, associated with a specific campaign, or entirely novel.
Enrichment sources for common artifact types:
- File hashes — VirusTotal, MalwareBazaar, Hybrid Analysis, and internal sandboxing platforms. A submission to VirusTotal reveals detection rates across dozens of engines and, critically, behavioral sandboxing results that describe what the file actually does when executed.
- IP addresses and domains — Shodan for infrastructure fingerprinting (what services are exposed, what software versions, what certificates), VirusTotal for passive DNS and URL analysis, Censys for certificate intelligence, RiskIQ/Microsoft Defender Threat Intelligence for passive DNS history and host relationships, and your commercial TIP for actor attribution.
- Certificate data — Certificates associated with C2 infrastructure often share characteristics: self-signed, using the same organizational unit fields across multiple certificates, or registered through the same certificate authority with the same fake organizational details. Certificate pivoting through crt.sh or Shodan frequently expands a single known-malicious domain into a cluster of related infrastructure.
- WHOIS and registration data — Registration patterns, registrar selection, and registration timing relative to the incident can help identify infrastructure purpose and actor sophistication. Infrastructure registered within 48 hours of first use is a different risk profile than domains aged several months before activation.
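As a sketch of hash enrichment, the VirusTotal v3 file endpoint returns per-engine verdicts that can be reduced to a detection ratio. The lookup requires your own API key and a network call; the summarizer is a pure function:

```python
import json
import urllib.request

VT_FILES = "https://www.virustotal.com/api/v3/files/"

def lookup_hash(sha256: str, api_key: str) -> dict:
    """Query VirusTotal v3 for a file report (network call; needs your key)."""
    req = urllib.request.Request(VT_FILES + sha256, headers={"x-apikey": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def detection_summary(report: dict) -> tuple[int, int]:
    """Reduce a VT file report to (engines flagging malicious, engines total)."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0), sum(stats.values())
```

A 55/70 result and a 2/70 result call for very different next steps, which is why the ratio, not the raw report, is what belongs in the incident timeline.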
Context to Action
Enrichment produces context. Context informs action. The analytical step that converts context into action requires asking: given what we now know about this artifact, what does it tell us about attacker capability, intent, and likely next steps?
A practical example: you identify a process injecting shellcode into svchost.exe on a compromised endpoint. Enrichment reveals the shellcode is a Cobalt Strike beacon configured to communicate with a domain registered six days ago, hosted on a VPS provider commonly used for offensive infrastructure, with a TLS certificate using default Cobalt Strike staging parameters. This context tells you: the attacker is using a commercial post-exploitation framework with short-lived, purpose-built infrastructure. The beacon is likely the primary C2 channel. Cobalt Strike is capable of lateral movement, credential harvesting, and data staging. You should immediately look for lateral movement indicators across the environment, check for additional compromised hosts communicating with the same infrastructure, and prioritize containing the initial compromise before the attacker pivots further.
That is tactical intelligence driving response action. It is not a feed lookup — it is analysis of collected artifacts, enriched with external context, interpreted through knowledge of attacker technique and capability.
Connecting Intelligence to MITRE ATT&CK
MITRE ATT&CK provides the standard framework for documenting and communicating TTPs. For incident responders, it serves two primary functions: a vocabulary for describing observed attacker behavior precisely, and a reference for predicting what an attacker might do next based on what they have already done.
Mapping Observed TTPs
As you collect artifacts and analyze behavior during an incident, map each observed technique to its ATT&CK identifier. A PowerShell command with Base64-encoded arguments that downloads and executes a payload maps to T1059.001 (Command and Scripting Interpreter: PowerShell) and T1140 (Deobfuscate/Decode Files or Information). The scheduled task created for persistence maps to T1053.005. The LSASS access for credential theft maps to T1003.001.
Building this mapping as you go produces several benefits:
- Cross-environment correlation. Shared ATT&CK terminology lets you correlate your findings against the ATT&CK knowledge base and community threat reporting. If you map T1059.001 and T1003.001, you can filter ATT&CK Groups to find which actors consistently use both techniques, narrowing the attribution space.
- Detection gap identification. Mapping observed TTPs against your existing detection rules reveals which techniques you have coverage for and which you do not. If the attacker used WMI-based lateral movement (T1047) and you have no detection for it, that is an immediate gap to address before the attacker pivots again.
- Stakeholder communication. ATT&CK technique identifiers give non-technical stakeholders a reference they can look up independently. Reporting that an attacker used "T1486: Data Encrypted for Impact" communicates both the action and the context without requiring the reader to interpret raw technical artifacts.
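A minimal version of this mapping and gap check, with illustrative observations and coverage, might look like:

```python
# Observed behaviors from the incident, mapped to ATT&CK technique IDs.
observed = {
    "T1059.001": "Base64-encoded PowerShell download cradle",
    "T1140": "Payload decoded on host before execution",
    "T1053.005": "Scheduled task created for persistence",
    "T1003.001": "LSASS memory access for credential theft",
}

# Technique IDs your existing detection rules cover (illustrative set).
detection_coverage = {"T1059.001", "T1053.005"}

# Techniques the attacker demonstrably used that nothing would detect next time.
gaps = sorted(set(observed) - detection_coverage)
print(gaps)
```

Kept as a living structure during the incident, the `observed` map doubles as the skeleton of the final report's technique appendix.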
Predicting Next Steps
ATT&CK is organized by tactic phase: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact. Knowing where an attacker is in this progression helps predict where they are going.
An attacker who has achieved initial access, established persistence, and harvested credentials has completed the prerequisites for lateral movement. If you observe credential access techniques during an incident and have not yet seen lateral movement, you should treat lateral movement as imminent and hunt proactively rather than waiting for it to appear in detections. ATT&CK campaign profiles for known threat actors — documenting which techniques each group uses at each tactic phase — make this prediction more specific when attribution is available.
Sharing Intelligence
Intelligence sharing is the mechanism by which an incident investigation at one organization improves the defensive posture of every other organization in the community. It is also one of the most systematically under-practiced capabilities in security operations, despite being nearly universally endorsed in principle.
ISACs and Sharing Communities
Information Sharing and Analysis Centers (ISACs) are sector-specific communities that facilitate threat intelligence sharing among member organizations. Major ISACs cover financial services (FS-ISAC), healthcare (H-ISAC), energy (E-ISAC), automotive (Auto-ISAC), and most other critical infrastructure sectors. Membership provides access to curated sector-specific intelligence, peer communities for comparison and validation, and structured channels for rapid notification of active threats targeting sector peers.
Beyond formal ISACs, informal communities built around specific tools, frameworks, or interests — the MISP user community, threat hunting Slack communities, regional FIRST teams — often provide faster and more operationally relevant sharing than formal channels. Building relationships in these communities before an incident significantly accelerates your ability to get peer input during one.
The Traffic Light Protocol
The Traffic Light Protocol (TLP) provides a simple, universally understood framework for indicating how broadly intelligence may be shared. Understanding it is a prerequisite for participating in any sharing community:
- TLP:RED — Not for disclosure. Restricted to specific recipients only. Use for information that could endanger sources, ongoing operations, or specific individuals if shared more broadly. During an active incident, initial tactical intelligence often carries RED markings until the situation is contained.
- TLP:AMBER — Limited disclosure, restricted to the recipient's organization and their clients on a need-to-know basis. AMBER+STRICT limits further sharing to the recipient's organization only. Most incident-specific intelligence starts at AMBER until the organization decides to share more broadly.
- TLP:GREEN — Limited disclosure, restricted to the community. Can be shared with peer organizations and community members but not made publicly available. Most ISAC sharing operates at GREEN.
- TLP:CLEAR — Disclosure is not limited. Can be shared publicly. Finished reports, post-incident writeups, and public threat intelligence advisories typically carry CLEAR markings.
Respecting TLP markings is both an ethical obligation and a practical necessity. Organizations that share intelligence with you at TLP:AMBER and later discover it was distributed more broadly will stop sharing with you. Communities run entirely on trust, and violating TLP is the fastest way to exclude yourself from them.
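A simple sharing gate can encode the TLP 2.0 markings as an ordered audience scale. This sketch (the audience labels are our own shorthand) is a coarse approximation of the official FIRST definitions, not a substitute for reading them:

```python
# Widest audience each TLP 2.0 marking permits (shorthand labels, ours).
TLP_AUDIENCE = {
    "TLP:RED": "named-recipients",
    "TLP:AMBER+STRICT": "own-organization",
    "TLP:AMBER": "organization-and-clients",
    "TLP:GREEN": "community",
    "TLP:CLEAR": "public",
}

# Audiences ordered from most to least restricted.
_ORDER = ["named-recipients", "own-organization",
          "organization-and-clients", "community", "public"]

def may_share(marking: str, audience: str) -> bool:
    """True if a report with this TLP marking may reach the given audience."""
    allowed = TLP_AUDIENCE[marking.upper()]
    return _ORDER.index(audience) <= _ORDER.index(allowed)
```

A check like this belongs in any automated dissemination path — a script that forwards TIP entries to a community channel should refuse AMBER material by construction, not by analyst vigilance.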
What to Share
Organizations often hesitate to share intelligence because they are concerned about exposing the fact that they were compromised or revealing operational details they consider sensitive. This concern is legitimate and should be addressed through TLP markings rather than non-sharing. Most sharing communities are well-practiced at handling sensitive intelligence with appropriate discretion.
Useful intelligence contributions from an incident include: novel malware samples (shared with hash and metadata, not necessarily with attribution to your organization), C2 infrastructure indicators with context about how they were identified, TTPs observed that are not in existing public reporting, and timeline information that establishes when specific campaign activity occurred. Even partial information is useful if it is accurate and well-contextualized.
Practical Tools
Several open-source platforms support threat intelligence operations at different scales and use cases. Understanding the purpose and appropriate context for each prevents the common mistake of deploying a platform that does not match the actual workflow.
- MISP (Malware Information Sharing Platform) is the most widely deployed open-source threat intelligence platform. It provides a structured database for storing and sharing intelligence objects, a flexible attribute model that supports any indicator type, built-in STIX/TAXII feeds and correlation engine, and a large community of pre-configured feeds. MISP excels at high-volume indicator sharing and correlation. Its web interface is functional rather than elegant, and its data model requires some investment to understand. Most ISACs and many CERTs run MISP instances. MISP is well-suited for organizations that need to share intelligence with peer communities and want a community-supported platform.
- OpenCTI (Open Cyber Threat Intelligence) is a more recent platform that takes a graph-based approach to intelligence modeling, built natively on STIX 2.1 and heavily oriented toward relationship visualization. OpenCTI excels at representing complex relationships between threat actors, campaigns, malware, infrastructure, and techniques. Its interface is significantly more polished than MISP's, and its native MITRE ATT&CK integration makes TTP tracking more intuitive. OpenCTI is better suited for organizations that want rich analytical workflows and graph-based investigation, though it requires more infrastructure to operate at scale.
- TheHive is an incident response platform with threat intelligence integration rather than a dedicated TIP. Its case management and alert triage features integrate with Cortex (its analysis orchestration companion) to enrich artifacts automatically against external sources during incident response. TheHive is most valuable when threat intelligence needs to be deeply embedded in the response workflow rather than maintained as a separate analytical function. The artifact-to-enrichment workflow described earlier in this article maps naturally to a TheHive/Cortex deployment.
- Cortex (used alongside TheHive or independently) is an analysis orchestration engine that connects to external services — VirusTotal, Shodan, AlienVault OTX, MISP instances, and dozens of other enrichment sources — and allows analysts to submit observables for automated enrichment from a single interface. Building a Cortex analyzer library that covers your primary enrichment sources eliminates the repetitive manual lookups that consume significant analyst time during investigation.
Tool selection should follow workflow requirements, not marketing. Many organizations are better served by a simpler deployment — a single well-configured MISP instance or a TheHive/Cortex stack — than by attempting to operate a full enterprise-grade TIP with insufficient staffing to maintain it. A platform that is actually used provides far more value than a sophisticated one that is theoretically available.
Making Intelligence Operational
The gap between "we have threat intelligence" and "threat intelligence improves our response outcomes" is an operational one. Bridging it requires deliberate integration into existing workflows rather than standing up a separate intelligence function that operates in isolation from the responders who need to benefit from it.
Practical steps that close the gap:
- Embed intelligence in triage. When a new alert fires, the first enrichment step should be automatic: pull context from your TIP for every artifact in the alert before an analyst reviews it. A SOAR playbook that queries MISP, VirusTotal, and your commercial feed simultaneously and presents the results with the alert turns a cold-start triage into an informed one.
- Create intelligence requirements for every incident. At the start of every significant incident, explicitly document what you need to know and task collection accordingly. This prevents the common pattern of collecting artifacts exhaustively without a clear analytical direction.
- Assign a dedicated intelligence role during major incidents. In large-scale incidents, separate the collection and investigation function from the intelligence analysis function. Responders collecting artifacts often do not have the bandwidth to simultaneously enrich, analyze, and disseminate intelligence findings. A dedicated analyst processing incoming artifacts and producing regular intelligence updates for the rest of the team significantly improves response speed and coordination.
- Feed incident findings back into detection. Every novel technique, tool, or behavior observed during an incident is an opportunity to build new detection. The intelligence lifecycle closes when findings from response generate new detection rules, hunting hypotheses, and TIP entries that protect against the next incident. Without this feedback loop, organizations learn from incidents in after-action review (AAR) meetings but not in their detection tooling.
- Measure intelligence utility. Track how often intelligence informed a response decision, how many confirmed detections resulted from specific feeds, and how frequently indicators expired before ever being sighted in your environment. These metrics reveal which intelligence sources are actually providing value and which are generating noise without actionable output.
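The source scoring and utility measurement described above reduce to a few counters per feed; a minimal sketch (field names are ours):

```python
from dataclasses import dataclass

@dataclass
class FeedStats:
    """Per-feed counters for measuring intelligence utility."""
    confirmed: int = 0        # feed indicators confirmed as true-positive detections
    false_positives: int = 0  # feed indicators that triggered benign alerts
    never_sighted: int = 0    # indicators that expired without ever matching

    @property
    def precision(self) -> float:
        """Share of alerting indicators that were real; 0.0 if the feed never alerted."""
        total = self.confirmed + self.false_positives
        return self.confirmed / total if total else 0.0
```

Ranking feeds quarterly by `precision` and `never_sighted` makes the renewal conversation factual: a feed whose indicators overwhelmingly expire unsighted is a cost, however large its volume.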
Threat intelligence done well transforms incident response from a reactive discipline into a proactive one. The incidents you handle better because you had prior knowledge of the adversary's techniques, the campaigns you disrupt because a partner shared early-stage indicators, and the detections you build from your own investigations before the next wave of intrusions — these are the compounding returns on a mature intelligence practice.
The tools are available, the frameworks exist, and the sharing communities are established. The investment is in building the analytical habits and operational workflows that connect intelligence to the decisions that actually matter.
For coverage of the ATT&CK framework in greater depth, including how to use it for campaign analysis and detection engineering, see the MITRE ATT&CK for Incident Responders guide. For the detection-side perspective on operationalizing intelligence-derived behavioral rules, the Detection Engineering Program article covers that workflow end to end.