Threat intelligence has become a standard line item in security budgets without becoming a standard part of security operations. Organizations pay for premium feeds, license threat intelligence platforms, and subscribe to sector-specific ISACs — then watch their analysts continue to respond to incidents the same way they always have, without integrating any of it. The gap is not a tooling problem. It is a fundamentals problem.

Incident responders are ideally positioned to be both consumers and producers of actionable intelligence. You are closer to the raw evidence than anyone else in the organization. You see attacker behavior firsthand. You collect artifacts that intelligence analysts turn into finished reporting. But only if you understand how the intelligence lifecycle works and where your role sits within it.

This article covers the fundamentals that every incident responder should understand: what threat intelligence actually is at each level, how the intelligence lifecycle applies during an active response, which types of indicators age well and which expire within hours, how to consume feeds without drowning in noise, and how to build tactical intelligence on the fly when no prior reporting exists for the threat you are facing.

What Threat Intelligence Actually Is

Threat intelligence is often conflated with threat data. A list of malicious IP addresses is data. Intelligence is data that has been processed, contextualized, and analyzed to support a specific decision. The distinction matters because raw data without context frequently generates more work than it saves — analysts chasing false positives from stale feeds, SIEM rules triggering on IPs that have been reassigned, and blocked domains that turned out to be shared hosting infrastructure containing both legitimate and malicious sites.

The intelligence community uses a four-tier model that maps well to security operations:

- Strategic intelligence addresses long-term threat trends and their business implications. It supports executive decisions about risk appetite, budget, and program direction.
- Operational intelligence covers specific campaigns and adversary groups: who is targeting your sector, with what motivation and capability. It supports prioritization of defensive effort.
- Tactical intelligence describes adversary tactics, techniques, and procedures (TTPs): how attackers actually operate once inside an environment. It supports detection engineering and threat hunting.
- Technical intelligence consists of atomic indicators such as file hashes, IP addresses, and domains. It supports automated blocking, alerting, and enrichment.

An effective threat intelligence program operates at all four levels simultaneously. An effective incident responder understands which level of intelligence they are working with and what decisions it can and cannot support.

The Intelligence Lifecycle Applied to Incident Response

The intelligence lifecycle is a continuous process that begins with a question and ends with a decision. It applies whether you are running a full intelligence program or simply trying to understand an artifact you just pulled off a compromised host.

The five phases translate directly into IR practice:

- Planning and direction: define the questions the investigation must answer, such as scope of compromise, attacker objective, and data at risk.
- Collection: gather the evidence that can answer them, including disk and memory images, endpoint telemetry, logs, network captures, and external reporting.
- Processing: parse, normalize, and deduplicate raw evidence into an analyzable form.
- Analysis: correlate and interpret the processed evidence to answer the original questions.
- Dissemination: deliver findings to the people who act on them (containment teams, leadership, detection engineers) in a form they can use.

The lifecycle is iterative. Analysis generates new questions that feed back into requirements, driving additional collection. During a complex incident, you may cycle through the loop several times before you have a complete picture.

IOC Types and Their Shelf Life

Not all indicators are equally valuable, and understanding why is essential for using them effectively. David Bianco's Pyramid of Pain provides the most useful mental model. The pyramid orders indicator types from easiest for defenders to use (at the base) to most painful for adversaries when those indicators are detected and blocked (at the apex).

From the base up:

- Hash values: trivial for the adversary to change; any recompilation or byte-level modification produces a new hash.
- IP addresses: easy to rotate; attackers cycle through VPS and proxy infrastructure within days.
- Domain names: slightly more costly, since registration and aging take some effort, but still readily replaced.
- Network and host artifacts: annoying to change; these are the distinctive byproducts of attacker tooling, such as URI patterns, registry keys, and file paths.
- Tools: challenging to replace; abandoning a tool means sourcing or developing a new one and retraining operators.
- TTPs: toughest of all; changing core behaviors forces the adversary to relearn their craft.

When consuming intelligence feeds, evaluate each indicator type against this framework. High-volume technical indicator feeds provide volume that feels like coverage but often delivers minimal durable value. Finished reporting that documents TTPs and behavioral patterns, even if it covers fewer actors, generally provides more actionable intelligence for building long-term detection capability.
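This evaluation can be made concrete in a short sketch. The level ordering follows Bianco's pyramid; the shelf-life strings are rough heuristics for illustration, not authoritative figures:

```python
# Illustrative ranking of indicator types by Pyramid of Pain level,
# from trivial-to-change (hashes) to hardest-to-change (TTPs).
# Shelf-life values are rough heuristics, not authoritative figures.
PYRAMID = {
    "hash":     (1, "hours to days"),
    "ip":       (2, "days to weeks"),
    "domain":   (3, "weeks"),
    "artifact": (4, "months"),
    "tool":     (5, "months to years"),
    "ttp":      (6, "years"),
}

def prioritize(indicators):
    """Sort (type, value) pairs so the most durable, highest-pain
    indicator types are handled first."""
    return sorted(indicators, key=lambda i: PYRAMID[i[0]][0], reverse=True)
```

Sorting a mixed batch this way puts behavioral indicators at the top of the analyst's queue while still retaining the atomic indicators for automated matching.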

Consuming Threat Feeds

The mechanics of threat feed consumption involve three main formats and a constant signal-to-noise challenge.

STIX and TAXII

STIX (Structured Threat Information eXpression) is the standard data format for machine-readable threat intelligence. STIX 2.1 is the current version and represents a significant evolution from STIX 1.x, which was XML-based and notoriously verbose. STIX 2.1 uses JSON and introduces a richer object model: Indicator, Observed Data, Threat Actor, Attack Pattern, Campaign, Malware, and Tool are distinct domain objects, connected by Relationship and Sighting objects that make the links between them explicit and typed.
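As a minimal illustration, a STIX 2.1 Indicator can be assembled with nothing but the standard library. Production code would normally use the official `stix2` Python library, which validates these fields; the domain in the pattern is a placeholder:

```python
import json
import uuid
from datetime import datetime, timezone

# Minimal sketch of a STIX 2.1 Indicator object. The required common
# properties are type, spec_version, id, created, and modified; an
# Indicator additionally requires pattern, pattern_type, and valid_from.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected C2 domain",
    # Placeholder pattern; a real one carries the actual observable value.
    "pattern": "[domain-name:value = 'example-c2.invalid']",
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```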

TAXII (Trusted Automated eXchange of Intelligence Information) is the transport protocol for sharing STIX bundles. A TAXII 2.1 server exposes collections that clients can poll or subscribe to. Most commercial threat intelligence platforms and several open-source platforms expose TAXII endpoints, allowing automated ingestion into SIEM, SOAR, or threat intelligence platform (TIP) workflows.
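On the client side, polling a collection returns a TAXII "envelope" of STIX objects. The sketch below uses a canned envelope in place of a live server (the object IDs and pattern are invented for illustration); a real client would GET the collection's objects endpoint with an `Accept: application/taxii+json;version=2.1` header and keep following the pagination pointer while `more` is true:

```python
# Canned TAXII 2.1 envelope standing in for a live server response.
# Object IDs and the pattern value are invented for illustration.
envelope = {
    "more": False,
    "objects": [
        {"type": "indicator", "id": "indicator--00000000-0000-4000-8000-000000000001",
         "pattern_type": "stix",
         "pattern": "[ipv4-addr:value = '203.0.113.7']"},
        {"type": "malware", "id": "malware--00000000-0000-4000-8000-000000000002",
         "name": "ExampleLoader"},
    ],
}

def extract_patterns(env):
    """Pull detection patterns out of Indicator objects, ignoring
    the other object types in the envelope."""
    return [o["pattern"] for o in env["objects"] if o["type"] == "indicator"]
```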

In practice, most organizations ingest STIX/TAXII feeds through a platform layer (MISP, OpenCTI, or a commercial TIP) that deduplicates, enriches, and normalizes incoming indicators before pushing them to detection tooling. Direct integration without a platform layer tends to produce duplicate detections, stale indicators that are never retired, and no mechanism for tracking where an indicator came from or how confident the source is.

Open-Source Feeds

Several high-quality open-source feeds exist for organizations without commercial intelligence budgets:

- The abuse.ch projects: URLhaus (malicious URLs), MalwareBazaar (malware samples and hashes), Feodo Tracker (botnet C2 infrastructure), and ThreatFox (community-submitted IOCs).
- AlienVault OTX: community-contributed "pulses" covering a broad range of campaigns and actors.
- CISA Automated Indicator Sharing (AIS): machine-readable indicators distributed over TAXII at no cost.
- Spamhaus DROP: netblocks controlled by known malicious operators, suitable for network-level blocking.
- Emerging Threats Open: free Suricata and Snort rulesets derived from observed malicious activity.

The primary challenge with open-source feeds is signal-to-noise. A feed that pushes thousands of indicators per day creates a triage problem: analysts cannot review every new indicator, automated blocking on unvetted indicators causes false positives and business disruption, and the indicators that matter get lost in the volume. Effective feed consumption requires defined processes for indicator scoring, automated triage on confidence levels, and regular feed hygiene to expire stale indicators.

Managing Signal-to-Noise

Several practices significantly improve the signal-to-noise ratio when consuming threat intelligence at scale:

- Score every indicator on source confidence, age, and corroboration across feeds, and alert only above a defined threshold.
- Expire indicators automatically, with aggressive timeouts for hashes and IP addresses, which lose value fastest.
- Allowlist known-good infrastructure (CDNs, cloud provider ranges, shared hosting) before enabling any automated blocking.
- Track provenance so every indicator can be traced back to its source, its first-seen date, and the confidence assigned at ingestion.
- Alert on high-confidence indicators; block only those that an analyst has vetted.
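A hypothetical scoring function illustrates the decay-and-threshold approach: source confidence decays with indicator age, using type-dependent half-lives that mirror the Pyramid of Pain. The half-life and threshold values here are illustrative, not recommended settings:

```python
from datetime import datetime, timedelta, timezone

# Illustrative half-lives: short-lived technical indicators decay fast,
# behavioral indicators slowly. Values are for demonstration only.
HALF_LIFE_DAYS = {"hash": 2, "ip": 7, "domain": 30, "ttp": 365}

def score(indicator_type, source_confidence, first_seen, now=None):
    """Decay a 0.0-1.0 source confidence by the indicator's age."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - first_seen).days
    half_life = HALF_LIFE_DAYS.get(indicator_type, 7)
    return source_confidence * 0.5 ** (age_days / half_life)

def should_alert(s, threshold=0.5):
    """Gate alerting on the decayed score rather than raw feed membership."""
    return s >= threshold
```

A week-old IP indicator from a fully trusted source scores 0.5 here and sits exactly at the alerting boundary, while a week-old TTP observation has barely decayed at all.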

Building Tactical Intelligence During Incidents

When you arrive at an active incident and no prior reporting covers the threat actor or campaign you are facing, you must build tactical intelligence from scratch. This is the core analytical skill for incident responders: taking raw artifacts and converting them into actionable context that guides the investigation.

The workflow follows a consistent pattern: artifact collection, enrichment, contextualization, and action.

Artifact to Enrichment

Every artifact collected during response is a potential intelligence source. A file hash, an IP address, a domain name, a registry key, a command-line argument, or a behavioral pattern observed in endpoint telemetry can be enriched against external sources to determine whether it is known malicious, associated with a specific campaign, or entirely novel.

Enrichment sources for common artifact types:

- File hashes: multi-engine reputation services such as VirusTotal, sample repositories such as MalwareBazaar, and internal sandbox detonation.
- IP addresses: passive DNS, abuse reputation databases, ASN and WHOIS ownership, and internet-wide scan data from Shodan or Censys.
- Domains: WHOIS registration age, passive DNS resolution history, and certificate transparency logs.
- URLs: detonation and rendering services such as urlscan.io.
- Behavioral patterns: ATT&CK technique mapping and vendor or community reporting on known tooling.
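Routing artifacts to the right enrichment category can be sketched as follows. The classifier is deliberately simple, and the source lists name kinds of services rather than specific API integrations:

```python
import re

def classify(artifact: str) -> str:
    """Crude artifact-type classifier: hash, IP, domain, or unknown."""
    # MD5 / SHA-1 / SHA-256 are 32, 40, or 64 hex characters.
    if re.fullmatch(r"[0-9a-fA-F]{32}|[0-9a-fA-F]{40}|[0-9a-fA-F]{64}", artifact):
        return "hash"
    # IPv4 check must run before the domain check, since dotted quads
    # would otherwise match the domain pattern.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", artifact):
        return "ip"
    if re.fullmatch(r"[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", artifact):
        return "domain"
    return "unknown"

# Categories of enrichment, not specific API integrations.
ENRICHMENT_SOURCES = {
    "hash":   ["multi-engine reputation", "sample repositories", "sandbox detonation"],
    "ip":     ["passive DNS", "abuse reputation", "ASN/WHOIS ownership"],
    "domain": ["WHOIS registration age", "passive DNS history", "certificate transparency"],
}
```

Anything classified as `unknown` (command lines, registry keys, behavioral patterns) falls through to manual analysis rather than automated lookup.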

Context to Action

Enrichment produces context. Context informs action. The analytical step that converts context into action requires asking: given what we now know about this artifact, what does it tell us about attacker capability, intent, and likely next steps?

A practical example: you identify a process injecting shellcode into svchost.exe on a compromised endpoint. Enrichment reveals the shellcode is a Cobalt Strike beacon configured to communicate with a domain registered six days ago, hosted on a VPS provider commonly used for offensive infrastructure, with a TLS certificate using default Cobalt Strike staging parameters. This context tells you: the attacker is using a commercial post-exploitation framework with short-lived, purpose-built infrastructure. The beacon is likely the primary C2 channel. Cobalt Strike is capable of lateral movement, credential harvesting, and data staging. You should immediately look for lateral movement indicators across the environment, check for additional compromised hosts communicating with the same infrastructure, and prioritize containing the initial compromise before the attacker pivots further.

That is tactical intelligence driving response action. It is not a feed lookup — it is analysis of collected artifacts, enriched with external context, interpreted through knowledge of attacker technique and capability.

Connecting Intelligence to MITRE ATT&CK

MITRE ATT&CK provides the standard framework for documenting and communicating TTPs. For incident responders, it serves two primary functions: a vocabulary for describing observed attacker behavior precisely, and a reference for predicting what an attacker might do next based on what they have already done.

Mapping Observed TTPs

As you collect artifacts and analyze behavior during an incident, map each observed technique to its ATT&CK identifier. A PowerShell command with Base64-encoded arguments that downloads and executes a payload maps to T1059.001 (Command and Scripting Interpreter: PowerShell) and T1140 (Deobfuscate/Decode Files or Information). The scheduled task created for persistence maps to T1053.005. The LSASS access for credential theft maps to T1003.001.

Building this mapping as you go produces several benefits:

- A precise, shared vocabulary for incident reports, handoffs, and peer comparison.
- A basis for comparing the observed technique set against documented actor profiles when forming attribution hypotheses.
- A coverage map showing which observed techniques your existing detections missed, which feeds directly into detection engineering.
- A structured input for predicting the attacker's likely next steps.
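A minimal sketch of recording mappings during an investigation: the technique IDs are the real ATT&CK identifiers from the example above, while the record structure itself is a hypothetical convention:

```python
# Running log of observed techniques; in practice this would live in
# a case-management system rather than a module-level list.
observed = []

def record(technique_id, tactic, evidence):
    observed.append({"technique": technique_id, "tactic": tactic,
                     "evidence": evidence})

record("T1059.001", "execution", "Base64-encoded PowerShell downloader")
record("T1053.005", "persistence", "scheduled task created for persistence")
record("T1003.001", "credential-access", "LSASS access by non-system process")

def tactics_covered():
    """Which tactic phases the attacker has demonstrably reached."""
    return sorted({o["tactic"] for o in observed})
```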

Predicting Next Steps

ATT&CK is organized by tactic phase: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact. Knowing where an attacker is in this progression helps predict where they are going.

An attacker who has achieved initial access, established persistence, and harvested credentials has completed the prerequisites for lateral movement. If you observe credential access techniques during an incident and have not yet seen lateral movement, you should treat lateral movement as imminent and hunt proactively rather than waiting for it to appear in detections. ATT&CK campaign profiles for known threat actors — documenting which techniques each group uses at each tactic phase — make this prediction more specific when attribution is available.
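The progression logic can be sketched from the ordered tactic list. Treating the next unobserved tactic as the hunting priority is a simplification, since real attackers skip and revisit phases, but it captures the reasoning above:

```python
# ATT&CK enterprise tactics in kill-chain order.
TACTICS = [
    "reconnaissance", "resource-development", "initial-access", "execution",
    "persistence", "privilege-escalation", "defense-evasion",
    "credential-access", "discovery", "lateral-movement", "collection",
    "command-and-control", "exfiltration", "impact",
]

def hunt_next(observed_tactics):
    """Return the earliest tactic not yet observed after the attacker's
    furthest confirmed phase, as a proactive hunting priority."""
    furthest = max(TACTICS.index(t) for t in observed_tactics)
    for t in TACTICS[furthest + 1:]:
        if t not in observed_tactics:
            return t
    return None
```

An attacker seen at initial access, persistence, and credential access yields discovery and lateral movement as the next phases to hunt for.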

Sharing Intelligence

Intelligence sharing is the mechanism by which an incident investigation at one organization improves the defensive posture of every other organization in the community. It is also one of the most systematically under-practiced capabilities in security operations, despite being nearly universally endorsed in principle.

ISACs and Sharing Communities

Information Sharing and Analysis Centers (ISACs) are sector-specific communities that facilitate threat intelligence sharing among member organizations. Major ISACs cover financial services (FS-ISAC), healthcare (H-ISAC), energy (E-ISAC), automotive (Auto-ISAC), and most other critical infrastructure sectors. Membership provides access to curated sector-specific intelligence, peer communities for comparison and validation, and structured channels for rapid notification of active threats targeting sector peers.

Beyond formal ISACs, informal communities built around specific tools, frameworks, or interests — the MISP user community, threat hunting Slack communities, regional FIRST teams — often provide faster and more operationally relevant sharing than formal channels. Building relationships in these communities before an incident significantly accelerates your ability to get peer input during one.

TLP Protocol

The Traffic Light Protocol (TLP) provides a simple, universally understood framework for indicating how broadly intelligence may be shared. Understanding it is a prerequisite for participating in any sharing community. The current TLP 2.0 markings are:

- TLP:RED: for the eyes of named recipients only; no further disclosure.
- TLP:AMBER+STRICT: may be shared within the recipient's organization only.
- TLP:AMBER: may be shared within the recipient's organization and with its clients, on a need-to-know basis.
- TLP:GREEN: may be shared within the wider community, but not via publicly accessible channels.
- TLP:CLEAR: may be shared without restriction.

Respecting TLP markings is both an ethical obligation and a practical necessity. Organizations that share intelligence with you at TLP:AMBER and later discover it was distributed more broadly will stop sharing with you. Communities run entirely on trust, and violating TLP is the fastest way to exclude yourself from them.
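A pre-share gate over TLP 2.0 markings might look like the following sketch; the destination categories are a hypothetical internal convention layered over the official FIRST definitions:

```python
# TLP 2.0 labels mapped to their permitted audiences (FIRST definitions,
# paraphrased). The destination categories below are a hypothetical
# internal convention, not part of the standard.
TLP_AUDIENCE = {
    "TLP:RED": "named recipients only",
    "TLP:AMBER+STRICT": "recipient organization only",
    "TLP:AMBER": "recipient organization and its clients, need-to-know",
    "TLP:GREEN": "the sharing community, not public channels",
    "TLP:CLEAR": "no restriction",
}

def may_forward(marking, destination):
    """Coarse gate: only TLP:CLEAR leaves the trust boundary entirely;
    TLP:GREEN stays within the community."""
    if destination == "public":
        return marking == "TLP:CLEAR"
    if destination == "community":
        return marking in ("TLP:GREEN", "TLP:CLEAR")
    return True  # internal distribution; still subject to need-to-know
```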

What to Share

Organizations often hesitate to share intelligence because they are concerned about exposing the fact that they were compromised or revealing operational details they consider sensitive. This concern is legitimate and should be addressed through TLP markings rather than non-sharing. Most sharing communities are well-practiced at handling sensitive intelligence with appropriate discretion.

Useful intelligence contributions from an incident include: novel malware samples (shared with hash and metadata, not necessarily with attribution to your organization), C2 infrastructure indicators with context about how they were identified, TTPs observed that are not in existing public reporting, and timeline information that establishes when specific campaign activity occurred. Even partial information is useful if it is accurate and well-contextualized.

Practical Tools

Several open-source platforms support threat intelligence operations at different scales and use cases. Understanding the purpose and appropriate context for each prevents the common mistake of deploying a platform that does not match the actual workflow:

- MISP: the de facto standard open-source platform for storing, correlating, and sharing indicators, with broad feed and community support.
- OpenCTI: a STIX 2.1-native knowledge platform for modeling threat actors, campaigns, and their relationships alongside raw indicators.
- TheHive with Cortex: case management for incident response, with Cortex analyzers providing automated observable enrichment against external services.

Tool selection should follow workflow requirements, not marketing. Many organizations are better served by a simpler deployment — a single well-configured MISP instance or a TheHive/Cortex stack — than by attempting to operate a full enterprise-grade TIP with insufficient staffing to maintain it. A platform that is actually used provides far more value than a sophisticated one that is theoretically available.

Making Intelligence Operational

The gap between "we have threat intelligence" and "threat intelligence improves our response outcomes" is an operational one. Bridging it requires deliberate integration into existing workflows rather than standing up a separate intelligence function that operates in isolation from the responders who need to benefit from it.

Practical steps that close the gap:

- Enrich alerts with intelligence context automatically at triage time, so analysts never have to leave the console for a reputation lookup.
- Route relevant finished reporting to responders working active and recent incidents, not only to a separate intelligence team.
- Feed artifacts and TTP mappings from every investigation back into your intelligence platform so the next incident starts with more context.
- Convert TTPs from incidents and reporting into detections, and track how many detections and containment decisions trace back to intelligence.
- Join at least one sharing community and contribute as well as consume.

Threat intelligence done well transforms incident response from a reactive discipline into a proactive one. The incidents you handle better because you had prior knowledge of the adversary's techniques, the campaigns you disrupt because a partner shared early-stage indicators, and the detections you build from your own investigations before the next wave of intrusions — these are the compounding returns on a mature intelligence practice.

The tools are available, the frameworks exist, and the sharing communities are established. The investment is in building the analytical habits and operational workflows that connect intelligence to the decisions that actually matter.

For coverage of the ATT&CK framework in greater depth, including how to use it for campaign analysis and detection engineering, see the MITRE ATT&CK for Incident Responders guide. For the detection-side perspective on operationalizing intelligence-derived behavioral rules, the Detection Engineering Program article covers that workflow end to end.

