Why Training Matters

The cybersecurity skills gap is one of the most widely cited challenges in the industry. Global workforce studies consistently estimate a shortfall of millions of security professionals, and the problem is particularly acute in specialized disciplines like incident response, digital forensics, and malware analysis. But the training challenge extends beyond headcount — even well-staffed security teams can fail during real incidents if they haven't practiced the specific skills, communication patterns, and decision-making processes that crises demand.

The Human Factor

Analyses of major security breaches consistently identify human factors as primary contributors. This extends beyond the "employee clicked a phishing link" narrative. Incident response failures are often rooted in human factors: SOC analysts who miss critical alerts because they lack the skill to distinguish true threats from noise, incident commanders who fail to coordinate communication under pressure, IT operations teams who destroy forensic evidence while attempting to restore services, and executives who make poor containment decisions because they don't understand the trade-offs.

These are not failures of intelligence or effort — they are failures of preparation. The cognitive load during a real incident is enormous: time pressure, incomplete information, competing priorities, sleep deprivation, and the awareness that decisions carry lasting consequences. People who haven't trained under realistic conditions will consistently underperform, regardless of their theoretical knowledge.

Regulatory Requirements

The NIS2 Directive, which applies to essential and important entities across the EU, explicitly addresses training in Article 20. It requires management bodies to undergo training to gain sufficient knowledge and skills to identify risks and assess cybersecurity risk-management practices. The directive also requires that similar training be offered to employees on a regular basis. DORA (Digital Operational Resilience Act) imposes specific requirements for ICT-related incident management training and testing for financial entities. Organizations that fail to demonstrate adequate training programs face regulatory risk beyond the operational risk of poor incident handling.

Plans vs. Capabilities

Many organizations have incident response plans — documents that describe roles, responsibilities, escalation procedures, and communication protocols. Far fewer have tested those plans under realistic conditions. The difference between having a plan and having a capability is the difference between owning a fire extinguisher and knowing how to use one while the building is on fire and the hallway is filling with smoke. Plans are necessary but insufficient. Training transforms plans into capabilities.

Tabletop Exercises (TTX)

A tabletop exercise is a discussion-based session where team members walk through a simulated incident scenario. There are no live systems involved — participants sit around a table (or join a video call) and work through the scenario by describing the actions they would take, the decisions they would make, and the communications they would send at each stage.

Despite their simplicity, tabletop exercises are one of the most effective tools for improving incident response readiness. They are inexpensive relative to full-scale simulations, they involve the people who actually make decisions during incidents (including non-technical stakeholders like legal, communications, and executive leadership), and they surface gaps in plans, processes, and coordination that aren't visible until people try to apply them under scenario pressure.

What Makes a Good Tabletop Exercise

Realistic Scenarios

The scenario must be plausible for the organization's specific threat landscape. A ransomware scenario for a hospital should reflect how ransomware actually impacts healthcare operations — encrypted medical devices, diverted ambulances, compromised patient records — not a generic "your servers are encrypted, what do you do?" prompt. Scenario realism drives engagement and produces findings that are relevant to the organization's actual risk exposure.

Appropriate Participants

The best tabletop exercises include participants from across the organization: security operations, IT operations, legal, communications, human resources, and executive leadership. Incident response is not a purely technical function. Legal counsel must advise on notification obligations. Communications must manage external messaging. Executives must make containment decisions that balance security with business continuity. If these functions only interact for the first time during a real incident, coordination will be poor.

Progressive Complexity

Effective scenarios escalate over multiple phases, or "injects." The initial inject might present signs of a possible compromise. Subsequent injects increase complexity: additional affected systems, a media inquiry, the start of a regulatory notification clock, attacker communications, conflicting information, and resource constraints. This escalation tests the team's ability to adapt as the situation evolves — which is what real incidents demand.
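
To make the phased structure concrete, an inject timeline can be modeled as an ordered list of timed events. The scenario content and field names below are illustrative assumptions, not the IR TTX platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Inject:
    """One scenario event delivered to participants at a set time."""
    minute: int          # minutes from exercise start
    title: str
    detail: str
    decision_point: str  # what the team is expected to decide

# A hypothetical ransomware scenario escalating across phases.
timeline = [
    Inject(0,  "Initial alert",
           "EDR flags suspicious encryption activity on a file server",
           "Triage: incident or false positive?"),
    Inject(20, "Spread confirmed",
           "Three more servers affected; backups unreachable",
           "Containment: isolate segments or keep services running?"),
    Inject(45, "Media inquiry",
           "A journalist asks about a rumored outage",
           "Communications: holding statement or no comment?"),
    Inject(70, "Regulatory clock",
           "Personal data exposure suspected; notification window starts",
           "Legal: notify the regulator now or gather more facts?"),
]

# Injects are delivered in order; a facilitator may re-pace them live.
for inject in sorted(timeline, key=lambda i: i.minute):
    print(f"T+{inject.minute:>3} min  {inject.title}: {inject.decision_point}")
```

Keeping injects as data rather than free-form script pages makes it easy to reorder, drop, or add events while the exercise is running.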

Facilitation Quality

The facilitator's role is critical. A good facilitator keeps the discussion on track, ensures all participants engage (not just the loudest voices), introduces scenario injects at the right pace, probes for specific details when participants give vague answers ("we would escalate" — to whom, through what channel, with what information?), and creates an environment where people feel safe admitting gaps and uncertainties. This last point matters enormously: the exercise's value comes from surfacing weaknesses, which only happens if participants feel comfortable being honest about what they don't know.

Frequency

Tabletop exercises should be conducted at least twice per year for critical scenarios (ransomware, data breach, insider threat), with additional exercises when significant organizational changes occur: new leadership, major infrastructure changes, new regulatory obligations, or after a real incident. Organizations that only exercise once per year find that lessons learned have faded and turnover has introduced team members who have never participated.

How ForgeWork Runs Tabletop Exercises

ForgeWork designs and facilitates tabletop exercises that test the specific capabilities organizations need during real incidents. Our approach is built on direct incident response experience — every scenario we design reflects attack patterns and operational challenges we've encountered in the field.

Scenario Design Process

We begin by understanding the organization's threat landscape, regulatory environment, critical assets, and incident response maturity. We then design a scenario that targets the intersection of likely threat and identified gap. If the organization has never tested their ransomware response, we start there. If they've exercised ransomware but never tested a supply chain compromise or insider threat, we advance to those scenarios. Scenario design includes: attack narrative, technical details calibrated to participant expertise, inject timeline, expected decision points, and evaluation criteria.

Role-Based Sessions

Real incidents involve multiple teams making concurrent decisions. Our tabletop exercises reflect this by assigning participants to roles that mirror their actual incident responsibilities: incident commander, SOC analysts, IT operations, legal counsel, communications, and executive leadership.

Real-Time Facilitation

Our facilitators guide the exercise in real time, adjusting the pace and complexity of injects based on how the team is performing. If participants are handling the scenario comfortably, we escalate complexity — introducing conflicting information, time pressure from regulators, or secondary incidents that compete for resources. If participants are struggling, we slow the pace to allow deeper exploration of the challenges they're facing. The goal is to push the team to the edge of their capability, where the most valuable learning occurs.

5-Dimensional Scoring

ForgeWork evaluates exercise performance across five dimensions, providing a structured and repeatable assessment framework:

  1. Speed: How quickly participants recognize the situation, make decisions, and execute actions. Speed matters in incident response because attacker activity doesn't pause while the response team deliberates.
  2. Correctness: Whether the actions taken are technically and procedurally appropriate. Making a fast but wrong containment decision — such as wiping a compromised system before forensic collection — can be worse than making a slow but correct one.
  3. Coordination: How effectively participants communicate across roles and functions. Do teams share information proactively? Are decisions made with input from all relevant stakeholders? Does the incident commander maintain situational awareness?
  4. Adherence: Whether participants follow established plans, playbooks, and procedures. Deviation from tested procedures during high-stress situations is common and often harmful. This dimension measures whether the organization's documented processes are known, accessible, and actually used.
  5. Documentation: Whether actions, decisions, and findings are recorded in real time. Poor documentation during an incident creates problems downstream: incomplete forensic reports, difficulty meeting regulatory notification requirements, and loss of institutional knowledge about what happened and why.

Automated After-Action Reports

Using the IR TTX Training platform, we generate comprehensive after-action reports (AARs) that capture: exercise timeline, participant actions and decisions at each inject, scoring across all five dimensions, identified strengths, identified gaps, and prioritized recommendations for improvement. These reports become baseline measurements for tracking improvement across successive exercises. Automated AAR generation ensures consistent evaluation criteria and allows organizations to compare performance across exercises conducted months or years apart.

Malware Analysis Training

Malware analysis is a specialized skill that is essential for effective incident response and threat intelligence. Understanding what a piece of malware does — how it establishes persistence, communicates with command-and-control infrastructure, moves laterally, and achieves its objectives — directly informs containment strategy, indicator extraction, and defensive improvements.

The Malware Analysis Academy is ForgeWork's structured training platform for building malware analysis skills from fundamentals through advanced reverse engineering.

6 Learning Paths

The Academy is organized into progressive learning paths that build skills systematically:

  1. Foundations: Introduces core concepts — file formats, operating system internals, basic static and dynamic analysis techniques, and the safe handling of malicious samples. Designed for analysts with security experience but limited malware analysis background.
  2. Static Analysis: Deep dive into analyzing malware without executing it — PE file structure, string analysis, import/export analysis, code pattern recognition, and packer identification. Students learn to extract indicators and assess capability from binary analysis alone.
  3. Dynamic Analysis: Executing malware in controlled environments to observe behavior — sandbox configuration, network traffic analysis, file system and registry monitoring, process behavior analysis, and API call tracing. This path teaches students to safely detonate samples and systematically capture behavioral indicators.
  4. Reverse Engineering: Disassembly and decompilation of malware using tools like Ghidra and IDA Pro. Covers x86/x64 assembly fundamentals, control flow analysis, function identification, data structure recovery, and the analysis of obfuscated code. This is the most technically demanding path.
  5. Threat Intelligence Integration: Connecting malware analysis findings to the broader threat intelligence lifecycle — attribution indicators, YARA rule development, MITRE ATT&CK mapping, threat actor tracking, and intelligence report writing. This path bridges the gap between technical analysis and strategic intelligence.
  6. Advanced Topics: Specialized modules covering anti-analysis techniques (VM detection, debugger evasion, code obfuscation), firmware and embedded system malware, mobile malware (Android and iOS), and fileless/living-off-the-land techniques that challenge traditional analysis approaches.
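
As a small taste of the static-analysis path, printable-string extraction — the classic first step in triaging an unknown binary — fits in a few lines of standard-library Python. This is a simplified stand-in for the `strings` utility, not Academy course material:

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Return printable ASCII runs of at least min_len bytes."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Strings such as URLs, mutex names, and registry paths often surface
# capability hints before any disassembly is needed.
sample = b"\x00\x01http://c2.example.invalid/gate.php\x00\xffSOFTWARE\\Run\x90"
for s in extract_strings(sample):
    print(s)
```

Raising `min_len` trims noise from short coincidental byte runs; real-world triage also covers UTF-16 strings, which Windows malware uses heavily.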

Hands-On Exercises with Real-World Samples

Every module includes hands-on exercises using real-world malware samples in safety-gated environments. Students work with actual threat samples — not contrived examples — in isolated analysis environments that prevent accidental exposure. Exercises are designed to build specific skills progressively: early exercises are structured with guided steps, while advanced exercises present samples with minimal guidance, simulating the conditions of real-world incident analysis.

25+ Cheatsheets and Reference Materials

The Academy includes over 25 cheatsheets covering: common file format structures, assembly instruction references, analysis tool quick-start guides, indicator extraction workflows, YARA rule syntax, and reporting templates. These materials serve as both learning aids during training and quick references during real-world analysis work.

18+ Modules and Progression Tracking

With more than 18 structured modules across the six learning paths, the Academy provides enough depth for months of continuous learning. Progression tracking lets analysts and their managers see exactly where each team member stands: which skills have been demonstrated, where gaps remain, and what the recommended next steps are for continued development.

Custom Training Programs

While the Malware Analysis Academy and IR TTX platform provide structured self-service training, many organizations need customized programs designed for their specific team composition, skill levels, and operational context.

Tailored Content

Custom training programs are built around the organization's actual environment, tooling, and threat landscape. Rather than teaching generic incident response procedures, we design exercises using the client's SIEM, EDR platform, and network monitoring tools. Scenarios reflect the specific threat actors and techniques most relevant to the client's sector and geography. Training materials reference the client's actual playbooks and procedures, so participants practice the exact workflows they'll use during real incidents.

Delivery Options

Training is delivered onsite or remotely, depending on the organization's preference and the training type. Hands-on technical training typically benefits from in-person delivery where the instructor can observe participant progress and provide real-time guidance. Leadership and coordination exercises work well in remote or hybrid formats. Multi-day programs can combine both: remote pre-work and knowledge sessions, followed by intensive onsite practical exercises.

Role-Specific Curricula

Different roles need different training. A SOC analyst needs detection tuning and triage skills. An incident responder needs forensic analysis and containment decision-making. An IT administrator needs evidence preservation awareness and secure system recovery procedures. An executive needs decision-making frameworks for crisis management. We design role-specific curricula that build the skills each group actually needs, rather than delivering one-size-fits-all security awareness training.

Assessment-Based Placement

For technical training programs, we begin with skills assessments that place participants at the appropriate starting level. This prevents advanced analysts from sitting through material they've already mastered, and prevents junior analysts from being dropped into content that's beyond their current capability. Placement assessments also provide baseline measurements that can be compared against post-training assessments to measure actual skill development.

Measuring Training Effectiveness

Training that can't demonstrate results is training that will lose its budget. ForgeWork builds measurement into every training program, providing concrete evidence of capability improvement.

Before and After Metrics

We establish baseline measurements before training begins — through skills assessments, exercise scoring, or operational metrics (mean time to detect, mean time to triage, false positive rates). After training, we re-measure against the same criteria. This before/after comparison provides clear evidence of improvement and identifies areas where additional training is needed.
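
A before/after comparison reduces to computing deltas against the same baseline criteria. The metric names and values below are illustrative assumptions, not real engagement data:

```python
# Hypothetical operational baselines measured before and after training.
baseline = {"mttd_minutes": 95, "mtt_triage_minutes": 40, "false_positive_rate": 0.32}
post     = {"mttd_minutes": 55, "mtt_triage_minutes": 22, "false_positive_rate": 0.21}

def improvement(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Percent reduction per metric; lower is better for all three here."""
    return {k: round(100 * (before[k] - after[k]) / before[k], 1) for k in before}

for metric, pct in improvement(baseline, post).items():
    print(f"{metric}: {pct}% improvement")
```

Measuring against identical criteria before and after is what makes the delta attributable to training rather than to a change in how the metric was collected.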

Exercise Scoring Trends

For organizations that conduct regular tabletop exercises, our 5D scoring framework produces trend data across successive exercises. Improvements in speed, correctness, coordination, adherence, and documentation are tracked over time, showing whether the organization's incident response capability is improving, stagnating, or degrading. These trends are valuable for justifying continued investment in training and for identifying specific dimensions that need targeted improvement.

Skill Gap Identification

Assessment results and exercise performance reveal specific skill gaps at both individual and organizational levels. An organization might discover that their analysts are strong at detection but weak at forensic evidence collection, or that their incident commander role consistently struggles with cross-team coordination. These findings drive targeted training investments rather than generic refresher courses that address issues the team doesn't actually have.

Continuous Improvement Framework

Training is not a one-time event — it's a continuous cycle. We help organizations establish training cadences, rotation schedules, and progression milestones that maintain and advance capabilities over time. This framework accounts for staff turnover (new team members need onboarding), evolving threats (new attack techniques require new skills), and organizational changes (new tools, new processes, new regulatory requirements).

Integration with ForgeWork Tools

ForgeWork's training services are complemented by three purpose-built platforms that extend training beyond consulting engagements into continuous capability development.

Analysis Platform

DFIR Assist

Our digital forensics and incident response platform accelerates evidence processing and analysis. During training engagements, DFIR Assist provides realistic tooling that mirrors the experience of working actual cases. Analysts trained on DFIR Assist build skills that transfer directly to incident work.

Explore DFIR Assist →

Learning Platform

Malware Analysis Academy

Six learning paths, 18+ modules, 25+ cheatsheets, and hands-on exercises with real-world samples. The Academy provides structured progression from fundamentals to advanced reverse engineering, with safety-gated access and progression tracking for both individual analysts and organizational teams.

Start learning →

Exercise Platform

IR TTX Training

Role-based tabletop exercise simulations with 5-dimensional scoring and automated after-action reports. The platform enables both facilitator-led and self-guided exercises, making it possible to maintain exercise frequency between consultant-led engagements.

Run an exercise →

These platforms grew directly from our consulting work. The patterns we saw repeatedly in incident response engagements — analysts who lacked malware analysis fundamentals, teams that hadn't exercised their IR plan, forensic workflows that were manual and error-prone — drove us to build tooling that addresses these gaps at scale. The platforms complement consulting-led training by providing continuous practice opportunities between engagements, extending the shelf life of skills developed during intensive training sessions.

Build a team that's ready

From tabletop exercises that test your incident response coordination to hands-on malware analysis training that builds deep technical skills — ForgeWork's training programs prepare teams for the incidents that matter. Let's design a program that fits your organization.