Know where you're exposed.
Penetration testing, vulnerability assessments, and attack surface mapping that show you what an adversary would actually find — and what they could do with it. Actionable findings, not checkbox compliance.
Understanding Security Assessments
A security assessment is a structured evaluation of an organization's defenses — its systems, networks, applications, and processes — to identify weaknesses that could be exploited by an attacker. The purpose is not to generate a list of CVEs. It is to provide an honest, evidence-based picture of what an adversary could achieve given your current security posture.
Assessments exist on a spectrum. At one end, automated vulnerability scanners run broad checks against known signatures and configuration issues. At the other, full red team engagements simulate a determined adversary over weeks or months, testing not just technical controls but also detection capabilities, human responses, and organizational decision-making under pressure.
Where your organization should be on that spectrum depends on your maturity, threat model, and regulatory obligations. An organization that has never conducted a vulnerability assessment should start there — not jump to a red team engagement. Conversely, a mature organization with a well-staffed SOC and regularly patched infrastructure gains little from another automated scan; it needs adversary simulation that tests its detection and response capabilities at the boundary of what it has prepared for.
The assessment landscape is also plagued by misused terminology. "Penetration test" is frequently applied to engagements that are really just automated scans with a branded PDF report. Understanding the differences matters, because each type of assessment answers different questions, costs different amounts, and is appropriate at different stages of security maturity.
Why Assessments Matter
Organizations exist in a state of continuous change — new systems deployed, configurations modified, employees onboarded, applications updated, cloud resources provisioned. Each change can introduce vulnerabilities. Assessments provide periodic reality checks: a snapshot of your actual security posture, measured against what an adversary would encounter.
Without regular assessments, organizations operate on assumptions. They assume their firewall rules are correct, their patches are current, their cloud configurations follow best practices, and their web applications handle input safely. Assessments replace assumptions with evidence. Sometimes the evidence confirms that things are working well. More often, it reveals gaps that the organization didn't know existed — gaps that an attacker would eventually find.
Types of Assessments
The following table breaks down the primary assessment types, their methodology, and what they're best suited for. Understanding these distinctions is essential for selecting the right engagement and setting appropriate expectations.
| Type | Approach | Depth | Duration | Best For |
|---|---|---|---|---|
| Vulnerability Scan | Automated tools scan systems against known vulnerability databases and configuration benchmarks | Low | Hours | Baseline hygiene, compliance requirements, continuous monitoring between deeper assessments |
| Vulnerability Assessment | Automated scanning plus manual validation, false positive elimination, and contextual risk analysis | Medium | Days | Organizations building initial security posture, periodic compliance validation |
| Penetration Test | Simulated attack with manual exploitation, chaining of vulnerabilities, and demonstration of real-world impact | High | 1–3 weeks | Validating defenses, demonstrating real risk to stakeholders, regulatory requirements (PCI DSS, SOC 2) |
| Red Team | Adversary simulation using stealth, social engineering, physical access, and custom tooling over an extended period | Very High | 4–8 weeks | Mature organizations testing detection and response capabilities, board-level risk demonstration |
| Purple Team | Collaborative exercise where offensive testers work alongside defensive teams to test and improve detection capabilities in real time | High | 1–2 weeks | Detection engineering validation, SOC skill development, closing specific detection gaps |
Vulnerability Scan vs. Penetration Test: The Critical Distinction
The most common misunderstanding in security assessments is conflating vulnerability scanning with penetration testing. A vulnerability scan identifies potential weaknesses by checking systems against databases of known vulnerabilities — essentially asking "does this system have software version X that is known to be vulnerable?" The output is a list of findings with severity ratings, many of which may be false positives or technically accurate but practically unexploitable in the given environment.
A penetration test goes further. The tester actively attempts to exploit identified vulnerabilities, chains multiple findings together to demonstrate real attack paths, and shows what an attacker could actually achieve: accessing sensitive data, escalating privileges to domain administrator, pivoting from an internet-facing application to internal financial systems. The output is not a list of vulnerabilities — it's a narrative of what happened when a skilled attacker spent focused time on your environment.
Red Team vs. Penetration Test
Penetration tests typically have a defined scope ("test this application" or "test this network segment") and the defenders know the test is happening. Red team engagements simulate a real adversary: the scope is broader (usually the entire organization), only a few senior leaders know the engagement is occurring, and the red team uses the same techniques real attackers use — including social engineering, physical access, and custom malware. The primary question a red team answers is not "what vulnerabilities exist?" but rather "can our defenses detect and respond to a realistic attack?"
Purple Team
Purple teaming flips the adversarial model into a collaborative one. The offensive tester executes techniques from a structured framework (typically MITRE ATT&CK) while the defensive team watches their monitoring tools and attempts to detect each technique. When detection fails, both teams work together in real time to understand why and build better detections. Purple teaming is highly efficient for improving detection coverage because it combines the offensive team's knowledge of how attacks work with the defensive team's knowledge of the environment and tooling.
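As an illustration, the outcome of a purple team exercise can be tracked as a simple per-technique coverage table. The ATT&CK technique IDs below are real identifiers, but the detection results themselves are invented for the example:

```python
# Sketch: tracking detection outcomes per ATT&CK technique during a
# purple team exercise. Technique IDs are real ATT&CK identifiers;
# the True/False results are hypothetical.
results = {
    "T1059.001": True,   # PowerShell execution — alerted
    "T1003.001": False,  # LSASS memory dump — missed
    "T1021.002": False,  # SMB lateral movement — missed
    "T1566.001": True,   # Spearphishing attachment — alerted
}

detected = [t for t, hit in results.items() if hit]
gaps = [t for t, hit in results.items() if not hit]
coverage = 100 * len(detected) / len(results)

print(f"Detection coverage: {coverage:.0f}%")  # → Detection coverage: 50%
print("Gaps to close:", ", ".join(gaps))
```

Each missed technique becomes a concrete detection-engineering work item, which is what makes the collaborative format so efficient.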
Our Methodology
ForgeWork's assessment methodology draws from established frameworks — the Penetration Testing Execution Standard (PTES), OWASP Testing Guide, and MITRE ATT&CK — adapted through our operational experience. Every engagement follows these phases, scaled appropriately for the assessment type.
1. Scoping and Rules of Engagement
Before any testing begins, we work with the client to define exactly what will be tested, what's off-limits, what techniques are authorized, and what communication protocols apply if a critical vulnerability is discovered during testing. The scoping document becomes the contract for the engagement — it protects both parties and ensures that expectations are aligned.
Scoping conversations cover: target systems and networks, testing window, authorized attack vectors, escalation procedures for critical findings, point-of-contact information, and any legal or compliance constraints that affect methodology.
2. Reconnaissance
We begin with the same step a real attacker would: gathering information. Passive reconnaissance uses open-source intelligence (OSINT) to map the client's external footprint: DNS records, certificate transparency logs, public code repositories, employee information on social media and professional networks, leaked credentials in breach databases, and exposed services on internet-facing infrastructure.
Active reconnaissance extends this with direct interaction: port scanning, service enumeration, technology fingerprinting, and web application crawling. The reconnaissance phase often reveals surprising exposure — forgotten development servers, unmonitored cloud assets, deprecated applications still accessible from the internet.
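A minimal sketch of the passive side: extracting candidate subdomains from certificate transparency entries. The JSON shape mirrors what a CT search service such as crt.sh returns, but the sample records here are hypothetical:

```python
# Sketch: collecting unique hostnames under a root domain from
# certificate-transparency entries (sample data is invented).
def extract_subdomains(ct_entries, root_domain):
    """Return sorted unique hostnames under root_domain."""
    found = set()
    for entry in ct_entries:
        # A single certificate entry may list several names,
        # separated by newlines; wildcards appear as "*.host".
        for name in entry.get("name_value", "").split("\n"):
            name = name.strip().lstrip("*.").lower()
            if name == root_domain or name.endswith("." + root_domain):
                found.add(name)
    return sorted(found)

sample = [
    {"name_value": "dev.example.com\nexample.com"},
    {"name_value": "*.staging.example.com"},
    {"name_value": "unrelated.org"},
]
print(extract_subdomains(sample, "example.com"))
# → ['dev.example.com', 'example.com', 'staging.example.com']
```

Every hostname surfaced this way is a lead for the active phase — without a single packet having touched the client's infrastructure.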
3. Vulnerability Identification
Combining automated scanning with manual analysis, we identify vulnerabilities across the target scope. Automated tools provide coverage breadth; manual testing provides depth and context. We focus on findings that an attacker would actually exploit, not theoretical vulnerabilities that require conditions unlikely to exist in the real environment.
4. Exploitation
For penetration tests and red team engagements, we attempt to exploit identified vulnerabilities to demonstrate real-world impact. This phase distinguishes a true assessment from a scan — it answers the question "so what?" by showing what access an attacker gains and what they can do with it. We chain findings where possible, demonstrating how a low-severity misconfiguration combined with a medium-severity application flaw can produce a critical-impact attack path.
5. Post-Exploitation
Once initial access is achieved, we assess the extent of compromise possible: lateral movement through the network, privilege escalation to administrative access, access to sensitive data repositories, and potential for persistent access. This phase reveals the actual blast radius of a breach — which is often far larger than the initial entry point would suggest.
6. Reporting
The deliverable that matters most. Our reports are designed to be actionable for both executive and technical audiences.
Assessment Areas
Security assessments can be focused on specific domains depending on the organization's priorities, threat model, and infrastructure.
Network Infrastructure
Internal and external network assessments evaluate the security of routers, switches, firewalls, VPNs, wireless networks, and network segmentation. We test whether an attacker who gains a foothold on one network segment can reach critical assets on another. Common findings include: overly permissive firewall rules, flat network architectures with no segmentation, unencrypted management protocols, default credentials on network devices, and VLAN hopping opportunities.
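A simplified illustration of one such check — flagging management ports reachable from very broad source ranges. The rule schema and the heuristic threshold are invented for the example, not any vendor's format:

```python
import ipaddress

# Sketch: flagging overly permissive inbound rules in a simplified,
# illustrative rule format.
MANAGEMENT_PORTS = {22, 3389, 5900}  # SSH, RDP, VNC

def risky_rules(rules):
    findings = []
    for rule in rules:
        net = ipaddress.ip_network(rule["source"])
        too_broad = net.num_addresses >= 2**24  # heuristic: /8 or wider
        if too_broad and rule["port"] in MANAGEMENT_PORTS:
            findings.append(rule)
    return findings

rules = [
    {"name": "ssh-any", "source": "0.0.0.0/0", "port": 22},
    {"name": "web", "source": "0.0.0.0/0", "port": 443},
    {"name": "rdp-internal", "source": "10.0.0.0/24", "port": 3389},
]
for r in risky_rules(rules):
    print(f"{r['name']}: port {r['port']} open to {r['source']}")
# → ssh-any: port 22 open to 0.0.0.0/0
```

Note that the web rule is not flagged: exposing 443 to the world is usually intentional, which is exactly the kind of context an automated scanner lacks.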
Web Applications
Web application assessments follow the OWASP Testing Guide methodology, covering the full spectrum of application-layer vulnerabilities: injection flaws (SQL, command, LDAP), authentication and session management weaknesses, access control bypass, cross-site scripting (XSS), insecure direct object references, security misconfiguration, and business logic flaws that automated scanners cannot detect. We test both the application itself and its supporting infrastructure — APIs, databases, authentication services, and third-party integrations.
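To make the injection class concrete, here is the textbook SQL injection flaw and its fix, demonstrated against an in-memory SQLite database with invented data:

```python
import sqlite3

# Sketch: string-built SQL vs. a parameterized query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query logic.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # → [('alice',)]  — injection succeeded
print(safe)        # → []            — input matched no real user
```

Automated scanners find many instances of this pattern; manual testing is what uncovers the business logic flaws mentioned above, where the query is syntactically safe but the authorization model around it is not.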
Cloud Environments
Cloud assessments evaluate the security posture of AWS, Azure, and GCP environments. The cloud introduces a distinct set of risks: misconfigured IAM policies that grant excessive permissions, publicly accessible storage buckets, unencrypted data at rest and in transit, overly permissive security groups, missing audit logging, and serverless functions with embedded credentials. We assess against CIS Cloud Benchmarks and cloud provider-specific security best practices.
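A toy example of one such check — scanning an IAM-style policy document for wildcard grants. The policy follows the AWS JSON shape, but the statements themselves are invented:

```python
# Sketch: flagging Allow statements that grant any action or any
# principal in an IAM-style policy document (statements are invented).
def wildcard_statements(policy):
    """Return Sids of Allow statements with wildcard action/principal."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "*" in actions or stmt.get("Principal") == "*":
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

policy = {
    "Statement": [
        {"Sid": "ReadOnly", "Effect": "Allow", "Action": ["s3:GetObject"]},
        {"Sid": "Admin", "Effect": "Allow", "Action": "*"},
        {"Sid": "PublicRead", "Effect": "Allow",
         "Action": ["s3:GetObject"], "Principal": "*"},
    ]
}
print(wildcard_statements(policy))  # → ['Admin', 'PublicRead']
```

Real assessments go further — evaluating resource constraints, condition keys, and the effective permissions produced by policy combinations — but wildcard grants like these are among the most common findings.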
Mobile Applications
Mobile assessments examine both the client application (iOS and Android) and its backend services. Testing covers: insecure data storage on the device, weak transport layer security, improper authentication and authorization, client-side injection, reverse engineering resistance, and runtime manipulation. We follow the OWASP Mobile Application Security Testing Guide (MASTG) methodology.
Social Engineering
Social engineering assessments test the human layer of defense through phishing campaigns, vishing (voice phishing), pretexting, and — where authorized — physical access attempts. These assessments measure how well security awareness training translates into actual behavior under realistic conditions. Results inform targeted training improvements and help organizations understand their real exposure to human-focused attacks.
Understanding Your Report
A security assessment is only as valuable as the report it produces. If findings sit unread because the report is impenetrable, or if remediation stalls because recommendations are vague, the assessment has failed regardless of how thorough the testing was.
ForgeWork assessment reports include the following components:
Executive Summary
A non-technical overview of the assessment's purpose, scope, key findings, and overall risk posture. Written for executives, board members, and stakeholders who need to understand the business implications without reading technical details. This section answers: "How secure are we, what are the most important risks, and what should we prioritize?"
Technical Findings
Each finding includes: a clear description of the vulnerability, its location within the environment, the steps taken to discover and exploit it, proof-of-concept evidence (screenshots, command output, captured data), CVSS v3.1 scoring with environmental adjustments, and an assessment of real-world exploitability in the client's specific context. Findings are categorized by severity and by affected system or application.
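For reference, a CVSS v3.1 base score can be reproduced from the vector string alone. This sketch implements the base-metric formula from the FIRST.org specification; temporal and environmental adjustments are omitted:

```python
# Sketch: CVSS v3.1 base score from a vector string, per the
# FIRST.org specification (base metrics only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR_U": {"N": 0.85, "L": 0.62, "H": 0.27},  # privileges, scope unchanged
    "PR_C": {"N": 0.85, "L": 0.68, "H": 0.50},  # privileges, scope changed
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x):
    """Spec-defined rounding: ceiling to one decimal, float-safe."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector):
    m = dict(part.split(":") for part in vector.split("/"))
    changed = m["S"] == "C"
    iss = (1 - (1 - WEIGHTS["CIA"][m["C"]])
             * (1 - WEIGHTS["CIA"][m["I"]])
             * (1 - WEIGHTS["CIA"][m["A"]]))
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if changed else 6.42 * iss)
    expl = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                 * WEIGHTS["PR_C" if changed else "PR_U"][m["PR"]]
                 * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    return roundup(min((impact + expl) * (1.08 if changed else 1.0), 10))

print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # → 9.8
print(base_score("AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N"))  # → 6.1
```

The environmental adjustments mentioned above modify these weights to reflect the client's actual deployment — which is why the same CVE can warrant very different priorities in different environments.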
Attack Narratives
For penetration tests and red team engagements, we include narrative descriptions of attack paths — how individual findings were chained together to achieve significant impact. These narratives are powerful tools for communicating risk to non-technical stakeholders because they tell a story: "We started with a phishing email, used the captured credentials to access the VPN, exploited a misconfiguration to reach the internal network, escalated privileges through a Kerberoasting attack, and accessed the financial database containing 200,000 customer records."
Remediation Guidance
Every finding includes specific, actionable remediation guidance — not just "apply the patch" but detailed steps the client's team can follow to fix the issue. Recommendations are prioritized by risk and effort: quick wins (high impact, low effort) are flagged for immediate action, while longer-term architectural changes are positioned within a strategic roadmap.
Risk Prioritization Matrix
A consolidated view that maps findings by severity and exploitability, helping the client's team focus remediation effort where it will have the greatest impact on reducing real-world risk. This matrix explicitly calls out which findings an attacker would exploit first and which represent the most damaging attack paths.
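One simple way to build such an ordering is a severity-times-exploitability product; the findings, scales, and weights below are illustrative only, not our production methodology:

```python
# Sketch: ranking findings by a severity × exploitability product
# (finding IDs, scales, and weights are invented for illustration).
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}
EXPLOIT = {"trivial": 3, "moderate": 2, "difficult": 1}

findings = [
    {"id": "F-03", "severity": "high", "exploitability": "trivial"},
    {"id": "F-01", "severity": "critical", "exploitability": "difficult"},
    {"id": "F-02", "severity": "medium", "exploitability": "trivial"},
]

ranked = sorted(
    findings,
    key=lambda f: SEVERITY[f["severity"]] * EXPLOIT[f["exploitability"]],
    reverse=True,
)
print([f["id"] for f in ranked])  # → ['F-03', 'F-02', 'F-01']
```

Note that the trivially exploitable medium finding outranks the hard-to-exploit critical one — exactly the reordering a severity-only list would miss, and the reason exploitability belongs in the matrix.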
How to Prepare for an Assessment
Preparation on the client side directly affects the quality and efficiency of the assessment. Here's a practical checklist.
Define Scope Clearly
Identify the specific systems, applications, networks, or environments to be tested. Include IP ranges, URLs, cloud account identifiers, and any systems that are explicitly out of scope. If certain systems are fragile (legacy production systems that might crash under testing), flag them so the testing team can adjust their approach.
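Scope definitions of this kind translate directly into machine-checkable rules. A minimal sketch using Python's standard ipaddress module, with the RFC 5737 documentation ranges standing in for real client networks:

```python
import ipaddress

# Sketch: checking a host against an engagement's scoping document
# (ranges are RFC 5737 documentation blocks, not real targets).
IN_SCOPE = [ipaddress.ip_network(n)
            for n in ("192.0.2.0/24", "198.51.100.0/25")]
EXCLUDED = [ipaddress.ip_address("192.0.2.10")]  # fragile legacy host

def in_scope(host):
    addr = ipaddress.ip_address(host)
    return addr not in EXCLUDED and any(addr in net for net in IN_SCOPE)

print(in_scope("192.0.2.50"))   # → True
print(in_scope("192.0.2.10"))   # → False  (explicitly excluded)
print(in_scope("203.0.113.5"))  # → False  (never in scope)
```

Encoding the scope this way lets the testing team gate their tooling on it, so an out-of-scope or fragile host is never touched by accident.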
Establish Rules of Engagement
Determine what testing techniques are authorized. Can the testers use social engineering? Are denial-of-service techniques permitted? What hours can testing occur? What's the escalation path if a critical vulnerability is found during testing? Document these in writing before the engagement begins.
Prepare Credentials
For authenticated assessments (which provide more comprehensive coverage), create test accounts at each privilege level being assessed. Provide these credentials securely and plan to disable them after the engagement. For web applications, prepare accounts for each user role.
Notify Relevant Teams
Depending on the engagement type, your SOC, IT operations, and cloud teams may need to be aware. For a penetration test with known scope, notifying the SOC prevents wasted effort investigating the tester's activity. For a red team engagement where detection is part of the test, only the designated point of contact should know.
Gather Documentation
Network diagrams, application architecture documents, and prior assessment reports help the testing team work more efficiently and focus on the areas most likely to yield significant findings.
Compliance Context
Many regulatory and compliance frameworks require or recommend security assessments. Understanding which frameworks require which types of assessments helps organizations plan their assessment programs efficiently.
PCI DSS
The Payment Card Industry Data Security Standard requires quarterly network vulnerability scans by an Approved Scanning Vendor (ASV) for external-facing systems, and annual penetration testing of both network and application layers. PCI DSS v4.0 introduced more rigorous requirements for authenticated scanning and broader scope for penetration testing.
SOC 2
While SOC 2 does not mandate specific assessment types, the Common Criteria require organizations to evaluate the design and operating effectiveness of their controls. Penetration testing provides strong evidence for the CC7.1 (detection of security events) and CC6.1 (logical and physical access) criteria. Most auditors expect to see regular vulnerability assessments and periodic penetration testing.
ISO 27001
ISO 27001 Annex A control A.8.8 (Management of technical vulnerabilities) requires organizations to identify, evaluate, and address technical vulnerabilities. Regular vulnerability assessments and penetration testing are the primary mechanisms for satisfying this control. ISO 27001 also requires organizations to evaluate the effectiveness of their information security management system, which assessment findings directly support.
NIS2
The NIS2 Directive requires essential and important entities to implement risk-based security measures, including regular security testing and auditing. Security assessments — particularly penetration testing — provide the evidence base for demonstrating compliance with Article 21's requirement for "policies and procedures to assess the effectiveness of cybersecurity risk-management measures."
Compliance vs. Security
Compliance requirements represent a minimum bar, not a ceiling. An organization can be fully compliant with PCI DSS, SOC 2, and ISO 27001 while still having significant security gaps that a motivated attacker would exploit. Use compliance requirements as a baseline for your assessment program, then go further based on your actual threat model and risk appetite.
Related Resources
- Security Assessment Guide — A comprehensive guide to choosing and preparing for security assessments.
- Security Engineering Services — Turn assessment findings into hardened defenses.
Understand your real security posture
Whether you need a focused penetration test, a broad vulnerability assessment, or an adversary simulation that tests your entire defensive capability — ForgeWork delivers findings you can act on, not reports that collect dust.