What Should a Pentest Report Include? Anatomy of a Professional Deliverable
Learn what a quality penetration testing report looks like — executive summary, CVSS-scored findings, proof-of-concept evidence, and how to use it internally.
The Report Is the Product
A penetration test produces exactly one tangible deliverable: the report. Every hour of reconnaissance, every exploit attempt, every post-exploitation pivot -- it all exists to inform the document that lands on your desk at the end of the engagement. If the report is poor, the entire investment is diminished regardless of how skilled the tester was.
This creates a practical problem for buyers. You cannot read the final report before paying for the engagement that produces it. You can, however, evaluate a provider's reporting quality before signing -- any reputable provider will share a redacted sample report during the sales process. If they will not, consider that a disqualifying red flag, as we discuss in our guide to choosing a pentest company.
This guide breaks down what a professional pentest report should contain, how to distinguish a quality report from a repackaged scan output, and how to use the report effectively within your organization.
The Structure of a Professional Report
A well-structured pentest report serves multiple audiences -- executive leadership, security teams, developers, and compliance auditors -- within a single document. The structure should guide each reader to the sections most relevant to them.
Cover Page and Document Control
Basic but important: the report should include the client name, engagement dates, report version, classification level (typically confidential), the provider's name and contact information, and a document revision history. If the report goes through multiple versions (initial findings, updated after retesting), the revision history provides an audit trail.
Executive Summary
The executive summary exists for one audience: people who will read two pages and nothing else. This is your CISO presenting to the board, your CTO briefing the CEO, or your compliance officer summarizing for an auditor.
A quality executive summary includes:
- Engagement overview -- What was tested, when, and what type of assessment was performed. One paragraph.
- Overall risk assessment -- A clear statement of the organization's security posture based on the findings. Not a list of vulnerabilities, but a risk-level judgment: "The application's security posture is insufficient for processing payment card data" or "The external network presents low risk with no critical or high findings."
- Key findings summary -- The three to five most significant findings described in business terms. Not "reflected XSS in the search parameter" but "an attacker can steal user session credentials by sending a crafted link, enabling account takeover without knowing the user's password."
- Statistical overview -- A breakdown of findings by severity (critical, high, medium, low, informational) presented as a table or chart.
- Strategic recommendations -- Two to four high-level recommendations that address the root causes behind multiple findings. "Implement a centralized input validation library" addresses twelve individual injection findings more effectively than listing twelve separate fixes.
Scope and Methodology
This section documents what was tested and how. It serves both as context for the findings and as compliance evidence.
Scope definition should include the specific targets (URLs, IP ranges, application names), the test type (black-box, gray-box, white-box), any out-of-scope systems or techniques, and the testing window (dates and hours).
Methodology reference should identify the framework(s) followed (OWASP Testing Guide, PTES, NIST SP 800-115) and describe how the methodology was applied to this specific engagement. A provider referencing established penetration testing methodologies in their reporting demonstrates rigor beyond ad-hoc testing.
Tools used should list the primary tools employed during the engagement. This is not exhaustive (testers use dozens of small utilities), but should cover the major tools: scanners, proxies, exploitation frameworks, and custom scripts. This transparency helps your technical team understand how findings were identified and verified.
Findings
The findings section is the core of the report. Each finding should be a self-contained document that a developer can pick up and act on without reading anything else.
Finding Structure
A professional finding includes these elements:
Title. Clear and specific. "SQL Injection in User Search Endpoint" not "Injection Vulnerability."
Severity rating. CVSS v3.1 (or v4.0) base score with the full vector string. The vector string is critical because it shows the reasoning behind the score:
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H — Score: 9.8 (Critical)
This tells the reader: network-accessible, low complexity, no privileges required, no user interaction needed, high impact on confidentiality, integrity, and availability. A numeric score without the vector is insufficient because the reader cannot evaluate whether the scoring is appropriate.
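The relationship between vector and score is mechanical: the vector's metric values feed directly into the CVSS base score equations. As a minimal illustration, the Python sketch below implements the v3.1 formula for the Scope:Unchanged case only, with just the metric weights needed for the example vector above (the full specification defines many more values and a changed-scope branch):

```python
import math

# CVSS v3.1 metric weights -- a subset covering only the values in the
# example vector; the full spec defines additional values per metric.
WEIGHTS = {
    "AV": {"N": 0.85}, "AC": {"L": 0.77},
    "PR": {"N": 0.85}, "UI": {"N": 0.85},
    "C": {"H": 0.56}, "I": {"H": 0.56}, "A": {"H": 0.56},
}

def base_score(vector: str) -> float:
    """Score a CVSS:3.1 vector; handles Scope:Unchanged (S:U) only."""
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    w = {m: WEIGHTS[m][v] for m, v in metrics.items() if m in WEIGHTS}

    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss                                  # Scope:Unchanged branch
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]

    if impact <= 0:
        return 0.0
    # "Round up" per the spec: smallest one-decimal value >= the result.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Change any single metric and the score moves accordingly, which is exactly why a report that omits the vector leaves the reader unable to check the scoring.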
Description. A clear explanation of what the vulnerability is, written for a developer who may not be a security specialist. Avoid jargon where possible, and define it where necessary.
Evidence. This is where the report proves the finding is real. Evidence should include:
- HTTP request/response pairs showing the vulnerability
- Screenshots of the exploitation result
- Command output demonstrating access or data retrieval
- Step-by-step reproduction instructions
The evidence should be specific enough that a developer can reproduce the finding independently. Vague evidence like "SQL injection was detected" without showing the actual injection point, payload, and response is not professional-grade.
For context on how professional evidence is structured, see our discussion of proof-of-concept methodology.
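To make "reproducible" concrete, here is a hypothetical, self-contained sketch of what a finding's reproduction steps boil down to. The vulnerable search endpoint is simulated locally (it is not a real application); the structure -- send the crafted request, capture the response, confirm the payload is reflected unescaped -- is the shape professional evidence should take:

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately vulnerable stand-in for the application under test:
# it reflects the ?q= parameter into the HTML response with no encoding.
class VulnerableSearch(BaseHTTPRequestHandler):
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        term = urllib.parse.parse_qs(query).get("q", [""])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(f"<html>Results for: {term}</html>".encode())

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), VulnerableSearch)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Reproduction steps, as a finding's evidence section would record them:
# 1. URL-encode the XSS payload into the vulnerable parameter.
payload = "<script>alert(1)</script>"
url = (f"http://127.0.0.1:{server.server_port}"
       f"/search?q={urllib.parse.quote(payload)}")

# 2. Send the request and capture the response body.
response = urllib.request.urlopen(url).read().decode()
server.shutdown()

# 3. Confirm the payload is reflected unescaped: the finding is verified.
print(payload in response)  # True
```

A developer handed this level of detail can verify the issue in minutes; a developer handed "XSS was detected" cannot.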
Impact analysis. What can an attacker actually do with this vulnerability in the context of your specific environment? A SQL injection in a read-only public API is different from a SQL injection in the admin panel of a payment processing system. The impact analysis should reflect your business context, not a generic description from a vulnerability database.
Remediation recommendation. Specific, actionable guidance. "Implement parameterized queries" is a start, but "replace the string concatenation in UserController.search() with parameterized queries using your ORM's query builder, and add input validation to reject search terms containing SQL metacharacters" is actionable.
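The difference between the vulnerable pattern and the remediated one fits in a few lines. This is a generic sketch using Python's built-in sqlite3 module, not the report's hypothetical UserController -- in a real recommendation the snippet would use your actual stack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def search_vulnerable(term):
    # DO NOT DO THIS: attacker-controlled input concatenated into SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + term + "'"
    ).fetchall()

def search_safe(term):
    # Parameterized query: the driver binds the value, not the SQL text.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (term,)
    ).fetchall()

payload = "' OR '1'='1"
print(search_vulnerable(payload))  # every row comes back: injection succeeded
print(search_safe(payload))        # [] -- the payload is treated as a literal
```

The same classic payload dumps the whole table through the concatenated query and matches nothing through the parameterized one, because the bound value is never interpreted as SQL.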
References. Links to relevant CWE entries, OWASP descriptions, or vendor advisories that provide additional context.
Finding Severity Distribution
The findings section should include a summary table or chart showing the distribution across severity levels. This serves as a quick reference for remediation planning:
| Severity | Count | Examples |
|---|---|---|
| Critical (9.0-10.0) | 2 | SQL injection, authentication bypass |
| High (7.0-8.9) | 5 | Stored XSS, privilege escalation, SSRF |
| Medium (4.0-6.9) | 8 | CSRF, missing security headers, verbose errors |
| Low (0.1-3.9) | 4 | Cookie attributes, minor info disclosure |
| Informational | 6 | Best practice recommendations |
Positive Findings
A mature report includes what was tested and found to be secure, not just what was broken. This serves two purposes: it demonstrates testing coverage (the tester actually checked these things), and it gives your team credit for controls that are working.
Examples of positive findings:
- "Multi-factor authentication is implemented correctly and resisted bypass attempts including session fixation, token replay, and time-based attacks."
- "Role-based access controls were tested across all three user roles. No horizontal or vertical privilege escalation was achieved."
- "API rate limiting is effective, preventing brute force attacks against authentication endpoints."
Remediation Priority Matrix
Beyond individual finding severity, the report should provide a prioritized remediation roadmap. This accounts for factors that CVSS alone does not capture: which findings share a root cause, which fixes have dependencies on other fixes, and which findings pose the most immediate risk given your specific threat model.
A practical priority matrix might look like:
Immediate (within one week):
- Patch the SQL injection in the authentication endpoint (Critical)
- Disable the exposed debug endpoint (High)
Short-term (within 30 days):
- Implement CSRF protection across all state-changing forms (Medium, but 6 separate findings)
- Deploy Content Security Policy headers (Medium)
Medium-term (within 90 days):
- Refactor session management to use framework-provided session handling (addresses 3 findings)
- Implement centralized input validation (addresses 4 findings)
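The first pass of a matrix like this can be produced mechanically before the human judgment pass. As an illustrative sketch (the findings and the ranking heuristic here are invented, not a standard algorithm), grouping findings by shared root cause and ranking groups by worst score and number of findings a single fix closes:

```python
from collections import defaultdict

# Hypothetical findings: (title, CVSS base score, shared root cause).
findings = [
    ("SQLi in auth endpoint", 9.8, "input-validation"),
    ("Exposed debug endpoint", 7.5, "hardening"),
    ("CSRF on profile form", 5.4, "csrf"),
    ("CSRF on billing form", 5.4, "csrf"),
    ("Reflected XSS in search", 6.1, "input-validation"),
]

# Group by root cause, then rank each group by its worst individual
# score and by how many findings one fix would close.
groups = defaultdict(list)
for title, score, cause in findings:
    groups[cause].append((title, score))

ranked = sorted(
    groups.items(),
    key=lambda kv: (max(s for _, s in kv[1]), len(kv[1])),
    reverse=True,
)
for cause, items in ranked:
    worst = max(s for _, s in items)
    print(f"{cause}: {len(items)} finding(s), max CVSS {worst}")
```

The heuristic surfaces the same insight the matrix above encodes by hand: a single input-validation fix outranks everything else, and the two CSRF findings collapse into one work item.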
Appendices
Technical appendices may include raw scan output, detailed tool configurations, network diagrams annotated during testing, and full request/response logs for complex attack chains. These are reference material for your security and development teams, not primary reading.
What Makes a Good Report vs. a Bad One
Having reviewed hundreds of pentest reports over the years, we have found that certain patterns reliably distinguish quality work from checkbox exercises.
Signs of a Quality Report
- Narrative attack chains. The report describes how the tester combined findings: "Using the information disclosure from Finding 3 to identify the internal API endpoint, the tester exploited the SSRF in Finding 7 to access the internal service, which returned credentials that enabled the privilege escalation in Finding 12."
- Business-context risk analysis. Findings are analyzed in terms of what they mean for your organization, not what they mean in abstract.
- Custom remediation guidance. Recommendations reference your specific technology stack, not generic advice that could apply to any application.
- Tested-and-secure documentation. The report shows what was tested and passed, not just what failed.
- Manual findings. The report includes findings that no automated scanner would produce: business logic flaws, authorization bypasses, race conditions, and creative attack chains.
Signs of a Poor Report
- Scanner output with a logo. If every finding includes a "Plugin ID" or looks like it was copy-pasted from a Nessus or Qualys report, you received a vulnerability scan, not a penetration test.
- Generic descriptions. If the description of an XSS finding reads identically to the OWASP definition of XSS without any reference to where in your application it was found, the report was templated rather than written.
- Missing evidence. Findings without screenshots, request/response pairs, or reproduction steps cannot be verified or acted upon.
- A missing or one-line executive summary. If the executive summary is "we found 23 vulnerabilities, 2 critical," it was not written for an executive audience.
- Flat severity with no CVSS justification. Findings marked "High" or "Medium" without a CVSS vector or explanation of why that rating was chosen suggest the severity was assigned arbitrarily.
- No methodology section. If the report does not describe how the testing was conducted, there is no way to evaluate coverage or completeness.
How to Use the Report Internally
A pentest report sitting in a shared drive is worthless. The value comes from how you use it across your organization.
Board and Executive Communication
Use the executive summary directly for board presentations. Supplement it with trend data if you have reports from previous engagements. Executives care about: Is our risk level acceptable? Is it improving or worsening? What investment is needed to address the gaps?
Do not present the full technical findings to a non-technical audience. Translate the findings into business risk: "An attacker could access our customer database" is meaningful. "Reflected XSS via the q parameter in /search with insufficient output encoding" is not.
Development Team Remediation
Distribute the findings section to the teams responsible for fixing them. Each finding should be actionable enough to create a JIRA ticket or GitHub issue directly from the report. If your development team reads a finding and says "I don't understand what to fix," the report has failed at its primary purpose.
Organize remediation as a sprint or dedicated effort. Treating pentest findings as a backlog that competes with feature work guarantees that low and medium findings never get fixed.
Compliance Evidence
The full report, including methodology, scope, and retesting results, serves as evidence for compliance audits. For compliance frameworks that require penetration testing, the report demonstrates that testing was performed, what was found, and how findings were addressed.
Keep the report and retesting addendum together. Auditors want to see not just the findings but evidence that they were remediated. The retesting report closes that loop.
Vendor Risk Management
If your clients require evidence of security testing as part of vendor risk assessments, you may need to share a version of the report. Most organizations share the executive summary and a findings summary (with sensitive details redacted) rather than the full technical report. Establish a process for this before the situation arises.
Security Program Improvement
Look at the findings in aggregate, not individually. If the report contains five different injection findings, the root cause is probably a missing input validation framework, not five separate bugs. If authorization findings appear across multiple endpoints, the issue is likely an architectural gap in access control, not individual endpoint oversights.
These patterns inform strategic security investments that prevent entire categories of findings on future engagements.
Red Flags in Reports You Have Already Received
If you have an existing pentest report and are unsure of its quality, check for these indicators:
- Count the manual findings. If every finding maps to a known CVE or scanner check, no manual testing was performed. A quality web application pentest should include at least several findings that required manual analysis -- business logic issues, authorization flaws, or attack chains.
- Check the evidence. Open three random findings and look for specific evidence: actual request/response data, screenshots of the exploit in action, or commands that reproduce the issue. If the evidence is generic or missing, the finding may not have been verified.
- Read the remediation guidance. If the guidance for an XSS finding says "implement input validation and output encoding" without referencing your specific framework, language, or architecture, it was templated.
- Look for positive findings. If the report only lists what was broken and says nothing about what was tested and found secure, the coverage is unclear. The tester may have focused on easy wins rather than thorough testing.
Next Steps
If you are evaluating a pentest report you have received, use the criteria above to assess its quality. If it falls short, consider whether the engagement delivered the value you paid for and whether a different provider would serve you better for your next assessment.
If you are preparing for your first penetration test and want to understand the full process, our guide on what to expect from a penetration test covers the complete engagement lifecycle. And when evaluating providers, our guide on how to choose a penetration testing company helps you identify the providers most likely to deliver the report quality described here.
Ready to scope your next engagement? Our PT Scoping Wizard helps you define the targets, requirements, and compliance context that drive a quality assessment and report.
Explore our security assessment services to see how CyberShield delivers on the standards described in this guide.