The 70/30 split
After delivering more than 200 engagements, the pattern is consistent: automated scans surface roughly 70% of findings, and the remaining 30% can only be caught by manual review. That 30% is also where the highest-severity issues live.
What scanners are good at
- CVE matching against known versions
- TLS / cipher / header posture
- Reflected XSS in obvious sinks
- SQL injection where the payload is straightforward
- Public S3 buckets and exposed admin panels
This is genuinely valuable. We always start with Nessus + ZAP + Nikto in parallel and let them chew through the surface.
What scanners systematically miss
- Business-logic flaws — cart tampering, discount-code abuse, authorisation bypass via tenant-id swapping
- Chained exploits — a low-severity self-XSS combined with a CSRF flaw becomes account takeover
- BOLA / IDOR — scanners can't tell that `/api/orders/42` should not be viewable by user 17
- Auth flow logic — password resets that leak the new password to the old email address, MFA bypass via SMS interception, session fixation through subdomain takeover
- Race conditions — withdrawal endpoints that double-spend, voucher endpoints that mint extra credit
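The BOLA case is worth spelling out, because from a scanner's point of view nothing is broken: the endpoint returns a well-formed 200 for every id. A minimal in-memory sketch (hypothetical data and function names, standing in for a real API handler) of what the missing ownership check looks like:

```python
# Hypothetical order store: order 42 belongs to user 8, order 43 to user 17.
ORDERS = {42: {"owner": 8, "total": 99.0}, 43: {"owner": 17, "total": 12.5}}

def get_order_vulnerable(order_id, requesting_user):
    # Looks up by id only -- no ownership check. Any authenticated user
    # gets a valid response, so automated tooling sees nothing anomalous.
    return ORDERS.get(order_id)

def get_order_fixed(order_id, requesting_user):
    # Enforces ownership: only the owning user can read the order.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != requesting_user:
        return None  # would be a 403/404 in a real API
    return order

# User 17 requesting order 42 (owned by user 8):
print(get_order_vulnerable(42, 17))  # leaks user 8's order
print(get_order_fixed(42, 17))       # None: access denied
```

Spotting this requires knowing who *should* own order 42 — exactly the context a human tester has and a scanner doesn't.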
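The double-spend pattern comes down to a check-then-act gap with no locking. A toy sketch (class and timing are illustrative, not taken from any real engagement) where the gap is widened with a sleep so the race fires reliably:

```python
import threading
import time

class Account:
    """Toy account with a deliberate check-then-act gap and no locking."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if self.balance >= amount:   # check
            time.sleep(0.05)         # the window a real app has implicitly
            self.balance -= amount   # act
            return True
        return False

acct = Account(100)
results = []
threads = [
    threading.Thread(target=lambda: results.append(acct.withdraw(100)))
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads pass the balance check before either debits:
print(results, acct.balance)  # [True, True] -100
```

Two concurrent requests each withdraw the full balance and the account goes negative. Scanners send one request at a time, so this class of bug is invisible to them by construction.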
Why this matters
We routinely walk into engagements where the customer has paid five figures for a third-party scan that came back clean. Within a day of manual review we find a critical issue. The customer is grateful; the scanner-only vendor is embarrassed.
What we do differently
Every engagement at VulnerabilityScanPro reserves the back half of the timeline for human review. Senior analysts pair the scan output with their own enumeration, walk authenticated flows manually, and chain findings together. That's where the real report lives.