Most third-party risk programs are built to pass a review, not to work in the field.
That's not a criticism of the people who built them. It's a structural problem — one that appears in organisations of every size, across every industry I've worked in.
The program exists. The policy is documented. Vendors are assessed. And yet, when something actually goes wrong, the gaps are always the same.
Here's what I've seen break first, and what it looks like from the inside.
Before getting into the gaps, it's worth understanding why this has become a top-tier problem in 2025.
29% of all data breaches now originate from third-party vendors (SecurityScorecard). That number has been climbing for years, and the reason is straightforward: attackers pursue the path of least resistance.
If your perimeter is well-defended, the target moves. It moves to your payroll provider. Your outsourced IT helpdesk. Your claims management platform. The vendor with privileged access that nobody is monitoring week to week.
According to the World Economic Forum, 54% of large organisations identify supply chain challenges as the biggest barrier to achieving cyber resilience (ProcessUnity).
This isn't an emerging risk anymore. It's a live operational problem. And for most security teams, the current approach is not keeping up.
The vendor questionnaire is the backbone of most TPRM programs.
Send it out. Score the responses. Tier the vendor. File the result. Review annually.
The problem is that questionnaires are self-reported, point-in-time snapshots. Many organisations treat these self-assessments and compliance certifications as if they were ongoing assurance, and that static view creates a false sense of security (SecurityScorecard).
A vendor can answer every question correctly in January and experience a significant control failure by March. Nothing in the typical TPRM cycle catches that.
What works better is treating questionnaires as a starting point for a conversation, not a final verdict. The score matters less than what the vendor says when you ask a follow-up.
In practice, ask:
- "Walk me through what happened the last time a control failed."
- "Who in your organisation owns the response to a notification from us?"
- "What did your last audit find, and what's still open?"
The answers will tell you more than any scored spreadsheet.
Most programs assign vendor tiers at onboarding based on data access, criticality, and system integration.
That's the right framework. The problem is that the tiering almost never gets updated.
A vendor that started as a low-risk marketing analytics tool can, over time, gain access to customer records, integrate directly with CRM systems, and grow into a critical operational dependency. The tier doesn't follow the relationship.
Covered entities should classify third-party service providers based on system access, data sensitivity, location, and how critical the service is to operations (Department of Financial Services). But that classification needs to be a living process, not a one-time intake exercise.
Build a simple trigger review into your vendor management workflow. Any time a vendor relationship expands — new data types, new integrations, new access levels — a reassessment should happen automatically.
Most organisations don't have that trigger. They find out during an incident that a vendor they considered low-risk had been accessing sensitive systems for eighteen months.
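The trigger review described above can be sketched as a simple scope diff: record what the vendor could touch at the last assessment, compare it to what they can touch now, and force a reassessment on any expansion. The `VendorScope` fields and trigger categories here are illustrative assumptions, not a standard model.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class VendorScope:
    """Snapshot of a vendor relationship at a point in time (fields are illustrative)."""
    data_types: set[str]      # e.g. "marketing-metrics", "customer-records"
    integrations: set[str]    # e.g. "crm", "payroll-api"
    access_levels: set[str]   # e.g. "read-only", "privileged"
    assessed_on: date


def reassessment_triggers(old: VendorScope, new: VendorScope) -> list[str]:
    """Return every scope expansion that should automatically force a tier review.

    Only expansions trigger; a vendor losing access doesn't need an urgent review.
    """
    triggers = []
    if new.data_types - old.data_types:
        triggers.append(f"new data types: {sorted(new.data_types - old.data_types)}")
    if new.integrations - old.integrations:
        triggers.append(f"new integrations: {sorted(new.integrations - old.integrations)}")
    if new.access_levels - old.access_levels:
        triggers.append(f"elevated access: {sorted(new.access_levels - old.access_levels)}")
    return triggers
```

Run on every contract change or access request, this is exactly the "marketing analytics tool that quietly gained customer records" case: the diff surfaces the expansion the moment it happens, not eighteen months later.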
Security clauses in vendor contracts have gotten better. Right-to-audit provisions, breach-notification timelines, minimum control requirements — these are increasingly standard.
But contracts set expectations. They don't change day-to-day behaviour inside a vendor organisation.
Global regulations like GDPR, HIPAA, and the Digital Operational Resilience Act (DORA) are enforcing stricter controls around third-party security, holding enterprises accountable not only for their own practices but also for the security posture of their vendors (Cyble).
That's regulatory pressure pushing in the right direction. But between the contract clause and actual compliance, there's a lot of room for drift.
The organisations I've seen handle this well do three things differently:
- They hold vendor kick-off calls specifically on security expectations. Not the procurement call. A separate session with the vendor's technical and security contacts.
- They build minimum viable monitoring into the contract itself — not just the right to audit, but a defined cadence of evidence submissions.
- They treat the breach notification clause as a test. Run a tabletop. See how long it takes the vendor to actually notify. You may be surprised.
Here is one of the most consistent failures I've observed across industries.
When a third-party incident occurs, no one is sure who owns the response internally.
Is it Procurement, because they manage the vendor relationship? Security Operations, because it's a cyber event? Legal, because there may be notification obligations? The business unit that contracted the service, because they understand the operational impact?
Managing suppliers needs a well-orchestrated program that includes interactions with procurement, legal, IT, and the information security team (Optiv).
That's true for the program overall. It's even more true in the first 90 minutes of an active vendor incident.
Without a pre-defined RACI for third-party incidents, teams default to whoever escalates loudest. That's usually not the right person.
What good looks like:
- A named "vendor incident lead" role in your IR playbooks
- Pre-agreed escalation paths by vendor tier
- Decision rights documented before anything happens
This is less about process and more about making sure the right people have been briefed before the incident, not during.
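Those pre-agreed escalation paths can live in something as plain as a lookup table keyed by vendor tier. The tiers, role names, and notification windows below are placeholder assumptions for illustration; the point is that the map exists, and fails loudly, before the incident starts.

```python
# Illustrative escalation map; adapt tiers, roles, and timings to your own IR playbook.
ESCALATION = {
    "critical": {
        "incident_lead": "Security Operations",
        "notify_within_minutes": 30,
        "also_brief": ["Legal", "Business Owner", "Procurement"],
    },
    "high": {
        "incident_lead": "Security Operations",
        "notify_within_minutes": 60,
        "also_brief": ["Business Owner"],
    },
    "standard": {
        "incident_lead": "Vendor Manager",
        "notify_within_minutes": 240,
        "also_brief": [],
    },
}


def escalation_for(tier: str) -> dict:
    """Look up the pre-agreed path. An untiered vendor is itself a finding:
    refuse to guess, and surface the tiering gap instead."""
    try:
        return ESCALATION[tier]
    except KeyError:
        raise ValueError(f"vendor has no recognised tier ({tier!r}); fix tiering first")
```

Encoding the decision rights this way means the first 90 minutes of an incident start from a lookup, not a debate about who escalates loudest.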
The industry has shifted heavily toward continuous monitoring as the answer to point-in-time assessments. Automated security ratings. Real-time posture signals. Ongoing vulnerability scanning of vendor-facing infrastructure.
73% of organisations have implemented continuous monitoring solutions to track the security performance of vendors throughout the contract lifecycle (Optiv).
That sounds like progress. But in practice, continuous monitoring generates more alerts than most security teams have capacity to action.
The result is a dashboard with amber and red indicators that nobody is investigating, because the triage queue is already full.
Continuous monitoring only works if there's a clear process behind it: what signals trigger a formal review, who reviews them, what the response options are, and how fast you expect to move.
Without that operational layer, the monitoring becomes background noise. It looks like capability. It doesn't function like one.
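That operational layer can be as small as an explicit signal-to-deadline table: every combination of vendor tier and signal severity maps either to a review with a deadline, or to a deliberate decision to log and move on. The tiers, severity labels, and SLA hours here are placeholder assumptions.

```python
from typing import Optional

# Sketch of the process behind a monitoring dashboard: which signals trigger
# a formal review, and how fast. Values are illustrative, not a benchmark.
REVIEW_SLA_HOURS = {
    ("critical", "red"): 4,
    ("critical", "amber"): 24,
    ("high", "red"): 24,
    ("high", "amber"): 72,
}


def triage(vendor_tier: str, signal: str) -> Optional[int]:
    """Return the review deadline in hours, or None if the signal is log-only.

    The point is that there is no implicit 'sits on the dashboard' state:
    every signal is either reviewed by a deadline or explicitly not reviewed.
    """
    return REVIEW_SLA_HOURS.get((vendor_tier, signal))
```

Anything the table doesn't name is a conscious log-only decision, which is the difference between monitoring that functions as a capability and monitoring that accumulates as background noise.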
The through-line across all five of these failures is the same.
Third-party risk management gets designed to satisfy a compliance requirement or an audit cycle. It gets built around documentation and scoring. And then it gets handed off to teams who didn't build it, don't fully understand it, and are already stretched.
What survives that handoff isn't the program with the most coverage. It's the program with the clearest decisions built into it.
Who owns this. What triggers a review. What the vendor is expected to produce and by when. What happens if they don't.
That's not a framework conversation. It's an operational one.
And that's exactly where most third-party risk programs stop short of becoming real.