When Andrew Morton walked into the office, third-party risk management (TPRM) was scattered: spreadsheets, generic questionnaires, and vendors assessed identically regardless of whether they handled customer credit cards or office supplies. As an ISO 27001 Lead Auditor who reads the fine print on SOC 2 reports, Morton saw an opportunity to rebuild from the ground up.
In this wide-ranging conversation, he reveals the three design choices that matter most, explains why executives glaze over at “questionnaires completed” metrics, and shares his biggest red flag when vetting new vendors. From fourth-party visibility to the most misunderstood clause in modern data processing agreements, Morton offers a masterclass in making TPRM both scalable and defensible.
Edited excerpts of Andrew Morton’s interview below:
From Spreadsheets to Scale
“Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value.”
What was the inflection point that forced you to re-architect TPRM at Chemist Warehouse, and what did your “target operating model” look like on day 1 vs. today?
AM: Honestly, the inflection point was when I joined the company. It was clear from day one that our third-party risk management wasn’t fit for purpose – it was inconsistent, reactive, and lacked a defensible framework. Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value. I saw an opportunity to shift the program into something risk-based, scalable, and aligned with industry standards so that leadership could have real confidence in our vendor ecosystem.
Design Choices that Mattered Most
“Vendor tiering comes first because it’s the foundation – without knowing which vendors are critical, you can’t allocate resources intelligently.”
If you could only keep three design decisions in your TPRM stack—continuous external scanning, adaptive questionnaires, or vendor tiering—what stays and why?
AM: Vendor tiering comes first because it’s the foundation – without knowing which vendors are critical, you can’t allocate resources intelligently. It’s what ensures high-risk providers get deep scrutiny while low-risk vendors don’t bog down the team. Adaptive questionnaires come next. They let us dig deeper only when the risk indicators justify it, which makes the process scalable and keeps the business engaged instead of frustrated by generic questionnaires. Independent assurance reports (SOC 2, ISO 27001, PCI, etc.) are my third choice because they let us leverage established, externally validated audits. They give us confidence in a vendor’s baseline controls without reinventing the wheel, and they free up capacity to focus on real risk areas.
I’d actually put continuous external scanning just behind those three. It’s valuable, but without tiering, adaptive assessments, and assurance reports, scanning can generate noise without context. The three I chose give me a defensible, risk-based foundation – everything else builds on top of that.
Fourth-Party Visibility that Actually Works
“When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors.”
How deep do you go on your vendors’ vendors? What’s your minimum viable view (e.g., critical sub-processors list, region & data-type mapping, alerting on material changes), and how do you enforce it contractually?
AM: When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors. My minimum viable view includes knowing who those sub-processors are, what regions they operate in, the types of data they handle, and being alerted to any material changes. Just as importantly, I look at whether the vendor has a mature third-party risk assessment process of their own, because I want assurance they’re applying the same standards downstream that we expect from them.
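Morton's minimum viable view of a critical sub-processor can be captured as a simple record plus a triage check. This is an illustrative sketch; the field names and the `review_needed` rule are assumptions, not the interview's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SubProcessor:
    """Minimum viable view of a vendor's critical sub-processor."""
    name: str
    regions: list[str]            # where they operate / host data
    data_types: list[str]         # e.g. "PI", "payment", "operational"
    material_change_alerts: bool  # contractual duty to notify on material changes

def review_needed(sp: SubProcessor) -> bool:
    """Flag sub-processors that warrant deeper review: sensitive data,
    or no alerting commitment for material changes (illustrative rule)."""
    return "PI" in sp.data_types or not sp.material_change_alerts

sp = SubProcessor("ExampleCloud", ["EU", "US"], ["PI"], material_change_alerts=True)
print(review_needed(sp))  # True: handles PI
```

Keeping the view this small is the point: one layer deep, a handful of attributes, and a clear trigger for when to look closer.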
Pre-Production Gates
“Sometimes scanning surfaces outdated domains or low-value assets.”
You’ve talked about passive scanning in your earlier conversations. What’s your “go/no-go” policy for a new SaaS vendor if external posture looks weak but the business is pushing?
AM: Passive scanning is a useful early signal, but it’s not an automatic no-go. If a vendor’s external posture looks weak, my first step is to validate with them – sometimes scanning surfaces outdated domains or low-value assets. If it’s confirmed, we take a risk-based approach: for critical vendors, weak posture is a red flag that may pause or even stop onboarding until compensating controls or remediation commitments are in place. For lower-tier vendors, we may accept the risk with conditions – for example, requiring stronger internal controls on our side or limiting the data shared.
The no-go line is when the vendor is both critical to operations and unwilling to address or evidence improvements. At that point, I’d escalate to leadership with a clear risk statement: ‘Here’s what the business wants, here’s the security posture, here are the potential consequences.’ That way, the decision is transparent and defensible, even if it means saying no.
Beyond Time-to-Assess
“When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure.”
You have spoken about cutting assessment time dramatically—great. Which risk metrics resonated most with execs (e.g., % critical vendors with open highs >30 days, time-to-remediate by tier, control coverage drift), and which fell flat?
AM: When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure. Things like the percentage of critical vendors with open high-severity findings older than 30 days, or the risk level by tier, gave them a clear view of where risk was lingering and whether vendors were responsive.
What fell flat were the more operational or technical metrics – things like the number of questionnaires sent. That's important for us internally when running the program, but executives tune out because it doesn't translate to risk or business impact. The key is to frame metrics around exposure and risk.
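The exposure metric Morton highlights – the percentage of critical vendors carrying high-severity findings open longer than 30 days – is straightforward to compute. A minimal sketch, with illustrative record shapes (not Chemist Warehouse's actual data model):

```python
from datetime import date

# Illustrative finding records: (vendor, tier, severity, opened_on, closed)
findings = [
    ("PayCo",  "critical", "high", date(2024, 1, 5),  False),
    ("ShipCo", "critical", "high", date(2024, 3, 1),  True),
    ("StatCo", "low",      "high", date(2024, 1, 10), False),
]

def pct_critical_with_stale_highs(findings, today, days=30):
    """% of critical vendors with an open high-severity finding older
    than `days` – the exposure view executives respond to."""
    critical = {v for v, t, *_ in findings if t == "critical"}
    stale = {
        v for v, t, sev, opened, closed in findings
        if t == "critical" and sev == "high" and not closed
        and (today - opened).days > days
    }
    return 100 * len(stale) / len(critical) if critical else 0.0

print(pct_critical_with_stale_highs(findings, date(2024, 4, 1)))  # 50.0
```

Note the framing: the denominator is critical vendors, not findings, so the number answers "how much of our critical supply base is carrying lingering risk?" rather than "how many tickets exist?".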
Assurance You Actually Trust
“When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I treat assurance reports as one input, not a guarantee.”
You are an ISO 27001 Lead Auditor/Implementer, so when a vendor presents an ISO cert or SOC 2, what do you verify beyond the badge—scope boundaries, carve-outs, sampling, last major NCs?
AM: When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I go deeper into the scope boundaries – does the certification actually cover the systems and services we’re relying on, or just a data center or narrow business unit? I also look closely at carve-outs and exclusions – for example, if key cloud services or sub-processors aren’t covered, that’s a material gap. With SOC 2, I review the sampling approach and the audit period to make sure the testing was meaningful, not just point-in-time or limited in coverage.
Finally, I always check whether there were any major non-conformities or exceptions noted, and how they were closed out. In short, I treat assurance reports as one input, not a guarantee – the detail behind the badge tells me whether I can rely on it or whether I need to dig deeper.
Shifting Culture, Not Just Tools
“I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal.”
What did you learn about stakeholder change—procurement, legal, store ops—when you rolled out the new TPRM model? If you had to repeat it post-merger, what would you do differently?
AM: Rolling out the new TPRM model reinforced that every stakeholder has different priorities and perspectives. But the underlying purpose is the same: to protect the business from risk while enabling it to operate effectively. If I had to do it again, I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal. That alignment makes adoption smoother and ensures that, despite different lenses, everyone’s working toward the same outcome.
Vendor Onboarding Efficiency
“We shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny.”
What are the biggest challenges you see when onboarding new third parties at scale, and how have you streamlined that process without slowing down the business?
AM: The biggest challenges in onboarding third parties at scale are consistency, visibility, and speed. Every business unit wants to go live with their vendor yesterday, so security can sometimes be seen as slowing things down. You don’t want to treat all vendors the same, because that overwhelms the process and creates bottlenecks.
To streamline, we shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny. We also built in early checkpoints with procurement and legal, so security isn’t a last-minute hurdle. That’s allowed us to reduce onboarding friction, keep the business moving, and still be confident we’re focusing our effort where it matters most.
Building Risk Tiers that Make Sense
“A vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower.”
How do you classify vendors into critical, high, medium, and low-risk tiers in practice, and what criteria have proven most reliable in your experience?
AM: We classify vendors into risk tiers using a structured model – for us it’s tiers 1 through 5. The criteria that have proven the most reliable are:
- Data classification – what types of data the vendor stores or accesses, especially sensitive or regulated data like PI/SI.
- System and infrastructure access – whether they interface with or have privileged access to our core/critical applications or infrastructure.
- Regulatory and contractual obligations – if the vendor falls under specific regimes like PCI, GDPR, or local privacy laws, they’re automatically in a higher tier; and
- Business criticality – whether their failure could materially disrupt operations or customer experience.
These inputs together determine the tier. So, a vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower. This approach means we can defend our decisions, scale assessments, and ensure critical vendors get proportionate scrutiny without overwhelming the business.
Balancing Questionnaires with Evidence
“Self-attestation questionnaires are useful for coverage and efficiency – they give us a first view across the vendor landscape.”
How do you strike the balance between using self-attestation questionnaires versus validating controls with independent evidence when assessing third parties?
AM: For me it’s about balance and proportionality. Self-attestation questionnaires are useful for coverage and efficiency – they give us a first view across the vendor landscape. But on their own they’re not reliable, especially for higher-tier vendors. That’s where independent evidence comes in – things like SOC 2 reports or ISO 27001 certificates. Lower-tier vendors may only need to self-attest, mid-tier vendors provide self-attestation plus some supporting documentation, and higher-tier vendors must back it up with independent evidence. That way we scale the program, but still get defensible assurance where it matters most.
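This proportional model reduces to a tier lookup, assuming the tiers 1 through 5 scale described earlier. A minimal sketch with illustrative labels:

```python
# Assurance required per tier (1 = most critical). Illustrative mapping.
ASSURANCE_BY_TIER = {
    1: {"self_attestation", "supporting_docs", "independent_evidence"},
    2: {"self_attestation", "supporting_docs", "independent_evidence"},
    3: {"self_attestation", "supporting_docs"},
    4: {"self_attestation"},
    5: {"self_attestation"},
}

def required_assurance(tier: int) -> set[str]:
    """Higher-tier (more critical) vendors must back self-attestation with
    independent evidence such as SOC 2 reports or ISO 27001 certificates."""
    return ASSURANCE_BY_TIER[tier]

print(sorted(required_assurance(1)))
# ['independent_evidence', 'self_attestation', 'supporting_docs']
```

Encoding the policy as data rather than branching logic keeps it easy to audit and to adjust as the program matures.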
Collaboration with Procurement and Legal
“Procurement is on the front line. Legal ensures the right protections are baked into contracts.”
What role do procurement and legal teams play in strengthening third-party risk management, and how do you foster alignment across these functions?
AM: Procurement and legal are key to making TPRM effective. Procurement is on the front line – they’re the ones who see new vendors first, so they help us embed risk assessments early instead of security being a last-minute hurdle. Legal ensures the right protections are baked into contracts – breach notification, sub-processor transparency, audit rights, data handling requirements.
To foster alignment, we’ve created a simple flow chart that maps who does what, and when. By framing it as a shared purpose rather than separate processes, we’ve been able to work as one team.
Communicating Risk to the Board
“My focus is always on clarity and consequence, so risks map directly to business impact.”
When reporting to senior leadership or the board, how do you frame third-party and supply-chain risks in terms they find most actionable?
AM: I try to frame third-party risk for leadership in terms of business outcomes – like regulatory exposure, business disruption, or reputational harm – rather than technical details. My focus is always on clarity and consequence, so risks map directly to business impact – that’s what tends to land, and it’s where the conversation naturally wants to go.
Lessons Learned from Scaling
“You can’t assess everyone the same way – tiering and a risk-based approach are critical to avoid bottlenecks.”
What were the biggest lessons you learned while scaling third-party risk management across hundreds of vendors, and what advice would you give to organizations just starting that journey?
AM: The biggest lesson I learned scaling TPRM across hundreds of vendors is that you can’t assess everyone the same way – tiering and a risk-based approach are critical to avoid bottlenecks. Another was that stakeholder alignment matters as much as tools or processes. Procurement, legal, and the business all need to see TPRM as an enabler, not a blocker.
Finally, I learned that while automation and adaptive questionnaires save time, you still need independent assurance like SOC 2 reports or ISO 27001 certifications to validate. My advice to those starting out is to begin with a clear tiering model, early stakeholder buy-in, and simple, scalable processes – you can add sophistication later, but without those foundations, you’ll struggle at scale.
Looking Ahead in GRC
“Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation.”
How do you see the discipline of GRC itself evolving over the next three to five years, especially with increasing automation and AI support?
AM: I see GRC evolving into a more automated, insight-driven discipline over the next three to five years. Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation, freeing teams to focus on strategic risk decisions and exception management. I also expect GRC to become more integrated across the enterprise, connecting IT, compliance, privacy, and third-party risk so decisions are informed by real-time data.
Ultimately, the value will shift from just checking boxes to providing actionable insights that help the business make informed, risk-aware decisions faster.
Rapid Fire
One vendor control you’d mandate tomorrow if you could.
AM: If I could mandate one vendor control tomorrow, it would be multi-factor authentication, especially for all administrative and privileged access. It’s a simple but highly effective control that dramatically reduces the likelihood of account compromise, applies across all vendor types, and immediately strengthens our security posture without adding unnecessary complexity.
One metric you’d delete from TPRM dashboards.
AM: If I could remove one metric from TPRM dashboards, it would be the number of questionnaires sent or completed. It’s useful internally to show the volume of work and the team’s effort, but it doesn’t actually reflect risk or control effectiveness. Executives respond better to metrics tied to business impact – like open high-severity findings – because that’s what drives informed decisions.
Most misunderstood clause in modern DPAs.
AM: In my opinion, the most misunderstood clause in modern DPAs is the sub-processor notification and approval section. Misalignment here can introduce downstream risks, especially for critical data or cross-border processing, so it’s important to clarify expectations up front and ensure the clause is actionable, not just boilerplate.
Your “Red Flag” in a vendor’s first 5 minutes.
AM: Beyond a lack of transparency, the key red flag I watch for is reluctance to commit contractually to basic security obligations – like notifying us of sub-processor changes or breaches. If a vendor hesitates on these points, it can signal deeper gaps in controls or governance, and it prompts a much closer review before proceeding.