The Risk Manager’s Memo: DORA and the Supply Chain

Executive Summary
Three hard truths about AI and operational resilience:
- The cloud dependency is a single point of failure. When OpenAI goes down, your compliance workflow stops—and DORA considers that a critical operational risk.
- Third-party AI providers are now ICT third-party service providers. Under DORA, every API call to a US hyperscaler must be mapped, monitored, and stress-tested for continuity.
- Concentration risk isn't theoretical anymore. If 60% of Luxembourg's fund industry uses the same LLM provider, a geopolitical disruption (sanctions, export controls, API deprecation) becomes a systemic event.
The uncomfortable conclusion: Your AI vendor might be your most material outsourcing relationship—and you haven't done the DORA assessment yet.
The Regulatory Context: DORA's Supply Chain Mandate
The Digital Operational Resilience Act (Regulation (EU) 2022/2554), which applies from 17 January 2025, fundamentally redefines how financial entities manage ICT (Information and Communication Technology) risk. Where CSSF Circular 22/806 focused on outsourcing governance, DORA targets operational resilience across the entire digital supply chain.
Key obligations under DORA Articles 28–30:
- Financial entities must maintain a register of all ICT third-party service providers, categorized by criticality.
- For critical providers (those supporting essential business functions), firms must maintain exit strategies, conduct substitutability assessments, and perform annual performance reviews.
- Concentration risk is explicitly called out: If multiple firms rely on the same provider, regulators can designate that provider as systemically important and subject them to direct oversight (Article 31).
What this means for AI tools: If you're using GPT-4 to draft compliance reports, screen transactions, or generate client communications, that API contract isn't just a "software license"—it's a material ICT dependency that must be risk-assessed under DORA's framework.
The Luxembourg angle: The CSSF has already signaled that AI-as-a-Service falls under the ICT outsourcing perimeter. In its 2024 guidance notes, the regulator explicitly mentioned "cloud-based machine learning platforms" as requiring materiality assessments. DORA amplifies this by demanding supply chain mapping—you need to know where OpenAI's data centers are, who their subprocessors are, and what happens if Microsoft (their infrastructure provider) experiences a multi-day outage.
The Hidden Risk: Concentration Without Visibility
The uncomfortable reality: the financial sector has outsourced its intelligence layer to a handful of US companies, and most firms don't realize they've created a concentration risk.
The problem with proprietary AI-as-a-Service:
1. No exit strategy. DORA Article 28(8) requires exit strategies demonstrating that you can transition to alternative providers without disrupting critical operations. If your compliance team has spent 18 months fine-tuning prompts for GPT-4, and OpenAI announces it is retiring that model version (as it has done with earlier GPT-3.5-turbo versions), you have no migration path. Your provider substitutability is zero.
2. Opaque subprocessor chains. OpenAI runs on Microsoft Azure. Azure uses third-party data centers. Those data centers have hardware from Nvidia, networking from Arista, power from regional utilities. Under DORA, you're supposed to map this chain. But OpenAI's terms of service don't disclose it. You're flying blind.
3. Geopolitical fragility. The US CLOUD Act allows American law enforcement to compel data production from US companies, even when the data is stored in Europe. The US has also repeatedly tightened export controls on advanced AI chips bound for China. What happens if similar restrictions are placed on AI inference services to certain jurisdictions? Your Luxembourg fund servicing platform suddenly loses access to its core AI tool.
4. Correlated failures. If 60% of Luxembourg PSFs use the same LLM provider for AML screening, and that provider experiences a 12-hour outage, the entire industry's compliance function halts simultaneously. This is the textbook definition of systemic concentration risk that DORA was designed to prevent.
The shadow IT dimension: DORA applies to all ICT dependencies, not just those formally contracted by IT departments. If your middle office is using Claude for data analysis via personal accounts, or your risk team is running ChatGPT queries on client data, those are unregistered ICT dependencies—a direct DORA violation. The decentralized nature of AI adoption makes this incredibly dangerous.
The Sovereign Alternative: Why Local Models Pass DORA Scrutiny
The solution isn't to abandon AI—it's to eliminate the third-party dependency by bringing the model in-house.
Why open-weights models solve the DORA problem:
1. Exit strategy = instant. If you're running Llama 3.1 (70B) on your own infrastructure, there's no "provider" to exit from. You control the model weights, the inference pipeline, and the deployment environment. If Meta stops releasing updates, your existing deployment continues functioning indefinitely. DORA's substitutability requirement is automatically satisfied.
2. No subprocessor mapping required. When the AI runs in your Tier IV Luxembourg data center (or even on-premise servers), the ICT supply chain collapses to your hardware vendor, your power supplier, and your network provider: relationships you already have DORA-compliant contracts for. You're not adding a new third party; you're relying on relationships you've already assessed.
3. Geopolitical insulation. Open-weights models like Mistral (French), Llama (US but open), or Qwen (Chinese but downloadable) can be frozen at a specific version and run entirely offline. US export controls can't revoke a model you've already deployed to your private network. You've eliminated the "dependency on cross-border services" risk that DORA flags as critical.
4. Auditability for stress testing. DORA's testing provisions (Articles 24–25) require scenario-based testing of your ICT systems, including switching to backup systems. If your AI is a black-box API, you can't simulate a provider failure without actually experiencing one. If your AI is a local Docker container, you can spin up a secondary instance, intentionally fail the primary, and measure recovery time: exactly what DORA demands.
The compliance narrative shift: Instead of telling the CSSF, "We mitigate OpenAI risk by having a data processing agreement," you can say: "We eliminated third-party AI dependency by self-hosting a transparent, version-pinned model with reproducible outputs. Our backup strategy is redeploying the same container to a secondary data center, tested quarterly."
The Luxembourg Implementation: Making Local AI DORA-Compliant
For a Luxembourg PSF to deploy local AI and satisfy DORA, here's the tactical playbook:
Step 1: Infrastructure Selection
Host the model in a Luxembourg-based Tier IV data center (e.g., LuxConnect's Tier IV facilities in Bettembourg). This keeps you within the CSSF's preferred jurisdictions and simplifies cross-border data transfer assessments. Alternatively, deploy on-premise if you have ISO 27001-certified server rooms; DORA doesn't mandate cloud.
Step 2: The ICT Register Entry
List the AI system as an internal ICT asset, not an external service provider. In your DORA Article 28 register, it appears as:
- Asset Name: Internal NLP Engine (Llama 3.1)
- Criticality: High (supports AML screening workflow)
- Provider: None (self-hosted)
- Subprocessors: Hardware vendor (Dell/HP), data center operator (LuxConnect)
- Exit Strategy: Redeploy to secondary data center; no vendor dependencies
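A minimal sketch of how such a register entry could be captured as structured data in an internal asset inventory. The field names are illustrative, not prescribed by DORA:

```python
from dataclasses import dataclass, field

@dataclass
class ICTAssetEntry:
    """Illustrative shape for one row of an internal ICT asset register."""
    asset_name: str
    criticality: str
    provider: str
    subprocessors: list[str] = field(default_factory=list)
    exit_strategy: str = ""

llm_entry = ICTAssetEntry(
    asset_name="Internal NLP Engine (Llama 3.1)",
    criticality="High (supports AML screening workflow)",
    provider="None (self-hosted)",
    subprocessors=["Hardware vendor (Dell/HP)", "Data center operator (LuxConnect)"],
    exit_strategy="Redeploy to secondary data center; no vendor dependencies",
)
```

Keeping the register as code or structured data (rather than a spreadsheet) makes it trivially exportable for regulator requests and diffable in version control.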
Step 3: Testing Requirements
DORA Articles 24–26 mandate regular resilience testing: annual testing of critical ICT systems, plus advanced threat-led penetration testing for entities designated as significant. For your local AI:
- Threat-led penetration testing: Can an attacker poison the model or extract training data?
- Scenario-based resilience testing: Simulate data center power loss; measure time to failover to backup inference server.
- Recovery time objective (RTO): If the primary GPU server fails, how long to restore AI-powered compliance checks? (Target: <4 hours for critical functions.)
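The RTO measurement itself can be scripted. Below is a simplified drill harness where probe() stands in for a real HTTP health check against your inference servers; the endpoint name and the instant-recovery simulation are assumptions for illustration:

```python
import time

RTO_TARGET_SECONDS = 4 * 3600  # <4h target for critical functions

def probe(endpoint: str, healthy: set[str]) -> bool:
    """Stand-in for an HTTP health check against an inference server."""
    return endpoint in healthy

def measure_failover(secondary: str, healthy: set[str]) -> float:
    """Seconds from detecting primary failure until the backup answers probes."""
    start = time.monotonic()
    while not probe(secondary, healthy):
        time.sleep(0.01)         # poll interval
        healthy.add(secondary)   # simulated: backup comes up on first retry
    return time.monotonic() - start

# Drill: primary has already failed; time how long the backup takes to respond.
elapsed = measure_failover("gpu-backup:8000", healthy=set())
```

In a real drill you would stop the primary container, let your orchestrator promote the backup, and record the elapsed time in your DORA testing evidence file.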
Step 4: Documentation
Prepare the DORA continuity plan for your AI system:
- Model versioning: Which Llama 3.1 checkpoint is in production? Where is the .safetensors file backed up?
- Inference environment: Docker image registry location, Kubernetes YAML manifests, GPU driver versions.
- Rollback procedure: How to revert to the previous model version if new fine-tuning introduces errors.
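One way to make the model-versioning and rollback items concrete: pin the production checkpoint by cryptographic hash, so a rollback provably restores the exact audited weights. A sketch, with file names and manifest handling left as assumptions:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a (potentially multi-GB) checkpoint file through SHA-256."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_checkpoint(checkpoint: Path, pinned_digest: str) -> bool:
    """True only if the file on disk matches the digest in the continuity plan."""
    return sha256_of(checkpoint) == pinned_digest
```

Recording the pinned digest in the continuity plan turns "which checkpoint is in production?" into a verifiable fact rather than a tribal-knowledge answer.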
The Luxembourg advantage: By hosting locally, you avoid the complexity of cross-border transfer impact assessments (GDPR Article 46) and shrink the DORA third-party register to vendors you already manage. Your AI is just another IT asset, no different from your accounting software server.
Final Recommendation
Stop treating AI providers like software vendors—start treating them like systemic dependencies.
If your firm is still routing sensitive data through OpenAI, Anthropic, or Google Vertex AI, ask yourself:
- Can you operate for 72 hours if that API goes offline?
- Do you know where their subprocessors are located?
- Can you transition to a competitor without retraining your entire team?
If the answer to any of these is "no," you have a DORA compliance gap.
The path forward:
- Audit today. Inventory every AI tool used by any department—not just IT-approved ones.
- Test local models. Spin up Llama 3.1 or Mistral-7B on a Luxembourg cloud instance. Run your compliance queries. Measure performance.
- Build the business case. Frame local AI not as "cost savings," but as DORA risk mitigation—a regulatory requirement, not a nice-to-have.
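A sketch of what "run your compliance queries" might look like against a self-hosted model. It assumes a local inference server exposing an OpenAI-compatible chat endpoint (vLLM and Ollama both offer one); the URL, port, and model name are assumptions to adapt to your own deployment:

```python
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server

def build_payload(prompt: str, model: str = "meta-llama/Llama-3.1-70B-Instruct") -> dict:
    """Assemble an OpenAI-compatible chat request for the local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # minimise run-to-run variation for compliance checks
    }

def query_local_model(prompt: str) -> str:
    """POST the prompt to the local endpoint; nothing leaves your network."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint speaks the same API shape as the hosted providers, existing prompt workflows can be pointed at it with a one-line URL change, which is itself a useful substitutability test.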
The firms that treat AI resilience as a strategic priority will pass DORA audits. The ones that treat it as an IT footnote will discover—too late—that their most critical workflow depends on a third party they never properly assessed.
Your AI provider is now in your supply chain. Make sure it's not your single point of failure.