Navigating the AWS European Sovereign Cloud: Practical Strategies for a Multi-Partition Future
Abstract
The AWS European Sovereign Cloud (ESC) represents a fundamental shift in cloud computing for EU public sector and regulated industries. ESC is an independent AWS partition, physically and logically separate from the commercial partition, operated entirely within the EU to address digital sovereignty requirements. This whitepaper provides a practical framework for organizations adopting ESC through a strategic dual‑partition approach that combines ESC sovereignty with commercial AWS capabilities.
We examine the architectural implications of partition boundaries, control plane separation, and the operational complexities of managing infrastructure across multiple AWS partitions. Through detailed analysis of connectivity patterns, identity federation, and compliance mapping, this paper presents actionable strategies for navigating ESC adoption whilst maintaining operational excellence.
Key takeaways: ESC solves sovereignty challenges but introduces operational complexity; dual‑partition strategies offer targeted sovereignty with retained innovation velocity; success requires deliberate operating models and partition‑aware tooling from day one.
1. Introduction
The European regulatory landscape increasingly demands data sovereignty, operational autonomy, and governance structures that align with EU values and oversight. Digital sovereignty has evolved from a policy aspiration to a technical requirement, particularly for public sector entities and organizations operating in highly regulated industries such as financial services, healthcare, and critical infrastructure.
The AWS European Sovereign Cloud (ESC) addresses these requirements through a new AWS partition that is operated, governed, and controlled entirely within the EU. Scheduled to launch by the end of 2025, ESC provides complete independence of control planes, metadata handling, and operational oversight that goes beyond regional data residency features within the commercial partition. This creates both opportunities and challenges for organizations seeking to leverage cloud computing whilst meeting stringent sovereignty requirements.
This paper provides practical engineering patterns and operating models for ESC adoption and multi‑partition environments. We examine the technical implications of partition boundaries, present integration patterns, and offer guidance for organizations navigating the complexity of operating across multiple AWS partitions whilst maintaining operational excellence and innovation velocity.
This document is informational and not legal advice. Engage legal and compliance experts for interpretations of regulatory obligations.
Summary:
- ESC addresses EU sovereignty requirements through independent partition operation
- Multi‑partition strategies introduce complexity but enable targeted sovereignty
- Practical patterns exist for managing identity, connectivity, and operations across partitions
- Success requires deliberate operating models and partition‑aware architectural decisions
2. AWS Partitions
Figure 1: Partition model and control‑plane boundaries
AWS partitions represent fundamentally separate cloud environments with independent control planes, billing systems, and operational boundaries. The global commercial partition serves most customers worldwide, whilst specialised partitions like AWS GovCloud (US) and the upcoming ESC address specific regulatory and sovereignty requirements.
ESC operates with complete separation from other partitions, including its own identity and access management system, billing infrastructure, and service control planes. This separation ensures that customer content, customer‑created metadata (including IAM roles, resource tags, and configuration data), administrative actions, and operational telemetry remain entirely within ESC's EU governance domain, with no shared components or cross‑partition dependencies.
The implications extend beyond data residency. Organizations must establish new AWS accounts within each partition, as accounts cannot span partition boundaries. Service availability, feature roadmaps, and ecosystem maturity may differ between partitions. Support boundaries align with partition operations, meaning ESC support is provided by EU‑resident staff operating under EU governance structures.
Tooling and automation must be partition‑aware. APIs, CLI configurations, and Infrastructure as Code templates require specific endpoints and credentials for each partition. Cross‑partition resource references are impossible; integration occurs through application‑level connectivity patterns or shared external systems.
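One consequence of partition awareness is that even endpoint resolution must branch on the partition. The sketch below illustrates this with a minimal lookup table, assuming the standard regional endpoint pattern (`https://{service}.{region}.{dns-suffix}`) and the DNS suffixes described in this paper; the ESC suffix (`amazonaws.eu`) is anticipated and may differ at launch.

```python
# Illustrative sketch: resolving partition-aware service endpoints.
# The aws-eusc suffix is anticipated per this paper, not yet confirmed.
PARTITION_DNS_SUFFIX = {
    "aws": "amazonaws.com",
    "aws-cn": "amazonaws.com.cn",
    "aws-us-gov": "amazonaws.com",
    "aws-eusc": "amazonaws.eu",  # anticipated ESC suffix
}

def service_endpoint(service: str, region: str, partition: str) -> str:
    """Build a regional API endpoint for the given partition."""
    suffix = PARTITION_DNS_SUFFIX.get(partition)
    if suffix is None:
        raise ValueError(f"unknown partition: {partition}")
    return f"https://{service}.{region}.{suffix}"

print(service_endpoint("ec2", "eu-central-1", "aws"))
# https://ec2.eu-central-1.amazonaws.com
print(service_endpoint("ec2", "eusc-de-east-1", "aws-eusc"))
# https://ec2.eusc-de-east-1.amazonaws.eu
```

In practice, SDKs such as the AWS CLI and boto3 resolve endpoints from bundled partition metadata; the table above simply makes explicit why credentials and configuration profiles cannot be shared across partitions.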
| Partition | Identifier | Primary Use Case | Geographic Scope |
|---|---|---|---|
| Commercial | aws | Global commercial cloud services | Worldwide (excluding China) |
| China | aws-cn | China-specific cloud services operated by local partners | China mainland |
| GovCloud (US) | aws-us-gov | US federal government and regulated industries | United States |
| ESC | aws-eusc | EU sovereignty and regulatory compliance | European Union |
The AWS European Sovereign Cloud launches with the initial region Brandenburg (eusc-de-east-1) in Germany. This represents the first EU-sovereign AWS region with complete operational independence from the commercial AWS partition.
ESC uses a distinct domain namespace (*.amazonaws.eu) separate from commercial AWS (*.amazonaws.com), reinforcing the complete partition separation at the API and service endpoint level.
The partition identifier forms part of every Amazon Resource Name (ARN), enabling precise resource identification within the appropriate governance domain:
```
arn:partition:service:region:account:resource
```

ARN examples across partitions:

```
# Commercial partition
arn:aws:iam::123456789012:role/my-role

# GovCloud partition
arn:aws-us-gov:iam::123456789012:role/my-role

# ESC partition (anticipated format)
arn:aws-eusc:iam::123456789012:role/my-role
```
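Because the partition identifier is the second segment of every ARN, tooling can extract it to route requests to the correct credentials and endpoints. A minimal sketch, assuming the partition identifiers listed in the table above (`aws-eusc` being the anticipated ESC identifier):

```python
# Illustrative sketch: extracting the partition from an ARN.
# "aws-eusc" is the anticipated ESC identifier, not yet confirmed.
KNOWN_PARTITIONS = {"aws", "aws-cn", "aws-us-gov", "aws-eusc"}

def arn_partition(arn: str) -> str:
    """Return the partition segment of an ARN (arn:partition:service:...)."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"malformed ARN: {arn}")
    partition = parts[1]
    if partition not in KNOWN_PARTITIONS:
        raise ValueError(f"unknown partition: {partition}")
    return partition

print(arn_partition("arn:aws-eusc:iam::123456789012:role/my-role"))
# aws-eusc
```

Partition-aware automation can use this check to reject hard-coded `arn:aws:...` references in templates destined for ESC deployments.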
Summary:
- Partitions provide complete control plane separation and independent governance
- Cross‑partition account sharing is impossible; new accounts required per partition
- Service availability and ecosystem maturity may differ between partitions
- Tooling must be explicitly designed for partition‑aware operations
3. Why ESC Exists and What It Solves
The AWS European Sovereign Cloud (ESC) addresses three primary challenges facing EU organizations: data residency requirements that extend beyond customer content to include metadata and operational telemetry; operational autonomy ensuring that cloud infrastructure management occurs within EU governance frameworks; and EU‑centric oversight providing independent governance aligned with European values and regulatory expectations.
ESC delivers these benefits through EU‑based operations with staff residing in and operating from EU member states. Customer content and customer‑created metadata remain within EU borders, including IAM configurations, resource tags, CloudTrail logs, and billing data. An independent governance structure provides European oversight of operations, security practices, and business decisions affecting ESC customers.
AWS Staff Requirements: ESC infrastructure is operated exclusively by EU‑resident AWS personnel under EU governance. This includes data centre operations, support escalations, security monitoring, and infrastructure management.
Customer Location Flexibility: ESC customers can be located anywhere globally. A US‑based multinational corporation, Asian financial institution, or Australian government agency can all use ESC to meet EU sovereignty requirements for their European operations, data processing, or regulatory compliance needs. Customer location, citizenship, or corporate domicile do not restrict ESC access.
The Value Proposition: ESC enables global organizations to leverage EU‑sovereign cloud infrastructure regardless of their headquarters location, providing EU data residency and governance without requiring the customer organisation itself to be EU‑based.
Figure 2: AWS European Sovereign Cloud (ESC) Governance Overview
The trade‑offs are significant. Organizations start from scratch and must rebuild their AWS foundation within the ESC partition. Service parity will evolve gradually, with initial ESC regions offering core services whilst niche capabilities and newest instance types arrive later. Pricing and procurement processes may differ from commercial AWS. Geographic redundancy is initially limited until multiple ESC regions become available, creating disaster recovery considerations that don't exist in the mature commercial partition.
Organizations must weigh stronger sovereignty posture against reduced service breadth, potentially higher costs, and operational complexity. The decision requires careful analysis of regulatory requirements, service dependencies, performance needs, and operational readiness for managing multi‑partition environments.
| Benefits | Trade‑offs |
|---|---|
| EU‑based operations and governance | Start from scratch and rebuild infrastructure |
| Customer content and metadata residency | Evolving service parity and feature availability |
| Independent European oversight | Potentially different pricing and procurement |
| Regulatory compliance assurance | Limited geographic redundancy initially |
| Familiar AWS APIs and tooling | Operational complexity for multi‑partition setups |
3.1 Regulatory Compliance Alignment
ESC addresses specific regulatory frameworks prevalent in EU public sector and regulated industries. Understanding how ESC characteristics map to compliance requirements enables organizations to leverage ESC capabilities effectively whilst maintaining comprehensive compliance postures.
GDPR (General Data Protection Regulation) benefits from ESC's EU‑resident data processing and storage capabilities. Customer content and metadata remain within EU jurisdiction, supporting data protection impact assessments and privacy by design requirements. EU‑resident operations provide direct accountability under European legal frameworks.
NIS2 (the revised Network and Information Systems Directive) requirements for security measures and incident reporting align with ESC's EU‑based security operations and incident response capabilities. Critical infrastructure operators can demonstrate EU‑controlled cybersecurity measures and governance structures that align with national implementation requirements.
DORA (Digital Operational Resilience Act) for financial services benefits from ESC's independent governance and EU‑resident operations. Financial institutions can demonstrate operational resilience through EU‑controlled cloud services whilst maintaining ICT risk management frameworks that align with European supervisory expectations.
Sector‑specific regulations overlay additional requirements that ESC's governance model supports. Healthcare data processing under national implementations of GDPR, defence and security applications requiring national oversight, and public sector applications with citizen data protection requirements all benefit from ESC's sovereignty characteristics.
Artifact collection and evidence generation require partition‑aware approaches. Audit logs, configuration evidence, and compliance reports remain within appropriate jurisdictional boundaries whilst providing comprehensive evidence for regulatory examinations and assessments.
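A partition-aware approach to evidence routing can be as simple as keying the evidence store on the partition of the resource being audited, so artifacts never cross a jurisdictional boundary. The sketch below is purely illustrative: the bucket names are hypothetical placeholders, not real infrastructure.

```python
# Hypothetical sketch: route audit artifacts to an evidence store in
# the same partition as the audited resource, keeping evidence within
# the appropriate jurisdictional boundary. Bucket names are placeholders.
EVIDENCE_STORES = {
    "aws": "s3://example-evidence-commercial",  # hypothetical bucket
    "aws-eusc": "s3://example-evidence-esc",    # hypothetical bucket
}

def evidence_store_for(resource_arn: str) -> str:
    """Pick the evidence store matching the resource's partition."""
    partition = resource_arn.split(":")[1]
    store = EVIDENCE_STORES.get(partition)
    if store is None:
        raise ValueError(f"no in-partition evidence store for {partition}")
    return store

print(evidence_store_for("arn:aws-eusc:s3:::regulated-data-bucket"))
# s3://example-evidence-esc
```

The key design choice is that the routing rule fails closed: a resource in a partition with no registered in-partition store is rejected rather than defaulting to a commercial-partition destination.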
Summary:
- ESC provides complete sovereignty through EU‑based operations and governance
- Benefits include metadata residency and independent European oversight
- Trade‑offs include service parity evolution and operational complexity
- ESC addresses key EU regulatory frameworks through sovereignty and governance characteristics
- Evidence collection requires partition‑aware approaches to maintain jurisdictional boundaries
- Decision requires balancing sovereignty needs against functionality and complexity
4. Comparison: AWS GovCloud (US) and ESC
AWS GovCloud (US) provides a useful reference point for understanding partition design patterns and operational models. Both ESC and GovCloud represent separate partitions created to address specific regulatory and sovereignty requirements, operating with distinct control planes and governance structures.
Key commonalities include partition‑based isolation ensuring complete separation from commercial AWS operations, compliance‑driven design addressing specific regulatory frameworks, and distinct operational models with specialised staffing and oversight. Both partitions require new account establishment and feature staged service availability compared to commercial regions.
The differences reflect distinct regulatory contexts and operational requirements. ESC targets EU sovereignty requirements with EU‑resident operations, whilst GovCloud addresses US federal requirements with US‑person operations. Regulatory frameworks differ significantly: ESC aligns with GDPR, NIS2, and emerging EU digital sovereignty legislation, whilst GovCloud focuses on FedRAMP, ITAR, and US federal security standards.
Access eligibility varies significantly between partitions. ESC will be available to any organisation or individual, similar to commercial AWS, enabling broad adoption for sovereignty requirements. GovCloud (US) has strict access restrictions: account holders must be US entities incorporated to do business in the United States, physically located on US soil, and operated by US persons (citizens or active Green Card holders) capable of handling ITAR export‑controlled data.
Staffing and operations residency requirements vary, with ESC emphasising EU residency and governance, whilst GovCloud requires US persons for certain operations. Service availability trajectories reflect different market priorities and regulatory approval processes. Marketplace ecosystems develop independently, with different vendor participation patterns and compliance requirements.
AWS European Sovereign Cloud (ESC) operates with dedicated billing systems independent from commercial AWS, providing complete financial sovereignty and EU-resident billing operations.
AWS GovCloud (US) has significant billing limitations: all billing and cost management must be accessed through an associated standard commercial AWS account. GovCloud accounts cannot view billing directly within the GovCloud console. Cost and Usage Reports for GovCloud are only available in the commercial partition, and Savings Plans must be purchased through the commercial account to apply to GovCloud usage.
Summary:
- Both partitions address sovereignty through separate control planes and governance
- Regulatory contexts differ significantly between US federal and EU sovereignty requirements
- Operational models reflect distinct staffing and oversight requirements
- Service and marketplace evolution follows different trajectories based on market needs
4.1 Service Availability Comparison
The following table compares service availability across Commercial AWS (Frankfurt), AWS GovCloud (US), and AWS European Sovereign Cloud (ESC). This represents a point-in-time snapshot, and actual availability may vary. Even when a service is listed as available, some features or capabilities may differ from the commercial partition or may not be immediately available.
| Service | Category | Commercial (Frankfurt) | GovCloud (US) | ESC (Planned) |
|---|---|---|---|---|
| Amazon Athena | Analytics | ✅ | ✅ | ✅ |
| Amazon CloudSearch | Analytics | ✅ | ❌ | ❌ |
| Amazon Data Firehose | Analytics | ✅ | ✅ | ✅ |
| Amazon DataZone | Analytics | ✅ | ❌ | ❌ |
| Amazon EMR | Analytics | ✅ | ✅ | ✅ |
| Amazon FinSpace | Analytics | ✅ | ❌ | ❌ |
| Amazon Forecast | Analytics | ✅ | ❌ | ❌ |
| Amazon Kinesis Data Streams | Analytics | ✅ | ✅ | ✅ |
| Amazon Kinesis Video Streams | Analytics | ✅ | ✅ | ❌ |
| Amazon Lookout for Metrics | Analytics | ✅ | ❌ | ❌ |
| Amazon Lookout for Vision | Analytics | ✅ | ❌ | ❌ |
| Amazon Managed Grafana | Analytics | ✅ | ❌ | ❌ |
| Amazon Managed Service for Apache Flink | Analytics | ✅ | ✅ | ✅ |
| Amazon Managed Service for Prometheus | Analytics | ✅ | ✅ | ❌ |
| Amazon Managed Streaming for Apache Kafka (MSK) | Analytics | ✅ | ✅ | ✅ |
| Amazon Managed Workflows for Apache Airflow | Analytics | ✅ | ❌ | ❌ |
| Amazon OpenSearch Service | Analytics | ✅ | ✅ | ✅ |
| Amazon QuickSight | Analytics | ✅ | ✅ | ❌ |
| Amazon Redshift | Analytics | ✅ | ✅ | ✅ |
| Amazon Timestream | Analytics | ✅ | ❌ | ❌ |
| AWS Glue | Analytics | ✅ | ✅ | ✅ |
| AWS Lake Formation | Analytics | ✅ | ✅ | ✅ |
| Amazon API Gateway | Application Integration | ✅ | ✅ | ✅ |
| Amazon AppFlow | Application Integration | ✅ | ❌ | ❌ |
| Amazon EventBridge | Application Integration | ✅ | ✅ | ✅ |
| Amazon MQ | Application Integration | ✅ | ✅ | ❌ |
| Amazon Simple Notification Service (SNS) | Application Integration | ✅ | ✅ | ✅ |
| Amazon Simple Queue Service (SQS) | Application Integration | ✅ | ✅ | ✅ |
| Amazon Simple Workflow Service (SWF) | Application Integration | ✅ | ✅ | ✅ |
| AWS Step Functions | Application Integration | ✅ | ✅ | ✅ |
| Amazon Augmented AI (A2I) | AI/ML | ✅ | ❌ | ❌ |
| Amazon Bedrock | AI/ML | ✅ | ✅ | ✅ |
| Amazon CodeGuru | AI/ML | ✅ | ❌ | ❌ |
| Amazon Comprehend | AI/ML | ✅ | ❌ | ❌ |
| Amazon DevOps Guru | AI/ML | ✅ | ❌ | ❌ |
| Amazon Lex | AI/ML | ✅ | ❌ | ❌ |
| Amazon Personalize | AI/ML | ✅ | ❌ | ❌ |
| Amazon Polly | AI/ML | ✅ | ❌ | ❌ |
| Amazon Q Developer | AI/ML | ✅ | ❌ | ✅ |
| Amazon Rekognition | AI/ML | ✅ | ❌ | ❌ |
| Amazon SageMaker AI | AI/ML | ✅ | ✅ | ✅ |
| Amazon Textract | AI/ML | ✅ | ✅ | ❌ |
| Amazon Transcribe | AI/ML | ✅ | ✅ | ❌ |
| Amazon Translate | AI/ML | ✅ | ❌ | ❌ |
| Amazon Chime | Business Applications | ✅ | ✅ | ❌ |
| Amazon Chime SDK | Business Applications | ✅ | ✅ | ❌ |
| Amazon Connect | Business Applications | ✅ | ❌ | ❌ |
| Amazon Pinpoint and End User Messaging | Business Applications | ✅ | ✅ | ❌ |
| Amazon Simple Email Service (SES) | Business Applications | ✅ | ✅ | ✅ |
| AWS Wickr | Business Applications | ✅ | ❌ | ❌ |
| AWS Budgets | Cloud Financial Management | ✅ | ❌ | ✅ |
| AWS Cost Explorer | Cloud Financial Management | ✅ | ❌ | ✅ |
| Amazon EC2 Auto Scaling | Compute | ✅ | ✅ | ✅ |
| Amazon Elastic Compute Cloud (EC2) | Compute | ✅ | ✅ | ✅ |
| Amazon Lightsail | Compute | ✅ | ❌ | ❌ |
| AWS Batch | Compute | ✅ | ✅ | ✅ |
| AWS Elastic Beanstalk | Compute | ✅ | ✅ | ❌ |
| AWS Lambda | Compute | ✅ | ✅ | ✅ |
| AWS Outposts | Compute | ✅ | ✅ | ❌ |
| AWS Parallel Computing Service (PCS) | Compute | ✅ | ✅ | ❌ |
| EC2 Image Builder | Compute | ✅ | ✅ | ✅ |
| Amazon Elastic Container Registry (ECR) | Containers | ✅ | ✅ | ✅ |
| Amazon Elastic Container Service (ECS) | Containers | ✅ | ✅ | ✅ |
| Amazon Elastic Kubernetes Service (EKS) | Containers | ✅ | ✅ | ✅ |
| AWS App Runner | Containers | ✅ | ❌ | ❌ |
| AWS Fargate | Containers | ✅ | ✅ | ✅ |
| Red Hat OpenShift Service on AWS (ROSA) | Containers | ✅ | ✅ | ❌ |
| Amazon CloudFront | Content Delivery | ✅ | ❌ | ❌ |
| Amazon Aurora | Database | ✅ | ✅ | ✅ |
| Amazon DocumentDB (with MongoDB compatibility) | Database | ✅ | ✅ | ❌ |
| Amazon DynamoDB | Database | ✅ | ✅ | ✅ |
| Amazon DynamoDB Accelerator | Database | ✅ | ❌ | ❌ |
| Amazon ElastiCache | Database | ✅ | ✅ | ✅ |
| Amazon Keyspaces (for Apache Cassandra) | Database | ✅ | ✅ | ❌ |
| Amazon MemoryDB | Database | ✅ | ✅ | ❌ |
| Amazon Neptune | Database | ✅ | ✅ | ✅ |
| Amazon Quantum Ledger Database (QLDB) | Database | ✅ | ❌ | ❌ |
| Amazon Relational Database Service (RDS) | Database | ✅ | ✅ | ✅ |
| AWS Application Discovery Service | Developer Tools | ✅ | ❌ | ❌ |
| AWS Cloud9 | Developer Tools | ✅ | ❌ | ❌ |
| AWS CodeArtifact | Developer Tools | ✅ | ❌ | ❌ |
| AWS CodeBuild | Developer Tools | ✅ | ✅ | ❌ |
| AWS CodeCommit | Developer Tools | ✅ | ✅ | ❌ |
| AWS CodeDeploy | Developer Tools | ✅ | ✅ | ✅ |
| AWS CodePipeline | Developer Tools | ✅ | ✅ | ❌ |
| AWS Proton | Developer Tools | ✅ | ❌ | ❌ |
| AWS X-Ray | Developer Tools | ✅ | ✅ | ✅ |
| Amazon AppStream 2.0 | End User Computing | ✅ | ✅ | ❌ |
| Amazon WorkSpaces | End User Computing | ✅ | ✅ | ❌ |
| Amazon WorkSpaces Secure Browser | End User Computing | ✅ | ❌ | ❌ |
| Amazon WorkSpaces Thin Client | End User Computing | ✅ | ❌ | ❌ |
| AWS Amplify | Front-End Web & Mobile | ✅ | ❌ | ❌ |
| AWS AppSync | Front-End Web & Mobile | ✅ | ❌ | ❌ |
| Amazon GameLift Servers | Game Tech | ✅ | ❌ | ❌ |
| Amazon GameLift Streams | Game Tech | ✅ | ❌ | ❌ |
| Amazon Lumberyard | Game Tech | ✅ | ❌ | ❌ |
| AWS IoT Analytics | Internet of Things | ✅ | ❌ | ❌ |
| AWS IoT Core | Internet of Things | ✅ | ✅ | ❌ |
| AWS IoT Device Defender | Internet of Things | ✅ | ✅ | ❌ |
| AWS IoT Device Management | Internet of Things | ✅ | ✅ | ❌ |
| AWS IoT Events | Internet of Things | ✅ | ❌ | ❌ |
| AWS IoT FleetWise | Internet of Things | ✅ | ❌ | ❌ |
| AWS IoT Greengrass | Internet of Things | ✅ | ✅ | ❌ |
| AWS IoT SiteWise | Internet of Things | ✅ | ❌ | ❌ |
| AWS IoT TwinMaker | Internet of Things | ✅ | ❌ | ❌ |
| FreeRTOS | Internet of Things | ✅ | ❌ | ❌ |
| Amazon Location Service | Location Services | ✅ | ❌ | ❌ |
| Amazon CloudWatch | Management & Governance | ✅ | ✅ | ✅ |
| Amazon CloudWatch Logs | Management & Governance | ✅ | ✅ | ❌ |
| AWS Audit Manager | Management & Governance | ✅ | ❌ | ❌ |
| AWS Auto Scaling | Management & Governance | ✅ | ✅ | ❌ |
| AWS Chatbot | Management & Governance | ✅ | ❌ | ❌ |
| AWS Cloud Control API | Management & Governance | ✅ | ✅ | ❌ |
| AWS CloudFormation | Management & Governance | ✅ | ✅ | ✅ |
| AWS CloudShell | Management & Governance | ✅ | ✅ | ❌ |
| AWS CloudTrail | Management & Governance | ✅ | ✅ | ✅ |
| AWS Compute Optimizer | Management & Governance | ✅ | ✅ | ❌ |
| AWS Config | Management & Governance | ✅ | ✅ | ✅ |
| AWS Control Tower | Management & Governance | ✅ | ✅ | ✅ |
| AWS Health Dashboard | Management & Governance | ✅ | ✅ | ✅ |
| AWS Launch Wizard | Management & Governance | ✅ | ✅ | ❌ |
| AWS License Manager | Management & Governance | ✅ | ✅ | ✅ |
| AWS Managed Services | Management & Governance | ✅ | ✅ | ❌ |
| AWS OpsWorks for Chef Automate | Management & Governance | ✅ | ❌ | ❌ |
| AWS OpsWorks for Puppet Enterprise | Management & Governance | ✅ | ❌ | ❌ |
| AWS Organizations | Management & Governance | ✅ | ✅ | ✅ |
| AWS Resource Explorer | Management & Governance | ✅ | ❌ | ❌ |
| AWS Resource Groups | Management & Governance | ✅ | ✅ | ❌ |
| AWS Service Catalog | Management & Governance | ✅ | ✅ | ❌ |
| AWS Systems Manager | Management & Governance | ✅ | ✅ | ✅ |
| AWS Trusted Advisor | Management & Governance | ✅ | ✅ | ✅ |
| AWS User Notifications | Management & Governance | ✅ | ❌ | ❌ |
| AWS Well-Architected Tool | Management & Governance | ✅ | ✅ | ❌ |
| Amazon Elastic Transcoder | Media Services | ✅ | ❌ | ❌ |
| Amazon IVS | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaConnect | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaConvert | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaLive | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaPackage | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaStore | Media Services | ✅ | ❌ | ❌ |
| AWS Elemental MediaTailor | Media Services | ✅ | ❌ | ❌ |
| AWS Application Migration Service (MGN) | Migration & Modernization | ✅ | ✅ | ❌ |
| AWS Database Migration Service (DMS) | Migration & Modernization | ✅ | ✅ | ✅ |
| AWS DataSync | Migration & Modernization | ✅ | ✅ | ✅ |
| AWS Elastic Disaster Recovery (DRS) | Migration & Modernization | ✅ | ✅ | ❌ |
| AWS Mainframe Modernization | Migration & Modernization | ✅ | ✅ | ❌ |
| AWS Migration Hub | Migration & Modernization | ✅ | ❌ | ❌ |
| AWS Transfer Family | Migration & Modernization | ✅ | ✅ | ✅ |
| CloudEndure Disaster Recovery | Migration & Modernization | ✅ | ❌ | ❌ |
| CloudEndure Migration | Migration & Modernization | ✅ | ✅ | ❌ |
| Amazon Route 53 | Networking & Content Delivery | ✅ | ✅ | ✅ |
| Amazon VPC Lattice | Networking & Content Delivery | ✅ | ❌ | ❌ |
| Amazon Virtual Private Cloud (VPC) | Networking & Content Delivery | ✅ | ✅ | ✅ |
| AWS App Mesh | Networking & Content Delivery | ✅ | ❌ | ❌ |
| AWS Cloud Map | Networking & Content Delivery | ✅ | ✅ | ✅ |
| AWS Direct Connect | Networking & Content Delivery | ✅ | ✅ | ✅ |
| AWS Global Accelerator | Networking & Content Delivery | ✅ | ❌ | ❌ |
| AWS Network Firewall | Networking & Content Delivery | ✅ | ✅ | ❌ |
| AWS PrivateLink | Networking & Content Delivery | ✅ | ✅ | ❌ |
| AWS Site-to-Site VPN | Networking & Content Delivery | ✅ | ✅ | ✅ |
| AWS Transit Gateway | Networking & Content Delivery | ✅ | ✅ | ✅ |
| AWS Client VPN | Networking & Content Delivery | ✅ | ✅ | ❌ |
| Elastic Load Balancing (ELB) | Networking & Content Delivery | ✅ | ✅ | ✅ |
| Amazon Braket | Quantum Technologies | ✅ | ❌ | ❌ |
| AWS RoboMaker | Robotics | ✅ | ❌ | ❌ |
| AWS Ground Station | Satellite | ✅ | ❌ | ❌ |
| Amazon Cloud Directory | Security, Identity, & Compliance | ✅ | ❌ | ❌ |
| Amazon Cognito | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| Amazon Detective | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| Amazon GuardDuty | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| Amazon Inspector | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| Amazon Inspector Classic | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| Amazon Macie | Security, Identity, & Compliance | ✅ | ❌ | ❌ |
| Amazon Security Lake | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| Amazon Verified Permissions | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| AWS Artifact | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Certificate Manager (ACM) | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS CloudHSM | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| AWS Directory Service | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Firewall Manager | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| AWS IAM Identity Center | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Identity and Access Management (IAM) | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Key Management Service (KMS) | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Payment Cryptography | Security, Identity, & Compliance | ✅ | ❌ | ❌ |
| AWS Private Certificate Authority | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Resource Access Manager (RAM) | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Secrets Manager | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Security Hub | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| AWS Security Incident Response | Security, Identity, & Compliance | ✅ | ❌ | ❌ |
| AWS Shield | Security, Identity, & Compliance | ✅ | ❌ | ✅ |
| AWS Shield Advanced | Security, Identity, & Compliance | ✅ | ❌ | ❌ |
| AWS Signer | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| AWS Verified Access | Security, Identity, & Compliance | ✅ | ✅ | ❌ |
| AWS WAF | Security, Identity, & Compliance | ✅ | ✅ | ✅ |
| IAM Access Analyzer | Security, Identity, & Compliance | ✅ | ❌ | ✅ |
| Amazon Elastic Block Store (EBS) | Storage | ✅ | ✅ | ✅ |
| Amazon Elastic File System (EFS) | Storage | ✅ | ✅ | ✅ |
| Amazon FSx | Storage | ✅ | ✅ | ❌ |
| Amazon FSx for Lustre | Storage | ✅ | ✅ | ✅ |
| Amazon FSx for NetApp ONTAP | Storage | ✅ | ✅ | ✅ |
| Amazon FSx for OpenZFS | Storage | ✅ | ✅ | ✅ |
| Amazon FSx for Windows File Server | Storage | ✅ | ✅ | ✅ |
| Amazon File Cache | Storage | ✅ | ❌ | ❌ |
| Amazon Simple Storage Service (S3) | Storage | ✅ | ✅ | ✅ |
| AWS Backup | Storage | ✅ | ✅ | ✅ |
| AWS Snowball | Storage | ✅ | ✅ | ❌ |
| AWS Snowcone | Storage | ✅ | ❌ | ❌ |
| AWS Storage Gateway | Storage | ✅ | ✅ | ✅ |
| AWS Serverless Application Repository | Additional Services | ✅ | ✅ | ❌ |
| AWS Clean Rooms | Additional Services | ✅ | ❌ | ❌ |
| AWS Data Exchange | Additional Services | ✅ | ❌ | ❌ |
| AWS Deadline Cloud | Additional Services | ✅ | ❌ | ❌ |
| AWS Entity Resolution | Additional Services | ✅ | ❌ | ❌ |
| AWS Fault Injection Service | Additional Services | ✅ | ✅ | ❌ |
| AWS HealthOmics | Additional Services | ✅ | ❌ | ❌ |
| AWS IQ | Additional Services | ✅ | ❌ | ❌ |
| AWS Marketplace | Additional Services | ✅ | ✅ | ✅ |
| AWS Resilience Hub | Additional Services | ✅ | ✅ | ❌ |
| AWS SimSpace Weaver | Additional Services | ✅ | ✅ | ❌ |
| AWS Support | Additional Services | ✅ | ✅ | ✅ |
| AWS Telco Network Builder | Additional Services | ✅ | ❌ | ❌ |
| AWS Transform | Additional Services | ✅ | ❌ | ❌ |
| VMware Cloud on AWS | Additional Services | ✅ | ✅ | ❌ |
Sources:
- ESC services: AWS Blog, "Initial Services Available in AWS European Sovereign Cloud"
- GovCloud and Commercial services: AWS Global Infrastructure, Regional Product Services
5. Adoption Scenarios and Decision Framework
Organizations face three primary adoption scenarios when considering ESC. Each scenario presents distinct advantages and challenges that must be evaluated against specific requirements for sovereignty, service availability, operational complexity, and innovation velocity.
Scenario A: All‑in on ESC provides the strongest sovereignty posture by operating exclusively within the ESC partition. This approach simplifies compliance assurance and audit evidence collection, as all operations occur within the sovereign environment. However, organizations accept service availability limitations, reduced ecosystem maturity, and geographic redundancy constraints until multiple ESC regions become available.
Scenario B: Remain in Commercial Partition maintains full service parity and access to the global AWS ecosystem. Organizations benefit from mature services, extensive marketplace offerings, and proven operational patterns. This approach may not satisfy sovereignty requirements for regulated workloads or public sector entities with strict data residency mandates.
Scenario C: Selective ESC + Commercial (Dual‑Partition) offers targeted sovereignty where regulatory requirements demand it whilst retaining innovation velocity for non‑regulated workloads. Organizations can place sensitive data and regulated processes in ESC whilst leveraging commercial AWS for development, testing, and non‑sensitive operations. This approach introduces the highest operational complexity but enables both compliance and innovation.
The following decision tree provides a starting point for evaluating ESC adoption scenarios based on core regulatory and operational drivers. While sovereignty requirements and service dependencies form the primary decision criteria, organizations must also consider additional factors including latency sensitivity, cost constraints, operational readiness, and long-term strategic objectives.
Figure 3: Decision tree mapping regulatory, architectural, and operational drivers
Organizations should use this decision tree as an initial assessment tool, then conduct deeper analysis of their specific requirements. Beyond the primary sovereignty and service considerations, factors such as existing multi-region compliance experience, organizational change readiness, vendor relationship complexity, and technical team capabilities all influence the optimal adoption path. The decision tree simplifies complex trade-offs to provide directional guidance, but each organization's unique context requires thorough evaluation of all relevant considerations.
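The directional logic of such a decision tree can be sketched in a few lines. This is a deliberate simplification under the assumption that sovereignty mandates and ESC service coverage are the two primary gates; as noted above, real decisions also weigh latency, cost, and operational readiness.

```python
# Directional sketch of a decision tree for the three adoption
# scenarios. Simplifying assumption: sovereignty mandates and ESC
# service coverage are the only gates considered here.
def recommend_scenario(sovereignty_required: bool,
                       all_services_in_esc: bool,
                       has_non_regulated_workloads: bool) -> str:
    """Map the primary decision drivers to a starting-point scenario."""
    if not sovereignty_required:
        # No regulatory driver: commercial AWS retains full parity.
        return "B: remain in commercial partition"
    if all_services_in_esc and not has_non_regulated_workloads:
        # Everything fits in ESC and nothing needs commercial breadth.
        return "A: all-in on ESC"
    # Regulated workloads go to ESC; the rest stays commercial.
    return "C: dual-partition (ESC + commercial)"

print(recommend_scenario(True, False, True))
# C: dual-partition (ESC + commercial)
```

Treat the output as a starting hypothesis to validate against the fuller list of considerations, not as a final placement decision.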
Summary:
- Three primary scenarios address different sovereignty and operational priorities
- All‑in ESC maximises sovereignty but limits service availability
- Dual‑partition enables targeted sovereignty with operational complexity trade‑offs
- Decision framework should weigh compliance needs against service and operational requirements
6. Operating Across Multiple AWS Partitions
Operating across both commercial AWS and ESC (or any other partition) requires deliberate organisational design, clear ownership models, and partition‑aware operational procedures. The dual‑partition approach shares many characteristics with multi‑cloud strategies, introducing similar complexity patterns whilst offering unique value propositions that make the operational overhead worthwhile for organizations with sovereignty requirements.
The Multi‑Cloud Parallel: Like multi‑cloud environments, dual‑partition operations present a "best of both worlds" opportunity alongside significant complexity challenges. Organizations gain access to innovation velocity from commercial AWS, with its rapid service development, extensive marketplace, and mature ecosystem, whilst meeting strict compliance requirements through ESC's EU‑sovereign operations. However, this approach introduces the familiar multi‑cloud challenges of disparate service portfolios, configuration drift risks, operational complexity, and the need for unified visibility across heterogeneous environments.
Benefits: Innovation Meets Compliance: The primary value proposition lies in targeted sovereignty deployment. Regulated workloads benefit from ESC's complete EU residency and governance whilst non‑regulated workloads leverage commercial AWS's full service breadth and global reach. This enables organizations to adopt emerging AI/ML services, leverage extensive marketplace offerings, and access bleeding‑edge capabilities for development and innovation workloads, whilst ensuring that customer data processing and regulated operations meet the strictest sovereignty requirements.
Complexity: The Multi‑Environment Tax: The operational overhead mirrors multi‑cloud challenges but with partition‑specific constraints. Service availability differences require constant gap analysis and alternative implementations. Configuration drift becomes a persistent risk as changes in one partition may not replicate to the other, demanding robust automation and governance processes. Consolidated visibility requires external tooling and aggregation platforms, as native AWS observability services cannot span partition boundaries. Organizations must maintain duplicate foundational infrastructure, manage separate billing relationships, and develop partition‑aware operational procedures.
The operating model establishes ownership boundaries and responsibilities across partitions. Platform teams typically own the foundational infrastructure in both partitions, including account vending, networking, and security tooling. Application teams focus on workload deployment within appropriate partitions based on data classification and regulatory requirements. Security and compliance teams develop partition‑specific policies whilst maintaining consistent control objectives.
Figure 4: Reference operating model for two partitions
6.1 Risks and Anti‑Patterns
Multi‑partition operations introduce specific risks that require active monitoring and mitigation. Understanding these risks and avoiding common anti‑patterns enables successful long‑term operations across partition boundaries.
Primary Risk Categories: Top risks include tooling fragmentation leading to operational blind spots and increased complexity, hidden cross‑partition data flows that violate sovereignty boundaries or compliance requirements, and configuration drift between partitions creating security vulnerabilities or operational inconsistencies. Service parity evolution risks include dependency on services that may not migrate to ESC or features that differ between partitions.
Cost and Security Escalation: Cost escalation risks emerge from duplicate infrastructure, increased licensing, and operational overhead. Security risks include credential sharing across partitions, misconfigured cross‑partition networking, and inconsistent security policies. Operational risks encompass incident response complexity, knowledge fragmentation across teams, and reduced operational efficiency due to partition constraints.
Critical Anti‑Patterns to Avoid:
- Sharing IAM credentials or access keys across partitions violates security boundaries and complicates audit trails
- Unmanaged public endpoints without proper authentication create security vulnerabilities and potential data exfiltration risks
- Copy‑paste Infrastructure as Code without partition‑aware configuration leads to deployment failures and maintenance overhead
- Ignoring service parity differences leads to deployment failures when services or features are unavailable
- Bypassing approved connectivity patterns creates security risks and operational complications
Mitigation Strategies: Key mitigations include automated policy enforcement through Service Control Policies and guardrails, regular configuration audits to detect drift and non‑compliance, partition‑aware monitoring and alerting to maintain operational visibility, and comprehensive documentation of approved patterns and procedures.
| Risk | Likelihood | Impact | Indicator | Mitigation |
|---|---|---|---|---|
| Tooling Fragmentation | High | Medium | Multiple tool instances | Unified dashboards, automation |
| Cross‑Partition Data Flows | Medium | High | Unexpected network traffic | Network monitoring, Data Loss Prevention |
| Configuration Drift | High | Medium | Policy violations | Automated compliance scanning |
| Service Parity Issues | Medium | High | Deployment failures | Service matrix tracking |
| Cost Escalation | Medium | Medium | Budget variance | Unified cost reporting |
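The automated policy enforcement listed among the mitigations above can be expressed in Terraform, the IaC approach this paper favours. The following is a minimal sketch, not a production guardrail: the policy name and OU variable are illustrative, and the approved-region list assumes the announced eusc-de-east-1 ESC region.

```hcl
variable "workloads_ou_id" {
  description = "Organizational unit to attach the guardrail to (illustrative)"
  type        = string
}

# Guardrail: deny any request outside the approved region for this partition
resource "aws_organizations_policy" "region_guardrail" {
  name = "deny-outside-approved-regions"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid      = "DenyOutsideApprovedRegions"
      Effect   = "Deny"
      Action   = "*"
      Resource = "*"
      Condition = {
        StringNotEquals = {
          "aws:RequestedRegion" = ["eusc-de-east-1"]
        }
      }
    }]
  })
}

resource "aws_organizations_policy_attachment" "workloads" {
  policy_id = aws_organizations_policy.region_guardrail.id
  target_id = var.workloads_ou_id
}
```

An equivalent policy would be maintained per partition, since Service Control Policies cannot span Organization (and therefore partition) boundaries.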
Summary:
- Dual‑partition operations require clear ownership models and governance boundaries
- Organisational structures can mirror across partitions or maintain separate hierarchies
- AWS operational constraints in ESC affect customer incident response and change management
- Multi‑partition operations introduce systematic risks requiring active management
- Anti‑patterns like credential sharing and unmanaged endpoints create significant security risks
- Mitigation requires automated enforcement, regular auditing, and comprehensive monitoring
- Success depends on deliberate risk management and adherence to approved patterns
7. Lessons Learned from Multi‑Region Compliance Strategies
Many organizations have implemented multi‑region compliance strategies to address data residency and regulatory requirements. These experiences provide valuable insights for multi‑partition architectures, though partition boundaries introduce harder constraints than regional separation.
Regulated industries often run sensitive workloads in specific regions (such as Zurich for Swiss government or financial requirements) whilst operating innovation workloads in broader regions like Frankfurt. This pattern establishes data classification frameworks, workload placement policies, and operational boundaries that translate to partition environments.
Key patterns that transfer to multi‑partition environments include workload classification based on data sensitivity and regulatory requirements, network segmentation ensuring traffic isolation between compliance domains, telemetry handling with appropriate data residency controls, and audit evidence collection tailored to regulatory frameworks.
Service Availability Challenges Exist in Both Models: Multi‑region compliance strategies already deal with service availability drift, particularly when using opt‑in regions. Zurich (eu‑central‑2) exemplifies this challenge. As an opt‑in region designed for Swiss data residency requirements, it offers a significantly reduced service portfolio compared to Frankfurt (eu‑central‑1). Organizations operating across Zurich and Frankfurt must architect workloads that accommodate service gaps, implement alternative solutions for missing capabilities, and manage the complexity of multi-region deployments. This mirrors the partition challenge but within the same governance domain. However, partitions introduce harder constraints that don't exist in multi‑region deployments.
The lesson for ESC adoption is that existing multi‑region compliance expertise provides a foundation, but partition‑specific patterns require additional consideration for identity, networking, and service parity management.
Summary:
- Multi‑region compliance strategies provide foundation patterns for partition architectures
- Workload classification, network segmentation, and audit evidence patterns transfer effectively
- Partition boundaries introduce harder constraints than regional separation
- Existing compliance expertise accelerates ESC adoption but requires partition‑specific adaptation
8. Core Challenges in Multi‑Partition Engineering
Multi‑partition architectures introduce specific engineering challenges that require deliberate mitigation strategies. These challenges span infrastructure management, tooling integration, service availability, cost optimisation, workload connectivity, and security operations.
Dual Landing Zone Complexity
A Landing Zone provides the foundational infrastructure and governance framework for AWS environments, including account structures, networking components, security baselines, and operational procedures that enable teams to deploy workloads safely and consistently. Operating two Landing Zones introduces both duplication and divergence risks:
- Organizations must maintain foundational infrastructure in both partitions, including account vending, networking components, and security tooling
- Configuration drift becomes a significant risk as changes in one partition may not propagate to the other
- Baseline configurations require partition‑aware templates and mechanisms to prevent divergence
Cross‑Partition Connectivity Constraints
AWS provides no native networking mechanisms between partitions, eliminating traditional connectivity options and forcing alternative architectural patterns:
- VPC peering, Transit Gateway connectivity, and PrivateLink options are unavailable across partition boundaries
- Workload communication requires alternative patterns including application‑level APIs, VPN tunnels over the internet, or routing through on‑premises via Direct Connect
- Each approach introduces latency, complexity, and potential security considerations that must be carefully evaluated
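One of the alternative patterns above, a VPN tunnel over the internet, can be sketched from the commercial-partition side. This is a hedged illustration, not a reference design: the ASN, peer IP, and VPC variable are placeholders, and the ESC side would need a matching IPsec endpoint configured separately since no managed cross-partition attachment exists.

```hcl
variable "vpc_id" {
  description = "VPC in the commercial partition (placeholder)"
  type        = string
}

resource "aws_vpn_gateway" "commercial" {
  vpc_id = var.vpc_id
}

# Represents the ESC-side IPsec endpoint; IP and ASN are illustrative
resource "aws_customer_gateway" "esc_side" {
  bgp_asn    = 65010
  ip_address = "203.0.113.10"
  type       = "ipsec.1"
}

resource "aws_vpn_connection" "cross_partition" {
  vpn_gateway_id      = aws_vpn_gateway.commercial.id
  customer_gateway_id = aws_customer_gateway.esc_side.id
  type                = "ipsec.1"
  static_routes_only  = true
}
```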
Security Operations Fragmentation
Unified security operations become significantly more complex across partition boundaries, requiring external systems and processes:
- Security services operate independently within each partition, requiring external aggregation for consolidated visibility
- Incident response procedures must accommodate different operational models and partition‑specific constraints
- Cross‑partition security correlation must maintain appropriate data residency controls for telemetry originating from sovereign environments
Tooling and Integration Complexity
Tooling fragmentation affects every aspect of operations, multiplying administrative overhead and integration requirements:
- Identity systems require separate configurations for each partition with potential federation challenges
- Billing and cost management span multiple systems, complicating financial reporting and optimization
- Monitoring and SIEM solutions need partition‑specific deployments with aggregation capabilities
- Backup and disaster recovery solutions must accommodate partition boundaries and data residency requirements
Service Parity Drift
Service availability differences create ongoing architectural and operational challenges as partitions evolve at different rates:
- Commercial AWS introduces new services and features continuously, whilst ESC service availability evolves based on regulatory approval and operational readiness
- Organizations must maintain service parity matrices and track capability gaps between partitions
- Development teams need workarounds for missing capabilities and migration paths as services become available
- Feature inconsistencies can create technical debt and limit architectural flexibility
Cost and Procurement Complexity
Multi‑partition operations introduce financial and procurement challenges that extend beyond technical considerations:
- Organizations need to manage separate billing relationships and potentially different pricing models
- Software licence entitlements may not transfer between partitions, requiring duplicate procurement
- Marketplace availability is anticipated to differ, potentially affecting vendor relationships and sourcing strategies
- Cost attribution and optimization become more complex across multiple billing systems and governance domains
| Challenge | Impact | Mitigation |
|---|---|---|
| Dual Landing Zones | Configuration drift, duplicate maintenance | Shared configuration, partition‑aware IaC templates |
| Cross‑partition connectivity | Limited networking options, latency, complexity | Application APIs, VPN tunnels, on-premises routing |
| Unified security operations | Fragmented visibility, complex incident response | Cross-partition aggregation, partition‑aware playbooks |
| Tooling fragmentation | Operational overhead, licence proliferation | Aggregated dashboards, centralised identity |
| Service parity drift | Feature gaps, technical debt | Service matrix tracking, architectural flexibility |
| Cost complexity | Split billing, unclear attribution | Unified reporting, partition‑aware tagging |
Summary:
- Multi‑partition engineering introduces systematic complexity across all operational domains
- Configuration management and tooling fragmentation require architectural solutions
- Service parity drift demands ongoing tracking and flexible architectural patterns
- Cost and procurement complexity needs unified reporting and governance approaches
9. Multi-Partition Integration Patterns (Blueprints)
This chapter presents seven comprehensive pattern areas that address the core integration challenges in multi-partition environments. Each pattern area provides detailed implementation approaches, trade-off analysis, and practical guidance for organizations managing infrastructure across AWS partitions.
Each pattern area includes:
- Multiple implementation options with detailed pros/cons analysis
- Comprehensive comparison matrices for pattern selection
- Practical code examples and configuration guidance
- Real-world considerations including costs, complexity, and operational impact
- ESC-specific constraints and service availability considerations
9.1 Landing Zone and Account Vending
A Landing Zone provides the foundational infrastructure and governance framework for AWS environments, including account structures, networking components, security baselines, and operational procedures that enable teams to deploy workloads safely and consistently. Multiple partitions require multiple Landing Zones, leading to duplication and divergence risks:
- Organizations must maintain foundational infrastructure in both partitions, including account vending, networking components, and security tooling
- Configuration drift becomes a significant risk as changes in one partition may not propagate to the other
- Baseline configurations require partition‑aware templates and mechanisms to prevent divergence
Managing Landing Zones across partitions requires careful consideration of operational models that balance governance consistency with partition isolation. Organizations face two primary approaches based on their Infrastructure as Code maturity and tolerance for operational complexity.
Infrastructure as Code (IaC) is essential for managing multi-partition Landing Zones and account vending. The complexity of maintaining consistent configurations across separate AWS partitions makes manual management approaches unsustainable at scale.
Preventing Configuration Drift: IaC templates ensure identical configurations deploy across partitions, with version control tracking all changes and automated drift detection identifying deviations between partitions.
Compliance Parity Assurance: Template-driven deployments guarantee that compliance controls remain consistent across partitions, enabling auditors to verify control implementation through code review rather than environment-by-environment inspection.
Operational Scalability: Automated account vending and baseline deployment workflows ensure consistent user experiences and security configurations without manual intervention or partition-specific procedures.
Change Management: Centralized IaC repositories with proper branching strategies and approval workflows ensure that changes undergo review before deployment to either partition, reducing the risk of introducing inconsistencies.
AWS Control Tower requires separate deployment in each partition, as it operates within the scope of a single AWS Organization and cannot manage resources across partition boundaries.
Landing Zone Accelerator on AWS (LZA) has the same limitation - it maps to a single AWS Organization and requires duplication of configurations for multi-partition deployments. Organizations using LZA must maintain separate configuration files and operational procedures for each partition.
Multi-Partition Implications:
- Each partition requires its own Control Tower setup with separate organizational unit structures, guardrails, and account baselines
- Account vending becomes a separate process for each partition
- Configuration synchronization between partitions requires manual coordination or custom automation
- Operational overhead increases significantly compared to single-partition deployments
Mitigation Strategy: Organizations should evaluate Terraform/OpenTofu-based Landing Zone solutions like the Nuvibit Terraform Collection (NTC) that natively support multi-partition deployment through provider aliasing and shared configuration templates.
Pattern 1 Independent Landing Zone Operations
- Nuvibit Terraform Collection (NTC)
- Landing Zone Accelerator on AWS (LZA)
Figure 5a: Completely independent Landing Zone management
Figure 5b: Completely independent Landing Zone management
This pattern maintains completely independent Landing Zone deployments with separate Infrastructure as Code pipelines, governance processes, and operational procedures for each partition. Each partition operates its own Landing Zone with isolated account vending, baseline configurations, and organizational structures.
Implementation: Deploy separate Landing Zone solutions (Landing Zone Accelerator on AWS, Nuvibit Terraform Collection) in each partition with independent configuration management. Each partition maintains its own organizational unit structures, Service Control Policies, and account baselines without shared components or cross-partition coordination.
Account Vending: Separate account vending processes operate independently in each partition, requiring users to request accounts through partition-specific interfaces. Account baseline configurations may differ between partitions due to service availability differences and independent configuration management.
Infrastructure as Code Approaches:
- Nuvibit Terraform Collection (Terraform/OpenTofu): Separate Terraform configurations with independent state management and deployment pipelines for each partition. Requires careful version management and testing across partitions.
- Landing Zone Accelerator on AWS (CloudFormation): Requires complete duplication of configuration files for each partition. Each partition's CloudFormation stacks operate independently with separate parameter files and organizational mappings.
Benefits: Maximum partition isolation with no cross-partition dependencies, simplified compliance boundaries with clear separation of concerns, independent operational procedures that can accommodate partition-specific constraints, and reduced complexity in governance structures.
Challenges: Code duplication across partition configurations increases maintenance overhead, configuration drift risk without automated synchronization, separate account vending processes create user experience friction, and increased operational overhead for platform teams managing dual environments.
Operational Considerations: Platform teams require expertise in partition-specific operational constraints. Change management processes must coordinate across separate environments to maintain consistency whilst respecting partition boundaries.
Pattern 2 Unified Landing Zone Operations
- Nuvibit Terraform Collection (NTC)
- Landing Zone Accelerator on AWS (LZA)
Figure 6a: Shared configuration with unified deployment pipelines
Landing Zone Accelerator on AWS cannot be deployed in a unified pattern.
LZA maps to a single AWS Organization and requires complete duplication of configurations for multi-partition deployments. Each partition needs its own LZA setup with separate configuration files, organizational structures, and operational procedures.
For unified multi-partition operations, consider Terraform/OpenTofu-based solutions like the Nuvibit Terraform Collection (NTC) that natively support provider aliasing and shared configuration templates.
This pattern uses shared Infrastructure as Code configurations with unified deployment pipelines that target multiple partitions from a single source of truth. Shared templates and configurations minimize code duplication whilst maintaining partition isolation through provider aliasing and conditional logic.
Implementation: Single Infrastructure as Code repository with partition-aware templates that deploy to multiple AWS Organizations simultaneously. Shared organizational unit structures, policy definitions, and baseline configurations adapt to partition-specific service availability through feature flags and conditional resource creation.
Account Vending: Unified account vending interface abstracts partition complexity while routing requests to appropriate partition-specific provisioning logic. Single account request workflow with partition selection capability, shared approval processes, and consistent user experience across partitions.
Infrastructure as Code Approaches:
- Nuvibit Terraform Collection (Terraform/OpenTofu): Native support for multi-partition deployment through provider aliasing, shared module libraries, and conditional resource creation based on partition characteristics.
- Landing Zone Accelerator on AWS (CloudFormation): Not supported. Landing Zone Accelerator maps to a single AWS Organization and cannot deploy across multiple partitions.
Provider Aliasing: Terraform/OpenTofu natively supports multiple AWS provider configurations in a single configuration, enabling deployment to multiple partitions simultaneously from shared templates.
Conditional Logic: Rich conditional expressions and feature flags allow templates to adapt to partition-specific service availability without code duplication.
Module Reusability: Shared module libraries with partition-aware parameters eliminate code duplication whilst maintaining consistency across partitions.
Cross-Partition Resources: Single Terraform/OpenTofu configuration can manage resources across partitions, enabling unified account vending and baseline deployment workflows.
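The provider aliasing described above can be sketched as follows. The module path is hypothetical, credentials for each partition are assumed to be resolved outside the configuration, and the ESC region name assumes the announced eusc-de-east-1 (Brandenburg) region.

```hcl
# Default provider targets the commercial partition
provider "aws" {
  region = "eu-central-1"
}

# Aliased provider targets the ESC partition
provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

# Shared baseline module deployed once per partition from one configuration
module "baseline_commercial" {
  source = "./modules/account-baseline" # hypothetical shared module
}

module "baseline_esc" {
  source = "./modules/account-baseline"
  providers = {
    aws = aws.esc
  }
}
```

Because both module instances consume the same source, a change to the baseline is reviewed once and applied to both partitions in the same pipeline run, which is the drift-prevention property this pattern depends on.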
Benefits: Minimized code duplication through shared configurations reduces maintenance overhead, consistent organizational structures and policies across partitions, unified account vending provides superior user experience, and single source of truth for governance configurations.
Challenges: Increased complexity in template design with partition-aware logic, dependency on Infrastructure as Code tooling that supports multi-partition deployment, potential for cascading failures affecting multiple partitions, and higher skill requirements for platform teams managing shared configurations.
Account Vending Workflow: Users request accounts through a single interface with partition selection based on workload classification and data residency requirements. Conditional logic routes requests to appropriate partition-specific account factories whilst maintaining consistent baseline configurations and approval workflows. Any self-service portal can be easily integrated via GitOps workflows, where account requests trigger git push operations to update the account factory repository, enabling automated provisioning through Infrastructure as Code pipelines.
Figure 7: Self-service account vending implementation with multi-partition support
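The GitOps vending flow described above can be sketched as a version-controlled request map that a reviewed git push extends. Account names, email addresses, and the provider alias are illustrative, and the ESC region name is an assumption.

```hcl
provider "aws" {
  region = "eu-central-1"
}

provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1" # assumed ESC region name
}

# Account requests live in version control; a reviewed git push adds an entry
locals {
  account_requests = {
    "analytics-dev" = { partition = "commercial", email = "aws+analytics-dev@example.com" }
    "payments-prod" = { partition = "esc", email = "aws+payments-prod@example.com" }
  }
}

resource "aws_organizations_account" "commercial" {
  for_each = { for k, v in local.account_requests : k => v if v.partition == "commercial" }
  name     = each.key
  email    = each.value.email
}

resource "aws_organizations_account" "esc" {
  provider = aws.esc
  for_each = { for k, v in local.account_requests : k => v if v.partition == "esc" }
  name     = each.key
  email    = each.value.email
}
```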
Landing Zone Pattern Comparison
| Pattern | Best For | Pros | Cons | Operational Model | Complexity | Governance |
|---|---|---|---|---|---|---|
| 1. Independent Landing Zone Operations | Organizations prioritizing maximum partition isolation, simple governance boundaries, Landing Zone Accelerator on AWS users | ✅ Maximum partition isolation ✅ Independent operational procedures ✅ Clear compliance boundaries ✅ No cross-partition failure risk ✅ Supports any IaC approach | ❌ Code duplication overhead ❌ Configuration drift risk ❌ Separate account vending processes ❌ Increased maintenance burden ❌ User experience friction | Dual Operations Separate teams or procedures per partition | Low-Medium Independent but duplicated | Separated Independent governance per partition |
| 2. Unified Landing Zone Operations | Organizations with Terraform/OpenTofu expertise, unified operational preferences, emphasis on consistency | ✅ Minimized code duplication ✅ Consistent governance structures ✅ Unified account vending ✅ Single source of truth ✅ Reduced operational overhead | ❌ Increased template complexity ❌ Cross-partition failure risk ❌ Higher skill requirements ❌ Limited to Terraform/OpenTofu ❌ Potential cascading issues | Unified Operations Single team managing both partitions | Medium-High Shared configuration logic | Unified Consistent policies across partitions |
Summary:
- Landing Zone management is the foundation of successful multi-partition operations
- Infrastructure as Code is critical for maintaining consistency and preventing configuration drift
- Pattern selection depends on organizational IaC maturity and governance preferences
- Terraform/OpenTofu provides significant advantages over CloudFormation for multi-partition architectures
- Account vending complexity varies significantly between separated and aggregated approaches
9.2 Infrastructure as Code and CI/CD
Managing Infrastructure as Code across partitions requires partition‑aware templates, dynamic resource configuration, and careful handling of partition‑specific constraints. The primary challenge lies in creating reusable configurations that adapt to different partition characteristics without hardcoding partition‑specific values.
Infrastructure as Code: The Foundation for Multi-Partition Consistency
Infrastructure as Code is critical for maintaining consistent configurations across partitions whilst accommodating service availability differences and partition‑specific constraints. Manual configuration management becomes unsustainable when dealing with partition boundaries and service parity gaps.
Preventing Hardcoded Partition Dependencies: Existing IaC code often contains hardcoded ARNs, service endpoints, or partition identifiers that break when deployed across different partitions. Templates must use dynamic resource references and partition‑aware logic to ensure portability.
Partition-Aware Resource Configuration: IaC modules include partition detection logic that adapts resource configurations based on target partition characteristics. Feature flags handle service availability differences, disabling resources or substituting alternatives when services are unavailable in specific partitions.
Dynamic Service Discovery: Rather than hardcoding service endpoints or ARNs, templates use data sources and dynamic references to discover partition‑appropriate resources and endpoints at deployment time.
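Feature-flag gating on partition characteristics might look like the following sketch. The ESC partition identifier used as a map key is an assumption pending official documentation, and the gated SNS topic merely stands in for any resource whose service family differs between partitions.

```hcl
data "aws_partition" "current" {}

variable "partition_features" {
  type = map(object({ codepipeline = bool }))
  default = {
    "aws" = { codepipeline = true }
    # ESC partition identifier below is assumed, not confirmed
    "aws-eusc" = { codepipeline = false }
  }
}

locals {
  features = lookup(
    var.partition_features,
    data.aws_partition.current.partition,
    { codepipeline = false } # safe default for unknown partitions
  )
}

# Only create pipeline-adjacent resources where the service family is available
resource "aws_sns_topic" "pipeline_events" {
  count = local.features.codepipeline ? 1 : 0
  name  = "pipeline-events"
}
```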
Avoiding Hardcoded Partition Dependencies
Existing Infrastructure as Code often contains hardcoded references that break when deployed across different partitions. Common issues include hardcoded ARNs with partition identifiers, service endpoints using commercial AWS domains, and resource references that assume specific partition characteristics.
Terraform Example:
# Use data source for dynamic discovery
data "aws_caller_identity" "current" {}
data "aws_partition" "current" {}
locals {
  partition  = data.aws_partition.current.partition
  account_id = data.aws_caller_identity.current.account_id
}

output "role_arn" {
  # Instead of hardcoded ARN (e.g. arn:aws:iam::123456789012:role/MyRole)
  value = "arn:${local.partition}:iam::${local.account_id}:role/MyRole"
}
CloudFormation Example:
# Use pseudo parameters
Outputs:
  RoleArn:
    # Instead of hardcoded ARN (e.g. arn:aws:iam::123456789012:role/MyRole)
    Value: !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:role/MyRole
Service Endpoints Use Different Domains: AWS European Sovereign Cloud uses *.amazonaws.eu domain endpoints instead of the commercial *.amazonaws.com domains. This affects:
- API Endpoints: Service APIs use ESC-specific endpoints
- S3 Bucket URLs: S3 bucket access URLs use the ESC domain namespace
- CloudFormation Template URLs: Template and artifact references must use appropriate domain endpoints
- Third-Party Tool Configuration: External tools and integrations require ESC-specific endpoint configuration
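Endpoint differences of this kind can be handled in provider configuration. The sketch below shows only STS, whose ESC hostname this paper cites; other *.amazonaws.eu service hostnames should be verified against ESC documentation before use.

```hcl
# Point the provider (and, by extension, SDK-based tooling) at ESC endpoints
provider "aws" {
  region = "eusc-de-east-1"

  endpoints {
    sts = "https://sts.eusc-de-east-1.amazonaws.eu"
  }
}
```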
Terraform AWS Provider Support: The Terraform AWS provider is expected to support ESC from day one, with AWS SDKs already updated for ESC compatibility. Any unexpected behavior can be tracked via GitHub Issue #44437.
CodeBuild and CodePipeline Dependencies: Landing Zone Accelerator on AWS (LZA) and Control Tower Account Factory for Terraform (AFT) require AWS CodeBuild and CodePipeline for their deployment automation and ongoing configuration management.
ESC Launch Service Gap: CodeBuild and CodePipeline are not included in the initial AWS European Sovereign Cloud launch services, which means those solutions cannot currently be deployed in ESC.
Alternative Approach: Terraform/OpenTofu-based solutions like Nuvibit Terraform Collection (NTC) can deploy to ESC immediately using available services and don't have a mandatory dependency on CodeBuild/CodePipeline.
Pattern 1 Individual Pipelines per Partition
- Nuvibit Terraform Collection (NTC)
- Landing Zone Accelerator on AWS (LZA)
Figure 8a: Independent CI/CD pipelines with partition-specific deployment workflows
Figure 8b: Independent CI/CD pipelines with partition-specific deployment workflows
This pattern maintains completely separate CI/CD pipelines for each partition, providing maximum isolation and partition-specific optimisation whilst requiring careful coordination to maintain consistency across deployments.
Implementation: Deploy separate CI/CD infrastructure in each partition or use external CI/CD platforms with partition-specific pipelines. Each pipeline handles source code checkout, testing, artifact creation, and deployment for its respective partition independently. Authentication, service endpoints, and deployment targets remain partition-specific throughout the pipeline execution.
Benefits: Complete pipeline isolation with no cross-partition dependencies, partition-specific optimisation for testing and deployment strategies, clear operational boundaries aligned with compliance requirements, and independent failure domains preventing cascading issues across partitions.
Challenges: Code duplication across pipeline configurations increases maintenance overhead, configuration drift risk between partition-specific pipelines, separate artifact management requiring manual synchronisation, and increased operational complexity for platform teams managing multiple pipeline infrastructures.
Coordination Mechanisms: Version tagging strategies ensure consistent releases across partitions, shared configuration templates minimise pipeline code duplication, cross-partition promotion workflows enable testing in one partition before deploying to another, and unified monitoring provides visibility across all partition-specific pipelines.
Use OpenID Connect (OIDC) for Authentication of External Pipelines: CI/CD pipelines should use OIDC identity providers to authenticate with AWS instead of static access keys. OIDC provides temporary, scoped credentials that improve security and eliminate the need to manage long-lived access keys in pipeline configurations.
ESC OIDC Configuration: For AWS European Sovereign Cloud, ensure your OIDC identity provider is configured with the ESC-specific STS endpoint:
- ESC STS Endpoint: sts.eusc-de-east-1.amazonaws.eu (Brandenburg region)
- Commercial STS Endpoint: sts.amazonaws.com (default)
Implementation: Configure your CI/CD platform (GitHub Actions, GitLab CI, Azure DevOps, etc.) to assume IAM roles via OIDC rather than using stored AWS access keys. This provides automatic credential rotation, improved audit trails, and eliminates credential storage security risks.
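Partition-aware tooling can centralise this endpoint selection rather than hard-coding it in each pipeline. The sketch below reuses the endpoints listed above; the partition keys (`aws`, `aws-eusc`) follow ARN partition naming and are used here for illustration only:

```python
# Minimal sketch: resolve the STS endpoint a pipeline should use per partition.
# The ESC endpoint matches the one listed above; treat the mapping as illustrative.

STS_ENDPOINTS = {
    "aws": "https://sts.amazonaws.com",                     # commercial default
    "aws-eusc": "https://sts.eusc-de-east-1.amazonaws.eu",  # ESC (Brandenburg)
}

def sts_endpoint(partition: str) -> str:
    """Return the STS endpoint for the given AWS partition."""
    try:
        return STS_ENDPOINTS[partition]
    except KeyError:
        raise ValueError(f"Unknown partition: {partition}") from None
```

A boto3 STS client would then be created with `endpoint_url=sts_endpoint("aws-eusc")` before calling `assume_role_with_web_identity` with the OIDC token.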
Pattern 2 Unified Pipelines for Multi-Partition Deployment
- Nuvibit Terraform Collection (NTC)
- Landing Zone Accelerator on AWS (LZA)
Figure 9a: Single pipeline with multi-partition deployment capabilities
Landing Zone Accelerator on AWS cannot be deployed in a unified pattern.
LZA maps to a single AWS Organization and requires complete duplication of configurations for multi-partition deployments. Each partition needs its own LZA setup with separate configuration files, organizational structures, and operational procedures.
For unified multi-partition operations, consider Terraform/OpenTofu-based solutions like the Nuvibit Terraform Collection (NTC) that natively support provider aliasing and shared configuration templates.
This pattern uses a single CI/CD pipeline that can deploy to multiple partitions from shared source configurations, providing operational efficiency and consistency whilst handling partition-specific deployment requirements through conditional logic and feature flags.
Implementation: Single CI/CD pipeline with partition-aware deployment stages that can target multiple AWS partitions based on configuration parameters. Pipeline includes conditional logic to handle service availability differences, partition-specific authentication mechanisms, and environment-specific deployment parameters.
Benefits: Single source of truth for deployment configurations reduces maintenance overhead, consistent deployment processes across all partitions, shared artifact management with unified promotion workflows, and reduced operational complexity through single pipeline management interface.
Challenges: Increased pipeline complexity handling multiple partition scenarios, potential cascading failures affecting multiple partitions simultaneously, higher skill requirements for teams managing multi-partition pipeline logic, and dependency on CI/CD platforms supporting complex conditional deployment workflows.
Partition-Aware Features: Dynamic endpoint configuration based on target partition, conditional resource deployment handling service availability gaps, partition-specific authentication and credential management, and environment-specific testing and validation strategies.
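The conditional resource deployment described above reduces to a simple availability gate. A minimal sketch, in which the availability map is purely illustrative (including the assumed ESC service gap) rather than an authoritative ESC service list:

```python
# Hedged sketch: gate pipeline stages on per-partition service availability.
# The availability sets below are illustrative assumptions, not a real ESC list.

SERVICE_AVAILABILITY = {
    "aws":      {"s3", "lambda", "securityhub", "guardduty", "bedrock"},
    "aws-eusc": {"s3", "lambda", "securityhub", "guardduty"},  # assumed gap
}

def deployable_stages(partition: str, stages: dict) -> list:
    """Return stage names whose required service exists in the target partition."""
    available = SERVICE_AVAILABILITY[partition]
    return [name for name, service in stages.items() if service in available]

# Hypothetical stage-to-service mapping for a unified pipeline:
stages = {"storage": "s3", "functions": "lambda", "genai": "bedrock"}
```

With this gate, the commercial partition deploys all three stages whilst ESC silently skips the stage whose service is unavailable, keeping a single pipeline definition functional across partitions.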
Example: Multi-Partition Configuration with Nuvibit Terraform Collection (NTC)
# --------------------------------------------------------------------------------------------
# ¦ PROVIDER - MULTI-PARTITION
# --------------------------------------------------------------------------------------------
provider "aws" {
  alias  = "aws_frankfurt"
  region = "eu-central-1"

  # OpenID Connect (OIDC) integration
  assume_role_with_web_identity {
    role_arn                = "arn:aws:iam::111111111111:role/oidc-role"
    session_name            = "unified-pipeline"
    web_identity_token_file = "/tmp/web-identity-token"
  }
}

provider "aws" {
  alias  = "aws_eusc_brandenburg"
  region = "eusc-de-east-1"

  # OpenID Connect (OIDC) integration
  assume_role_with_web_identity {
    role_arn                = "arn:aws-eusc:iam::222222222222:role/oidc-role"
    session_name            = "unified-pipeline"
    web_identity_token_file = "/tmp/web-identity-token"
  }
}

# --------------------------------------------------------------------------------------------
# ¦ LOCALS
# --------------------------------------------------------------------------------------------
# Define shared configuration that will be deployed across multiple partitions
locals {
  organizational_unit_paths = [
    "/root/core",
    "/root/sandbox",
    "/root/suspended",
    "/root/transitional",
    "/root/workloads",
    "/root/workloads/prod",
    "/root/workloads/dev",
    "/root/workloads/test"
  ]
}

# --------------------------------------------------------------------------------------------
# ¦ NTC ORGANIZATIONS - COMMERCIAL
# --------------------------------------------------------------------------------------------
module "ntc_organizations" {
  source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-organizations?ref=X.X.X"

  # list of nested (up to 5 levels) organizational units
  organizational_unit_paths = local.organizational_unit_paths

  # additional inputs...

  providers = {
    aws = aws.aws_frankfurt
  }
}

# --------------------------------------------------------------------------------------------
# ¦ NTC ORGANIZATIONS - ESC
# --------------------------------------------------------------------------------------------
module "ntc_organizations_esc" {
  source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-organizations?ref=X.X.X"

  # list of nested (up to 5 levels) organizational units
  organizational_unit_paths = local.organizational_unit_paths

  # additional inputs...

  providers = {
    aws = aws.aws_eusc_brandenburg
  }
}
Multi-Partition CI/CD Pattern Comparison
| Pattern | Best For | Pros | Cons | Operational Model | Complexity | Consistency |
|---|---|---|---|---|---|---|
| 1. Individual Pipelines per Partition | Organizations prioritizing maximum partition isolation, partition-specific deployment requirements, strict compliance boundaries | ✅ Complete pipeline isolation ✅ Partition-specific optimisation ✅ Independent failure domains ✅ Clear operational boundaries ✅ Supports any CI/CD platform | ❌ Pipeline code duplication ❌ Configuration drift risk ❌ Separate artifact management ❌ Increased maintenance overhead ❌ Complex cross-partition coordination | Separated Operations Independent pipeline teams or processes | Medium Duplicated but isolated | Manual Requires coordination mechanisms |
| 2. Unified Pipelines for Multi-Partition | Organizations with advanced CI/CD maturity, emphasis on consistency, unified operational preferences | ✅ Single source of truth ✅ Consistent deployment processes ✅ Shared artifact management ✅ Reduced operational overhead ✅ Unified promotion workflows | ❌ Increased pipeline complexity ❌ Cross-partition failure risk ❌ Higher skill requirements ❌ Platform dependency for multi-partition support ❌ Potential cascading issues | Unified Operations Single team managing multi-partition pipeline | High Complex conditional logic | Automated Built-in consistency mechanisms |
Summary:
- Infrastructure as Code is essential for consistent multi-partition configuration management
- Dynamic resource discovery prevents hardcoded partition dependencies
- ESC introduces endpoint and service availability differences requiring careful handling
- CI/CD pattern selection depends on organizational maturity and isolation requirements
9.3 Identity and Access Management (SSO)
Managing identity and access across partitions requires careful consideration of authentication patterns, permission management, and operational complexity. Organizations face four primary approaches based on their identity infrastructure maturity and external system dependencies.
Infrastructure as Code: The Foundation for Multi-Partition Identity Management
Infrastructure as Code is absolutely critical for managing IAM Identity Center across multiple partitions without configuration drift. Manual console-based configuration of permission sets, account assignments, and user/group mappings quickly becomes operationally unsustainable and introduces significant security risks when scaled across partitions.
Preventing Configuration Drift: IaC templates ensure identical identity configurations deploy across partitions, with version control tracking all permission set changes and automated drift detection identifying deviations between partitions. This eliminates the manual errors that commonly occur when administrators configure identity settings separately in each partition.
Permission Set Consistency: Template-driven permission set deployments guarantee that access controls remain consistent across partitions, enabling security teams to verify permissions through code review rather than partition-by-partition inspection. Policy-as-code approaches embed access requirements directly into infrastructure definitions.
Account Assignment Automation: Automated account assignment workflows ensure users receive consistent access across appropriate partitions based on their role and data classification requirements. Templates can deploy conditional assignments based on partition-specific service availability or compliance requirements.
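As a sketch of such assignment automation, a role-to-partition mapping can be expanded into the concrete assignments an IaC pipeline would create in each partition's IAM Identity Center. All role names, partitions, and permission sets below are hypothetical:

```python
# Illustrative sketch: expand a role definition into per-partition account
# assignments. The mapping and all names are assumptions for illustration.

ROLE_ACCESS = {
    "data-engineer": {"partitions": ["aws", "aws-eusc"], "permission_set": "Developer"},
    "ml-engineer":   {"partitions": ["aws"],             "permission_set": "Developer"},
}

def expand_assignments(user: str, role: str) -> list:
    """Return one assignment record per partition the role is entitled to."""
    spec = ROLE_ACCESS[role]
    return [
        {"user": user, "partition": p, "permission_set": spec["permission_set"]}
        for p in spec["partitions"]
    ]
```

An IaC pipeline would feed these records into partition-specific Identity Center assignment resources, guaranteeing that cross-partition access stays derived from a single role definition.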
Pattern 1 Independent Identity Centers Without External IdP
Figure 10: Independent Identity Centers with separate user stores
Organizations without external identity providers such as Microsoft Entra ID or Okta must maintain separate IAM Identity Center instances with independent user stores across partitions. This approach provides maximum simplicity but introduces significant operational overhead and user experience challenges.
Implementation: Deploy IAM Identity Center in each partition with separate user directories. Users must register separately in each partition, set independent passwords, and configure separate multi-factor authentication devices. Each partition provides distinct access portal URLs requiring separate bookmark management.
User Experience: Users maintain separate credentials for each partition and must remember which applications reside in which partition. Password policies, MFA requirements, and account lifecycle management operate independently.
Operational Considerations: Identity administrators must maintain separate user directories, duplicate password reset procedures, and manage independent MFA device registrations. User onboarding requires separate account creation workflows for each partition where access is required.
Security Implications: Independent MFA devices reduce security correlation across partitions but provide stronger isolation if one partition is compromised. Password complexity and rotation policies can differ between partitions unless explicitly standardized through operational procedures.
Independent identity centers create significant operational challenges:
Duplicate User Management: Every user requiring access to both partitions needs separate account creation, password management, and MFA configuration. This doubles administrative overhead and increases the likelihood of configuration inconsistencies.
Inconsistent Access Patterns: Users may have different permission levels across partitions due to separate assignment processes, creating security risks and compliance challenges. Manual synchronization of access rights becomes error-prone at scale.
Support Fragmentation: Password resets, account lockouts, and MFA issues require partition-specific support procedures. Help desk teams need training on multiple systems and access to separate administrative consoles.
Pattern 2 Identity Centers with External IdP Integration
Figure 11: External IdP integration with SCIM synchronization
External identity providers such as Microsoft Entra ID or Okta serve as the authoritative source for user identities and group memberships across partitions. SCIM (System for Cross-domain Identity Management) protocols synchronize user and group information to IAM Identity Center instances in both partitions.
Implementation: Configure SCIM provisioning from external IdP to IAM Identity Center instances in both partitions. Users and groups synchronize automatically, maintaining consistent identity representation whilst preserving partition isolation. External IdP provides centralized authentication with partition-specific authorization.
User Experience: Users access a centralized application portal (such as myapps.microsoft.com, Okta dashboard, or corporate intranet) containing links to partition-specific AWS access portals. Single sign-on (SSO) provides seamless authentication to both partitions with partition-aware application tiles.
Permission Management: Permission sets replicate across partitions with partition-specific adaptations for service availability differences. Attribute-based access control (ABAC) patterns use user attributes and group memberships to determine access rights across partitions. Role-based access control (RBAC) provides structured permission hierarchies that adapt to partition capabilities.
Group-Based Access Control: External IdP groups control access to specific partitions and permission levels. Groups like "ESC-Developers" and "Commercial-Admins" provide granular control over who can access which partition with what permissions. Group membership changes in the external IdP automatically propagate to appropriate partitions via SCIM.
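The group naming convention above lends itself to mechanical parsing. A minimal sketch, assuming the `Partition-Level` naming shown (the partition identifiers in the mapping are illustrative):

```python
# Sketch: derive (partition, permission level) from IdP group names such as
# "ESC-Developers" or "Commercial-Admins". The prefix-to-partition mapping
# is an assumption based on the naming convention described in the text.

PARTITION_PREFIXES = {"ESC": "aws-eusc", "Commercial": "aws"}

def parse_access_group(group: str) -> tuple:
    """Split an access group name into (partition, permission level)."""
    prefix, _, level = group.partition("-")
    if prefix not in PARTITION_PREFIXES or not level:
        raise ValueError(f"Not a partition access group: {group}")
    return PARTITION_PREFIXES[prefix], level
```

Automation consuming SCIM group memberships can use such parsing to route assignments to the correct partition's Identity Center without maintaining a separate per-group configuration table.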
Pattern 3 External IdP with Just-in-Time Elevated Access
Figure 12: Just-in-time elevated access through external IdP automation
Organizations requiring temporary elevated permissions can implement just-in-time (JIT) access patterns through external IdP automation. This approach provides time-bound administrative access with appropriate approval workflows and audit trails across partitions.
Implementation: Custom automation workflows temporarily add users to privileged groups in the external IdP, triggering SCIM synchronization to grant elevated permissions in target partitions. Approval workflows gate access requests with time-bound assignments that automatically expire. Integration with ticketing systems provides audit trails and business justification.
Workflow Process: Users request elevated access through self-service portals or API integrations, specifying justification, duration, and target partition(s). Approval workflows route requests to appropriate managers or security teams. Upon approval, automation temporarily adds users to privileged IdP groups, enabling elevated access that automatically expires at the specified time.
Cross-Partition Coordination: Automation workflows can grant elevated access across multiple partitions simultaneously or independently based on request requirements. Permission elevation in ESC may require separate approval workflows due to sovereignty requirements, whilst commercial partition access follows standard procedures.
Audit and Compliance: All elevation requests, approvals, and access grants generate comprehensive audit logs across external IdP and AWS CloudTrail. Time-bound access ensures privileged permissions don't persist beyond business requirements, reducing security exposure and simplifying compliance reporting.
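The time-bound grants described above reduce to a small amount of bookkeeping in the elevation automation. A minimal sketch, with hypothetical group names:

```python
# Minimal sketch of time-bound elevation records: each grant carries an explicit
# expiry so a revocation job can remove the IdP group membership when it lapses.

from datetime import datetime, timedelta, timezone

def grant_elevation(user: str, group: str, hours: int, now: datetime) -> dict:
    """Record a temporary group membership with a hard expiry."""
    return {"user": user, "group": group, "expires_at": now + timedelta(hours=hours)}

def is_active(grant: dict, now: datetime) -> bool:
    """A scheduled revocation job removes grants for which this returns False."""
    return now < grant["expires_at"]

start = datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc)
grant = grant_elevation("alice", "ESC-Admins", hours=4, now=start)
```

In practice the expiry check would run on a schedule (or via IdP lifecycle policies where supported), with each grant and revocation event written to the audit trail.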
AWS TEAM (Temporary Elevated Access Management) is an AWS-published open-source solution for just-in-time access management with IAM Identity Center. However, AWS TEAM does not support multi-partition environments and must be deployed independently in each partition where elevated access is required.
Multi-Partition Implications: Organizations using AWS TEAM must:
- Deploy separate TEAM instances per partition with independent workflows
- Maintain separate approval processes and audit trails
- Train administrators on partition-specific TEAM operations
- Accept operational overhead of managing multiple TEAM deployments
Alternative Approaches: External IdP automation or third-party JIT solutions provide better multi-partition support with unified workflows and centralized audit trails.
Pattern 4 Third-Party Just-in-Time Access Tools
Figure 13: Third-party just-in-time access management platform
Specialized third-party tools like Apono, CyberArk, Tenable, or Entitle provide advanced just-in-time access management with single entry points and sophisticated approval workflows. These platforms offer unified access request interfaces whilst handling partition-specific integration requirements.
Implementation: Deploy third-party JIT platforms with custom integrations to each AWS partition. Platforms provide centralized request interfaces with approval workflows that can route to partition-specific approvers. Integration APIs handle permission elevation across multiple partitions through a single user interface.
Advanced Features: Third-party tools often provide sophisticated features including risk-based access decisions, session recording, privileged account discovery, and automated access reviews. Integration with identity governance platforms enables comprehensive access certification and compliance reporting.
Single Pane of Glass: Users submit access requests through unified interfaces without needing to understand partition boundaries. Administrative teams manage approval workflows, policies, and compliance reporting through centralized dashboards that span multiple AWS partitions and cloud providers.
Custom Integration Requirements: Each third-party tool requires custom integration development to support multiple AWS partitions effectively. Integration complexity varies significantly between platforms, with some providing built-in multi-cloud support whilst others require extensive customization.
ESC Compatibility: Third-party identity and access management tools are expected to support AWS European Sovereign Cloud following general availability, but support is not guaranteed from day zero.
Expected Challenges:
- Integration Development: Vendors need time to develop and test ESC-specific integrations
- API Compatibility: ESC-specific endpoints and authentication patterns may require platform updates
- Feature Parity: Initial ESC support may lack feature parity with commercial AWS integrations
- Certification Timelines: Security certifications and compliance validations may delay ESC support
Planning Recommendations: Organizations should engage with identity platform vendors early to understand ESC roadmaps and support timelines. Plan for potential delays or limitations in third-party tool support during ESC early adoption phases.
Multi-Partition Identity Pattern Comparison
| Pattern | Best For | Pros | Cons | User Experience | Complexity | Cost Profile |
|---|---|---|---|---|---|---|
| 1. Independent Identity Centers | Small organizations without external IdP infrastructure, maximum partition isolation requirements | ✅ Maximum partition isolation ✅ No external dependencies ✅ Simple initial setup ✅ AWS-native authentication ✅ No additional licensing | ❌ Duplicate user management overhead ❌ Inconsistent access patterns ❌ Multiple credential sets per user ❌ Fragmented support procedures ❌ Manual synchronization required | Poor Multiple logins, passwords, MFA devices | Medium Dual administration | Low No additional licensing costs; High operational overhead |
| 2. External IdP Integration | Organizations with existing IdP infrastructure, need for consistent identity management, SCIM provisioning capabilities | ✅ Centralized identity management ✅ Automatic user/group synchronization ✅ Consistent permission models ✅ Single authentication source ✅ Familiar user experience | ❌ External IdP dependency ❌ SCIM configuration complexity ❌ Potential sync delays ❌ Additional licensing costs ❌ Permission set replication overhead | Good Centralized portal with partition links | Medium SCIM configuration & maintenance | Medium External IdP licensing; SCIM provisioning costs |
| 3. External IdP JIT Access | Organizations requiring temporary elevated permissions, strong audit requirements, time-bound administrative access | ✅ Time-bound elevated access ✅ Comprehensive audit trails ✅ Automated approval workflows ✅ Centralized request interface ✅ Cross-partition coordination | ❌ Custom automation development ❌ Complex approval workflow design ❌ External IdP dependency ❌ Potential sync delays for elevation ❌ Limited to IdP group-based permissions | Good Self-service elevation requests | High Custom automation & workflows | Medium-High External IdP licensing; Custom development costs |
| 4. Third-Party JIT Tools | Large enterprises with complex access requirements, advanced audit needs, multi-cloud environments | ✅ Advanced JIT capabilities ✅ Unified access request interface ✅ Sophisticated approval workflows ✅ Session recording & monitoring ✅ Multi-cloud platform support | ❌ Additional platform licensing ❌ Custom integration development ❌ Vendor dependency ❌ Potential ESC support delays ❌ Complex platform management | Excellent Single interface for all access requests | High Platform integration & management | High Third-party platform licensing; Integration development costs |
Summary:
- Identity management complexity increases significantly across partitions without external IdP integration
- External IdP integration provides the best balance of user experience and operational efficiency
- Just-in-time access patterns require careful design but significantly improve security posture
- Infrastructure as Code is essential for maintaining consistent identity configurations across partitions
- Third-party tool support for ESC may be delayed, requiring fallback planning for early adoption
9.4 Security and Compliance
Maintaining consistent security posture across partitions whilst respecting sovereignty boundaries requires partition‑aware security architectures and compliant telemetry handling. Organizations face three primary approaches based on their operational preferences, sovereignty requirements, and existing security infrastructure investments.
Infrastructure as Code: The Foundation for Multi-Partition Security Management
Infrastructure as Code is absolutely critical for achieving multi-partition security and compliance parity whilst preventing configuration drift. Without IaC, maintaining consistent security postures across partitions becomes operationally unsustainable and introduces significant compliance risks.
Preventing Configuration Drift: IaC templates ensure identical security configurations deploy across partitions, with version control tracking all changes and automated drift detection identifying deviations. This eliminates the manual configuration errors that commonly occur when managing multiple environments through console-based administration.
Compliance Parity Assurance: Template-driven deployments guarantee that compliance controls remain consistent across partitions, enabling auditors to verify control implementation through code review rather than environment-by-environment inspection. Policy-as-code approaches embed compliance requirements directly into infrastructure definitions.
Partition-Aware Templates: Use conditional logic and feature flags to adapt to service availability differences between partitions. Templates can deploy alternative implementations based on the current partition, ensuring functional equivalence despite service parity gaps.
Guardrails Consistency: Shared guardrails include fundamental security controls such as root account restrictions, encryption requirements, network access limitations, and audit logging mandates. Partition-specific guardrails adapt to service availability, implementing alternative controls when services are unavailable or configuring partition‑aware resource restrictions.
Figure 14: Guardrails distribution across partitions
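The shared-plus-partition-specific guardrail composition can be sketched as a simple merge; the control names below are illustrative placeholders, not real SCP identifiers:

```python
# Sketch: every partition receives the shared baseline, plus partition-specific
# controls compensating for service gaps. All control names are illustrative.

SHARED_GUARDRAILS = [
    "deny-root-access-keys",
    "require-encryption",
    "require-cloudtrail",
]

PARTITION_GUARDRAILS = {
    "aws":      ["restrict-to-eu-regions"],                  # assumed example
    "aws-eusc": ["deny-unavailable-service-fallbacks"],      # assumed compensating control
}

def guardrails_for(partition: str) -> list:
    """Compose the full guardrail set for one partition."""
    return SHARED_GUARDRAILS + PARTITION_GUARDRAILS.get(partition, [])
```

An IaC template would iterate this composed list to attach the corresponding SCPs or Config rules in each partition, keeping the shared baseline provably identical.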
Pattern 1 Separate Native AWS Security Operations
Figure 15: Independent security operations with partition-specific teams
Operating Security Hub independently in each partition provides native AWS integration whilst maintaining clear sovereignty boundaries. Each partition operates its own security dashboard with partition-specific alerting and response procedures, ensuring ESC security events remain within EU governance domains.
Implementation: Deploy Security Hub, GuardDuty, Config, and other security services in both partitions with separate configuration and alerting. Security operations teams maintain partition-specific playbooks and escalation procedures, with alerts routing to appropriate regional or compliance-specific response teams.
Operational Model: Dedicated security teams or team members focus on specific partitions, developing deep expertise in partition-specific service availability and operational constraints. Incident response procedures accommodate AWS operational differences between partitions, particularly ESC's EU-resident support model.
Benefits: Complete sovereignty compliance with no cross-partition data movement, native AWS tooling integration without additional platform costs, clear operational boundaries aligned with compliance requirements, and simplified compliance auditing with partition-specific evidence trails.
Dashboard Management: Separate CloudWatch dashboards, Security Hub findings, and compliance reports maintain clear partition boundaries whilst requiring security analysts to monitor multiple interfaces. Custom tagging strategies help maintain context when switching between partition-specific tools.
Pattern 2 Cross-Partition Security Hub Forwarding
Figure 16: Centralised Security Hub with cross-partition finding forwarding
Designating one partition's Security Hub as the primary aggregation point enables centralised security operations whilst leveraging native AWS tooling. Lambda functions forward security findings from the secondary partition to the primary Security Hub, providing a single pane of glass within AWS native infrastructure.
Implementation: Deploy Security Hub in both partitions but designate one as the primary aggregation point. Lambda functions in the secondary partition consume Security Hub findings and forward them to the primary partition via APIs. Custom tagging and metadata preserve finding origin and partition context.
Forwarding Architecture: Event-driven Lambda functions trigger on Security Hub finding updates, transforming and enriching findings with partition context before forwarding via cross-partition API calls. Custom finding formats maintain traceability whilst enabling unified reporting and dashboards.
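The transform step of such a forwarding function might look like the sketch below, which enriches a finding with origin metadata before the cross-partition API call. Keys beyond the standard ASFF `Id` and `ProductFields` conventions are assumptions:

```python
# Hedged sketch of the forwarding Lambda's transform step: copy a Security Hub
# finding and tag it with its source partition and region for traceability.
# The "SourcePartition"/"SourceRegion" field names are illustrative assumptions.

def enrich_finding(finding: dict, source_partition: str, source_region: str) -> dict:
    """Return a copy of the finding with origin metadata in ProductFields."""
    enriched = dict(finding)
    product_fields = dict(enriched.get("ProductFields", {}))
    product_fields.update({
        "SourcePartition": source_partition,  # preserve origin across partitions
        "SourceRegion": source_region,
    })
    enriched["ProductFields"] = product_fields
    return enriched
```

A filtering step could be applied before this transform to exclude sensitive ESC findings from forwarding, in line with the data residency considerations discussed for this pattern.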
Benefits: Native AWS tooling throughout the security stack, centralised security operations reducing operational overhead, familiar Security Hub interface for security analysts, and reduced third-party licensing costs compared to external platforms.
Data Residency Considerations: Careful evaluation required when forwarding ESC security findings to commercial partition Security Hub, as this may violate sovereignty requirements depending on organisational compliance obligations. Consider implementing filtering to exclude sensitive ESC findings from cross-partition forwarding.
Pattern 3 Third-Party Security Aggregation Platforms
Figure 17: Third-party security aggregation with unified dashboards
External security platforms like Wiz.io, Splunk, QRadar, PagerDuty, and Datadog provide unified security dashboards by aggregating findings from both partitions into a single pane of glass. These platforms offer advanced correlation capabilities, custom alerting, and comprehensive reporting that spans partition boundaries.
Implementation: Deploy security services (Security Hub, GuardDuty, Config) in both partitions with API integrations forwarding findings to external platforms. Configure data pipelines to aggregate security telemetry whilst maintaining appropriate metadata for partition identification and compliance tracking.
Benefits: Unified security visibility across all environments, advanced correlation and threat hunting capabilities, custom dashboards tailored to organisational needs, and integration with existing security operations workflows. Many platforms provide pre-built connectors for AWS services and can normalise findings across different security tools.
Advanced Features: Third-party platforms often provide sophisticated capabilities including machine learning-based anomaly detection, automated incident response workflows, cross-cloud security posture management, and integrated vulnerability assessment that spans multiple cloud providers and on-premises environments.
Data Residency Considerations: Organizations must carefully evaluate data residency requirements, particularly for ESC-originated security telemetry. Assessment of whether external platforms can meet sovereignty requirements or if telemetry aggregation violates compliance obligations is essential for ESC adoption.
ESC Compatibility: Third-party security platform support for AWS European Sovereign Cloud (ESC) is expected but not yet guaranteed. Platform vendors are anticipated to develop ESC integrations following general availability, but support timelines, feature parity, and compliance certifications may vary.
Planning Recommendations: Organizations should verify ESC compatibility with their chosen security platforms and plan for potential integration delays or feature limitations during ESC early adoption phases.
Multi-Partition Security Pattern Comparison
| Pattern | Best For | Pros | Cons | Sovereignty | Complexity | Cost Profile |
|---|---|---|---|---|---|---|
| 1. Separate Native AWS Operations | Organizations with strict sovereignty requirements, simplified compliance auditing, partition-specific SOC teams | ✅ Complete sovereignty compliance ✅ No cross-partition data movement ✅ Native AWS tooling integration ✅ Clear operational boundaries ✅ Simplified compliance auditing | ❌ Operational overhead of dual dashboards ❌ Potential blind spots in attack correlation ❌ Duplicate security tool costs ❌ Increased training requirements ❌ No unified security view | High Complete partition isolation | Medium Dual operations model | Medium Duplicate AWS security service costs; No additional platform fees |
| 2. Cross-Partition Security Hub Forwarding | Organizations prioritizing centralized operations, cost optimization, AWS-native security stacks | ✅ Centralized security operations ✅ Native AWS tooling throughout ✅ Familiar Security Hub interface ✅ Reduced third-party costs ✅ Single pane of glass | ❌ Data residency concerns for ESC findings ❌ Lambda execution & API costs ❌ Non-seamless investigation experience ❌ Synchronization complexity ❌ Potential finding duplication | Low-Medium ESC findings forwarded to commercial | Medium Custom forwarding logic | Low-Medium Lambda execution costs; Cross-partition API calls; Single AWS security service deployment |
| 3. Third-Party Security Platforms | Organizations with existing SIEM investments, advanced threat hunting requirements, complex multi-cloud environments | ✅ Unified security visibility ✅ Advanced correlation capabilities ✅ Custom dashboards & alerting ✅ Integration with existing SOC workflows ✅ Multi-cloud platform support | ❌ Additional licensing costs ❌ Data residency compliance risks ❌ External platform dependencies ❌ Integration complexity ❌ Potential vendor lock-in | Medium May require careful ESC telemetry handling | Medium-High Platform integration & management | Medium-High Platform licensing scales with data volume; API & log forwarding costs |
Summary:
- Security patterns range from complete partition separation to unified aggregation with varying sovereignty trade-offs
- Infrastructure as Code is essential for maintaining consistent security configurations across partitions
- Native AWS security operations provide sovereignty compliance but require separate management overhead
- Third-party platforms provide advanced capabilities but require careful data residency evaluation
9.5 Connectivity
Cross‑partition connectivity represents one of the most significant technical challenges, as AWS provides no native mechanisms for VPC peering, Transit Gateway connectivity, or PrivateLink between partitions. Traditional AWS networking patterns cannot span partition boundaries, forcing organizations to implement alternative connectivity approaches that balance security, performance, and operational complexity.
The connectivity patterns for cross‑partition architectures mirror those used in multi‑cloud and hybrid on‑premises environments. Organizations with experience connecting AWS to other cloud providers (Azure, Google Cloud) or integrating cloud workloads with on‑premises data centres will find familiar architectural patterns and trade‑offs. The same considerations around latency, security, cost, and operational complexity that apply to multi‑cloud connectivity also apply to multi‑partition scenarios.
Three viable patterns address cross‑partition connectivity requirements, each with distinct trade‑offs regarding security, performance, cost, and operational complexity. The selection depends on application requirements, existing infrastructure, security posture, and performance needs.
Backbone Network Architecture: Traffic routing behaviour varies by partition due to separate backbone infrastructures:
- Commercial ↔ GovCloud (US): Traffic between these partitions remains on AWS's backbone network and does not traverse the public internet, as both partitions share the same global AWS backbone infrastructure.
- ESC ↔ Commercial/GovCloud: ESC operates with a completely separate backbone infrastructure (similar to AWS China), meaning cross-partition traffic between ESC and other partitions will traverse the public internet rather than AWS's private backbone network. This separation reinforces ESC's sovereignty model but affects network performance and security considerations for cross-partition connectivity.
- China ↔ Any Partition: AWS China operates with a separate backbone, so all traffic to/from China regions traverses the public internet.
This backbone separation is fundamental to ESC's sovereignty architecture, ensuring complete infrastructure independence whilst requiring careful planning for cross-partition connectivity patterns.
Pattern 1 Connectivity over the Internet using Native TLS Encryption
Figure 18: TLS encryption at the application layer between partitions using Application Load Balancers (ALBs)
This pattern leverages application‑native TLS encryption for cross‑partition communication through managed AWS services that provide secure external endpoints. Rather than exposing backend services directly, this approach uses Application Load Balancers (ALBs) or API Gateways as secure intermediaries that handle TLS termination and traffic routing to private backend services.
Implementation: Deploy backend services in private subnets within both partitions, configure Application Load Balancers or API Gateways with TLS certificates to provide secure public endpoints, and establish cross‑partition communication through these managed service endpoints. Backend services remain private and communicate only with the load balancers or API Gateways within their respective partitions.
Security Architecture: Backend compute resources (EC2, ECS, EKS) operate in private subnets without direct internet access. ALBs or API Gateways serve as the only public‑facing components, handling TLS termination, authentication, and request routing. Security groups restrict backend access to only the load balancer or API Gateway security groups, ensuring no direct public exposure of application services.
Service Options:
- Application Load Balancer (ALB): Ideal for HTTP/HTTPS traffic with advanced routing capabilities, health checks, and integration with AWS Certificate Manager for automatic TLS certificate management
- API Gateway: Optimal for REST APIs, providing built‑in authentication, throttling, caching, and request/response transformation capabilities
- Network Load Balancer (NLB): Suitable for non‑HTTP protocols requiring high performance and low latency
Security Considerations: Organizations remain responsible for ensuring encryption‑in‑transit modules and algorithms meet compliance requirements. Implement mutual TLS authentication where supported, use AWS Certificate Manager for certificate lifecycle management, and configure appropriate security group rules to restrict access to authorised cross‑partition communication only.
Improper security group configuration or load balancer settings can inadvertently expose backend services directly to the public internet, allowing unintended public-facing traffic rather than the intended secure cross‑partition communication. Implement defence‑in‑depth controls including restrictive security groups that only allow traffic from load balancers, regular security audits of public‑facing endpoints, and automated compliance scanning to detect and alert on unintended public exposure. Use AWS Config rules and Security Hub findings to continuously monitor for compliance violations.
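One practical defence-in-depth control is to express the backend ingress rule as a security-group reference rather than a CIDR range, so backends can never be reached directly from the internet. The sketch below builds the parameters you might pass to boto3's `ec2.authorize_security_group_ingress`; the security group IDs are hypothetical placeholders.

```python
def backend_ingress_from_alb(backend_sg_id: str, alb_sg_id: str, port: int = 443) -> dict:
    """Build authorize_security_group_ingress kwargs that admit traffic to the
    backend only from the load balancer's security group, never from a CIDR."""
    return {
        "GroupId": backend_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing the ALB security group (instead of an IpRanges CIDR)
            # keeps backends unreachable from the public internet even if
            # subnet routing is later misconfigured.
            "UserIdGroupPairs": [{
                "GroupId": alb_sg_id,
                "Description": "ALB-only ingress for cross-partition traffic",
            }],
        }],
    }

params = backend_ingress_from_alb("sg-backend123", "sg-alb456")
# ec2 = boto3.client("ec2"); ec2.authorize_security_group_ingress(**params)
```

An AWS Config rule or Security Hub check can then alert whenever a backend security group contains any `IpRanges`-based ingress, catching drift from this pattern.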
Pattern 2 Connectivity using IPSec AWS Site‑to‑Site VPN over the Internet
Figure 19: Site‑to‑Site VPN between partitions using Transit Gateway and self‑managed EC2‑based appliances
AWS Site‑to‑Site VPN uses Internet Protocol Security (IPSec) to create encrypted tunnels between partitions, providing network‑layer encryption for applications that cannot support application‑layer encryption or require private connectivity without internet exposure.
Implementation: Deploy third‑party virtual appliances (such as Cisco CSR, Palo Alto Networks VM‑Series, Fortinet FortiGate, or open‑source solutions like StrongSwan) in one or both partitions. Configure IPSec tunnels between appliances, with Transit Gateway providing connectivity within each partition. Virtual appliances handle tunnel establishment, encryption, routing, and can provide additional security services including deep packet inspection, intrusion detection/prevention, and application-layer filtering.
Security Enhancement: Modern firewall appliances combine VPN capabilities with comprehensive traffic inspection, enabling organizations to implement unified security policies across partition boundaries. These appliances can perform application identification, threat detection, content filtering, and compliance logging whilst maintaining encrypted connectivity between partitions.
Performance: Maximum bandwidth per VPN tunnel is 1.25 Gbps. Enable Equal‑Cost Multi‑Path (ECMP) routing to aggregate throughput across multiple VPN connections and scale beyond the per‑tunnel limit. This approach achieves higher aggregate bandwidth but introduces additional routing complexity.
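ECMP is an opt-in Transit Gateway setting that must be enabled at creation time. A minimal sketch of the parameters you might pass to boto3's `ec2.create_transit_gateway` (the description and ASN are illustrative):

```python
def ecmp_transit_gateway_kwargs(description: str, asn: int = 64512) -> dict:
    """Build create_transit_gateway kwargs with VPN ECMP enabled, so multiple
    VPN tunnels advertising the same prefixes are load-balanced and aggregate
    bandwidth can exceed the 1.25 Gbps per-tunnel limit."""
    return {
        "Description": description,
        "Options": {
            "AmazonSideAsn": asn,
            "VpnEcmpSupport": "enable",  # required for multi-tunnel aggregation
            "DefaultRouteTableAssociation": "enable",
            "DefaultRouteTablePropagation": "enable",
        },
    }

kwargs = ecmp_transit_gateway_kwargs("cross-partition-vpn-hub")
# ec2 = boto3.client("ec2"); ec2.create_transit_gateway(**kwargs)
```

Each VPN connection attached to this Transit Gateway must advertise identical prefixes with matching BGP attributes for ECMP to balance across them.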
Security Requirements: Ensure IPSec implementations use NIST‑approved algorithms and FIPS‑validated cryptographic modules to meet compliance requirements. Configure strong authentication and encryption policies appropriate for your security posture.
Pattern 2 introduces operational challenges that require careful planning and ongoing management:
VPC CIDR Overlap Constraints: VPC CIDR ranges cannot overlap across partitions when using VPN connectivity. This requires careful IP address space planning between partitions to prevent address conflicts. Consider implementing a centralized IPAM (IP Address Management) strategy to manage address allocation across both partitions.
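A simple pre-deployment check can catch overlapping allocations before they become routing conflicts. This stdlib-only sketch (the example CIDRs are hypothetical) flags any ESC VPC range that collides with a commercial-partition range:

```python
import ipaddress

def find_cidr_conflicts(esc_cidrs, commercial_cidrs):
    """Return (esc, commercial) pairs of VPC CIDRs that overlap across
    partitions and would break VPN routing between them."""
    conflicts = []
    for a in esc_cidrs:
        for b in commercial_cidrs:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                conflicts.append((a, b))
    return conflicts

# 10.1.0.0/16 (ESC) collides with 10.1.128.0/17 (commercial):
find_cidr_conflicts(["10.0.0.0/16", "10.1.0.0/16"],
                    ["10.1.128.0/17", "10.2.0.0/16"])
# → [("10.1.0.0/16", "10.1.128.0/17")]
```

In practice the CIDR lists would be fed from your IPAM inventory, and the check wired into the IaC pipeline that provisions VPCs in either partition.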
Third-Party Appliance Dependencies: VPN connectivity relies on third-party virtual appliances (EC2 instances) that introduce operational overhead including regular patching, security updates, monitoring, and high availability configuration. If these appliances fail or require maintenance, cross-partition connectivity is disrupted, potentially affecting critical business operations.
Single Points of Failure: Without proper redundancy design, VPN appliances can become single points of failure for cross-partition communication. Implement active-passive or active-active clustering with health monitoring and automated failover to maintain connectivity during appliance failures or maintenance windows.
Pattern 3 Connectivity through Direct Connect
This hub‑and‑spoke model routes cross‑partition traffic through a customer gateway infrastructure connected via AWS Direct Connect, enabling centralised security controls and traffic inspection. The pattern accommodates two primary deployment scenarios based on existing infrastructure.
Option A: On‑Premises Datacenter Gateway
Figure 20: Connectivity through on‑premises datacenter gateway infrastructure
For organizations with existing datacenter infrastructure, establish Direct Connect connections from both partitions to the on‑premises datacenter. Deploy firewall and routing appliances in the datacenter to control cross‑partition communication.
- Implementation: Connect each partition to on‑premises datacenter using AWS Direct Connect transit virtual interfaces (VIFs)
- Gateway Infrastructure: Customer‑managed routing and security appliances in physical datacenter
- Benefits: Leverages existing datacenter investments and operational expertise
- Considerations: Requires existing datacenter infrastructure and operational teams
Option B: Cloud Exchange Provider Gateway
Figure 21: Connectivity through cloud exchange provider gateway infrastructure
For organizations without datacenter infrastructure, establish a customer gateway at cloud exchange providers (Equinix, Digital Realty, etc.) with Direct Connect connectivity to both partitions.
- Implementation: Deploy virtual customer gateway infrastructure at cloud exchange facilities with Direct Connect connections
- Gateway Infrastructure: Virtual appliances (firewalls, routers) hosted at provider facilities
- Benefits: No datacenter infrastructure required, provider‑managed physical infrastructure
- Considerations: Recurring costs for provider space, power, and connectivity
Direct Connect Architecture: Direct Connect gateways are global constructs that enable connectivity to any AWS Region but do not propagate BGP prefixes between associated Transit Gateways, preventing direct east‑west traffic. Communication instead occurs through advertised supernets that encompass the VPC address spaces.
Routing Behaviour: VPCs can communicate through Direct Connect endpoints when supernets (such as 10.0.0.0/8) are advertised that encompass the VPC address ranges. This enables controlled inter‑VPC communication through customer gateway infrastructure.
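Because only the advertised supernet (not individual VPC prefixes) is visible across the Direct Connect gateway, it is worth verifying that every VPC you expect to reach actually falls inside that supernet. A stdlib-only sketch with illustrative CIDRs:

```python
import ipaddress

def reachable_via_supernet(supernet: str, vpc_cidrs) -> dict:
    """Check which VPC CIDRs fall inside the supernet advertised over the
    Direct Connect gateway. BGP prefixes are not propagated between the
    associated Transit Gateways, so only ranges covered by the advertised
    supernet are reachable through the customer gateway."""
    net = ipaddress.ip_network(supernet)
    return {cidr: ipaddress.ip_network(cidr).subnet_of(net) for cidr in vpc_cidrs}

reachable_via_supernet("10.0.0.0/8",
                       ["10.10.0.0/16", "10.200.0.0/16", "172.16.0.0/16"])
# → the two 10.x ranges are covered; 172.16.0.0/16 is not
```

A VPC outside the supernet (here `172.16.0.0/16`) would silently lose cross-partition reachability, so this check belongs alongside the CIDR allocation process.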
Performance Impact: Hair‑pinning traffic through customer gateway infrastructure introduces additional latency, data transfer out (DTO) costs, and Transit Gateway data transfer charges. Network performance depends on gateway infrastructure capacity and Direct Connect location proximity.
ESC Direct Connect Availability: AWS European Sovereign Cloud will launch with Direct Connect support, but the specific Direct Connect locations have not yet been announced.
Impact on Cross-Partition Connectivity: Organizations planning to connect ESC and commercial partitions via Direct Connect (Pattern 3) should anticipate:
- New Cross-Connect Requirements: ESC may launch at different Direct Connect facilities than existing commercial AWS locations, requiring additional cross-connects between facilities
- Increased Infrastructure Costs: Cross-connects between different Direct Connect locations typically incur monthly recurring charges and installation fees
- Extended Lead Times: Establishing new cross-connects can take weeks or months depending on facility availability and carrier provisioning
- Geographic Considerations: ESC Direct Connect locations may prioritize EU facilities, potentially affecting latency and routing optimization for global organizations
Planning Recommendations: Organizations should budget for potential additional cross-connect costs and extended deployment timelines when designing Direct Connect-based cross-partition connectivity. Consider engaging AWS sales teams early to understand ESC Direct Connect location planning and availability timelines.
Pattern 3 introduces infrastructure and operational dependencies that require careful consideration:
VPC CIDR Planning Constraints: Direct Connect requires that VPC CIDR ranges do not overlap with on-premises networks (for Option A) and across partitions. This demands comprehensive IP address space planning between on-premises and partitions to prevent routing conflicts and ensure proper traffic flow.
Direct Connect Redundancy Requirements: Achieving high availability requires redundant Direct Connect connections across multiple locations and diverse paths to avoid single points of failure. Organizations must establish multiple Direct Connect connections at different facilities, implement diverse network paths, and ensure proper BGP routing configurations. This significantly increases infrastructure complexity and costs, as organizations need multiple Direct Connect ports, cross-connects, and carrier relationships to achieve enterprise-grade redundancy.
Customer Gateway Management: Success depends on properly configured, maintained, and managed customer gateway infrastructure (routers, firewalls). These gateways require regular firmware updates, security patching, configuration management, and 24/7 monitoring. Gateway failures directly impact cross-partition connectivity, making robust operational procedures and redundancy planning essential.
Security Considerations Across All Patterns
All cross‑partition connectivity patterns require careful security implementation:
- Split‑horizon DNS: Ensure appropriate name resolution per partition to prevent misrouted traffic
- Egress controls: Implement strict egress filtering to prevent unintended cross‑partition communication
- TLS policy enforcement: Mandate TLS encryption for all cross‑partition application traffic regardless of network‑layer encryption
- Network segmentation: Use security groups and NACLs to restrict traffic to authorised flows only
- Monitoring and logging: Implement comprehensive flow logging and traffic analysis for all cross‑partition communication
- Certificate management: Establish robust PKI for TLS certificates and VPN authentication
Performance and Cost Optimisation
Optimise cross‑partition connectivity for performance and cost:
- Latency optimisation: Select connectivity patterns and routing that minimise additional latency
- Bandwidth planning: Size connections and appliances appropriately for expected traffic volumes
- Caching strategies: Implement intelligent caching to reduce cross‑partition traffic
- Data transfer optimisation: Use compression and efficient data formats to minimise bandwidth consumption
- Cost monitoring: Track data transfer costs and optimise traffic patterns based on usage analysis
Cross-Partition Connectivity Pattern Comparison
| Pattern | Best For | Pros | Cons | Performance | Complexity | Cost Profile |
|---|---|---|---|---|---|---|
| 1. TLS over Internet | Modern applications with native TLS support, API-based integrations, microservices | ✅ Excellent performance ✅ AWS best practices ✅ Simple implementation ✅ No additional infrastructure | ❌ Requires application-level TLS ❌ Not suitable for legacy apps ❌ Public endpoint required ❌ No centralized traffic inspection | High Minimal latency | Low Standard ALB/API Gateway | Low-Medium ALB/API Gateway + NAT Gateway fees; Internet data transfer rates |
| 2. IPSec VPN | Legacy applications, network-layer encryption requirements, centralized security policies within AWS partitions | ✅ Works with any application ✅ Network-layer encryption ✅ Centralized security at VPN appliances ✅ Predictable performance ✅ Familiar VPN patterns | ❌ 1.25 Gbps per tunnel limit ❌ Higher latency ❌ Appliance management overhead ❌ Costs scale with bandwidth | Medium VPN tunnel overhead | Medium Virtual appliance management | Medium-High VPN connection fees scale linearly; Virtual appliance costs; Higher data transfer rates |
| 3. Direct Connect | Centralized security controls at customer premises, highest bandwidth requirements, existing datacenter integration | ✅ Centralized security at customer gateway ✅ Highest bandwidth potential ✅ Works with/without datacenters ✅ Predictable performance ✅ Long-term data transfer savings | ❌ Higher latency (hair-pinning) ❌ High initial investment ❌ Gateway infrastructure dependency ❌ Complex redundancy requirements | Medium Varies by infrastructure | High Gateway infrastructure + redundancy | High initial, Low ongoing Direct Connect port fees; Gateway infrastructure costs; Lower data transfer rates at scale |
Summary:
- Three distinct patterns address cross‑partition connectivity with different trade‑offs
- Application‑layer TLS provides optimal performance but requires native encryption support
- IPSec VPN enables network‑layer encryption for legacy applications at a performance cost
- Direct Connect routing centralises security controls but introduces latency and complexity
9.6 Observability and Operations
Unified observability across partitions whilst maintaining compliant data residency requires careful consideration of telemetry collection, cross‑partition correlation, and partition‑aware incident response workflows. Organizations must balance operational visibility requirements with sovereignty constraints and data residency regulations.
The fundamental challenge lies in providing unified operational views whilst respecting partition boundaries and service availability differences. CloudWatch, X‑Ray, and other observability services operate independently within each partition, requiring coordination mechanisms to achieve comprehensive monitoring and alerting across multi‑partition environments.
Customer incident response playbooks require adaptation to ESC's operational model whilst maintaining consistent service level objectives across partitions.
Pattern 1 Independent Observability per Partition
Figure 22: Independent observability with partition-specific operational procedures
This pattern maintains completely separate observability stacks in each partition, providing maximum data residency compliance and partition isolation whilst requiring manual coordination for cross‑partition operational visibility and incident correlation.
Implementation: Deploy independent CloudWatch, X‑Ray, and monitoring infrastructure in each partition with separate dashboards, alerting configurations, and operational procedures. Each partition operates its own observability stack without cross‑partition data sharing or automated correlation.
Data Residency: Telemetry data never leaves its originating partition, ensuring complete compliance with data residency requirements. Metrics, logs, traces, and operational data remain within appropriate sovereignty boundaries throughout their lifecycle.
Operational Model: Platform teams maintain separate monitoring dashboards and alerting systems for each partition. Incident response procedures operate independently per partition, with manual coordination required for issues affecting multiple partitions simultaneously.
Benefits: Maximum data residency compliance with no cross‑partition data movement, complete partition isolation eliminating cross‑partition failure scenarios, simplified compliance boundaries with clear data provenance, and independent operational procedures that can accommodate partition‑specific constraints.
Challenges: Manual correlation required for cross‑partition issues, duplicate dashboard and alert configuration maintenance, separate operational procedures increasing response complexity, and potential blind spots for issues spanning multiple partitions.
Use Cases: Organizations with strict data residency requirements, regulatory frameworks prohibiting cross‑border telemetry data, maximum partition isolation priorities, and acceptance of manual cross‑partition correlation overhead.
Pattern 2 Centralized Observability with Log Forwarding
Figure 23: Log forwarding to main partition for centralized analysis
This pattern forwards telemetry data from distributed partitions to a centralized observability stack in a designated "main" partition, providing unified operational views whilst requiring careful consideration of data residency implications and cross‑partition connectivity requirements.
Implementation: Configure log forwarding from the ESC partition to the commercial partition (or vice versa) using Kinesis Data Firehose with HTTP endpoint delivery to Lambda function URLs, API Gateway endpoints, or other HTTP-accessible services in the target partition. The centralized partition aggregates telemetry data for unified analysis, alerting, and operational dashboards.
Data Flow Architecture: The ESC partition forwards logs, metrics, and traces to the commercial partition over the internet through secure HTTPS endpoints. Kinesis Data Firehose delivers log streams to Lambda function URLs or API Gateway endpoints in the target partition, which then process and store the data in CloudWatch, OpenSearch, or other centralized analysis tools, providing unified operational views across both partitions.
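On the receiving side, the Lambda function behind the function URL must decode the Firehose batch and acknowledge it. A minimal handler sketch, assuming the Kinesis Data Firehose HTTP endpoint delivery format (base64-encoded records, response echoing the request ID); the storage step is left as a stub:

```python
import base64
import json
import time

def handler(event, context=None):
    """Lambda function URL target for Kinesis Data Firehose HTTP endpoint
    delivery: decode the forwarded ESC log records, then acknowledge the
    batch so Firehose does not retry it."""
    body = json.loads(event["body"])
    for record in body.get("records", []):
        log_line = base64.b64decode(record["data"]).decode("utf-8")
        # TODO: write log_line to CloudWatch Logs / OpenSearch in this partition
    # Firehose expects the response to echo requestId alongside a timestamp
    ack = {"requestId": body["requestId"], "timestamp": int(time.time() * 1000)}
    return {"statusCode": 200, "body": json.dumps(ack)}
```

Authentication (for example an access key header configured on the Firehose HTTP endpoint and validated in the handler) should be added before exposing the function URL publicly.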
Compliance Considerations: Log forwarding may conflict with data residency requirements depending on regulatory interpretation and data classification. Organizations must evaluate whether telemetry forwarding violates sovereignty constraints or compliance obligations.
Benefits: Unified operational dashboards providing complete cross‑partition visibility, centralized alerting with correlation across partitions, single operational interface reducing complexity for platform teams, and shared observability infrastructure reducing maintenance overhead.
Challenges: Potential data residency compliance violations requiring legal evaluation, cross‑partition connectivity dependencies affecting reliability, network egress costs for telemetry forwarding, and potential data loss during connectivity disruptions.
Use Cases: Organizations with acceptable data residency implications for telemetry data, established cross‑partition connectivity infrastructure, unified operational team preferences, and tolerance for compliance complexity in observability data handling.
Regulatory Compliance Risk: Forwarding telemetry data across partitions may violate data residency requirements depending on regulatory interpretation and data classification schemes.
Legal Evaluation Required: Organizations must engage legal and compliance teams to evaluate whether operational telemetry forwarding conflicts with sovereignty obligations, particularly for:
- Personal Data in Logs: Application logs may contain customer identifiers or personal information subject to GDPR residency requirements
- Business Data Correlation: Aggregated telemetry patterns may reveal business-sensitive information requiring specific jurisdictional protection
- Regulatory Interpretation: Different regulatory frameworks may classify operational telemetry differently regarding cross-border transfer restrictions
Mitigation Strategies: Implement data classification and filtering at forwarding points, anonymize or pseudonymize telemetry data before cross‑partition transfer, maintain parallel independent observability as fallback, and document compliance rationale for regulatory examination.
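Pseudonymization at the forwarding point can be as simple as replacing identifier fields with keyed hashes before events leave the partition. A stdlib-only sketch; the secret key, field names, and digest truncation are illustrative choices, not a compliance recommendation:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # hypothetical key, held only inside the ESC partition
PII_FIELDS = {"user_id", "email", "client_ip"}  # illustrative field list

def pseudonymize(log_event: dict) -> dict:
    """Replace personal identifiers with stable HMAC digests before the event
    crosses the partition boundary. The mapping is one-way (without the key),
    yet identical values still correlate across log lines for debugging."""
    out = {}
    for key, value in log_event.items():
        if key in PII_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]
        else:
            out[key] = value
    return out

safe = pseudonymize({"user_id": "u-123", "path": "/orders", "status": 200})
# safe["path"] and safe["status"] are unchanged; safe["user_id"] is a digest
```

Because the digests are deterministic, cross-partition correlation still works; rotating the key severs that linkability, which is itself a control worth documenting for regulators.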
Pattern 3 External Third‑Party Observability Platform
Figure 24: Third-party observability platform aggregating telemetry from multiple partitions
This pattern uses external third‑party observability platforms to aggregate telemetry data from multiple partitions, providing unified operational views whilst maintaining clear data provenance and potentially addressing data residency concerns through platform jurisdiction selection.
Implementation: Deploy observability agents or configure native integrations to forward telemetry data from both partitions to external platforms such as Datadog, New Relic, Splunk, or Dynatrace. External platforms provide unified dashboards, alerting, and analysis capabilities across partitions.
Platform Selection: Choose observability platforms with appropriate jurisdictional presence to align with data residency requirements. EU‑based observability platforms or platforms with EU data centers may provide better alignment with ESC sovereignty objectives.
Integration Architecture: Use native AWS integrations, observability agents, or API‑based forwarding to send telemetry data to external platforms. Maintain consistent data classification and filtering across partitions to ensure appropriate data handling.
Benefits: Unified cross‑partition observability without AWS‑specific limitations, advanced analytics and correlation capabilities beyond native AWS services, potential data residency alignment through platform jurisdiction selection, and vendor expertise in multi‑cloud observability scenarios.
Challenges: Additional vendor relationships and licensing costs, external dependency for critical operational visibility, potential data residency complications depending on platform architecture, and vendor lock‑in risks for observability infrastructure.
Advanced Features: Third‑party platforms often provide sophisticated capabilities including anomaly detection, predictive analytics, automated incident correlation, and integration with incident management workflows that span multiple cloud providers and partitions.
Use Cases: Organizations with existing third‑party observability investments, requirements for advanced analytics beyond native AWS capabilities, acceptable external dependencies for operational visibility, and budget for additional observability platform licensing.
Jurisdiction Alignment: Select observability platforms with data processing and storage aligned to your sovereignty requirements:
- EU‑Based Platforms: European observability vendors or EU data centers for ESC alignment
- Jurisdictional Flexibility: Platforms offering data residency controls and regional deployment options
- Compliance Certifications: Platforms with relevant compliance certifications (ISO 27001, SOC 2, etc.)
ESC Integration Support: Third‑party observability platforms may require time to develop ESC‑specific integrations:
- Early Adoption Risk: Limited platform support during ESC early adoption phases
- Integration Development: Vendors need time to develop and test ESC‑specific connectors
- Feature Parity: Initial ESC support may lack feature parity with commercial AWS integrations
Planning Recommendations: Engage with observability platform vendors early to understand ESC roadmaps and integration timelines. Plan for potential delays or limitations in platform support during ESC adoption.
Multi‑Partition Observability Pattern Comparison
| Pattern | Best For | Pros | Cons | Data Handling | Operational Model | Compliance |
|---|---|---|---|---|---|---|
| 1. Independent Observability per Partition | Organizations with strict data residency requirements, maximum partition isolation priorities | ✅ Maximum data residency compliance ✅ Complete partition isolation ✅ Simplified compliance boundaries ✅ No cross‑partition dependencies ✅ Partition‑specific operational procedures | ❌ Manual cross‑partition correlation ❌ Duplicate configuration maintenance ❌ Separate operational procedures ❌ Potential blind spots ❌ Increased operational complexity | Isolated Data never leaves partition | Separated Independent procedures per partition | Maximum Clear compliance boundaries |
| 2. Centralized Observability with Log Forwarding | Organizations with unified operational preferences, acceptable data residency implications for telemetry | ✅ Unified operational dashboards ✅ Centralized alerting and correlation ✅ Single operational interface ✅ Reduced infrastructure overhead ✅ Complete cross‑partition visibility | ❌ Potential compliance violations ❌ Cross‑partition connectivity dependency ❌ Network egress costs ❌ Data loss risk during outages ❌ Complexity in data classification | Centralized Forwarded to main partition | Unified Single operational interface | Complex Requires legal evaluation |
| 3. External Third‑Party Observability Platform | Organizations with existing platform investments, requirements for advanced analytics, acceptable external dependencies | ✅ Advanced analytics capabilities ✅ Unified multi‑partition visibility ✅ Vendor expertise and features ✅ Potential jurisdiction alignment ✅ Multi‑cloud support | ❌ Additional licensing costs ❌ External dependency for operations ❌ Potential vendor lock‑in ❌ Platform‑specific data residency ❌ ESC integration development delays | External Forwarded to third‑party platform | Vendor‑Managed Platform‑specific interfaces | Platform‑Dependent Varies by vendor jurisdiction |
Summary:
- Observability patterns balance operational visibility with sovereignty and compliance requirements
- Independent observability provides maximum compliance but requires manual correlation overhead
- Centralized patterns improve operational efficiency but introduce compliance complexity
- Third‑party platforms offer advanced capabilities but introduce external dependencies and costs
- Pattern selection should consider data classification, regulatory requirements, and operational maturity
9.7 Backup and Recovery
Multi‑partition backup and recovery introduces significant complexity beyond traditional single‑partition operations. Organizations must navigate cross‑partition data movement constraints, service availability differences, and sovereignty requirements whilst maintaining comprehensive data protection and recovery capabilities across distributed environments.
The Multi-Partition Backup Challenge
Traditional AWS backup strategies assume unified control planes and cross‑region replication capabilities within a single partition. Multi‑partition environments break these assumptions, requiring alternative approaches for data protection, disaster recovery, and business continuity across partition boundaries.
Service Availability Gaps: AWS Backup availability may differ between partitions, with ESC potentially launching with limited backup service capabilities compared to the mature commercial partition. Organizations must plan for service parity evolution and implement alternative solutions for missing capabilities.
Single Region Limitations: ESC launches with a single region (Brandenburg, eusc-de-east-1), eliminating traditional cross‑region backup patterns within the ESC partition. Organizations cannot leverage multiple ESC regions for geographic backup redundancy until additional ESC regions become available, creating dependency on alternative backup strategies for disaster recovery.
Cross-Partition Replication Constraints: Native AWS backup services cannot replicate data directly between partitions, eliminating traditional cross‑region backup patterns. Data protection requires alternative replication mechanisms that respect partition boundaries whilst providing comprehensive coverage.
Sovereignty and Compliance Implications: Backup data movement across partitions may violate data residency requirements depending on regulatory interpretation and data classification. Organizations must carefully evaluate whether cross‑partition backup storage conflicts with sovereignty obligations.
Recovery Time and Point Objectives: Multi‑partition recovery scenarios may introduce additional complexity and latency compared to single‑partition recovery operations. Organizations must reassess Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) to account for partition‑specific constraints and cross‑partition coordination requirements.
Operational Complexity: Managing backup policies, retention schedules, and recovery procedures across multiple partitions increases operational overhead and requires partition‑aware automation to maintain consistency and prevent configuration drift.
AWS Backup is Partition-Bound: AWS Backup operates exclusively within partition boundaries and cannot export backup data to other partitions or to external storage. This creates fundamental limitations for cross-partition backup strategies:
Cross-Partition Limitations:
- ❌ Cross-Partition Export: AWS Backup cannot export or replicate backup data to different AWS partitions
- ❌ Cross-Partition Sharing: Backup vaults cannot be shared across partition boundaries
- ❌ Cross-Partition Recovery: Direct recovery from AWS Backup to a different partition is not supported
Implications: Organizations requiring cross-partition backup capabilities must implement alternative solutions such as application-level backup tools, third-party backup platforms, or custom backup orchestration using S3 replication and external storage systems.
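Custom orchestration typically comes down to copying objects between S3 buckets using two independently authenticated clients, since each partition has its own credentials and endpoints. A minimal sketch (no multipart handling, retries, or integrity re-verification; profile names are hypothetical):

```python
def replicate_object(src_s3, dst_s3, src_bucket, dst_bucket, key):
    """Copy one object across a partition boundary using two separately
    authenticated S3 clients. Sketch only: production use needs multipart
    transfers for large objects, retries, and checksum verification."""
    obj = src_s3.get_object(Bucket=src_bucket, Key=key)
    dst_s3.put_object(Bucket=dst_bucket, Key=key, Body=obj["Body"].read())

# In production each client comes from its own boto3 session, e.g.:
#   src = boto3.Session(profile_name="commercial").client("s3")
#   dst = boto3.Session(profile_name="esc", region_name="eusc-de-east-1").client("s3")
#   replicate_object(src, dst, "backup-src", "backup-dst", "db-dump.tar.gz")
```

Because the function takes clients as parameters, it can be exercised with in-memory stubs in tests and scheduled (for example via EventBridge and Lambda) for ongoing replication.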
Three primary patterns address multi‑partition backup and recovery requirements, each with distinct trade‑offs regarding sovereignty compliance, operational complexity, and recovery capabilities.
Pattern 1 Independent Backup Operations per Partition
Figure 25: AWS Backup deployed independently in each partition with isolated data protection
This pattern deploys separate AWS Backup infrastructure in each partition, providing maximum sovereignty compliance by ensuring backup data never crosses partition boundaries. Each partition operates independent backup policies, retention schedules, and recovery procedures without cross‑partition coordination or data sharing.
Implementation: Deploy AWS Backup in both partitions with partition‑specific backup vaults, policies, and schedules. Configure backup jobs to protect resources within each partition independently, with recovery operations constrained to the same partition where backup data resides.
Backup Vault Configuration: Create separate backup vaults in each partition with appropriate encryption keys, access policies, and retention configurations.
Policy Management: Implement consistent backup policies across partitions using Infrastructure as Code templates with partition‑aware configurations. Shared policy definitions adapt to service availability differences between partitions whilst maintaining equivalent data protection coverage.
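A minimal sketch of such partition-aware policy management, assuming provider aliases for each partition and using illustrative vault names, schedules, and retention values:

```hcl
# shared backup settings applied identically to both partitions
locals {
  backup_schedule       = "cron(0 2 * * ? *)" # daily at 02:00 UTC (placeholder)
  backup_retention_days = 35                  # placeholder retention
}

resource "aws_backup_vault" "commercial" {
  name     = "central-backup-vault"
  provider = aws.aws_frankfurt
}

resource "aws_backup_vault" "esc" {
  name     = "central-backup-vault"
  provider = aws.aws_eusc_brandenburg
}

# one plan per partition, driven by the same shared locals
resource "aws_backup_plan" "esc" {
  name     = "daily-backup-plan"
  provider = aws.aws_eusc_brandenburg

  rule {
    rule_name         = "daily"
    target_vault_name = aws_backup_vault.esc.name
    schedule          = local.backup_schedule
    lifecycle {
      delete_after = local.backup_retention_days
    }
  }
}
```

A mirrored `aws_backup_plan` using the commercial provider alias completes the pair; because both plans consume the same locals, retention and scheduling cannot drift between partitions.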
Benefits: Maximum sovereignty compliance with backup data never leaving originating partition, simplified compliance boundaries with clear data provenance, independent recovery operations eliminating cross‑partition dependencies, and familiar AWS‑native backup tooling and procedures.
Challenges: No cross‑partition disaster recovery capabilities without additional solutions, duplicate backup infrastructure increasing costs and operational overhead, separate recovery procedures requiring partition‑specific operational knowledge, and potential backup storage optimization limitations due to partition isolation.
Recovery Scenarios: Point‑in‑time recovery operates within each partition independently. Because ESC launches with a single region, geographic disaster recovery within the ESC partition is not possible, so comprehensive disaster recovery requires mechanisms that operate across partition boundaries, such as application‑level replication, database clustering, or external backup solutions.
Use Cases: Organizations with strict data residency requirements prohibiting cross‑partition data movement, regulatory frameworks requiring complete data sovereignty for backup operations, maximum compliance simplicity priorities, and acceptance of partition‑isolated recovery capabilities.
ESC Launch Considerations: AWS Backup availability in AWS European Sovereign Cloud may be limited during initial launch phases. Organizations should verify AWS Backup service availability and feature parity before implementing partition‑specific backup strategies.
Single Region Impact: ESC's single region deployment (Brandenburg) eliminates cross‑region backup capabilities within the ESC partition. Organizations cannot use AWS Backup's cross‑region backup features for geographic redundancy until additional ESC regions become available.
Mitigation Planning: Organizations should plan alternative backup approaches for ESC environments until full AWS Backup feature parity is achieved, including manual backup procedures, application‑level backup solutions, cross‑partition backup strategies, or third‑party backup tools for geographic redundancy.
Pattern 2: Cross-Partition Backup via Offsite Storage
Figure 26: Cross‑partition backup coordination through offsite storage infrastructure
This pattern uses offsite storage infrastructure as an intermediary for cross‑partition backup coordination, enabling disaster recovery capabilities whilst maintaining control over data movement and residency through customer‑managed infrastructure. Offsite storage options include on‑premises datacenters, other cloud providers, or even the alternate AWS partition depending on sovereignty requirements and operational preferences.
Implementation: Configure application‑level backup tools (pg_dump, MySQL backup, application exports) to create backup artifacts that are stored in Amazon S3 within each partition. Use AWS DataSync, S3 replication, or custom automation to copy backup artifacts to offsite storage infrastructure for centralized backup management and cross‑partition recovery coordination.
Application-Level Backup Tools:
- Database Backups: Use native database backup tools (pg_dump, mysqldump, Oracle RMAN) to create portable backup artifacts
- Application Data: Implement application‑specific export procedures for business data, configuration, and state information
- File System Backups: Use file‑level backup tools for persistent storage volumes and application data directories
Data Flow Architecture: Backup jobs create artifacts within each partition's S3 storage, then copy data to offsite storage via Direct Connect, VPN, or internet connectivity. Offsite storage provides centralized backup management with the ability to restore data to either partition based on recovery requirements.
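Staging buckets in this flow can expire artifacts automatically once the offsite copy window has passed. A hedged sketch, assuming a hypothetical staging bucket, a provider alias for the ESC partition, and a 14-day local retention window:

```hcl
# Illustrative assumption: artifacts under staging/ are copied offsite within
# 14 days, so local copies can expire afterwards to limit storage costs
resource "aws_s3_bucket_lifecycle_configuration" "staging" {
  bucket   = "example-backup-staging-esc"
  provider = aws.aws_eusc_brandenburg

  rule {
    id     = "expire-staged-artifacts"
    status = "Enabled"
    filter {
      prefix = "staging/"
    }
    expiration {
      days = 14 # long enough for offsite copy plus verification
    }
  }
}
```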
Offsite Storage Options:
- On‑Premises Datacenters: Traditional datacenter infrastructure with customer‑controlled storage systems
- Other Cloud Providers: Azure Blob Storage, Google Cloud Storage, or other cloud provider storage services
- Alternate AWS Partition: Commercial partition for ESC backups, or ESC partition for commercial backups (subject to sovereignty evaluation)
- Cloud Storage Gateways: Hybrid storage solutions that bridge on‑premises and cloud storage environments
Benefits: Cross‑partition disaster recovery capabilities through centralized backup storage, customer control over backup data movement and residency decisions, flexible recovery options with ability to restore to either partition, and leverage existing backup infrastructure investments across various storage platforms.
Challenges: Requires offsite storage infrastructure and operational expertise, network bandwidth constraints for backup data transfer, increased backup storage costs across multiple locations, and complex recovery coordination procedures across partitions and external storage systems.
Recovery Procedures: Standard recovery restores data within the same partition from local S3 backup artifacts. Cross‑partition disaster recovery retrieves backup data from offsite storage and restores to the target partition, potentially requiring data format conversion or application reconfiguration.
Data Sovereignty Considerations: Customer‑controlled offsite storage enables organizations to maintain data residency control throughout the backup lifecycle. Organizations can implement geographic restrictions, encryption standards, and access controls that align with specific sovereignty requirements. Storage location selection (on‑premises, specific cloud regions, or alternative partitions) must align with regulatory obligations and compliance frameworks.
Use Cases: Organizations with existing backup infrastructure (on‑premises, multi‑cloud, or cross‑partition), requirements for cross‑partition disaster recovery capabilities, need for geographic redundancy beyond ESC's single region deployment, acceptable operational complexity for customer‑managed backup coordination, and tolerance for increased backup storage and network costs.
Infrastructure Considerations: Successfully implementing Pattern 2 requires robust offsite storage infrastructure with sufficient capacity, performance, and redundancy to handle backup data from multiple partitions.
Storage Platform Selection:
- On‑Premises: Requires physical infrastructure, operational expertise, and capital investment
- Other Cloud Providers: May offer cost advantages but introduces additional vendor relationships and data egress charges
- Alternate AWS Partition: Simplifies operational procedures but may conflict with sovereignty requirements
- Hybrid Solutions: Storage gateways provide flexibility but add architectural complexity
Capacity Planning: Calculate total backup storage requirements across all partitions, including retention periods, backup frequency, and data growth projections. Factor in additional storage for backup retention policies and disaster recovery scenarios.
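These estimates can live alongside the infrastructure code itself. The following Terraform locals express a back-of-the-envelope calculation; all figures are placeholder assumptions, not sizing guidance:

```hcl
# Back-of-the-envelope offsite capacity estimate (placeholder figures)
locals {
  daily_backup_gb      = 500  # combined daily backup volume across partitions
  retention_days       = 35   # retention period per artifact
  annual_growth_factor = 1.25 # projected data growth headroom
  dr_copy_factor       = 2    # primary copy + disaster recovery copy

  offsite_storage_gb = ceil(
    local.daily_backup_gb
    * local.retention_days
    * local.annual_growth_factor
    * local.dr_copy_factor
  )
}
```

Encoding the assumptions this way keeps capacity figures version-controlled and reviewable alongside the backup configuration they justify.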
Network Bandwidth: Ensure adequate network connectivity between partitions and offsite storage for backup data transfer. Consider backup windows, data transfer costs, and network performance impact during backup operations.
Security and Encryption: Implement appropriate encryption for backup data in transit and at rest within offsite storage systems. Maintain consistent encryption standards across partition boundaries to ensure data protection compliance.
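A sketch of consistent key management across partition boundaries, assuming provider aliases per partition (key policies are omitted for brevity and the alias name is illustrative):

```hcl
# Partition-local KMS keys with identical rotation settings, so backup
# artifacts are protected equivalently on both sides of the boundary
resource "aws_kms_key" "backup_commercial" {
  description         = "backup artifact encryption (commercial)"
  enable_key_rotation = true
  provider            = aws.aws_frankfurt
}

resource "aws_kms_key" "backup_esc" {
  description         = "backup artifact encryption (ESC)"
  enable_key_rotation = true
  provider            = aws.aws_eusc_brandenburg
}

resource "aws_kms_alias" "backup_esc" {
  name          = "alias/backup-artifacts"
  target_key_id = aws_kms_key.backup_esc.key_id
  provider      = aws.aws_eusc_brandenburg
}
```

Note that KMS keys never leave their partition; only the configuration standard is shared, not the key material.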
Pattern 3: Third‑Party Backup Solutions
Figure 27: Third‑party backup platforms providing unified backup management across partitions
This pattern uses specialized third‑party backup solutions designed for multi‑cloud and hybrid environments to provide unified backup management, advanced features, and cross‑partition recovery capabilities through external backup platforms.
Implementation: Deploy third‑party backup agents or configure API integrations with platforms such as N2WS (N2W Software), Veeam Backup for AWS, Druva Cloud Platform, or Rubrik Cloud Data Management. These platforms provide centralized backup management with cross‑partition visibility and recovery coordination.
Platform Capabilities:
- N2WS (N2W Software): Enterprise AWS backup with cross‑region replication, disaster recovery orchestration, and compliance reporting capabilities
- Veeam Backup for AWS: Native AWS backup with advanced recovery options, application‑aware backup, and cloud‑native integration
- Druva Cloud Platform: SaaS‑based backup platform with multi‑cloud support, automated backup policies, and ransomware protection
- Rubrik Cloud Data Management: Cloud data management platform with unified backup, disaster recovery, and data governance across multiple cloud providers
Advanced Features: Third‑party platforms often provide sophisticated capabilities including automated backup policy management, application‑aware backup and recovery, advanced compression and deduplication, ransomware detection and recovery, compliance reporting and audit trails, and cross‑cloud disaster recovery orchestration.
Cross‑Partition Coordination: External backup platforms can orchestrate backup and recovery operations across multiple partitions through unified interfaces, providing centralized policy management, cross‑partition disaster recovery capabilities, and consistent backup standards regardless of underlying partition constraints.
Benefits: Unified backup management across partitions through single interface, advanced backup features beyond native AWS capabilities, cross‑partition disaster recovery coordination, vendor expertise in multi‑cloud backup scenarios, and potential cost optimization through backup deduplication and compression.
Challenges: Additional vendor relationships and licensing costs, external dependency for critical backup operations, potential data residency implications depending on platform architecture, and vendor‑specific operational procedures and skill requirements.
Data Residency Control: Evaluate third‑party platform data handling, storage locations, and data processing capabilities to ensure alignment with sovereignty requirements. Some platforms offer regional deployment options or customer‑controlled data residency configurations.
Use Cases: Organizations with existing third‑party backup platform investments, requirements for advanced backup features beyond native AWS capabilities, acceptable external dependencies for backup operations, and budget for specialized backup platform licensing and operational expertise.
ESC Integration Development: Third‑party backup platforms will require time to develop and test ESC‑specific integrations following general availability. Support timelines vary significantly between vendors based on development priorities and customer demand.
Expected Challenges:
- API Integration: Platforms need time to develop ESC‑specific API integrations and endpoint configurations
- Agent Deployment: Backup agents may require updates for ESC‑specific networking and authentication patterns
- Feature Parity: Initial ESC support may lack feature parity with commercial AWS integrations
- Certification Requirements: Security certifications and compliance validations may delay ESC support availability
Planning Recommendations: Engage with backup platform vendors early to understand ESC roadmaps and support timelines. Plan for potential delays or manual backup procedures during ESC early adoption phases. Consider fallback approaches (Pattern 1 or Pattern 2) until third‑party platform support is fully available.
Multi‑Partition Backup Pattern Comparison
| Pattern | Best For | Pros | Cons | Cross‑Partition Recovery | Complexity | Cost Profile |
|---|---|---|---|---|---|---|
| 1. Independent Backup Operations per Partition | Organizations with strict data residency requirements, maximum sovereignty compliance, simple backup operations | ✅ Maximum sovereignty compliance ✅ Familiar AWS‑native tooling ✅ Simple compliance boundaries ✅ Independent partition operations ✅ No external dependencies | ❌ No cross‑partition disaster recovery ❌ Duplicate backup infrastructure ❌ Separate operational procedures ❌ Potential service availability gaps ❌ Limited backup optimization | Not supported (partition‑isolated recovery only) | Low (standard AWS Backup) | Medium (duplicate AWS Backup costs; separate backup storage) |
| 2. Cross‑Partition Backup via Offsite Storage | Organizations with existing backup infrastructure, cross‑partition disaster recovery requirements, customer‑controlled data movement | ✅ Cross‑partition disaster recovery ✅ Customer‑controlled data residency ✅ Leverages existing infrastructure ✅ Flexible recovery options ✅ Multiple storage platform options | ❌ Offsite infrastructure required ❌ Network bandwidth constraints ❌ Complex recovery procedures ❌ Increased operational overhead ❌ Multiple backup storage locations | Supported (via offsite coordination) | High (multi‑location coordination) | High (offsite storage costs; network transfer fees; operational overhead) |
| 3. Third‑Party Backup Solutions | Organizations with advanced backup requirements, existing platform investments, unified operational preferences | ✅ Advanced backup features ✅ Unified management interface ✅ Cross‑partition recovery coordination ✅ Vendor expertise and support ✅ Multi‑cloud platform support | ❌ Additional platform licensing ❌ External operational dependency ❌ Potential ESC support delays ❌ Vendor‑specific procedures ❌ Data residency platform dependency | Supported (platform‑coordinated recovery) | Medium (platform‑managed operations) | High (third‑party licensing; platform operational costs) |
Summary:
- Multi‑partition backup requires careful consideration of sovereignty, recovery, and operational requirements
- ESC's single region deployment eliminates geographic redundancy within the partition, making cross‑partition or external backup strategies necessary for comprehensive disaster recovery
- Independent backup operations provide maximum compliance but limit disaster recovery capabilities
- Offsite coordination enables cross‑partition recovery with customer‑controlled data movement across various storage platforms
- Third‑party solutions offer advanced features but introduce external dependencies and potential ESC support delays
- Pattern selection should balance compliance requirements, recovery capabilities, and operational complexity whilst addressing single‑region limitations
10. Nuvibit Multi-Partition Reference Architecture
The Challenge: Organizations face months of complex implementation work to deploy multi-partition AWS architectures while maintaining security, compliance, and operational excellence. Traditional approaches using AWS Control Tower or Landing Zone Accelerator cannot span partition boundaries, forcing manual duplication and creating significant configuration drift risks. Each partition requires separate organizational structures, identity systems, and networking infrastructure, multiplying operational overhead whilst introducing security vulnerabilities through inconsistent configurations.
The Solution: Nuvibit's Multi-Partition Reference Architecture, built on the battle-tested Nuvibit AWS Reference Architecture (NARA) and powered by the Nuvibit Terraform Collection (NTC), provides day-zero ESC support with partition-aware Infrastructure as Code that deploys consistently across AWS partitions using provider aliasing. This approach eliminates configuration drift, reduces implementation timelines from months to weeks, and enables organizations to adopt ESC the moment it becomes available (Q4 2025) with proven architectural patterns and operational procedures.
10.1 Architectural Framework Options
Three foundational architectural patterns serve as starting points for multi-partition implementation planning, each addressing different organizational priorities and regulatory requirements:
Pattern A: Regulated Core in ESC, Innovation Edge in Commercial optimises for compliance whilst preserving innovation velocity through workload-based data classification. Regulated workloads handling sensitive data (customer PII, financial records, healthcare data, government information) operate in ESC for sovereignty compliance. Non-regulated workloads requiring advanced services (AI/ML experimentation, analytics platforms, content delivery, developer tooling, third-party integrations) leverage commercial AWS service breadth and innovation capabilities.
Pattern B: All-in ESC with External Integration maximises sovereignty compliance by operating entirely within ESC whilst maintaining essential business connectivity. Highly regulated entities adopt this pattern, using secure API gateways and approved external connectivity patterns for SaaS integrations and business partner connectivity.
Pattern C: Commercial Core with Sovereign Offering positions commercial AWS as the primary platform whilst ESC serves as a fully isolated sovereign offering for specific customers or markets. ISVs and enterprise software providers operate their main SaaS platforms in commercial AWS for global reach and service breadth, then deploy specialized sovereign instances (such as SAP Sovereign Cloud) in ESC for customers requiring data residency compliance. This pattern enables dual-market strategies with complete partition isolation.
Selecting Your Architectural Pattern
| Pattern | Best For | Primary Driver | Key Trade-off |
|---|---|---|---|
| A: Regulated Core in ESC, Innovation Edge in Commercial | Organizations with mixed workload sensitivity requiring both sovereignty and innovation | Data classification & service breadth | Operational complexity vs innovation velocity |
| B: All-in ESC with External Integration | Highly regulated public sector entities with strict sovereignty mandates | Maximum sovereignty compliance | Service limitations vs regulatory assurance |
| C: Commercial Core with Sovereign Offering | ISVs and SaaS providers serving distinct customer segments | Dual-market strategy | Infrastructure duplication vs market reach |
Decision Questions:
- Workload Classification: Do all your workloads require EU sovereignty, or can you classify by data sensitivity?
- Service Requirements: Can your applications accept ESC's initial service limitations, or do you need commercial AWS service breadth?
- Customer Segmentation: Are you serving distinct customer segments with different sovereignty requirements?
- Operational Tolerance: What's your tolerance for multi-partition operational complexity and coordination overhead?
- Innovation Velocity: How critical is access to the latest AWS services and features for your competitive advantage?
Pattern Selection Guide:
- Choose Pattern A if you need sovereignty for regulated data whilst maintaining innovation capabilities for development and analytics workloads
- Choose Pattern B if regulatory requirements mandate complete EU sovereignty with no exceptions for external services
- Choose Pattern C if you operate a SaaS platform requiring both global commercial reach and specialized sovereign offerings
10.2 Nuvibit's Implementation Approach
Foundation: Nuvibit AWS Reference Architecture per Partition
The Nuvibit AWS Reference Architecture (NARA) serves as the proven, battle-tested foundation for enterprise and regulated industries. This secure and compliant AWS foundation provides the essential building blocks that can be adapted to support any of the multi-partition architectural patterns outlined above.

NARA's core components, including Organization Management, Security Tooling, Log Archive, Connectivity, and dedicated Workload Accounts, form the single-partition backbone that gets either replicated with partition isolation or extended with cross-partition integration based on your chosen architectural pattern. The modular design ensures that whether you implement Pattern A, Pattern B, or Pattern C, you maintain the same level of governance, security, and operational excellence.
Multi-Partition Implementation Strategy
The Nuvibit Terraform Collection (NTC) will support all multi-partition architectural patterns outlined in this whitepaper, providing Infrastructure as Code blueprints that adapt to customer requirements. NTC streamlines deployment across any pattern whilst preventing configuration drift and delivering the best possible user experience through unified, partition-aware automation.
Key Implementation Areas:
- Account Management: Unified account factory with partition-aware provisioning workflows
- Identity Management: External IdP federation, SCIM provisioning, break-glass procedures
- Security: Cross-partition security tooling, audit logging, compliance controls
- Connectivity: VPC architecture, Transit Gateway, Network Firewall, centralized connectivity patterns
- Observability: Independent monitoring, log forwarding, third-party platform integration
- Backup & Recovery: Partition-isolated, cross-partition, and third-party backup strategies
Terraform/OpenTofu-Based Multi-Partition Blueprints
NTC is designed with partition awareness from the ground up, enabling single Terraform/OpenTofu configurations to deploy resources across multiple AWS partitions whilst maintaining operational continuity. Provider aliasing enables seamless multi-partition deployments when ESC becomes available.
Why Unified Terraform/OpenTofu Deployment Matters:
Traditional AWS Landing Zone solutions require complete duplication of configurations across partitions. This creates:
- ❌ Configuration Drift: Manual synchronization of OUs, policies, and baselines across partitions
- ❌ Maintenance Overhead: Changes must be applied twice with partition-specific modifications
- ❌ Testing Complexity: Separate validation required for each partition deployment
- ❌ Human Error Risk: Manual processes introduce inconsistencies and security vulnerabilities
The NTC approach uses Terraform/OpenTofu provider aliasing to deploy identical configurations to multiple partitions from a single source of truth:
- ✅ Single Configuration: One OU structure, one set of policies, one deployment workflow
- ✅ Automatic Consistency: Changes apply to both partitions simultaneously
- ✅ Reduced Overhead: Maintain one codebase instead of two separate configurations
- ✅ Version Control: Unified change tracking and audit trail across partitions
The following code snippets demonstrate how to implement NTC in a unified multi-partition pattern, where a single Terraform/OpenTofu configuration deploys shared configurations across both commercial AWS and ESC partitions using provider aliasing.
# --------------------------------------------------------------------------------------------
# ¦ PROVIDER - MULTI-PARTITION
# --------------------------------------------------------------------------------------------
provider "aws" {
alias = "aws_frankfurt"
region = "eu-central-1"
# OpenID Connect (OIDC) integration
assume_role_with_web_identity {
role_arn = "arn:aws:iam::111111111111:role/oidc-role"
session_name = "unified-pipeline"
web_identity_token_file = "/tmp/web-identity-token"
}
}
provider "aws" {
alias = "aws_eusc_brandenburg"
region = "eusc-de-east-1"
# OpenID Connect (OIDC) integration
assume_role_with_web_identity {
role_arn = "arn:aws-eusc:iam::222222222222:role/oidc-role"
session_name = "unified-pipeline"
web_identity_token_file = "/tmp/web-identity-token"
}
}
The blueprints below illustrate three NTC modules: NTC Organizations, NTC Identity Center, and NTC Core Network.
# --------------------------------------------------------------------------------------------
# ¦ LOCALS
# --------------------------------------------------------------------------------------------
locals {
# shared OU structure across partitions
organizational_unit_paths = [
"/root/core",
"/root/sandbox",
"/root/suspended",
"/root/transitional",
"/root/workloads",
"/root/workloads/prod",
"/root/workloads/dev",
"/root/workloads/test"
]
}
# --------------------------------------------------------------------------------------------
# ¦ NTC ORGANIZATIONS - COMMERCIAL
# --------------------------------------------------------------------------------------------
module "ntc_organizations" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-organizations?ref=X.X.X"
# list of nested (up to 5 levels) organizational units
organizational_unit_paths = local.organizational_unit_paths
# additional inputs...
providers = {
aws = aws.aws_frankfurt
}
}
# --------------------------------------------------------------------------------------------
# ¦ NTC ORGANIZATIONS - ESC
# --------------------------------------------------------------------------------------------
module "ntc_organizations_esc" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-organizations?ref=X.X.X"
# list of nested (up to 5 levels) organizational units
organizational_unit_paths = local.organizational_unit_paths
# additional inputs...
providers = {
aws = aws.aws_eusc_brandenburg
}
}
# --------------------------------------------------------------------------------------------
# ¦ LOCALS
# --------------------------------------------------------------------------------------------
locals {
# permission sets can be a combination of aws and customer managed policies
# https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetcustom.html
permission_sets = [
{
name : "AdministratorAccess"
description : "This permission set grants administrator access"
session_duration : 2
inline_policy_json : ""
managed_policies : [
{
managed_by : "aws"
policy_name : "AdministratorAccess"
policy_path : "/"
}
]
boundary_policy : {}
},
{
name : "Billing+ViewOnlyAccess"
description : "This permission set grants billing and read-only access"
session_duration : 10
inline_policy_json : ""
managed_policies : [
{
managed_by : "aws"
policy_name : "Billing"
policy_path : "/job-function/"
},
{
managed_by : "aws"
policy_name : "ViewOnlyAccess"
policy_path : "/job-function/"
}
]
boundary_policy : {}
},
{
name : "SupportUser+ReadOnlyAccess"
description : "This permission set grants support and read-only access"
session_duration : 10
inline_policy_json : ""
managed_policies : [
{
managed_by : "aws"
policy_name : "SupportUser"
policy_path : "/job-function/"
},
{
managed_by : "aws"
policy_name : "ReadOnlyAccess"
policy_path : "/"
}
]
boundary_policy : {}
}
]
}
# --------------------------------------------------------------------------------------------
# ¦ NTC IDENTITY CENTER - COMMERCIAL
# --------------------------------------------------------------------------------------------
module "ntc_identity_center" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-identity-center?ref=X.X.X"
# automatic provisioning (synchronization) of users and groups from external identity provider
# using the System for Cross-domain Identity Management (SCIM) v2.0 protocol
is_automatic_provisioning_enabled = true
permission_sets = local.permission_sets
# additional inputs...
providers = {
aws = aws.aws_frankfurt
}
}
# --------------------------------------------------------------------------------------------
# ¦ NTC IDENTITY CENTER - ESC
# --------------------------------------------------------------------------------------------
module "ntc_identity_center_esc" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-identity-center?ref=X.X.X"
# automatic provisioning (synchronization) of users and groups from external identity provider
# using the System for Cross-domain Identity Management (SCIM) v2.0 protocol
is_automatic_provisioning_enabled = true
permission_sets = local.permission_sets
# additional inputs...
providers = {
aws = aws.aws_eusc_brandenburg
}
}
# --------------------------------------------------------------------------------------------
# ¦ LOCALS
# --------------------------------------------------------------------------------------------
locals {
transit_gateway_route_tables = [
"tgw-core-rtb-hub",
"tgw-core-rtb-spoke-prod",
"tgw-core-rtb-spoke-dev",
"tgw-core-rtb-spoke-int",
"tgw-core-rtb-onprem",
]
}
# --------------------------------------------------------------------------------------------
# ¦ NTC CORE NETWORK - FRANKFURT - COMMERCIAL
# --------------------------------------------------------------------------------------------
module "ntc_core_network" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-core-network?ref=X.X.X"
transit_gateway = {
name = "tgw-core-frankfurt"
description = "core network in frankfurt"
amazon_side_asn = 64512
default_route_table_association = false
default_route_table_propagation = false
dns_support = true
multicast_support = false
vpn_ecmp_support = true
security_group_referencing_support = true
cidr_blocks = []
route_table_names = local.transit_gateway_route_tables
# (optional) share Transit Gateway with Organizations, OUs or Accounts
auto_accept_shared_attachments = true
ram_share_principals = [
local.ntc_parameters["mgmt-organizations"]["ou_ids"]["/root/workloads"]
]
ram_share_allow_external_principals = false
}
# additional inputs...
providers = {
aws = aws.aws_frankfurt
}
}
# --------------------------------------------------------------------------------------------
# ¦ NTC CORE NETWORK - BRANDENBURG - ESC
# --------------------------------------------------------------------------------------------
module "ntc_core_network_esc" {
source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-core-network?ref=X.X.X"
transit_gateway = {
name = "tgw-core-brandenburg"
description = "core network in brandenburg"
amazon_side_asn = 64512
default_route_table_association = false
default_route_table_propagation = false
dns_support = true
multicast_support = false
vpn_ecmp_support = true
security_group_referencing_support = true
cidr_blocks = []
route_table_names = local.transit_gateway_route_tables
# (optional) share Transit Gateway with Organizations, OUs or Accounts
auto_accept_shared_attachments = true
ram_share_principals = [
local.ntc_parameters["mgmt-organizations"]["ou_ids"]["/root/workloads"]
]
ram_share_allow_external_principals = false
}
# additional inputs...
providers = {
aws = aws.aws_eusc_brandenburg
}
}
Account Factory with Partition Awareness
The account factory implementation uses a GitOps-driven approach with a single repository containing shared configuration that deploys to both partitions through unified pipelines. Account requests are captured through flexible interfaces, with user partition selection determining the deployment target while maintaining consistent configuration and operational procedures. A partition-isolated approach is also possible with individual repositories for each partition.
Implementation Options:
- Self-Service Portals: Integrate with platforms like Port, Backstage, ServiceNow, or custom portals that can push account details to the repository (unified or partition-specific)
- API Integration: Direct API calls that create account definition files in the appropriate repository based on selected approach
- Direct Git Workflow: Users create Pull Requests directly in the chosen repository structure (unified or partition-isolated)
- Hybrid Approaches: Combination of portals, APIs, and direct Git access based on user preferences and organizational workflows
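Whichever interface is used, the artifact that lands in the repository is a declarative account definition. A hypothetical sketch of such a definition is shown below; the field names are illustrative and do not represent the actual NTC account-factory schema.

```hcl
# Hypothetical account requests as they might land in the repository via
# portal push, API call, or pull request. Field names are illustrative.
locals {
  account_requests = [
    {
      name        = "finance-prod"
      email       = "aws+finance-prod@example.com"
      ou_path     = "/root/workloads/prod"
      partition   = "esc" # deployment target: "commercial" or "esc"
      environment = "prod"
    },
    {
      name        = "analytics-dev"
      email       = "aws+analytics-dev@example.com"
      ou_path     = "/root/workloads/dev"
      partition   = "commercial"
      environment = "dev"
    },
  ]
}
```

The `partition` attribute is what the pipeline (unified) or the routing interface (partition-isolated) evaluates to select the deployment target.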
Figure 28a: Unified account factory with a single repository and partition-aware pipeline
- Single Source of Truth: One repository contains shared account templates, policies, and configuration for both partitions
- Partition Selection: User chooses target partition (commercial or ESC) during account request process
- Unified Pipeline: Single CI/CD pipeline deploys accounts to the selected partition using provider aliasing
- Shared Configuration: Consistent account standards, security policies, and operational procedures across partitions
- Partition-Aware Deployment: Pipeline logic routes deployment to appropriate partition based on user selection
Figure 28b: Partition-isolated account factory with separate repositories and pipelines
- Separate Repositories: Independent repositories for commercial and ESC partition account factory configurations
- Partition-Specific Routing: User interface directs account requests to the appropriate repository based on partition selection
- Independent Pipelines: Each repository has its own CI/CD pipeline deploying exclusively to its target partition
- Duplicate Configuration: Account templates and policies maintained separately in each repository for maximum isolation
- Compliance Boundaries: Clear separation ensures no cross-partition data or configuration sharing
Architecture Selection Considerations:
- Unified Deployment: Choose for operational efficiency, reduced configuration drift, and simplified maintenance
- Partition-Isolated Deployment: Choose for maximum compliance separation, clear audit boundaries, and regulatory requirements demanding complete partition isolation
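Under the unified pattern, the `partition` attribute on each account request can drive routing inside a single Terraform/OpenTofu configuration. A minimal sketch under stated assumptions: the account-request schema and the account-factory module inputs are illustrative, not the actual NTC interface.

```hcl
# Illustrative account-request schema; "partition" selects the target.
variable "account_requests" {
  type = list(object({
    name      = string
    partition = string # "commercial" or "esc"
  }))
}

locals {
  commercial_accounts = [for a in var.account_requests : a if a.partition == "commercial"]
  esc_accounts        = [for a in var.account_requests : a if a.partition == "esc"]
}

# One module instance per partition; provider aliasing routes the deployment.
module "account_factory_commercial" {
  source   = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-account-factory?ref=X.X.X"
  accounts = local.commercial_accounts # illustrative input name
  providers = {
    aws = aws.aws_frankfurt
  }
}

module "account_factory_esc" {
  source   = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-account-factory?ref=X.X.X"
  accounts = local.esc_accounts # illustrative input name
  providers = {
    aws = aws.aws_eusc_brandenburg
  }
}
```

Because Terraform resolves module `providers` statically, the routing happens by filtering the account list per partition rather than by switching providers conditionally.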
Modular Repository and CI/CD Pipeline Architecture
The Nuvibit approach leverages a modular repository structure with dedicated CI/CD pipelines for each functional area, providing superior operational flexibility and risk management compared to monolithic implementations.
Figure 29a: Unified CI/CD pipelines deploying to multiple partitions using provider aliasing
- Single CI/CD pipeline per functional area with multi-partition deployment logic
- Provider aliasing enables deployment to both commercial and ESC partitions
- Shared pipeline configuration with partition-specific conditional logic
- Unified artifact management and promotion workflows
- Single pipeline monitoring and operational interface
Figure 29b: Partition-isolated CI/CD pipelines with separate deployment workflows
- Dedicated CI/CD pipeline per functional area per partition
- Complete isolation between commercial and ESC deployment workflows
- Separate pipeline configurations, credentials, and monitoring
- Independent artifact management and promotion per partition
- Partition-specific operational procedures and troubleshooting
This architecture can be implemented with a wide selection of tools based on organizational preferences and existing infrastructure:
Repository Platforms: GitHub, GitLab, Bitbucket, AWS CodeCommit (deprecated), Azure Repos
Generic CI/CD Tooling: GitHub Actions, GitLab CI/CD, Jenkins, AWS CodePipeline, Azure DevOps Pipelines
Specialized IaC Platforms: Spacelift, HCP Terraform, env0, Scalr, Atlantis
The modular repository and pipeline approach works consistently across all platforms, enabling organizations to leverage existing tooling investments whilst maintaining the architectural benefits of segregation of duty, smaller blast radius, and team independence. Choose the tools that best align with your operational preferences and compliance requirements.
Repository Structure:
- Organizations Repository: Organizational structure, organizational policies (guardrails), organizational CloudTrail
- Account Factory Repository: Account vending and lifecycle, baseline configurations
- Identity Management Repository: IAM Identity Center, permission sets, user/group provisioning
- Log Archive Repository: Audit-relevant log archive
- Security Repository: Security tooling, compliance controls
- Connectivity Repository: Transit Gateway, Direct Connect, VPN configurations, network routing
- Observability Repository: Monitoring, logging, alerting, dashboard configurations
- Backup Repository: Backup policies, disaster recovery
Each repository operates with its own dedicated CI/CD pipeline that can deploy to either single partitions or multiple partitions based on the chosen architectural pattern.
Key Advantages:
🎯 Smaller Blast Radius
- Pipeline failures are isolated to specific functional areas
- Broken connectivity pipeline doesn't block security deployments
- Failed observability updates don't impact account provisioning
- Infrastructure changes can be deployed incrementally with controlled risk
⚡ Segregation of Duty
- Different teams can own and operate separate repositories and pipelines
- Network team manages connectivity repository independently
- Security team controls security tooling without dependencies
- Platform team handles account factory with clear ownership boundaries
- Role-based access controls align with organizational responsibilities
🔧 Flexibility and Efficiency
- Use only the building blocks needed for your specific requirements
- Expand with additional Terraform/OpenTofu modules as requirements evolve
- Upgrade individual components without impacting other areas
- Run deployments only when changes are needed in specific functional areas
- Scale team responsibilities based on organizational structure
Operational Workflow:
- Incremental Development: Teams work on functional areas independently without coordination overhead
- Selective Deployment: Deploy only changed components, reducing deployment time and risk
- Parallel Operations: Multiple teams can deploy simultaneously to different functional areas
- Targeted Rollbacks: Rollback specific functional areas without affecting other components
- Scaling Teams: Add new teams and repositories as organizational requirements grow
This modular approach enables organizations to start with essential components and gradually expand their multi-partition infrastructure as capabilities and team maturity evolve, while maintaining operational excellence and clear ownership boundaries throughout the journey.
10.3 Multi-Partition Migration Path
Organizations with existing AWS commercial partition infrastructure must balance ESC adoption with operational continuity. A phased, risk-mitigated approach establishes ESC capabilities in parallel with existing infrastructure before progressively migrating workloads based on sovereignty requirements.
Phased Migration Approach
Phase 1: Foundation Deployment (Months 1-2)
- Deploy NTC in ESC partition using proven architectural patterns
- Establish organizational structure, identity federation, and security baselines
- Configure cross-partition connectivity if required (e.g. Direct Connect, VPN)
- Maintain complete operational independence from existing commercial infrastructure
- Validate ESC patterns with non-production pilot workloads
Key Activities:
- Set up AWS Organizations structure in ESC partition
- Configure IAM Identity Center with external IdP federation
- Deploy core networking (Transit Gateway, VPC architecture)
- Establish security tooling and audit logging
- Configure cross-partition connectivity based on architectural pattern requirements
- Document operational procedures and runbooks
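Several of these activities (security tooling, audit logging, networking) share Terraform modules across both partitions. Such modules should resolve the partition identifier at plan time instead of hard-coding `arn:aws:`. A minimal sketch using the AWS provider's `aws_partition` and `aws_caller_identity` data sources; the bucket name and endpoint usage are illustrative:

```hcl
# Partition-aware tooling: resolve the partition prefix and DNS suffix
# from whichever partition the provider is configured against.
data "aws_partition" "current" {}

data "aws_caller_identity" "current" {}

locals {
  # Evaluates to "arn:aws:..." in the commercial partition and to the
  # ESC partition's identifier once deployed there.
  cloudtrail_bucket_arn = "arn:${data.aws_partition.current.partition}:s3:::org-cloudtrail-${data.aws_caller_identity.current.account_id}"

  # Service endpoints and some service principals also vary by DNS suffix.
  logs_endpoint = "logs.${data.aws_partition.current.dns_suffix}"
}
```

Baking this pattern into shared modules from day one is what allows a single configuration to serve both partitions without modification.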
Success Criteria:
- ESC foundation operational with complete security baseline
- Cross-partition connectivity validated and documented (if required)
- Pilot workload successfully deployed and operational
- Team trained on ESC-specific operational procedures
Phase 2: Progressive Workload Migration (Months 3-6)
- Classify existing workloads by data sensitivity and sovereignty requirements
- Migrate regulated workloads handling sensitive data to ESC partition
- Maintain innovation and development workloads in commercial partition
- Establish operational procedures for dual-partition management
- Implement cross-partition observability and cost management
Migration Priorities:
- High Sovereignty Requirements: Customer PII, financial records, healthcare data, government information
- Medium Sovereignty Requirements: Internal business data, employee information, operational systems
- Low Sovereignty Requirements: Development environments, analytics platforms, third-party integrations
Migration Methodology:
- Lift and Shift: Recreate infrastructure in ESC for stateless workloads
- Database Migration: Use AWS DMS or native tools for database transfers
- Application Refactoring: Adapt applications for ESC service availability
- Data Synchronization: Establish temporary cross-partition data flows during transition
Risk Mitigation:
- Maintain parallel operations during migration
- Test thoroughly in ESC before commercial decommissioning
- Implement rollback procedures for critical workloads
- Document lessons learned and update procedures iteratively
Phase 3: Operational Optimization (Month 6+)
- Consolidate monitoring and observability across both partitions
- Optimize cross-partition connectivity patterns and costs
- Refine account factory and vending procedures based on operational experience
- Implement advanced automation for multi-partition operations
- Document comprehensive operational playbooks and disaster recovery procedures
Optimization Focus Areas:
- Cost Management: Rightsize resources, eliminate redundancy, optimize data transfer
- Operational Efficiency: Automate repetitive tasks, streamline workflows, reduce manual intervention
- Security Posture: Continuous compliance monitoring, automated remediation, regular audits
- Team Training: Build expertise in multi-partition operations, document tribal knowledge
Continuous Improvement:
- Regular architecture reviews and pattern refinement
- Stay current with ESC service launches and capability updates
- Adapt procedures as organizational requirements evolve
- Share learnings across teams and improve documentation
NTC Advantage for Migration
Non-Disruptive Adoption: Organizations can deploy NTC in ESC alongside existing infrastructure without replacement, enabling gradual adoption and risk mitigation. Existing commercial infrastructure remains untouched during ESC foundation deployment.
Proven Patterns: Nuvibit's reference architecture in ESC mirrors commercial best practices, reducing operational learning curves and enabling consistent management procedures across both partitions.
Unified Operations: Once both partitions run NTC-based infrastructure, unified Terraform/OpenTofu configurations enable consistent policy management, security controls, and operational procedures with reduced overhead.
Flexible Timeline: Organizations control migration pace based on business priorities, resource availability, and risk tolerance. No forced cutover dates or all-or-nothing decisions.
Summary:
- Nuvibit's reference architecture provides a comprehensive framework for multi-partition deployment
- Terraform/OpenTofu foundation with provider aliasing enables seamless multi-partition operations
- Modular repository structure delivers operational flexibility with controlled blast radius
- Clear team ownership boundaries support organizational scaling and responsibility distribution
- NTC accelerates ESC adoption from months to weeks
11. Conclusion
The AWS European Sovereign Cloud (ESC) represents a significant advancement for organizations requiring EU sovereignty whilst leveraging cloud computing capabilities. The decision framework presented in this paper requires careful consideration of sovereignty requirements, service dependencies, operational complexity, and cost implications.
All‑in ESC adoption maximises sovereignty assurance but accepts service parity limitations and operational constraints. Commercial partition retention preserves full service availability but may not satisfy regulatory requirements. Dual‑partition strategies enable targeted sovereignty whilst retaining innovation velocity, though at the cost of operational complexity.
Success in multi‑partition environments requires deliberate operating models that address identity federation, connectivity patterns, compliance mapping, and cost management. The integration patterns and reference architectures presented provide proven approaches for managing these complexities whilst maintaining operational excellence.
Organizations must approach ESC adoption incrementally, starting with comprehensive assessment of sovereignty requirements and service dependencies. Pilot implementations validate architectural patterns and operational procedures before broader rollout. The lessons learned from multi‑region compliance strategies provide valuable foundation knowledge, though partition‑specific patterns require additional consideration.
Multi‑partition strategies will become increasingly important as organizations balance sovereignty requirements with innovation velocity in the evolving EU cloud landscape. Success depends on operational excellence, robust architectural patterns, and continuous evolution as ESC service parity and geographic redundancy mature.
Final Recommendations:
- Assess sovereignty requirements comprehensively before making partition decisions
- Invest early in partition‑aware tooling and operational procedures
- Start with pilot implementations to validate patterns before production adoption
- Maintain flexibility as ESC service parity and capabilities evolve
- Focus on operational excellence and incremental improvement over time
Ready to Navigate Your ESC Journey?
As an AWS European Sovereign Cloud (ESC) Launch Partner, Nuvibit brings deep expertise in AWS Landing Zone architectures and sovereign cloud operations to help organizations navigate their ESC adoption journey.
The Nuvibit Terraform Collection (NTC) will support ESC from day zero, accelerating your ESC deployment from months to weeks with battle-tested, enterprise-grade Infrastructure as Code building blocks that adapt seamlessly to multi-partition AWS environments.
Through our strategic partnerships with leading system integrators in Germany, we provide comprehensive ESC implementation support that combines cloud-native expertise with local regulatory knowledge and enterprise integration capabilities.
- 🎯 Schedule your ESC strategy session - Book a free 30-minute meeting with our specialists
- 📧 Contact our experts directly: aws-esc@nuvibit.com
What You'll Get:
- ESC readiness assessment for your specific requirements
- Custom multi-partition architecture recommendations
- NTC implementation roadmap tailored to your timeline
- Regulatory compliance strategy aligned with your industry
