Account Baseline

The Account Baseline feature in NTC Account Factory allows you to establish and maintain consistent configurations, guardrails, and security controls across multiple AWS accounts through comprehensive infrastructure as code.

Overview

Account Baselines ensure that all AWS accounts in your organization follow a consistent set of standards and configurations. Unlike the event-driven Account Lifecycle Management, baselines provide ongoing governance through CI/CD pipelines that apply Terraform code to maintain the desired state of your accounts.

How It Works

  1. Terraform code defines the desired configuration for accounts
  2. S3 buckets store the Terraform code and state files
  3. CodePipeline orchestrates the deployment process
  4. CodeBuild executes Terraform to apply the configuration
  5. DynamoDB tables protect against accidental deletion

[Diagram: NTC Account Factory Account Baseline]

Key Features

| Feature | Description | Benefits |
| --- | --- | --- |
| Scoped Deployment | Apply different baselines to specific accounts based on OU paths, names, or tags | Target configurations to the right environments |
| Multi-Region Support | Execute the same baseline content across multiple AWS regions with region-aware context | Maintain consistency across global infrastructure while respecting regional differences and main region concepts |
| Modular Design | Build baselines from reusable templates or custom code | Reduce duplication and maintenance overhead |
| Cross-Account Orchestration | Configure resources that require access to multiple accounts | Simplify complex cross-account dependencies |
| Scheduled Updates | Automatically reapply baselines at scheduled intervals | Enforce compliance and correct drift |
| State Management | Maintain Terraform state to detect and correct configuration drift | Ensure resources stay properly configured |

NTC Account Baseline Templates

The NTC Account Baseline Templates module provides a set of pre-built, reusable templates for common account configurations. These templates can be easily integrated into your Account Factory configuration to standardize governance across accounts.

Available Templates

The following templates are available in the NTC Account Baseline Templates module:

| Template | Description | When to Use |
| --- | --- | --- |
| iam_role | Creates a standardized IAM role with configurable trust policy, principal access, and permission settings | When you need consistent IAM roles across accounts for cross-account access, service roles, or EC2 instance profiles |
| aws_config | Sets up AWS Config recorders and rules | For continuous compliance monitoring |
| openid_connect | Configures OIDC integration for identity providers | For setting up an OpenID Connect (OIDC) identity provider in IAM with an assumable role and permissions for CI/CD systems like GitHub Actions, GitLab CI, or Spacelift |
| tfstate_backend | Creates secure Terraform state management infrastructure with S3 backend, KMS encryption, and configurable locking | For establishing centralized state storage across accounts with automated CI/CD integration, cost-effective locking options, and fine-grained access control |

Implementation

The NTC Account Baseline Templates can be implemented in your NTC Account Factory configuration through a few simple steps:

  1. Include the Templates Module

First, include the NTC Account Baseline Templates module in your Terraform configuration:

module "account_baseline_templates" {
  source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-account-baseline-templates?ref=X.X.X"

  account_baseline_templates = [
    {
      file_name     = "iam_monitoring_reader"
      template_name = "iam_role"
      iam_role_inputs = {
        role_name = "CloudWatch-CrossAccountSharingRole"
        # policy can be submitted directly as JSON or via data source aws_iam_policy_document
        policy_json         = data.aws_iam_policy_document.monitoring_reader.json
        role_principal_type = "AWS"
        # grant the monitoring account permission to assume the role in the member account
        role_principal_identifiers = ["123456789102"] # monitoring account id
      }
    },
    {
      file_name     = "iam_instance_profile"
      template_name = "iam_role"
      iam_role_inputs = {
        role_name = "ntc-ssm-instance-profile"
        # use 'policy_arn' to reference an aws managed policy
        policy_arn          = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
        role_principal_type = "Service"
        # grant the EC2 service permission to assume the role
        role_principal_identifiers = ["ec2.amazonaws.com"]
        # (optional) set to true to create an instance profile
        role_is_instance_profile = true
      }
    },
    {
      file_name     = "oidc_github"
      template_name = "openid_connect"
      openid_connect_inputs = {
        provider                  = "token.actions.githubusercontent.com"
        audience                  = "sts.amazonaws.com"
        role_name                 = "ntc-oidc-github-role"
        role_path                 = "/"
        role_max_session_in_hours = 1
        permission_boundary_arn   = ""
        permission_policy_arn     = "arn:aws:iam::aws:policy/AdministratorAccess"
        subject_list              = ["repo:GITHUB_ORGANIZATION_NAME/$${var.current_account_name}:*"]
        subject_list_encoded      = ""
      }
    },
    {
      file_name     = "oidc_spacelift"
      template_name = "openid_connect"
      openid_connect_inputs = {
        provider                  = "example.app.spacelift.io"
        audience                  = "example.app.spacelift.io"
        role_name                 = "ntc-oidc-spacelift-role"
        role_path                 = "/"
        role_max_session_in_hours = 1
        permission_boundary_arn   = ""
        permission_policy_arn     = "arn:aws:iam::aws:policy/AdministratorAccess"
        subject_list              = []
        subject_list_encoded      = <<EOT
flatten([
  [
    "space:SPACELIFT_SPACE_ID:stack:$${var.current_account_name}:*"
  ],
  [
    for subject in try(var.current_account_customer_values.additional_oidc_subjects, []) : "space:SPACELIFT_SPACE_ID:stack:$${subject}:*"
  ]
])
EOT
      }
    },
    {
      file_name     = "aws_config"
      template_name = "aws_config"
      aws_config_inputs = {
        config_log_archive_bucket_arn  = "CONFIG_LOG_ARCHIVE_BUCKET_ARN"
        config_log_archive_kms_key_arn = "CONFIG_LOG_ARCHIVE_KMS_KEY_ARN"
        # optional inputs
        config_recorder_name         = "ntc-config-recorder"
        config_delivery_channel_name = "ntc-config-delivery"
        config_iam_role_name         = "ntc-config-role"
        config_iam_path              = "/"
        config_delivery_frequency    = "One_Hour"
        # (optional) override the account baseline main region with the main region of security tooling
        # this is necessary when security tooling uses a different main region
        # omit to use the main region of the account baseline
        config_security_main_region = ""
      }
    },
    {
      file_name     = "tfstate_backend"
      template_name = "tfstate_backend"
      tfstate_backend_inputs = {
        # the name of the S3 Terraform / OpenTofu state bucket needs to be globally unique
        # use account baseline injected variables to dynamically generate unique names (e.g. account name as prefix)
        s3_bucket_name = "$${var.current_account_name}-tfstate"
        # WARNING: setting force_destroy to true will delete the S3 bucket and the Terraform / OpenTofu state files
        # it is set to true here for testing purposes; in production be aware of the risk of accidental deletion
        s3_bucket_force_destroy = true
        # state locking mechanism: 's3' (Terraform/OpenTofu 1.10.0+) or 'dynamodb' (backwards compatible)
        # s3 locking is more cost-effective for new deployments, dynamodb provides broader compatibility
        state_locking_mechanism = "s3"

        # optional: use an existing KMS key instead of creating a new one
        existing_kms_key_arn = ""
        # kms key settings (only used when creating a new key)
        kms_deletion_window_in_days = 30
        kms_key_rotation_enabled    = true
        kms_key_owners              = [] # additional KMS key administrators

        # grant access to the S3 bucket and KMS key to use the tfstate backend
        access_rules = [
          {
            name        = "TFstate Backend Access"
            description = "Grant access to the tfstate backend S3 bucket and KMS key"
            role_arns = [
              # grant access to the OIDC IAM role in the account where the S3 bucket is stored
              "arn:$${var.aws_partition}:iam::$${var.current_account_id}:role/ntc-oidc-github-role",
            ]
            # grant access only to specific object key prefixes (default: all object prefixes)
            allowed_prefixes = ["*"]
          }
        ]
      }
    },
  ]
}
  2. Reference Templates in Account Factory Configuration

After defining the templates, reference them in your NTC Account Factory configuration:

module "ntc_account_factory" {
  source = "github.com/nuvibit-terraform-collection/terraform-aws-ntc-account-factory?ref=X.X.X"

  account_factory_baseline_bucket_name = "ntc-account-factory-baseline"

  # Other account factory configuration...

  account_baseline_scopes = [
    {
      scope_name           = "workloads"
      terraform_binary     = "opentofu"
      terraform_version    = "1.8.5"
      aws_provider_version = "5.76.0"

      provider_default_tags = {
        ManagedBy       = "ntc-account-factory",
        BaselineScope   = "workloads",
        BaselineVersion = "1.3"
      }

      baseline_execution_role_name = "OrganizationAccountAccessRole"

      # Reference pre-defined templates
      baseline_terraform_files = [
        module.account_baseline_templates.account_baseline_terraform_files["iam_monitoring_reader"],
        module.account_baseline_templates.account_baseline_terraform_files["iam_instance_profile"],
        module.account_baseline_templates.account_baseline_terraform_files["aws_config"],
        module.account_baseline_templates.account_baseline_terraform_files["oidc_github"],
        module.account_baseline_templates.account_baseline_terraform_files["tfstate_backend"],
      ]

      baseline_regions     = ["us-east-1", "eu-central-1"]
      baseline_main_region = "eu-central-1"

      # Pass additional parameters to baseline templates (available as var.baseline_parameters)
      baseline_parameters_json = jsonencode({
        org_id               = "o-1234567890"
        connectivity_account = "123456789012"
        budget_limit         = "1000"
        ipam_pool_ids = {
          us_east_1    = "ipam-121412512341234"
          eu_central_1 = "ipam-121412512341234"
        }
      })

      # Import existing resources into baseline management
      baseline_import_resources = [
        {
          import_to                      = "module.baseline_us_east_1[0].aws_iam_role.legacy_role[0]"
          import_id                      = "ExistingLegacyRole"
          import_condition_account_names = ["prod-account-001"]
        }
      ]

      # Target specific accounts
      include_accounts_all = false
      include_accounts_by_ou_paths = [
        "/root/workloads/prod",
        "/root/workloads/dev"
      ]
    }
  ]
}
  3. Testing and Monitoring

Once your baseline configuration is deployed, you can monitor its status through:

  • AWS CodePipeline console to track deployment
  • CodeBuild logs for detailed execution information
  • Account-level resources to verify proper configuration

Advanced Configuration

Baseline Parameters

The baseline_parameters_json feature allows you to pass custom configuration data to your baseline templates, making them more dynamic and environment-specific. Parameters are passed as a JSON-encoded string and become available in templates as var.baseline_parameters.

Use Cases for Baseline Parameters

  • Organization Configuration: Pass organization ID, management account details, and organizational settings
  • Networking Information: Share IPAM pool IDs, VPC configurations, and connectivity details
  • Environment-Specific Settings: Configure different values for development, staging, and production environments
  • Budget and Cost Controls: Set account-specific budget limits and cost allocation tags
  • Security and Compliance: Pass security baselines, compliance requirements, and audit configurations

Example Configuration

baseline_parameters_json = jsonencode({
  # Organization details
  org_id                = "o-1234567890"
  management_account_id = "123456789012"

  # Networking configuration
  connectivity_account = "123456789012"
  ipam_pool_ids = {
    us_east_1    = "ipam-121412512341234"
    eu_central_1 = "ipam-121412512341234"
  }
  transit_gateway_ids = {
    us_east_1    = "tgw-0a1b2c3d4e5f6g7h8"
    eu_central_1 = "tgw-1b2c3d4e5f6g7h8i9"
  }

  # Environment-specific settings
  environment           = "production"
  budget_limit          = "5000"
  backup_retention_days = 30

  # Security configuration
  security_account    = "234567890123"
  log_archive_account = "345678901234"
  audit_roles = [
    "arn:aws:iam::234567890123:role/SecurityAuditRole",
    "arn:aws:iam::345678901234:role/ComplianceRole"
  ]
})

Using Parameters in Templates

Templates can access these parameters through the injected var.baseline_parameters variable:

# Example: Create VPC with IPAM integration
resource "aws_vpc" "main" {
  # the example parameter map keys use underscores (e.g. us_east_1) while
  # var.current_region uses hyphens, so the region key is normalized first
  ipv4_ipam_pool_id   = var.baseline_parameters["ipam_pool_ids"][replace(var.current_region, "-", "_")]
  ipv4_netmask_length = 24

  tags = {
    Environment = var.baseline_parameters["environment"]
    ManagedBy   = "ntc-account-factory"
  }
}

# Example: Environment-aware resource configuration
locals {
  is_production = var.baseline_parameters["environment"] == "production"
  backup_config = {
    retention_days = local.is_production ? var.baseline_parameters["backup_retention_days"] : 7
    frequency      = local.is_production ? "daily" : "weekly"
  }
}

# Example: Cross-account role assumption
data "aws_iam_policy_document" "cross_account_access" {
  statement {
    principals {
      type        = "AWS"
      identifiers = var.baseline_parameters["audit_roles"]
    }
    actions = ["sts:AssumeRole"]
  }
}

Resource Import Management

The baseline_import_resources feature enables you to import existing AWS resources into baseline management without recreating them. This is particularly useful when migrating from manual configurations to baseline-managed infrastructure.

Import Configuration

baseline_import_resources = [
  {
    # Terraform resource address where the resource should be imported
    import_to = "module.baseline_us_east_1[0].aws_iam_role.legacy_role[0]"
    # AWS resource identifier (e.g., role name, instance ID, etc.)
    import_id = "ExistingLegacyRole"
    # Optional: limit import to specific accounts (default: all accounts in scope)
    import_condition_account_names = ["prod-account-001", "staging-account-002"]
  },
  {
    import_to = "module.baseline_eu_central_1[0].aws_s3_bucket.existing_logs[0]"
    import_id = "existing-log-bucket-12345"
    # Import only in production accounts
    import_condition_account_names = ["prod-account-001"]
  }
]

Import Use Cases

  • Legacy Infrastructure: Import manually created resources into baseline management
  • Gradual Migration: Incrementally move existing resources under baseline control
  • Shared Resources: Import resources that exist across multiple accounts with different identifiers
  • Compliance Remediation: Bring non-compliant resources under standardized management

Best Practices for Resource Imports

  1. Plan Carefully: Test imports in non-production environments first
  2. State Validation: Verify that imported resources match your Terraform configuration exactly
  3. Gradual Rollout: Import resources incrementally rather than all at once
  4. Backup State: Always backup Terraform state before performing imports
  5. Account Targeting: Use import_condition_account_names to control which accounts are affected

Resource Address Format

The Account Factory generates a single baseline_contents module that contains all your baseline Terraform definitions. This module is then called once for each region specified in baseline_regions, with different regional context variables.

Baseline Module Architecture

The Account Factory creates a unified baseline architecture:

# Configuration example
baseline_regions = ["us-east-1", "eu-central-1", "eu-central-2"]
baseline_main_region = "us-east-1"

# This generates the following module calls:
# - module.baseline_us_east_1[0] (calls baseline_contents with us-east-1 context)
# - module.baseline_eu_central_1[0] (calls baseline_contents with eu-central-1 context)
# - module.baseline_eu_central_2[0] (calls baseline_contents with eu-central-2 context)

Key Architecture Concepts:

  • Single Source: One baseline_contents module contains all your baseline logic
  • Regional Execution: The same baseline content executes in each specified region
  • Context-Aware: Each execution receives region-specific injected variables
  • Main Region Priority: Global resources should be deployed only in the baseline_main_region

Module Structure and Naming

| Region | Module Call | Baseline Content Context |
| --- | --- | --- |
| us-east-1 | module.baseline_us_east_1[0] | var.current_region = "us-east-1", var.is_current_region_main_region = true |
| eu-central-1 | module.baseline_eu_central_1[0] | var.current_region = "eu-central-1", var.is_current_region_main_region = false |
| eu-central-2 | module.baseline_eu_central_2[0] | var.current_region = "eu-central-2", var.is_current_region_main_region = false |

Region-Aware Resource Deployment

Your baseline templates should use the injected variables to deploy resources appropriately:

# In your baseline templates - same logic executed in each region
# but different resources created based on context

# Global resources: Deploy only in main region to avoid conflicts
resource "aws_iam_role" "admin_role" {
  count = var.is_current_region_main_region ? 1 : 0

  name = "${var.current_account_name}-AdminRole"
  # IAM is global - only create once in main region
}

# Regional resources: Deploy in every region
resource "aws_s3_bucket" "regional_logs" {
  bucket = "${var.current_account_id}-logs-${var.current_region}"
  # S3 buckets are regional - create in each region
}

# Conditional regional resources: Deploy based on specific region logic
resource "aws_vpc" "main" {
  count = contains(["us-east-1", "eu-central-1"], var.current_region) ? 1 : 0

  cidr_block = var.current_region == "us-east-1" ? "10.0.0.0/16" : "10.1.0.0/16"
  # Deploy VPC only in specific regions with different CIDR blocks
}

Resource Address Patterns for Imports

The import_to field must specify the exact Terraform resource address including the regional module context:

# Pattern: module.baseline_{region_underscore}[0].resource_type.resource_name[index]

baseline_import_resources = [
  {
    # Import global IAM role in main region only
    import_to = "module.baseline_us_east_1[0].aws_iam_role.admin_role[0]"
    import_id = "ExistingAdminRole"
    # Only import in main region since IAM is global
  },
  {
    # Import regional S3 bucket - a separate import is needed for each region
    import_to = "module.baseline_us_east_1[0].aws_s3_bucket.regional_logs[0]"
    import_id = "existing-logs-us-east-1"
  },
  {
    # Import the same logical resource in a different region
    import_to = "module.baseline_eu_central_1[0].aws_s3_bucket.regional_logs[0]"
    import_id = "existing-logs-eu-central-1"
  }
]

Main Region Concept

The baseline_main_region is crucial for avoiding deployment conflicts:

  • Global Resources: IAM roles, policies, and other global AWS resources should only be created in the main region
  • Cross-Region References: Other regions can reference global resources created in the main region
  • Template Logic: Use var.is_current_region_main_region to control resource creation

# Example: Global resource in main region, regional references elsewhere
resource "aws_iam_role" "cross_account_role" {
  count = var.is_current_region_main_region ? 1 : 0
  name  = "CrossAccountRole"
}

# Reference the global role from any region
data "aws_iam_role" "cross_account_role" {
  count = var.is_current_region_main_region ? 0 : 1
  name  = "CrossAccountRole"
}

locals {
  # parentheses are required for a multi-line conditional expression in HCL
  cross_account_role_arn = (
    var.is_current_region_main_region ?
    aws_iam_role.cross_account_role[0].arn :
    data.aws_iam_role.cross_account_role[0].arn
  )
}

Count-Based Decommissioning

The [0] index is essential because the Account Factory uses Terraform's count parameter to control baseline deployment:

  • Normal Operation: count = 1 → Baseline content executes in each region, accessible via [0] index
  • Decommissioning: count = 0 → All regional module calls are disabled, resources are destroyed

This mechanism allows for safe baseline decommissioning across all regions simultaneously.
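
The generated wrapper can be pictured roughly like this (a hedged sketch of the pattern, not the factory's actual generated code; `decommission_baseline` and the module layout are illustrative assumptions):

module "baseline_us_east_1" {
  source = "./baseline_contents"
  # count = 1 during normal operation; switched to 0 to decommission the
  # baseline, which destroys all resources managed by this regional call
  count = var.decommission_baseline ? 0 : 1

  current_region                = "us-east-1"
  is_current_region_main_region = true
}

Because the module itself carries the count, a single flag flip removes the baseline from every region in one plan, which is why every resource address includes the `[0]` index.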

Custom Baseline Templates

Injected Variables

The NTC Account Factory CodePipeline injects several predefined variables into your baseline Terraform templates, providing essential account and deployment context. These variables are available in all baseline templates without any additional configuration.

| Variable | Type | Description |
| --- | --- | --- |
| var.current_region | string | The AWS region where the baseline is currently being deployed |
| var.main_region | string | The primary region designated for the account baseline |
| var.partition | string | The AWS partition where the account resides (e.g., "aws", "aws-cn", "aws-us-gov") |
| var.is_current_region_main_region | bool | Boolean flag indicating whether the current deployment region is the main region |
| var.current_account_id | string | The AWS account ID where the baseline is being applied |
| var.current_account_name | string | The name of the AWS account |
| var.current_account_email | string | The email address associated with the AWS account |
| var.current_account_ou_path | string | The organizational unit path where the account is located |
| var.current_account_tags | map | Key-value pairs of tags assigned to the account |
| var.current_account_alternate_contacts | list | List of alternate contact information for the account |
| var.current_account_customer_values | any | Custom values provided during account creation or configuration |
| var.baseline_parameters | any | Configuration parameters specific to the baseline scope |

The var.partition variable is particularly useful when working with AWS partitions other than the standard AWS commercial partition. It allows your baseline templates to construct proper ARNs for different AWS environments:

  • aws: Standard AWS commercial partition
  • aws-cn: AWS China partition
  • aws-us-gov: AWS GovCloud partition

While the pre-built templates cover many common scenarios, you may need to create custom baseline templates for unique requirements. The injected variables provide powerful capabilities for creating dynamic, account-aware configurations that adapt based on the deployment context.

Leveraging Injected Variables in Custom Templates

The injected variables enable you to create sophisticated baseline templates that automatically adapt to different accounts, regions, and organizational contexts:

# Example: Account-aware resource naming
resource "aws_s3_bucket" "logs" {
  bucket = "${var.current_account_id}-${var.current_region}-audit-logs"

  tags = merge(var.current_account_tags, {
    Purpose = "Audit Logging"
    Region  = var.current_region
  })
}

# Example: Region-specific deployments
resource "aws_iam_role" "cross_account_role" {
  # Only create in main region since IAM is global
  count = var.is_current_region_main_region ? 1 : 0

  name = "${var.current_account_name}-CrossAccountRole"
  # ... role configuration
}

# Example: Environment detection and partition-aware ARN construction
locals {
  environment = can(regex("/prod", var.current_account_ou_path)) ? "production" : "non-production"

  # Different settings based on environment
  backup_retention_days = local.environment == "production" ? 30 : 7
  monitoring_enabled    = local.environment == "production" ? true : false

  # Use customer values for additional customization
  custom_budget_limit = try(var.current_account_customer_values.budget_limit, "100")

  # Partition-aware ARN construction for cross-partition compatibility
  admin_role_arn = "arn:${var.partition}:iam::${var.current_account_id}:role/AdministratorRole"
}

# Example: Cross-partition compatible resource references
data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:${var.partition}:iam::123456789012:root"]
    }
    actions = ["sts:AssumeRole"]
  }
}

Simple Account Budget Template

For straightforward budget management, here's a simple template that uses injected variables for basic account-aware budgeting:

# files/simple_account_budget.tf

resource "aws_budgets_budget" "account_budget" {
  # Only create in main region to avoid duplicates
  count = var.is_current_region_main_region ? 1 : 0

  name              = "${var.current_account_name}-monthly-budget"
  budget_type       = "COST"
  limit_amount      = var.baseline_parameters["budget_limit"]
  limit_unit        = "USD"
  time_unit         = "MONTHLY"
  time_period_start = "2024-01-01_00:00"

  # 80% threshold notification
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = [var.current_account_email]
  }

  # 100% threshold notification
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = [var.current_account_email]
  }

  tags = merge(var.current_account_tags, {
    ManagedBy = "ntc-account-factory"
    Purpose   = "cost-monitoring"
  })
}

This simple template:

  • Uses var.current_account_name for unique budget naming
  • Gets budget limit from var.baseline_parameters["budget_limit"]
  • Sends notifications to var.current_account_email
  • Only deploys in the main region using var.is_current_region_main_region
  • Merges account tags using var.current_account_tags

You can configure different budget limits per scope in your Account Factory configuration:

account_baseline_scopes = [
  {
    scope_name = "production_accounts"

    baseline_parameters_json = jsonencode({
      budget_limit = "1000"
    })

    baseline_terraform_files = [
      {
        file_name = "simple_account_budget.tf"
        content   = file("${path.module}/files/simple_account_budget.tf")
      }
    ]

    include_accounts_by_ou_paths = ["/root/workloads/prod"]
  },
  {
    scope_name = "development_accounts"

    baseline_parameters_json = jsonencode({
      budget_limit = "200"
    })

    baseline_terraform_files = [
      {
        file_name = "simple_account_budget.tf"
        content   = file("${path.module}/files/simple_account_budget.tf")
      }
    ]

    include_accounts_by_ou_paths = ["/root/workloads/dev"]
  }
]

Account Budget Template with Dynamic Configuration

Here's an enhanced budget template that leverages injected variables for dynamic configuration based on account context:

  1. Create the Enhanced Terraform File

    # files/account_budget_dynamic.tf

    locals {
      # Determine environment from OU path
      environment = can(regex("/prod", var.current_account_ou_path)) ? "production" : (
        can(regex("/dev", var.current_account_ou_path)) ? "development" : "sandbox"
      )

      # Set budget limits based on environment and custom values
      default_budget_limits = {
        production  = "1000"
        development = "200"
        sandbox     = "50"
      }

      # Allow override via customer values, fallback to environment defaults
      budget_limit = try(
        var.current_account_customer_values.budget_limit,
        local.default_budget_limits[local.environment],
        "100"
      )

      # Environment-specific notification settings
      notification_emails = {
        production  = ["finance-prod@company.com", "team-lead@company.com"]
        development = ["finance-dev@company.com"]
        sandbox     = ["admin@company.com"]
      }

      # Use account email as fallback if environment-specific emails not defined
      budget_emails = try(
        var.current_account_customer_values.budget_notification_emails,
        local.notification_emails[local.environment],
        [var.current_account_email]
      )
    }

    resource "aws_budgets_budget" "account_budget" {
      # Only create in main region to avoid duplicates
      count = var.is_current_region_main_region ? 1 : 0

      name              = "${var.current_account_name}-monthly-budget"
      budget_type       = "COST"
      limit_amount      = local.budget_limit
      limit_unit        = "USD"
      time_unit         = "MONTHLY"
      time_period_start = "2024-01-01_00:00"

      # 80% threshold notification
      notification {
        comparison_operator        = "GREATER_THAN"
        threshold                  = 80
        threshold_type             = "PERCENTAGE"
        notification_type          = "ACTUAL"
        subscriber_email_addresses = local.budget_emails
      }

      # 100% threshold notification
      notification {
        comparison_operator        = "GREATER_THAN"
        threshold                  = 100
        threshold_type             = "PERCENTAGE"
        notification_type          = "ACTUAL"
        subscriber_email_addresses = local.budget_emails
      }

      # Forecasted 100% threshold
      notification {
        comparison_operator        = "GREATER_THAN"
        threshold                  = 100
        threshold_type             = "PERCENTAGE"
        notification_type          = "FORECASTED"
        subscriber_email_addresses = local.budget_emails
      }

      tags = merge(var.current_account_tags, {
        ManagedBy   = "ntc-account-factory"
        Environment = local.environment
        BudgetLimit = local.budget_limit
        Partition   = var.partition
      })
    }

    # Optional: Create CloudWatch alarm for budget exceeded
    # note: AWS/Billing metrics are only published in us-east-1
    resource "aws_cloudwatch_metric_alarm" "budget_alarm" {
      count = var.is_current_region_main_region ? 1 : 0

      alarm_name          = "${var.current_account_name}-budget-exceeded"
      comparison_operator = "GreaterThanThreshold"
      evaluation_periods  = 1
      metric_name         = "EstimatedCharges"
      namespace           = "AWS/Billing"
      period              = 86400
      statistic           = "Maximum"
      threshold           = local.budget_limit
      alarm_description   = "Budget exceeded for account ${var.current_account_name}"

      dimensions = {
        Currency = "USD"
      }

      tags = var.current_account_tags
    }
  2. Use in Account Factory Configuration

    account_baseline_scopes = [
      {
        scope_name = "all_workloads"

        baseline_terraform_files = [
          {
            file_name = "account_budget_dynamic.tf"
            content   = file("${path.module}/files/account_budget_dynamic.tf")
          }
        ]

        # Apply to all workload accounts
        include_accounts_by_ou_paths = [
          "/root/workloads/prod",
          "/root/workloads/dev",
          "/root/workloads/sandbox"
        ]

        baseline_regions     = ["us-east-1", "eu-central-1"]
        baseline_main_region = "us-east-1"
      }
    ]

Multi-Region Resource Template

This template demonstrates how to develop region-aware baseline content. The Account Factory executes the same baseline content in each region, but with different regional context through injected variables:

# files/multi_region_resources.tf
# This same template executes in: module.baseline_us_east_1[0], module.baseline_eu_central_1[0], etc.

# Global resources (created only in main region to avoid conflicts)
resource "aws_iam_role" "application_role" {
  count = var.is_current_region_main_region ? 1 : 0

  name = "${var.current_account_name}-ApplicationRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })

  tags = merge(var.current_account_tags, {
    CreatedIn = var.main_region
    IsGlobal  = "true"
  })
}

# Regional resources (created in each region where baseline executes)
resource "aws_s3_bucket" "regional_bucket" {
  bucket = "${var.current_account_id}-${var.current_region}-regional-data"

  tags = merge(var.current_account_tags, {
    Region       = var.current_region
    IsMainRegion = var.is_current_region_main_region
  })
}

# Cross-region resource references (non-main regions reference global resources)
data "aws_iam_role" "application_role" {
  count = var.is_current_region_main_region ? 0 : 1
  name  = "${var.current_account_name}-ApplicationRole"
}

# Use global role ARN in all regions
locals {
  # parentheses are required for a multi-line conditional expression in HCL
  application_role_arn = (
    var.is_current_region_main_region ?
    aws_iam_role.application_role[0].arn :
    data.aws_iam_role.application_role[0].arn
  )
}

# Regional instance profile using global role
resource "aws_iam_instance_profile" "application_profile" {
  name = "${var.current_account_name}-ApplicationProfile-${var.current_region}"

  # Extract the role name from the ARN for the instance profile
  role = element(split("/", local.application_role_arn), 1)

  tags = {
    Region = var.current_region
  }
}

These enhanced examples demonstrate how the injected variables enable you to create dynamic, intelligent baseline templates that automatically adapt to different deployment contexts while maintaining consistency across your AWS organization.

Cross-Account Orchestration

Account Baseline supports configuring resources that require access to multiple accounts. This is particularly useful when your baseline needs to manage networking resources (like Transit Gateway attachments) or DNS delegations that are centrally managed in dedicated accounts.

baseline_assume_role_providers = [
  {
    configuration_alias = "connectivity"
    role_arn            = "REPLACE_WITH_THE_ROLE_ARN_THAT_ALLOWS_BASELINE_TO_MANAGE_NETWORKING_RESOURCES" # local.ntc_parameters["connectivity"]["baseline_assume_role_arn"]
    session_name        = "ntc-account-factory"
  }
]

This configuration creates additional AWS provider aliases that can be used in your baseline Terraform code. For example, you might use the connectivity provider to:

  • Attach VPCs to Transit Gateways: Automatically connect new account VPCs to your central networking infrastructure
  • Create DNS subdomain delegations: Set up Route53 hosted zones and delegate subdomains from a central DNS account
  • Configure cross-account security group rules: Allow traffic between accounts through centrally managed security groups

Setting Up Cross-Account Access

To use cross-account orchestration, you need to create a role in the target account (e.g., connectivity account) that can be assumed by the baseline pipeline:

  1. Create the cross-account role in the target account:
# In your connectivity account
resource "aws_iam_role" "ntc_baseline" {
  name               = "ntc-baseline-role"
  assume_role_policy = data.aws_iam_policy_document.ntc_baseline_trust.json
}

data "aws_iam_policy_document" "ntc_baseline_trust" {
  statement {
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["ACCOUNT_FACTORY_BASELINE_ROLE_ARN"] # local.ntc_parameters["mgmt-account-factory"]["baseline_role_arns"]
    }
    actions = ["sts:AssumeRole"]
  }
}

# Grant specific permissions needed for baseline operations
resource "aws_iam_role_policy" "ntc_baseline" {
  name   = "ntc-baseline-permissions"
  role   = aws_iam_role.ntc_baseline.id
  policy = data.aws_iam_policy_document.ntc_baseline_permissions.json
}

data "aws_iam_policy_document" "ntc_baseline_permissions" {
  # permissions required to manage transit gateway attachments
  statement {
    sid    = "ManageTransitGatewayAttachments"
    effect = "Allow"
    actions = [
      "ec2:CreateTags",
      "ec2:DescribeTransitGatewayAttachments",
      "ec2:AssociateTransitGatewayRouteTable",
      "ec2:EnableTransitGatewayRouteTablePropagation",
      "ec2:GetTransitGatewayAttachmentPropagations",
    ]
    resources = [
      module.ntc_core_network_frankfurt.transit_gateway_arn,
      module.ntc_core_network_zurich.transit_gateway_arn
    ]
  }

  # permissions required to manage subdomain delegations
  statement {
    sid    = "ManageRoute53SubdomainDelegations"
    effect = "Allow"
    actions = [
      "route53:ChangeResourceRecordSets",
      "route53:ListResourceRecordSets",
      "route53:ListTagsForResource",
      "route53:GetHostedZone",
    ]
    resources = ["*"] # use "*" to allow access to all Route53 hosted zones, or list specific zone ARNs such as module.ntc_route53_dev.zone_arn
  }

  statement {
    effect = "Allow"
    actions = [
      "route53:ListHostedZones",
      "route53:GetChange",
    ]
    resources = ["*"]
  }

  # add additional permissions as needed
}

  2. Use the provider in your baseline templates:
# Example: Attach new account VPC to Transit Gateway
resource "aws_ec2_transit_gateway_vpc_attachment" "workload_attachment" {
  provider           = aws.connectivity
  subnet_ids         = [aws_subnet.workload_subnet.id]
  transit_gateway_id = var.transit_gateway_id
  vpc_id             = aws_vpc.workload_vpc.id

  tags = {
    Name = "${var.current_account_name}-attachment"
  }
}

# Example: Create subdomain delegation
resource "aws_route53_record" "subdomain_delegation" {
  provider = aws.connectivity
  zone_id  = var.parent_zone_id
  name     = "${var.current_account_name}.example.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.account_zone.name_servers
}

Account Scoping

You can precisely target which accounts receive specific baselines using several scoping mechanisms:

  1. Include by OU Path

     include_accounts_by_ou_paths = [
       "/root/workloads/prod",
       "/root/workloads/dev"
     ]

  2. Include by Account Name

     include_accounts_by_names = ["exclusive-account"]

  3. Include by Tags

     include_accounts_by_tags = [
       {
         key   = "AccountType"
         value = "workload"
       }
     ]

  4. Exclusion Options

     exclude_accounts_by_ou_paths = ["/root/workloads/sandbox"]
     exclude_accounts_by_names    = ["test-account"]
     exclude_accounts_by_tags = [
       {
         key   = "ExcludeFromBaseline"
         value = "true"
       }
     ]
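Several of these filters can be combined within a single scope; inclusion and exclusion rules then apply together. The sketch below is illustrative (the `scope_name` and tag values are examples, and the scope structure is assumed from the `account_baseline_scopes` shape shown elsewhere in this document):

```hcl
account_baseline_scopes = [
  {
    scope_name = "prod-workloads"

    # include all production workload accounts...
    include_accounts_by_ou_paths = ["/root/workloads/prod"]
    include_accounts_by_tags = [
      {
        key   = "AccountType"
        value = "workload"
      }
    ]

    # ...but never the designated test account
    exclude_accounts_by_names = ["test-account"]
  }
]
```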

Scheduled Updates

To ensure your baselines remain consistently applied, you can schedule automatic reapplication:

schedule_rerun_every_x_hours = 24  # Rerun daily

This feature is particularly useful for:

  • Correcting manual changes that cause drift
  • Ensuring new resources meet compliance requirements
  • Applying updates to resources that might be modified by other processes

Decommissioning

When you need to remove resources created by a baseline:

decommission_accounts_all = false
decommission_accounts_by_tags = [
  {
    key   = "AccountDecommission"
    value = "true"
  }
]

Pipeline Delay Options

To handle dependencies and ensure resources are available before applying the baseline:

pipeline_delay_options = {
  wait_for_seconds        = 120   # fixed delay before starting baseline deployment, useful for allowing AWS operations to propagate
  wait_retry_count        = 5     # number of times to retry dependency checks before failing; combined with wait_for_seconds this can wait up to 10 minutes
  wait_for_execution_role = true  # ensure the IAM execution role exists and is accessible in target accounts before proceeding
  wait_for_regions        = false # if true, check that all AWS regions specified in baseline_regions are enabled in the account
  wait_for_securityhub    = false # if true, verify AWS Security Hub is properly configured before proceeding with the baseline
  wait_for_guardduty      = false # if true, check that AWS GuardDuty is properly configured before applying the baseline
}

Differences from Account Lifecycle Management

While Account Lifecycle Management provides event-driven, reactive automation for specific moments, Account Baseline is focused on:

  • Deployment Method: CI/CD pipeline-based execution
  • Operational Mode: Comprehensive, ongoing governance
  • State Management: State-based approach that maintains and reconciles desired state
  • Execution Timing: Scheduled or on-demand execution
  • Account Targeting: Precision targeting based on OU paths, tags, or account names
  • Typical Use Cases: Standardized security controls, compliance requirements, organizational policies

Best Practices

  1. Start Simple: Begin with a few essential resources and gradually expand
  2. Test Thoroughly: Test baseline changes in a development environment first
  3. Version Control: Store baseline templates in version control
  4. Modular Design: Break down complex baselines into modular components
  5. Documentation: Document the purpose and requirements of each baseline component
  6. Tagging Strategy: Develop a consistent tagging strategy for resources
  7. Error Handling: Include proper error handling in your baseline code
  8. Idempotency: Ensure your Terraform code is idempotent to avoid issues with repeated applications
  9. Scoping: Use precise account targeting to avoid applying baselines to the wrong accounts
  10. Dependency Management: Consider dependencies between resources and baseline components
  11. Partition Awareness: Use the var.partition variable when constructing ARNs to ensure cross-partition compatibility

FAQ

How do Account Baselines differ from traditional Terraform deployments?

Account Baselines provide several advantages over traditional Terraform deployments:

  • Centralized management through a single point of configuration
  • Multiple account targeting without managing separate state files for each account
  • Automated deployment pipelines that eliminate manual terraform apply steps
  • Consistent provider configuration across all targeted accounts
  • Scheduled reapplication to maintain compliance and prevent drift
  • Coordinated multi-region deployments from a single baseline definition

Can I use both pre-defined and custom templates together?

Yes, you can combine pre-defined templates from the NTC Account Baseline Templates module with your custom templates in the same baseline. This allows you to leverage existing solutions for common tasks while still implementing custom logic for your specific requirements.

How do I handle secrets in my baseline templates?

For handling secrets in your baseline templates, you have several options:

  1. AWS Secrets Manager: Store secrets in Secrets Manager and retrieve them at runtime
  2. AWS Parameter Store: Use Parameter Store for configuration values, especially with SecureString parameters
  3. IAM Role Assumption: Use IAM roles with specific permissions rather than hardcoded credentials
  4. Environment Variables: Pass sensitive values as environment variables to CodeBuild jobs

Never hardcode secrets in your Terraform files. Instead, use:

data "aws_secretsmanager_secret_version" "example" {
  secret_id = "arn:${var.partition}:secretsmanager:region:account:secret:name"
}

locals {
  secret_value = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

How do I debug issues with my baseline deployment?

When troubleshooting baseline deployment issues:

  1. Check the CodeBuild logs for detailed error messages
  2. Verify that the execution role has the necessary permissions
  3. Inspect the Terraform plan output for expected changes
  4. Check for timeouts or connectivity issues in cross-account operations
  5. Verify that the resources defined in your baseline are valid for all targeted regions
  6. Test your Terraform templates locally before adding them to your baseline

Can I use OpenTofu instead of Terraform for my baselines?

Yes, NTC Account Factory supports both Terraform and OpenTofu. To use OpenTofu:

account_baseline_scopes = [
  {
    scope_name        = "workloads"
    terraform_binary  = "opentofu" # specify OpenTofu instead of Terraform
    terraform_version = "1.8.5"    # specify the OpenTofu version
    # ...other configuration...
  }
]

What is the var.partition variable and when should I use it?

The var.partition variable represents the AWS partition where your account resides. AWS has different partitions for different environments:

  • aws: Standard AWS commercial partition (most common)
  • aws-cn: AWS China partition
  • aws-us-gov: AWS GovCloud partition

You should use var.partition when constructing ARNs in your baseline templates to ensure they work correctly across different AWS partitions:

# Instead of hardcoding "aws"
role_arn = "arn:aws:iam::${var.current_account_id}:role/MyRole"

# Use the partition variable
role_arn = "arn:${var.partition}:iam::${var.current_account_id}:role/MyRole"

This is especially important if your organization operates in multiple AWS partitions or if you're designing templates that need to work across different AWS environments.

How do I choose between S3 and DynamoDB locking for the tfstate_backend template?

The choice between S3 and DynamoDB locking depends on your specific requirements:

Use S3 locking when:

  • You're using Terraform/OpenTofu 1.10.0 or later

Use DynamoDB locking when:

  • You need compatibility with older Terraform/OpenTofu versions
Warning: DynamoDB-based locking is deprecated and will be removed in a future minor version.

The template automatically creates the appropriate locking mechanism based on your state_locking_mechanism setting. Both options provide the same level of state protection and concurrent access prevention.
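As a sketch, selecting the mechanism via the `state_locking_mechanism` setting mentioned above might look like the following (the exact surrounding template configuration is assumed, not shown here):

```hcl
# Illustrative: choose the locking mechanism for the tfstate_backend template.
# "s3" requires Terraform/OpenTofu 1.10.0 or later; "dynamodb" supports older
# versions but is deprecated.
state_locking_mechanism = "s3"
```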

How do I use baseline_parameters_json to pass custom data to my templates?

The baseline_parameters_json feature allows you to pass custom configuration data to your baseline templates. Parameters are JSON-encoded and become available as var.baseline_parameters in your templates:

# In your Account Factory configuration
baseline_parameters_json = jsonencode({
  environment  = "production"
  budget_limit = "5000"
  vpc_cidr     = "10.0.0.0/16"
})

# In your baseline templates
resource "aws_budgets_budget" "account_budget" {
  name         = "${var.current_account_name}-budget"
  limit_amount = var.baseline_parameters["budget_limit"]
  # ...other configuration
}

This is particularly useful for:

  • Environment-specific configurations
  • Organization-wide settings (IPAM pools, Transit Gateway IDs)
  • Account-specific customizations (budget limits, backup policies)
  • Cross-account resource references

When should I use baseline_import_resources?

Use baseline_import_resources when you need to bring existing AWS resources under baseline management without recreating them:

Common scenarios:

  • Legacy Migration: Import manually created resources into baseline control
  • Gradual Adoption: Incrementally move existing infrastructure to baseline management
  • Compliance Remediation: Standardize non-compliant resources without downtime
  • Shared Resources: Manage resources that exist with different configurations across accounts

Example:

baseline_import_resources = [
  {
    import_to                      = "module.baseline_us_east_1[0].aws_iam_role.legacy_admin[0]"
    import_id                      = "LegacyAdminRole"
    import_condition_account_names = ["prod-account-001"]
  }
]

Important considerations:

  • Test imports in non-production environments first
  • Ensure your Terraform configuration exactly matches the existing resource
  • Use import_condition_account_names to control which accounts are affected
  • Always backup your Terraform state before performing imports

Can I use both baseline_parameters_json and baseline_import_resources together?

Yes, both features work independently and can be used together in the same baseline scope. They serve different purposes:

  • baseline_parameters_json: Provides dynamic configuration data to your templates
  • baseline_import_resources: Imports existing resources into Terraform state management

A common pattern is to use parameters to configure imported resources based on their environment or account context:

account_baseline_scopes = [
  {
    baseline_parameters_json = jsonencode({
      environment    = "production"
      legacy_cleanup = true
    })

    baseline_import_resources = [
      {
        import_to = "aws_iam_role.legacy_role[0]"
        import_id = "LegacyRole"
      }
    ]

    # Templates can access both imported resources and parameters
    baseline_terraform_files = [
      # a template can use var.baseline_parameters["legacy_cleanup"] to decide
      # whether to modify imported resources
    ]
  }
]
```