
Banks, payment processors, healthcare platforms, and federal contractors all hit the same problem: their workloads span multiple clouds, and their compliance posture forbids any of that traffic from touching the public internet. This recipe is the canonical Fabric answer — direct private connectivity to AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect from a single Cloud Router, with route filters and aggregation policies enforced at the Fabric layer.

The problem

Your compute lives in AWS. Your analytics lives in Azure. Your AI training jobs live in GCP. Your auditor lives in your inbox. Public-internet egress to any of these is a non-starter for regulated workloads — it puts data in transit on untrusted hops, makes data residency hard to prove, and creates an attack surface your CISO can’t justify. The hyperscalers solve this individually (Direct Connect, ExpressRoute, Cloud Interconnect), but stitching them together leaves you with three separate vendor relationships, three failure modes, and no single place to enforce routing policy. You need one routing brain, multiple cloud reaches, redundant metros, and a single audit-grade route policy.

The architecture

                    Equinix Fabric (IAD primary, DFW secondary)
  ┌──────────────┐  ┌────────────────────────────────────────────────────────┐
  │ Origin VPC   │  │   FCR-IAD (BASIC)            ──┬── route filter ──▶  AWS Direct Connect
  │ or on-prem   │──┤                                ├── route filter ──▶  Azure ExpressRoute
  │              │  │   FCR-DFW (BASIC, secondary)   └── route filter ──▶  GCP Cloud Interconnect
  └──────────────┘  └────────────────────────────────────────────────────────┘

                                    └── BGP communities tag traffic by data classification
                                        (PCI / PHI / SOX / public)
Two FCRs in two metros for active/standby redundancy. Each FCR has three Fabric Connections out — one per cloud — with route filters that limit which prefixes get advertised in either direction. Route aggregation rolls up customer subnets into clean BGP announcements.
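That aggregation step can be sketched in Terraform. This is a hedged sketch, assuming the `equinix_fabric_route_aggregation` and `equinix_fabric_route_aggregation_rule` resources available in recent equinix provider releases; attribute names may vary by provider version, and the prefix is illustrative:

```hcl
# Sketch only: resource shapes assume a recent equinix provider release.
resource "equinix_fabric_route_aggregation" "customer" {
  type        = "BGP_IPv4_PREFIX_AGGREGATION"
  name        = "ra-customer-subnets"
  description = "Advertise one covering prefix instead of every /24."
}

# Sixteen contiguous /24s under 10.16.0.0 roll up into a single /20 announcement.
resource "equinix_fabric_route_aggregation_rule" "supernet" {
  route_aggregation_id = equinix_fabric_route_aggregation.customer.id
  name                 = "rule-10-16-supernet"
  prefix               = "10.16.0.0/20"
}
```

The effect is that the clouds see one clean covering prefix rather than a churn of per-subnet announcements.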

Required provider packages

equinix/fabric-cloud-router

Two routers, IAD + DFW. Both BASIC package, BGP-redundant.

equinix/fabric-connection

Six connections (3 clouds × 2 metros), each redundant.

equinix/route-filter

Per-cloud prefix filters. Inbound and outbound rules.

equinix/route-aggregation

Aggregation policies for clean upstream BGP advertisements.

aws/direct-connect

AWS-side Direct Connect Gateway and VIF.

azure/expressroute

Azure-side ExpressRoute Circuit + Authorization.

gcp/cloud-interconnect

GCP-side Partner Interconnect Attachment.

Add the packages

equinix-dev init multi-cloud-regulated
cd multi-cloud-regulated

# Equinix surfaces
equinix-dev add equinix/fabric-cloud-router
equinix-dev add equinix/fabric-connection
equinix-dev add equinix/route-filter
equinix-dev add equinix/route-aggregation

# Cloud on-ramps
equinix-dev add aws/direct-connect
equinix-dev add azure/expressroute
equinix-dev add gcp/cloud-interconnect

equinix-dev plan --metros IAD,DFW

Terraform recipe (excerpted)

The full recipe runs ~250 lines. The most important fragment — the two FCRs plus three primary connections out of IAD with route filters — is below.
locals {
  metros = {
    iad = "DC"
    dfw = "DA"
  }
  clouds = ["aws", "azure", "gcp"]
}

# Two Fabric Cloud Routers.
module "fcr" {
  for_each = local.metros
  source   = "equinix/fabric-equinix/fabric"
  version  = "0.28.1"

  cloud_router_name        = "fcr-multicloud-${each.key}"
  cloud_router_metro_code  = each.value
  cloud_router_package     = "BASIC"
  cloud_router_account_num = var.equinix_account_number
}

# Outbound route filter — only PCI-tagged customer subnets get
# advertised to the clouds. Internal management traffic stays inside.
resource "equinix_fabric_route_filter" "pci_only" {
  name        = "rf-pci-egress"
  description = "Allow only PCI-classified customer subnets to be advertised toward clouds."
  type        = "BGP_IPv4_PREFIX_FILTER"

  rules = [
    {
      action = "PERMIT"
      prefix = "10.16.0.0/12"
      ge     = 16
      le     = 24
    },
    {
      action = "DENY"
      prefix = "0.0.0.0/0"
    }
  ]
}

# AWS Direct Connect connection from FCR-IAD.
resource "equinix_fabric_connection" "aws_iad" {
  name      = "conn-aws-iad"
  type      = "IP_VC"
  bandwidth = 1000
  redundancy { priority = "PRIMARY" }
  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }
  a_side {
    access_point {
      type   = "CLOUD_ROUTER"
      router { uuid = module.fcr["iad"].cloud_router_id }
    }
  }
  z_side {
    access_point {
      type           = "SP"
      profile        { uuid = data.equinix_fabric_service_profiles.aws_iad.data[0].uuid }
      location       { metro_code = "DC" }
      seller_region  = "us-east-1"
      authentication_key = var.aws_account_id   # AWS account that accepts the hosted connection
    }
  }
}

# Bind the route filter to the AWS connection.
resource "equinix_fabric_route_filter_attachment" "aws_iad_pci" {
  route_filter_id = equinix_fabric_route_filter.pci_only.id
  connection_id   = equinix_fabric_connection.aws_iad.id
  direction       = "OUTBOUND"
}

# (Repeat the connection + filter pair for Azure ExpressRoute and
# GCP Cloud Interconnect, and again for the DFW secondary FCR.)
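The filter shown above covers the outbound direction only; the inbound direction uses the same filter-plus-attachment pair. A sketch, following the same resource shapes as the recipe above, where the 10.64.0.0/13 cloud-side supernet is an illustrative placeholder:

```hcl
# Sketch: accept only known cloud-side prefixes coming back from the providers.
# 10.64.0.0/13 is a placeholder; substitute your real VPC/VNet ranges.
resource "equinix_fabric_route_filter" "cloud_ingress" {
  name        = "rf-cloud-ingress"
  description = "Accept only known cloud-side prefixes from the providers."
  type        = "BGP_IPv4_PREFIX_FILTER"

  rules = [
    {
      action = "PERMIT"
      prefix = "10.64.0.0/13"
    },
    {
      action = "DENY"
      prefix = "0.0.0.0/0"
    }
  ]
}

resource "equinix_fabric_route_filter_attachment" "aws_iad_ingress" {
  route_filter_id = equinix_fabric_route_filter.cloud_ingress.id
  connection_id   = equinix_fabric_connection.aws_iad.id
  direction       = "INBOUND"
}
```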
The cloud-side resources (aws_dx_gateway, azurerm_express_route_circuit, google_compute_interconnect_attachment) come from the matching cloud Terraform providers and are staged automatically by the per-cloud equinix.dev packages. Cloud account numbers, VLAN IDs, and authentication keys flow between the two sides via Terraform outputs, so the whole topology lands in a single terraform apply.
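For orientation, the AWS side of that hand-off can be sketched with standard hashicorp/aws provider resources. The lookup of the hosted connection by name, the VLAN value, and var.fcr_asn are illustrative assumptions, not part of the recipe:

```hcl
# Sketch: accept the hosted connection Fabric provisions on the AWS side,
# then bind a private VIF to a Direct Connect gateway.
data "aws_dx_connection" "iad" {
  name = "conn-aws-iad"   # the Fabric-created hosted connection
}

resource "aws_dx_gateway" "multicloud" {
  name            = "dxgw-multicloud"
  amazon_side_asn = "64512"
}

resource "aws_dx_private_virtual_interface" "iad" {
  connection_id  = data.aws_dx_connection.iad.id
  dx_gateway_id  = aws_dx_gateway.multicloud.id
  name           = "vif-fcr-iad"
  vlan           = 100                # assigned during the hand-off
  address_family = "ipv4"
  bgp_asn        = var.fcr_asn        # the Fabric Cloud Router's ASN
}
```

The equivalent Azure and GCP hand-offs exchange an ExpressRoute authorization key and a Partner Interconnect pairing key, respectively.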

MCP trace

// 1. Find each cloud's service profile in IAD and DFW.
{
  "tool": "search_service_profile",
  "arguments": { "metro_code": "DC", "name_like": "AWS" },
  "result": { "profiles": [{ "uuid": "...", "name": "AWS Direct Connect" }] }
}

// 2. Validate prefix filter shape against the active filter rules.
{
  "tool": "validate_route_filter",
  "arguments": { "rules": [{ "action": "PERMIT", "prefix": "10.16.0.0/12" }] },
  "result": { "valid": true, "warnings": [] }
}

// 3. Mutating tool — create the connection — BLOCKED until preflight passes.
{
  "tool": "create_connection",
  "arguments": { "name": "conn-aws-iad", "...": "..." },
  "result": {
    "status": "BLOCKED",
    "reason": "mutation_policy = blocked_by_default_requires_human_confirmation",
    "preflight_gates": [
      "aws_account_id_validated",
      "azure_subscription_validated",
      "gcp_project_billing_enabled",
      "route_filter_audit_log_enabled",
      "compliance_classification_set"
    ]
  }
}

Compliance gates

For a real regulated deployment, these gates pass before any apply:
  Gate                                 Owner
  ───────────────────────────────────  ───────────────────
  AWS account in correct OU            Cloud-platform team
  Azure subscription in landing zone   Cloud-platform team
  GCP project under correct folder     Cloud-platform team
  Route filter rules reviewed          Network engineering
  Audit logging enabled on the FCRs    InfoSec
  Data classification tag set          Compliance / Legal
  Change ticket in approved state      Change management
  Two-person approval recorded         InfoSec
equinix-dev preflight --policy compliance runs all of these and emits a JSON report suitable for SOC2 / ISO 27001 evidence.

Variants

Single metro: drop the for_each over local.metros and pin to one metro. Loses HA but halves Fabric port spend.
A fourth cloud: add the matching package (equinix-dev add oracle/fastconnect, ibm/direct-link, or alibaba/express-connect) and replicate the connection + route-filter pair. The Fabric side is provider-agnostic.
Active/active: set redundancy.priority = "PRIMARY" on both metro connections and steer traffic with BGP local-preference instead. Saves the standby cost in exchange for traffic-engineering complexity.

Next

Private AI inference path

Same FCR pattern, but reaching a GPU partner instead of a cloud.

Distributed AI observability

Add Fabric Streams to feed Datadog / Grafana the route and metrics events from these connections.