Banks, payment processors, healthcare platforms, and federal
contractors all hit the same problem: their workloads span multiple
clouds, and their compliance posture forbids any of that traffic
from touching the public internet. This recipe is the canonical
Fabric answer — direct private connectivity to AWS Direct Connect,
Azure ExpressRoute, and Google Cloud Interconnect from a single
Cloud Router, with route filters and aggregation policies enforced
at the Fabric layer.
## The problem
Your compute lives in AWS. Your analytics lives in Azure. Your AI training jobs live in GCP. Your auditor lives in your inbox. Public-internet egress to any of these is a non-starter for regulated workloads — it puts data in transit on untrusted hops, makes data residency hard to prove, and creates an attack surface your CISO can’t justify. The hyperscalers solve this individually (Direct Connect, ExpressRoute, Cloud Interconnect), but stitching them together leaves you with three separate vendor relationships, three failure modes, and no single place to enforce routing policy. You need one routing brain, multiple cloud reaches, redundant metros, and a single audit-grade route policy.

## The architecture
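The connection matrix (three clouds across two metros, six logical connections) can be sketched as Terraform locals. This is an illustrative fragment, not part of the recipe itself — the metro codes assume `DC` (Ashburn/IAD) and `DA` (Dallas/DFW) as the Fabric metro identifiers:

```hcl
# 3 clouds x 2 metros = 6 logical connections, each deployed redundant.
locals {
  metros = ["DC", "DA"] # Fabric metro codes for IAD and DFW
  clouds = ["aws", "azure", "gcp"]

  # Cross product used as the for_each key set for the connections.
  connections = {
    for pair in setproduct(local.clouds, local.metros) :
    "${pair[0]}-${pair[1]}" => { cloud = pair[0], metro = pair[1] }
  }
}
```

Pinning to a single metro (see the variants below) amounts to shrinking `local.metros` to one entry.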
### Required provider packages

| Package | Purpose |
|---|---|
| `equinix/fabric-cloud-router` | Two routers, IAD + DFW. Both BASIC package, BGP-redundant. |
| `equinix/fabric-connection` | Six connections (3 clouds × 2 metros), each redundant. |
| `equinix/route-filter` | Per-cloud prefix filters. Inbound and outbound rules. |
| `equinix/route-aggregation` | Aggregation policies for clean upstream BGP advertisements. |
| `aws/direct-connect` | AWS-side Direct Connect Gateway and VIF. |
| `azure/expressroute` | Azure-side ExpressRoute Circuit + Authorization. |
| `gcp/cloud-interconnect` | GCP-side Partner Interconnect Attachment. |
### Add the packages
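The commands for this step are not reproduced above. As a sketch, assuming each package in the table is added with the same `equinix-dev add` invocation this recipe uses later for `oracle/fastconnect` (exact flags may differ):

```shell
# Illustrative only — mirrors the `equinix-dev add` pattern shown in the
# variants section; one invocation per required package.
equinix-dev add equinix/fabric-cloud-router
equinix-dev add equinix/fabric-connection
equinix-dev add equinix/route-filter
equinix-dev add equinix/route-aggregation
equinix-dev add aws/direct-connect
equinix-dev add azure/expressroute
equinix-dev add gcp/cloud-interconnect
```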
## Terraform recipe (excerpted)
The full recipe runs ~250 lines. The most important fragment — the two FCRs plus the three primary connections out of IAD with route filters — is below.
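The excerpt itself did not survive extraction. The fragment below is a sketch of the IAD half only, assuming the `equinix` Terraform provider's `equinix_fabric_cloud_router`, `equinix_fabric_connection`, and route-filter resources; attribute blocks are abbreviated, and all names, CIDRs, and variables are placeholders rather than values from the original recipe:

```hcl
# Sketch: one FCR in IAD plus the AWS primary connection and an inbound
# route filter. The Azure and GCP connections follow the same shape, and
# the DFW router mirrors this block with metro_code = "DA".
resource "equinix_fabric_cloud_router" "iad" {
  name = "fcr-regulated-iad"
  type = "XF_ROUTER"
  location {
    metro_code = "DC" # IAD metro
  }
  package {
    code = "BASIC"
  }
  notifications {
    type   = "ALL"
    emails = ["netops@example.com"]
  }
  project {
    project_id = var.project_id
  }
  account {
    account_number = var.account_number
  }
}

resource "equinix_fabric_connection" "aws_iad_primary" {
  name      = "fcr-iad-to-aws-primary"
  type      = "IP_VC"
  bandwidth = 1000
  notifications {
    type   = "ALL"
    emails = ["netops@example.com"]
  }
  a_side {
    access_point {
      type = "CLOUD_ROUTER"
      router {
        uuid = equinix_fabric_cloud_router.iad.id
      }
    }
  }
  z_side {
    access_point {
      type               = "SP"
      authentication_key = var.aws_account_id # AWS account that accepts the VIF
      seller_region      = "us-east-1"
      profile {
        type = "L2_PROFILE"
        uuid = var.aws_service_profile_uuid
      }
      location {
        metro_code = "DC"
      }
    }
  }
}

# Per-cloud prefix filter: only approved AWS VPC ranges are accepted.
resource "equinix_fabric_route_filter" "aws_inbound" {
  name = "aws-inbound-prefixes"
  type = "BGP_IPv4_PREFIX_FILTER"
  project {
    project_id = var.project_id
  }
}

resource "equinix_fabric_route_filter_rule" "aws_vpc" {
  route_filter_id = equinix_fabric_route_filter.aws_inbound.id
  prefix          = "10.10.0.0/16" # approved AWS VPC CIDR (placeholder)
  prefix_match    = "orlonger"
}

resource "equinix_fabric_connection_route_filter" "aws_attach" {
  connection_id   = equinix_fabric_connection.aws_iad_primary.id
  route_filter_id = equinix_fabric_route_filter.aws_inbound.id
  direction       = "INBOUND"
}
```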
## Compliance gates
For a real regulated deployment, these gates must pass before any `apply`:
| Gate | Owner |
|---|---|
| AWS account in correct OU | Cloud-platform team |
| Azure subscription in landing zone | Cloud-platform team |
| GCP project under correct folder | Cloud-platform team |
| Route filter rules reviewed | Network engineering |
| Audit logging enabled on the FCRs | InfoSec |
| Data classification tag set | Compliance / Legal |
| Change ticket in approved state | Change management |
| Two-person approval recorded | InfoSec |
`equinix-dev preflight --policy compliance` runs all of these and emits a JSON report suitable for SOC2 / ISO 27001 evidence.
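The report's schema is not documented here. Purely as an illustration of the kind of gate-per-entry shape that works as audit evidence — every field name below is an assumption, not the tool's actual output:

```json
{
  "policy": "compliance",
  "run_at": "2025-01-15T10:32:00Z",
  "gates": [
    { "gate": "aws-account-in-correct-ou", "owner": "cloud-platform", "status": "pass" },
    { "gate": "route-filter-rules-reviewed", "owner": "network-engineering", "status": "pass" },
    { "gate": "two-person-approval-recorded", "owner": "infosec", "status": "pass" }
  ],
  "result": "pass"
}
```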
## Variants
### Single-metro instead of redundant
Drop the `for_each` over `local.metros` and pin to one metro. Loses HA but halves Fabric port spend.

### Add a fourth cloud (Oracle, IBM, Alibaba)
Run `equinix-dev add oracle/fastconnect` (or `ibm/direct-link`, `alibaba/express-connect`) and replicate the connection + route-filter pair. The Fabric side is provider-agnostic.

### Drop redundancy, run active-active across metros
Set `redundancy.priority = "PRIMARY"` on both metro connections and use BGP local-preference instead. Saves the standby cost in exchange for traffic-engineering complexity.

## Next
- Private AI inference path: the same FCR pattern, but reaching a GPU partner instead of a cloud.
- Distributed AI observability: add Fabric Streams to feed Datadog / Grafana the route and metrics events from these connections.