Build a private path from your enterprise origin to a GPU partner — Lambda, CoreWeave, or Crusoe — through Equinix Fabric in IAD, with Network Edge enforcing traffic shaping. Plan-only by default.
This is the canonical Distributed AI use case. It assembles five real
Equinix primitives — Fabric Cloud Router, Fabric Connection, Network
Edge, Fabric Streams, and the Fabric MCP server — into one
plan-only flow. The Terraform shown here resolves to real resources
in terraform-provider-equinix v4.15 and the
terraform-equinix-fabric v0.28.1 module.
You run an AI workload — RAG over enterprise documents, batch
fine-tuning, real-time inference — and you need GPU capacity that
your hyperscaler can’t price competitively. The neoclouds (Lambda,
CoreWeave, Crusoe, Nebius) have what you need, but they live behind
public IPs and untrusted egress paths. For regulated, sovereign, or
latency-sensitive workloads, that’s not acceptable.You need a private path: enterprise origin to GPU compute, with
no public internet hop, observable end-to-end, and shaped to your
cost and compliance constraints.
The Fabric Cloud Router (FCR) is the routing brain. The
Network Edge device (a virtualized firewall like Fortinet or
Palo Alto) is the policy plane — rate limits, traffic shaping, DPI
where required. Fabric Streams exports metrics and route events to
your observability stack so you can prove the path stays healthy.
The packages compile to real Terraform. This is the resolved HCL the
CLI stages into .equinix-dev/terraform/main.tf:
```hcl
terraform {
  required_providers {
    equinix = {
      source  = "equinix/equinix"
      version = "~> 4.15"
    }
  }
}

provider "equinix" {
  client_id     = var.equinix_client_id
  client_secret = var.equinix_client_secret
}

# Fabric Cloud Router — the routing brain in Ashburn.
module "fcr_iad" {
  source  = "equinix/fabric/equinix"
  version = "0.28.1"

  cloud_router_name                = "fcr-private-ai-iad"
  cloud_router_metro_code          = "DC"    # Ashburn / IAD
  cloud_router_package             = "BASIC" # 2 Gbps, sufficient for inference paths
  cloud_router_account_num         = var.equinix_account_number
  cloud_router_notification_emails = ["platform@example.com"]
}

# Connection from your AWS VPC into the FCR.
resource "equinix_fabric_connection" "aws_to_fcr" {
  name      = "private-ai-iad-aws-to-fcr"
  type      = "EVPL_VC"
  bandwidth = 1000 # 1 Gbps

  redundancy { priority = "PRIMARY" }

  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }

  a_side {
    access_point {
      type = "COLO"
      port { uuid = var.aws_dx_port_uuid }
    }
  }

  z_side {
    access_point {
      type = "CLOUD_ROUTER"
      router { uuid = module.fcr_iad.cloud_router_id }
    }
  }
}

# Connection from the FCR out to the GPU partner's Fabric service profile.
data "equinix_fabric_service_profiles" "lambda_iad" {
  filter {
    property = "/name"
    operator = "LIKE"
    values   = ["%Lambda Labs Private AI%"]
  }
}

resource "equinix_fabric_connection" "fcr_to_lambda" {
  name      = "private-ai-iad-fcr-to-lambda"
  type      = "IP_VC"
  bandwidth = 1000

  redundancy { priority = "PRIMARY" }

  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }

  a_side {
    access_point {
      type = "CLOUD_ROUTER"
      router { uuid = module.fcr_iad.cloud_router_id }
    }
  }

  z_side {
    access_point {
      type          = "SP"
      profile       { uuid = data.equinix_fabric_service_profiles.lambda_iad.data[0].uuid }
      location      { metro_code = "DC" }
      seller_region = "us-east-1"
    }
  }
}

# Network Edge VNF — Fortinet FortiGate with traffic-shaping ACL template.
resource "equinix_network_device" "fortigate_iad" {
  name           = "fortigate-private-ai-iad"
  metro_code     = "DC"
  type_code      = "FG" # FortiGate
  package_code   = "VM02"
  notifications  = ["platform@example.com"]
  hostname       = "fg-private-ai-iad"
  account_number = var.equinix_account_number
  version        = "7.4.4"
  core_count     = 2
  term_length    = 1

  ssh_key {
    username = "platform"
    key_name = var.fortigate_ssh_key_name
  }
}
```
Variables (equinix_client_id, aws_dx_port_uuid, etc.) live in
.env.local and are staged automatically by equinix-dev add. The
CLI never echoes secrets to stdout.
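For reference, the variable declarations the resolved HCL expects look roughly like this. The variable names come from the references in the HCL above; the types, `sensitive` flags, and the `variables.tf` filename are assumptions:

```hcl
# variables.tf — sketch of the inputs the staged plan references.
variable "equinix_client_id" {
  type      = string
  sensitive = true # API credential, never echoed by the CLI
}

variable "equinix_client_secret" {
  type      = string
  sensitive = true
}

variable "equinix_account_number" {
  type = string
}

variable "aws_dx_port_uuid" {
  type = string # the Fabric port fronting your AWS Direct Connect
}

variable "fortigate_ssh_key_name" {
  type = string # SSH key registered with Network Edge
}
```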
When you ask Claude / Cursor / your runtime “design a private AI
inference path in IAD to Lambda,” the agent walks through this
sequence on the Fabric MCP at mcp.equinix.com/fabric: three reads,
one blocked write. The agent surfaces the plan + the
preflight gate list to you, then waits for human confirmation. Even
if the agent decides to push, the gateway refuses.
Stand up two FCRs — one in DC (Ashburn / IAD), one in DA (Dallas /
DFW) — and let BGP do the work. The same module call with
cloud_router_metro_code = "DA" produces the secondary path.
Connections become redundant (redundancy.priority = "SECONDARY").
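A minimal sketch of the Dallas leg, assuming the same module inputs as the Ashburn FCR above. The `fcr_dfw` name, the mirrored connection, and the `aws_dx_port_uuid_secondary` variable (a second physical port for the redundant path) are illustrative:

```hcl
module "fcr_dfw" {
  source  = "equinix/fabric/equinix"
  version = "0.28.1"

  cloud_router_name                = "fcr-private-ai-dfw"
  cloud_router_metro_code          = "DA" # Dallas / DFW
  cloud_router_package             = "BASIC"
  cloud_router_account_num         = var.equinix_account_number
  cloud_router_notification_emails = ["platform@example.com"]
}

# Secondary connection mirroring aws_to_fcr, landing on the Dallas FCR.
resource "equinix_fabric_connection" "aws_to_fcr_secondary" {
  name      = "private-ai-dfw-aws-to-fcr"
  type      = "EVPL_VC"
  bandwidth = 1000

  redundancy { priority = "SECONDARY" }

  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }

  a_side {
    access_point {
      type = "COLO"
      port { uuid = var.aws_dx_port_uuid_secondary } # assumed second port
    }
  }

  z_side {
    access_point {
      type = "CLOUD_ROUTER"
      router { uuid = module.fcr_dfw.cloud_router_id }
    }
  }
}
```

With both FCRs peered to the same origin, BGP path selection handles failover; no traffic engineering is required beyond the priority markings.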
CoreWeave or Crusoe instead of Lambda
Swap lambda/gpu-cloud for coreweave/gpu-cloud or
crusoe/gpu-cloud. The package shape is identical; only the
service_profile lookup name changes.
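Concretely, the only HCL that changes is the service-profile lookup. The exact profile-name pattern below is an assumption — confirm the partner's published profile name in the Fabric portal before planning:

```hcl
# Same lookup shape as lambda_iad; only the name pattern differs.
data "equinix_fabric_service_profiles" "coreweave_iad" {
  filter {
    property = "/name"
    operator = "LIKE"
    values   = ["%CoreWeave%"] # assumed pattern — verify the published profile name
  }
}
```

The `fcr_to_lambda` connection's `z_side` profile reference then points at this data source instead.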
Add observability via Fabric Streams
See Distributed AI observability
— the FCR + connections from this recipe attach to Fabric Streams
subscriptions to push metrics and route events into Datadog or
Grafana.
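As a rough sketch, the attachment could look like the following. The `equinix_fabric_stream` and `equinix_fabric_stream_subscription` resources exist in recent provider versions, but the attribute names, sink settings, and the `equinix_project_id` / `datadog_api_key` variables below are assumptions to verify against the provider documentation:

```hcl
resource "equinix_fabric_stream" "private_ai" {
  type        = "TELEMETRY_STREAM"
  name        = "private-ai-iad-stream"
  description = "FCR + connection metrics and route events"
  project = {
    project_id = var.equinix_project_id # assumed variable
  }
}

resource "equinix_fabric_stream_subscription" "datadog" {
  type        = "STREAM_SUBSCRIPTION"
  name        = "private-ai-iad-datadog"
  description = "Push stream events to Datadog"
  stream_id   = equinix_fabric_stream.private_ai.id
  sink = {
    type = "DATADOG" # sink type — assumed value, check supported sinks
    uri  = "https://api.datadoghq.com" # assumed endpoint
  }
}
```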