Documentation Index

Fetch the complete documentation index at: https://docs.equinix.dev/llms.txt

Use this file to discover all available pages before exploring further.

This is the canonical Distributed AI use case. It assembles five real Equinix primitives — Fabric Cloud Router, Fabric Connection, Network Edge, Fabric Streams, and the Fabric MCP server — into one plan-only flow. The Terraform shown here resolves to real resources in terraform-provider-equinix v4.15 and the terraform-equinix-fabric v0.28.1 module.

The problem

You run an AI workload — RAG over enterprise documents, batch fine-tuning, real-time inference — and you need GPU capacity that your hyperscaler can’t price competitively. The neoclouds (Lambda, CoreWeave, Crusoe, Nebius) have what you need, but they live behind public IPs and untrusted egress paths. For regulated, sovereign, or latency-sensitive workloads, that’s not acceptable. You need a private path: enterprise origin to GPU compute, with no public internet hop, observable end-to-end, and shaped to your cost and compliance constraints.

The architecture

                                                    Equinix Fabric (IAD)
  ┌──────────────────┐        ┌──────────────────────────────────────────┐        ┌────────────────┐
  │  Enterprise VPC  │ ─────▶ │  Fabric Cloud Router  ──▶  Network Edge  │ ─────▶ │  GPU partner   │
  │  AWS / Azure /   │  AWS   │  (BGP, redundant)   Fortinet / Palo Alto │  AWS   │  Lambda /      │
  │  on-prem         │   DX   │                                          │   DX   │  CoreWeave /   │
  └──────────────────┘        └──────────────────────────────────────────┘        │  Crusoe        │
            │                                                                     └────────────────┘
            │                                                                              │
            └──────────────── Fabric Streams (metrics + logs)  ───────────────────────────┘
                                            │
                                            ▼
                                    Datadog / Grafana

The Fabric Cloud Router (FCR) is the routing brain. The Network Edge device (a virtualized firewall like Fortinet or Palo Alto) is the policy plane — rate limits, traffic shaping, DPI where required. Fabric Streams exports metrics and route events to your observability stack so you can prove the path stays healthy.

Required provider packages

equinix/fabric-cloud-router

Regional virtual routing for Fabric networks. Real Terraform resource, real MCP tools.

equinix/fabric-connection

Private virtual connections to clouds, partners, service profiles, and customer endpoints.

equinix/network-edge-device

Virtual network functions on Network Edge: Fortinet, Palo Alto, Cisco, Juniper, BGP, ACL templates.

lambda/gpu-cloud

GPU partner endpoints. CoreWeave / Crusoe / Nebius are drop-in alternatives.

Add the packages

1. Initialize the project

equinix-dev init private-ai-inference-iad
cd private-ai-inference-iad

2. Add the Equinix surfaces

equinix-dev add equinix/fabric-cloud-router
equinix-dev add equinix/fabric-connection
equinix-dev add equinix/network-edge-device
equinix-dev add equinix/fabric-streams

3. Pick a GPU partner

Each of these adds a provider package targeting that partner's Fabric service profile in IAD.

equinix-dev add lambda/gpu-cloud

4. Inspect the staged Terraform

equinix-dev plan --metro IAD

The CLI prints what it would run. No terraform apply happens — apply is blocked by default until the readiness gates pass and a human confirms.

Terraform recipe

The packages compile to real Terraform. This is the resolved HCL the CLI stages into .equinix-dev/terraform/main.tf:
terraform {
  required_providers {
    equinix = {
      source  = "equinix/equinix"
      version = "~> 4.15"
    }
  }
}

provider "equinix" {
  client_id     = var.equinix_client_id
  client_secret = var.equinix_client_secret
}

# Fabric Cloud Router — the routing brain in Ashburn.
module "fcr_iad" {
  source  = "equinix/fabric/equinix"
  version = "0.28.1"

  cloud_router_name        = "fcr-private-ai-iad"
  cloud_router_metro_code  = "DC"          # Ashburn / IAD
  cloud_router_package     = "BASIC"       # 2 Gbps, sufficient for inference paths
  cloud_router_account_num = var.equinix_account_number
  cloud_router_notification_emails = ["platform@example.com"]
}

# Connection from your AWS VPC into the FCR.
resource "equinix_fabric_connection" "aws_to_fcr" {
  name      = "private-ai-iad-aws-to-fcr"
  type      = "EVPL_VC"
  bandwidth = 1000   # 1 Gbps
  redundancy { priority = "PRIMARY" }
  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }
  a_side {
    access_point {
      type = "COLO"
      port { uuid = var.aws_dx_port_uuid }
    }
  }
  z_side {
    access_point {
      type    = "CLOUD_ROUTER"
      router  { uuid = module.fcr_iad.cloud_router_id }
    }
  }
}

# Connection from the FCR out to the GPU partner's Fabric service profile.
data "equinix_fabric_service_profiles" "lambda_iad" {
  filter {
    property = "/name"
    operator = "LIKE"
    values   = ["%Lambda Labs Private AI%"]
  }
}

resource "equinix_fabric_connection" "fcr_to_lambda" {
  name      = "private-ai-iad-fcr-to-lambda"
  type      = "IP_VC"
  bandwidth = 1000
  redundancy { priority = "PRIMARY" }
  notifications {
    type   = "ALL"
    emails = ["platform@example.com"]
  }
  a_side {
    access_point {
      type   = "CLOUD_ROUTER"
      router { uuid = module.fcr_iad.cloud_router_id }
    }
  }
  z_side {
    access_point {
      type            = "SP"
      profile         { uuid = data.equinix_fabric_service_profiles.lambda_iad.data[0].uuid }
      location        { metro_code = "DC" }
      seller_region   = "us-east-1"
    }
  }
}

# Network Edge VNF — Fortinet FortiGate with traffic-shaping ACL template.
resource "equinix_network_device" "fortigate_iad" {
  name        = "fortigate-private-ai-iad"
  metro_code  = "DC"
  type_code   = "FG"            # FortiGate
  package_code = "VM02"
  notifications = ["platform@example.com"]
  hostname    = "fg-private-ai-iad"
  account_number = var.equinix_account_number
  version     = "7.4.4"
  core_count  = 2
  term_length = 1

  ssh_key {
    username = "platform"
    key_name = var.fortigate_ssh_key_name
  }
}
Variables (equinix_client_id, aws_dx_port_uuid, etc.) live in .env.local and are staged automatically by equinix-dev add. The CLI never echoes secrets to stdout.
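The staged declarations can be sketched as a variables.tf next to main.tf — the names match the references in the HCL above; the types and descriptions are assumptions:

```hcl
# Sketch of the staged variable declarations (variables.tf). Names match
# the references in main.tf; descriptions are illustrative.
variable "equinix_client_id" {
  type      = string
  sensitive = true
}

variable "equinix_client_secret" {
  type      = string
  sensitive = true
}

variable "equinix_account_number" {
  type = string
}

variable "aws_dx_port_uuid" {
  type        = string
  description = "UUID of the AWS Direct Connect port on the a_side"
}

variable "fortigate_ssh_key_name" {
  type        = string
  description = "Name of the SSH key already uploaded to Network Edge"
}
```

Marking the credentials sensitive keeps them out of plan output, matching the CLI's no-secrets-to-stdout behavior.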

MCP trace — what the agent actually does

When you ask Claude / Cursor / your runtime “design a private AI inference path in IAD to Lambda,” the agent walks through this sequence on the Fabric MCP at mcp.equinix.com/fabric:
// 1. Discover available router packages in IAD.
{
  "tool": "search_router",
  "arguments": { "metro_code": "DC", "package": "BASIC" },
  "result": {
    "routers": [
      { "package": "BASIC", "throughput_gbps": 2,  "monthly_usd": 145 },
      { "package": "ADVANCED", "throughput_gbps": 5, "monthly_usd": 410 }
    ]
  }
}

// 2. Inspect the GPU partner's service profile.
{
  "tool": "search_service_profile",
  "arguments": { "metro_code": "DC", "name_like": "Lambda" },
  "result": {
    "profiles": [
      { "uuid": "fa…", "name": "Lambda Labs Private AI", "type": "L3_PROFILE" }
    ]
  }
}

// 3. Look up the proposed package's connection plan + price.
{
  "tool": "get_router_package",
  "arguments": { "package": "BASIC" },
  "result": {
    "connections_max": 4,
    "monthly_usd": 145,
    "billing_metro": "DC"
  }
}

// 4. Mutating tool — would create the FCR — but BLOCKED.
{
  "tool": "create_router",
  "arguments": { "metro_code": "DC", "package": "BASIC", "name": "fcr-private-ai-iad" },
  "result": {
    "status": "BLOCKED",
    "reason": "mutation_policy = blocked_by_default_requires_human_confirmation",
    "preflight_gates": [
      "equinix_account_authorized",
      "ssh_key_uploaded",
      "billing_owner_acknowledged"
    ]
  }
}
Three reads, one blocked write. The agent surfaces the plan + the preflight gate list to you, then waits for human confirmation. Even if the agent decides to push, the gateway refuses.

Readiness gates

Before this plan can move from plan to apply, every check below must pass:
Gate                                 Source
Equinix account authorized           EQUINIX_CLIENT_ID reachable
Account number set                   EQUINIX_ACCOUNT_NUMBER env
FortiGate SSH key uploaded           Equinix Network Edge inventory
GPU partner service profile valid    Fabric search_service_profile
Billing notification email set       Equinix portal config

equinix-dev preflight runs the full set and exits non-zero on any failure. CI uses the same exit code.
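In CI the gate is just the exit code — a minimal sketch, assuming only that the equinix-dev CLI described above is on the runner's PATH:

```shell
#!/usr/bin/env sh
# Stop the job before any plan is staged if a readiness gate fails:
# `equinix-dev preflight` exits non-zero on the first failing gate,
# and `set -e` propagates that as the job's exit code.
set -eu
equinix-dev preflight
equinix-dev plan --metro IAD
# apply stays blocked; a human confirms out-of-band.
```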

Variants

Stand up two FCRs — one in DC (Ashburn / IAD), one in DA (Dallas / DFW) — and let BGP do the work. The same module call with cloud_router_metro_code = "DA" produces the secondary path. Connections become redundant (redundancy.priority = "SECONDARY").
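The Dallas half of that variant can be sketched as a second module call — the same inputs as fcr_iad with the metro swapped; names are illustrative:

```hcl
# Hedged sketch: secondary FCR in Dallas for the dual-metro variant.
module "fcr_dal" {
  source  = "equinix/fabric/equinix"
  version = "0.28.1"

  cloud_router_name        = "fcr-private-ai-dal"
  cloud_router_metro_code  = "DA"      # Dallas / DFW
  cloud_router_package     = "BASIC"
  cloud_router_account_num = var.equinix_account_number
  cloud_router_notification_emails = ["platform@example.com"]
}
```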
Swap lambda/gpu-cloud for coreweave/gpu-cloud or crusoe/gpu-cloud. The package shape is identical; only the service_profile lookup name changes.
See Distributed AI observability — the FCR + connections from this recipe attach to Fabric Streams subscriptions to push metrics and route events into Datadog or Grafana.
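The observability attachment can be sketched as follows. This is heavily hedged: recent provider releases ship Fabric Streams resources, but the attribute shapes below are assumptions to verify against the provider docs, and var.equinix_project_id / var.datadog_api_key are illustrative variables not defined in this recipe:

```hcl
# Hedged sketch — verify resource schemas against the equinix provider docs.
resource "equinix_fabric_stream" "telemetry" {
  type        = "TELEMETRY_STREAM"
  name        = "private-ai-iad-telemetry"
  description = "Metrics + route events for the IAD private AI path"
  project {
    project_id = var.equinix_project_id   # illustrative variable
  }
}

resource "equinix_fabric_stream_subscription" "datadog" {
  type      = "STREAM_SUBSCRIPTION"
  name      = "private-ai-iad-to-datadog"
  stream_id = equinix_fabric_stream.telemetry.id
  sink {
    type = "DATADOG"
    uri  = "https://api.datadoghq.com"
    credential {
      type    = "API_KEY"
      api_key = var.datadog_api_key       # illustrative variable
    }
  }
}
```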

Next

Multi-cloud private interconnect

The same FCR pattern, but reaching AWS + Azure + GCP for regulated workloads.

Distributed AI observability

Telemetry across multiple metros via Fabric Streams to Datadog or Grafana.