Network Edge
Issues deploying Equinix Connections
I have managed to use several Terraform modules without problems. I have used the Palo Alto CloudGenix VM modules, Palo Alto firewall modules, device link modules, and the Fabric Cloud Router module. All work okay on the latest version of equinix/equinix. However, as I dive into other connections I run into a lot of version issues.

For example, leveraging the cloud-router-2-port connection: https://registry.terraform.io/modules/equinix/fabric/equinix/latest/examples/cloud-router-2-port-connection This would be used to connect the cloud router to the PA firewall management interface. The documentation covers the following version:

```hcl
required_providers {
  equinix = {
    source  = "equinix/equinix"
    version = ">= 2.9.0"
  }
}
```

But the code doesn't work without this version:

```hcl
terraform {
  required_providers {
    equinix = {
      source  = "equinix/equinix"
      version = "~> 1.26.0"
    }
  }
}
```

When using new versions, this error occurs (I've redone this several times in my code base):

```
ephemeral.aws_secretsmanager_secret_version.equinix_iac_credentials: Opening...
ephemeral.aws_secretsmanager_secret_version.equinix_iac_credentials: Opening complete after 0s
module.equinix_deployment.module.fw-mgmt-to-fcr-connection-ch-1a.equinix_fabric_connection.this: Creating...
╷
│ Error: 400 Bad Request Code: EQ-3142558, Message: Json syntax error, please check request body,
│ Details: Unknown json property : aSide.accessPoint.router.package.code. Unexpected value ''
│
│   with module.equinix_deployment.module.fw-mgmt-to-fcr-connection-ch-1a.equinix_fabric_connection.this,
│   on ..\..\modules\cloud-router-2-port-connection\main.tf line 1, in resource "equinix_fabric_connection" "this":
│    1: resource "equinix_fabric_connection" "this" {
```

I can get past this error and deploy the connection with an older version, but then run into version issues when attempting to leverage the virtual-device-2-eia-connection: https://registry.terraform.io/modules/equinix/fabric/equinix/latest/examples/virtual-device-2-eia-connection Here the primary problem is that on older versions there is no virtual-device-connection resource, only fabric-device-connection. One thought is to completely separate the Fabric Cloud Router to port module.

Here is the original cloud-router-2-port-connection module.

main.tf:

```hcl
resource "equinix_fabric_connection" "this" {
  name      = var.connection_name
  type      = var.connection_type
  bandwidth = var.bandwidth

  notifications {
    type   = var.notifications_type
    emails = var.notifications_emails
  }

  a_side {
    access_point {
      type = "CLOUD_ROUTER"
      router {
        uuid = var.aside_fcr_uuid
      }
    }
  }

  z_side {
    access_point {
      type = var.zside_ap_type
      virtual_device {
        type = var.zside_vd_type
        uuid = var.zside_vd_uuid
      }
      interface {
        type = var.zside_interface_type
        id   = var.zside_interface_id
      }
      location {
        metro_code = var.zside_location
      }
    }
  }

  order {
    purchase_order_number = var.purchase_order_number
  }
}
```

variables.tf:

```hcl
variable "connection_name" {
  type        = string
  description = "Name of the Fabric connection"
}

variable "connection_type" {
  type        = string
  description = "Type of the Fabric connection (e.g., IP_VC)"
}

variable "bandwidth" {
  type        = number
  description = "Connection bandwidth in Mbps"
}

variable "notifications_type" {
  type        = string
  default     = "ALL"
  description = "Notification type"
}

variable "notifications_emails" {
  type        = list(string)
  description = "Emails for notifications"
}

variable "purchase_order_number" {
  type    = string
  default = ""
}

variable "aside_fcr_uuid" {
  type        = string
  description = "UUID of the FCR device"
}

variable "zside_ap_type" {
  type    = string
  default = "VD"
}

variable "zside_vd_type" {
  type    = string
  default = "EDGE"
}

variable "zside_vd_uuid" {
  type = string
}

variable "zside_interface_type" {
  type    = string
  default = "NETWORK"
}

variable "zside_interface_id" {
  type = number
}

variable "zside_location" {
  type = string
}
```

versions.tf:

```hcl
terraform {
  required_providers {
    equinix = {
      source  = "equinix/equinix"
      version = "~> 1.26.0"
    }
  }
}
```

module:

```hcl
module "fw-mgmt-to-fcr-connection-ch-1a" {
  # FCR Router to FW Management Interface Connection
  source = "../../modules/cloud-router-2-port-connection"

  connection_name       = "fcr-2-fw-mgmt-ch"
  connection_type       = "IP_VC"
  bandwidth             = 50
  notifications_type    = "ALL"
  notifications_emails  = var.notifications
  purchase_order_number = "mgmt-connection"

  # a-side Fabric Cloud Router
  aside_fcr_uuid = module.fcr_ch.id

  # z-side Palo Alto Firewall
  zside_ap_type        = "VD"     # Virtual Device
  zside_vd_type        = "EDGE"
  zside_vd_uuid        = module.pa_vm_ch.id
  zside_interface_type = "NETWORK"
  zside_interface_id   = 1        # Palo Alto Firewall Management Port
  zside_location       = "CH"     # metro code
}
```

Terraform Module Equinix Internet Access. Does it exist?
I scavenged the Terraform registry looking for an Equinix Internet Access Terraform module. I found an example of a connection to EIA; this is the only thing that comes up in the registry as EIA: https://github.com/equinix/terraform-equinix-fabric/tree/v0.22.0/examples/virtual-device-2-eia-connection But there is none that creates the EIA itself. Can anyone answer whether this may or may not be supported in Terraform, and if so, what the resource name for it would be? The GUI terminology doesn't always translate one for one, so maybe I'm missing something.

Edge Service and Virtual Circuit Traffic Summary
Hello everyone, We're using virtual circuits between Fabric ports and Edge virtual devices. It works fine, but it's still impossible to get link usage statistics for them as for other circuits. Any news about that missing option? Thanks 🙂

Building Highly Resilient Networks in Network Edge - Part 1 - Architecture
This is the first part of a series of posts highlighting the best practices for customers who desire highly resilient networks in Network Edge. This entry will focus on the foundational architecture of Plane A and Plane B in Network Edge and how these building blocks should be utilized for resiliency. For more information on Network Edge and device plane resiliency, please refer to the Equinix Docs page here.

Dual Plane Architecture

- Network Edge is built upon a standard data center redundancy architecture with multiple pods that have dedicated power supplies and a dedicated Top of Rack (ToR) switch.
- It consists of multiple compute planes, commonly referred to as Plane A and Plane B for design simplicity.
- Plane A connects to the Primary Fabric network and Plane B connects to the Secondary Fabric network.
- The most important concept for understanding Network Edge resiliency: the device plane determines which Fabric switch is used for device connections. Future posts will dive much deeper into the various ways that Network Edge devices connect to other devices, clouds and co-location.
- While referred to as Plane A and Plane B, in reality there are multiple compute planes in each NE pod; the actual number varies based on the size of the metro. This allows devices to be deployed in a manner where they are not co-mingled on the same compute plane, eliminating compute as a single point of failure.

Network Edge offers multiple resiliency options that can be summarized as device options and connection options. Device options are local and provide resiliency against failures at the compute and device level in the local metro; this is analogous to what the industry refers to as "High Availability (HA)". Connection resiliency is a separate option for customers that require additional resiliency with device connections (DLGs, VCs and EVP-LAN networks). This will be discussed in depth in separate sections.
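To make the connection-option idea concrete, a redundant virtual-connection design is typically just a pair of `equinix_fabric_connection` resources, one terminating on each device of a redundant pair so each Fabric network (Primary and Secondary) carries one circuit. The sketch below is illustrative only and is not from this post: all names, variables and values are hypothetical placeholders, and the schema mirrors common provider usage.

```hcl
# Hypothetical sketch: one virtual connection per device of a redundant pair,
# so a compute-plane or Fabric-network failure takes down at most one circuit.
# All names, variables and values below are placeholders.
resource "equinix_fabric_connection" "primary" {
  name      = "fcr-to-fw-primary"
  type      = "IP_VC"
  bandwidth = 50

  notifications {
    type   = "ALL"
    emails = ["noc@example.com"]
  }

  a_side {
    access_point {
      type = "CLOUD_ROUTER"
      router {
        uuid = var.fcr_uuid
      }
    }
  }

  z_side {
    access_point {
      type = "VD"
      virtual_device {
        type = "EDGE"
        uuid = var.primary_device_uuid # device on Plane A
      }
    }
  }
}

# Second connection, terminating on the Plane B device over the Secondary Fabric.
resource "equinix_fabric_connection" "secondary" {
  name      = "fcr-to-fw-secondary"
  type      = "IP_VC"
  bandwidth = 50

  notifications {
    type   = "ALL"
    emails = ["noc@example.com"]
  }

  a_side {
    access_point {
      type = "CLOUD_ROUTER"
      router {
        uuid = var.fcr_uuid
      }
    }
  }

  z_side {
    access_point {
      type = "VD"
      virtual_device {
        type = "EDGE"
        uuid = var.secondary_device_uuid # device on Plane B
      }
    }
  }
}
```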
It is common to combine both local resiliency and connection resiliency, but it is not required; ultimately it depends on the customer's requirements. Geo-redundancy is an architecture that expands on local resiliency by utilizing multiple metros to eliminate issues that may affect NE at the metro level (not covered in this post).

Single Devices

- Single devices have no resiliency for compute and device failures.
- The first single device is always connected to the Primary Compute Plane.
- Single devices always make connections over the Primary Fabric network.
- Single devices can convert to Redundant Devices via the anti-affinity feature.

Anti-Affinity Deployment Option

- By default, single devices have no resiliency. However, single devices can be placed in divergent compute planes. This is commonly called anti-affinity and is part of the device creation workflow.
- Checking the "Select Diverse From" box allows customers to add new devices that are resilient to each other. Customers can verify this by viewing their NE Admin Dashboard and sorting their devices by "Plane".
- This feature allows customers to convert a single-device install to Redundant Devices.
- By default, the first single device was deployed on the Primary Fabric; the actual compute plane is irrelevant until the 2nd device is provisioned. The 2nd device is deployed on the Secondary Fabric AND in a different compute plane than the first device.

Resilient Device Options

- These options provide local (intra-metro) resiliency to protect against hardware or power failures in the local Network Edge pod.
- By default, the two virtual devices are deployed in separate compute planes (A and B). In reality there are more than two compute planes, but they are distinct from each other.
- The primary device is connected to the Primary Fabric network and the secondary/passive device is connected to the Secondary Fabric network.

|  | Redundant Devices | Clustered Devices |
|---|---|---|
| Deployment | Two devices, both Active, appearing as two devices in the Network Edge portal. Both devices have all interfaces forwarding. | Two devices, only one is ever Active. The Passive (non-Active) device data plane is not forwarding. |
| WAN Management | Both devices get a unique L3 address that is active for WAN management. | Each node gets a unique L3 address for WAN management, as well as a Cluster address that connects to the active node (either 0 or 1). |
| Device Linking Groups | None are created at device inception. | Two are created by default, to share config synchronization and failover communication. |
| Fabric Virtual Connections | Connections can be built to one or both devices. | Single connections are built to a special VNI that connects to the Active Cluster node only. Customers can create optional, additional secondary connection(s). |
| Supports Geo-Redundancy? | Yes, Redundant devices can be deployed in different metros. | No, Clustered devices can only be deployed in the same metro. |
| Vendor Support | All vendors. | Fortinet, Juniper, NGINX and Palo Alto. |

The next post will cover the best practices for creating resilient device connections with Device Link Groups and can be found here.
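The redundant-device option described above can also be provisioned through Terraform. As a loose sketch only (the `equinix_network_device` resource exists in the equinix provider, but the device type, package, version, and other values below are illustrative placeholders; required fields vary by vendor image, so consult the provider documentation before use):

```hcl
# Hypothetical sketch of a redundant Network Edge device pair.
# For redundant pairs, the platform places the primary and secondary
# devices on diverse compute planes (Plane A / Plane B), as described above.
# All names, codes and values below are placeholders.
resource "equinix_network_device" "router_pair" {
  name           = "ne-router-primary"
  metro_code     = "CH"
  type_code      = "C8000V"             # vendor device type, illustrative
  package_code   = "network-essentials" # illustrative software package
  version        = "17.06.01a"          # illustrative OS version
  core_count     = 2
  term_length    = 1
  account_number = var.account_number
  notifications  = ["noc@example.com"]

  # Secondary device: deployed on the Secondary Fabric network and on a
  # different compute plane than the primary.
  secondary_device {
    name           = "ne-router-secondary"
    metro_code     = "CH"
    account_number = var.account_number
    notifications  = ["noc@example.com"]
  }
}
```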