API Integrations

Running Terraform from a restricted environment
When running Terraform to provision and manage Equinix Fabric, Metal, and Network Edge, you may want to run it from a restricted environment. Network filtering ACLs need a predictable set of IP ranges to permit. This discussion will help you identify the services, ports, and address ranges your Terraform runner environment needs to reach. We'll also cover alternative ways to run your Terraform configuration.

If your ACLs permit the Terraform runner outbound HTTPS (TCP 443) and the corresponding responses, that covers everything Terraform needs to start provisioning infrastructure on Equinix. Let's assume we don't have unrestricted access and dig a little further.

When you run `terraform init`, Terraform uses DNS (UDP/TCP 53) and HTTPS to download provider plugins, such as the Equinix Terraform provider. The default host for fetching these plugins is registry.terraform.io, managed by HashiCorp. It is the de facto hub for public providers and published Terraform modules, although you can run your own local registry service. DNS for the Terraform registry points to CloudFront, a CDN whose addresses may change. If this presents a problem, there are options to download (or mirror) the necessary plugins in advance and use locally distributed copies (a sample CLI configuration is sketched below): https://developer.hashicorp.com/terraform/cli/plugins

Similarly, DNS for api.equinix.com, the one base domain the Equinix Terraform provider needs for API access, resolves to Akamai, another CDN whose addresses may change or depend on where the request originates.

As a Terraform configuration grows, you'll likely want SSH access to the Metal and Network Edge nodes being provisioned so you can automate OS configuration. The SSH addresses vary depending on the metro where services are deployed. One way to keep the addresses predictable on Metal is to provision the servers with Elastic IP addresses. A good follow-up question for this discussion is which ranges are assigned to Network Edge devices and whether those addresses can be drawn from a predefined pool like Metal's Elastic IP addresses.

Terraform configurations typically include resources from multiple cloud providers, so the node where the configuration runs also needs access to those providers' APIs. We'll leave the network filters needed by the provisioned nodes themselves to another discussion.

Depending on your needs, cloud service providers offer managed services for Terraform or OpenTofu (a fork of Terraform that retains the original open-source license). These services can run your Terraform configuration predictably and reliably from a central location. HashiCorp provides the HCP service: https://developer.hashicorp.com/terraform/cloud-docs/run/run-environment

Alternatives include:
https://spacelift.io/
https://upbound.io
https://www.env0.com/
https://www.scalr.com/

You can run similar CI/CD Terraform configuration control planes in your own backend with open-source tools such as:
https://argoproj.github.io/cd/
https://www.crossplane.io/
https://docs.tofutf.io/

These SaaS providers or self-hosted solutions will also need access to the cloud provider APIs and nodes. With them you keep full control of the configuration that is run, and you can work them into a GitOps workflow.

There are even more alternatives outside the Terraform ecosystem. However, the Terraform ecosystem is your best option for the richest IaC integration experience with Equinix digital services.
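Following up on the mirroring option mentioned above: here is a minimal sketch of a Terraform CLI configuration (for example `~/.terraformrc` on Linux) that serves the Equinix provider from a local directory. The path and the include pattern are assumptions you would adapt; the directory can be populated ahead of time with `terraform providers mirror`.

```hcl
# ~/.terraformrc (CLI configuration) -- a sketch, not a drop-in file
provider_installation {
  # Serve providers that were copied into this directory in advance,
  # e.g. with `terraform providers mirror /opt/terraform/providers`
  filesystem_mirror {
    path    = "/opt/terraform/providers"   # example path
    include = ["registry.terraform.io/equinix/*"]
  }

  # Everything else is still installed directly from its origin registry
  direct {
    exclude = ["registry.terraform.io/equinix/*"]
  }
}
```

With this in place, `terraform init` resolves the Equinix provider from the local path, and only the remaining providers still need to reach registry.terraform.io.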
Equinix provides several Terraform modules to make it easy to get started. That extended ecosystem includes IaC tools that take advantage of the robust Equinix Terraform provider, such as Pulumi and Crossplane.

TL;DR: you'll want to expose select DNS, HTTPS, and SSH access from your Terraform runners (a minimal provider configuration for reference is sketched below).

What alternative deployment strategies did I miss? What other network restrictions should be considered?
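For anyone who wants a concrete starting point, this is roughly the minimal configuration the discussion above assumes. The version constraint and variable names are illustrative, and credentials can also be supplied through the provider's environment variables instead.

```hcl
terraform {
  required_providers {
    equinix = {
      # Downloaded from registry.terraform.io (or a local mirror) during `terraform init`
      source  = "equinix/equinix"
      version = ">= 1.16.0" # illustrative constraint
    }
  }
}

variable "equinix_client_id" {
  type      = string
  sensitive = true
}

variable "equinix_client_secret" {
  type      = string
  sensitive = true
}

# All API calls from this provider go to api.equinix.com over HTTPS (TCP 443)
provider "equinix" {
  client_id     = var.equinix_client_id
  client_secret = var.equinix_client_secret
}
```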
Cluster-API-Provider-Packet v0.7.0 Release

Version 0.7.0 of cluster-api-provider-packet introduces metro-level support in place of facility-level support, in line with the soon-to-be-updated Metal API. Users of previous versions will want to take advantage of this immediately. The basic requirements to upgrade your existing clusters can be found here. Please work with your Equinix support team to determine the best migration path for your architecture. Assistance can also be found in the Community Slack and Community site. Read more at our Metros Quick Reference and see the facility deprecation announcement.

*Please note that if devices are already in the metros you've specified, there will be no disruption to clusters or their devices. As with any production change, test your changes before applying them to clusters in production.

In addition to metro-level support, this release installs the latest cloud-provider-equinix-metal v3.6.1 by default and is built on Go 1.19 and Cluster API 1.3. The default OS is Ubuntu 20.04, and kube-vip is updated to v0.5.12 in the kube-vip flavor templates. Lastly, the CI workflow has been refactored, with caching removed and tests updated for Cluster API 1.3 dependencies.

See more at the GitHub release: https://github.com/kubernetes-sigs/cluster-api-provider-packet/releases/tag/v0.7.0
Need DDI In the Cloud? Infoblox is Now on Network Edge

Hey everyone—just popping in with some exciting news for those of you building out your virtual networking environments! Infoblox NIOS DDI is now available on Equinix Network Edge 🙌 That means you now have another core piece of your networking stack—DNS, DHCP, and IP address management (IPAM)—available as a virtual network function alongside firewalls, routers, SD-WAN, and more.

What does this unlock for you? Whether you're working across hybrid or multi-cloud environments, having your full stack available virtually means you can deploy faster, scale smarter, and keep everything centralized—all without touching hardware.

Already using Infoblox on-prem? You can now extend those services to the cloud and manage everything through Network Edge.
Need better automation? Infoblox helps eliminate manual IPAM tasks and streamlines network provisioning.
Focused on security? Built-in Protective DNS helps block threats before they reach your network.
Building for scale? Combine Infoblox with other Network Edge VNFs for a complete, cloud-ready networking solution.

And yes—it's available to deploy via the portal, APIs, or Terraform, so you can integrate it however you work best.

👉 Check out the blog post here if you want a deeper dive or to explore use cases.

Would love to hear how you're thinking about virtualizing more of your network stack—or if you've already started using Network Edge and want to add Infoblox into the mix. Drop your thoughts or questions below!
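If you go the Terraform route, the deployment flows through the same `equinix_network_device` resource used for other VNFs. The sketch below is illustrative only: the Infoblox `type_code`, `package_code`, and `version` are placeholders that need to be looked up in the Network Edge device catalog, and the metro, licensing mode, and notification values are assumptions to adapt.

```hcl
# Billing account for the chosen metro (metro code is an example)
data "equinix_network_account" "sv" {
  metro_code = "SV"
}

resource "equinix_network_device" "infoblox_ddi" {
  name           = "infoblox-nios-ddi"
  metro_code     = data.equinix_network_account.sv.metro_code
  type_code      = "INFOBLOX-NIOS"     # placeholder -- confirm the exact code in the NE catalog
  package_code   = "STD"               # placeholder
  version        = "9.0.0"             # placeholder
  self_managed   = true                # example licensing/management mode
  byol           = true
  core_count     = 4
  term_length    = 12
  account_number = data.equinix_network_account.sv.number
  notifications  = ["ops@example.com"] # placeholder contact
}
```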
Layer-2 Networking with Interconnection and AWS

Those already using Ansible can now take advantage of templates that demonstrate configuring Layer 2 connectivity to AWS S3. You can also follow the prerequisites in the related GitHub repo to test this as a new user.

Step 1: Use the initial template to rapidly create a project, VLAN, and VRF, and to prep for BGP peering on the virtual circuit.
Step 2: Finish setting up the interconnection manually in the Fabric console and accept the Direct Connect request in AWS.
Step 3: Use the final playbook, which deploys the VPC, the S3 VPC endpoint, and the Virtual Private Gateway attached to your Direct Connect, and finally configures the end-to-end BGP peering.

This playbook has been added to the examples section of the Ansible Collection Equinix page.
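The playbooks drive this through the Equinix Ansible collection; for readers who think in Terraform instead, the Equinix-side building blocks from Step 1 look roughly like the sketch below. This is a sketch with assumed names and values, not the playbook's actual contents; metro, ASN, and IP range are placeholders.

```hcl
resource "equinix_metal_project" "l2_demo" {
  name = "l2-to-aws-demo"
}

# VLAN that will carry the Layer 2 traffic toward the virtual circuit
resource "equinix_metal_vlan" "l2_demo" {
  project_id  = equinix_metal_project.l2_demo.id
  metro       = "da" # example metro
  description = "VLAN for the AWS virtual circuit"
}

# VRF that holds the routing context for BGP peering on the virtual circuit
resource "equinix_metal_vrf" "l2_demo" {
  project_id = equinix_metal_project.l2_demo.id
  metro      = "da"
  name       = "l2-demo-vrf"
  local_asn  = 65000                 # example private ASN
  ip_ranges  = ["192.168.100.0/29"]  # example block for the circuit subnet
}
```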
Nutanix Examples: Protection Policy with VM Migration & Active Directory Authentication

Those looking to explore Nutanix on Equinix Metal are likely to have two concerns in mind: ease of migration and security. Thankfully, two examples have recently been added to the Equinix Terraform directory that demonstrate exactly how to accomplish both.

Nutanix Clusters Setup and Protection Policy - walks you through rapidly creating two Nutanix clusters on Equinix Metal, creating a protection policy between them, and then practicing creating a VM in one cluster and migrating it to the other.
Nutanix on Equinix Metal with Active Directory Authentication - helps you create a cluster on Equinix Metal, add an AD server VM, configure AD authentication, and map a few sample roles to the AD.

Both examples use a combination of Terraform and manual Prism console steps, promoting understanding while deploying with speed and convenience. Consider walking through these examples if you're interested in exploring Nutanix on Equinix Metal or in making your infrastructure more reliable and secure.
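At their core, the Terraform portions of both examples come down to provisioning a handful of Metal servers for the cluster nodes. Stripped to that essence, the sketch below shows the shape of it; the project name, plan, metro, node count, and OS slug are placeholders, so use the values documented in each example.

```hcl
variable "nutanix_os_slug" {
  description = "Operating system slug used by the Nutanix examples (placeholder)"
  type        = string
}

resource "equinix_metal_project" "nutanix" {
  name = "nutanix-example"
}

resource "equinix_metal_device" "node" {
  count            = 3                 # example node count
  hostname         = "nutanix-node-${count.index}"
  plan             = "m3.large.x86"    # example plan
  metro            = "da"              # example metro
  operating_system = var.nutanix_os_slug
  billing_cycle    = "hourly"
  project_id       = equinix_metal_project.nutanix.id
}
```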
What's the difference between Playground, Sandbox, and Production?

You might be deploying on Network Edge today to run through Charles_Randall's tutorial. While reading up on Network Edge on the Developer Platform, perhaps you're wondering, "What's the difference between Playground, Sandbox, and Production?" In short:

Playground is a test environment for trying Equinix APIs with static data, without integrating with the actual API.
Sandbox is a mock test environment for testing Equinix APIs with synthetic data (not production data), so you can integrate with Equinix APIs before moving to Production.
Production is the live environment.
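If you're using the Terraform provider rather than calling the APIs directly, the same idea applies: the provider defaults to the production API and can be pointed at a non-production base URL instead. A minimal sketch follows; the sandbox hostname shown is an assumption, so confirm the current base URL in the developer docs.

```hcl
variable "equinix_client_id" {
  type      = string
  sensitive = true
}

variable "equinix_client_secret" {
  type      = string
  sensitive = true
}

provider "equinix" {
  client_id     = var.equinix_client_id
  client_secret = var.equinix_client_secret

  # Defaults to the production API (https://api.equinix.com). Override it to
  # target a non-production environment; the hostname below is illustrative only.
  endpoint = "https://sandboxapi.equinix.com"
}
```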
Terraform-Provider-Equinix v1.16.0 Release

Equinix Terraform Provider v1.16.0 can not only create Fabric Cloud Router (FCR) resources directly, but also Layer 2 connections to AWS, GCP, and specific Fabric ports in Equinix. Connections to Azure and Oracle via Terraform are coming soon!

FCR is a great option for those who want to quickly route between clouds using BGP or static routes without worrying about a specific OS, vendor requirements, or advanced configuration.

For those who use Network Edge, this Terraform release allows you to disable the default internet connectivity before provisioning specific firewalls (Palo Alto Networks NGFW, Cisco FTDv, and Aviatrix FireNet), just like you can in the console.
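To give a feel for the new FCR support in HCL, here is a rough sketch of the resource. Block and attribute names follow the provider documentation as I understand it and may have shifted between releases, and every value (metro, package, project, account, emails) is a placeholder, so treat this as an outline rather than a drop-in configuration.

```hcl
resource "equinix_fabric_cloud_router" "example" {
  name = "demo-cloud-router"
  type = "XF_ROUTER"

  location {
    metro_code = "SV" # placeholder metro
  }

  package {
    code = "STANDARD" # placeholder package tier
  }

  notifications {
    type   = "ALL"
    emails = ["ops@example.com"] # placeholder contact
  }

  project {
    project_id = "123456789012345" # placeholder Fabric project ID
  }

  account {
    account_number = 123456 # placeholder billing account
  }
}
```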