API Integrations
Need DDI In the Cloud? Infoblox is Now on Network Edge
Hey everyone, just popping in with some exciting news for those of you building out your virtual networking environments: Infoblox NIOS DDI is now available on Equinix Network Edge 🙌 That means another core piece of your networking stack (DNS, DHCP, and IP address management, or IPAM) is now available as a virtual network function alongside firewalls, routers, SD-WAN, and more.

What does this unlock for you? Whether you're working across hybrid or multi-cloud environments, having your full stack available virtually means you can deploy faster, scale smarter, and keep everything centralized, all without touching hardware.

- Already using Infoblox on-prem? You can now extend those services to the cloud and manage everything through Network Edge.
- Need better automation? Infoblox helps eliminate manual IPAM tasks and streamlines network provisioning.
- Focused on security? Built-in Protective DNS helps block threats before they reach your network.
- Building for scale? Combine Infoblox with other Network Edge VNFs for a complete, cloud-ready networking solution.

And yes, it's available to deploy via the portal, APIs, or Terraform, so you can integrate it however you work best.

👉 Check out the blog post here if you want a deeper dive or to explore use cases. Would love to hear how you're thinking about virtualizing more of your network stack, or if you've already started using Network Edge and want to add Infoblox into the mix. Drop your thoughts or questions below!
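For the API route mentioned above, here's a minimal sketch of a sensible first step: authenticate against the Equinix API, then look up the Network Edge device catalog to find the Infoblox type code (the post doesn't state the exact code, so it's discovered rather than hard-coded). The OAuth endpoint is the standard Equinix one; the /ne/v1/deviceTypes path and the response field names are assumptions based on the Network Edge v1 API, so verify them against the current API docs before relying on them.

```python
import requests

BASE = "https://api.equinix.com"

# Obtain an OAuth2 bearer token from the standard Equinix token endpoint
token = requests.post(
    f"{BASE}/oauth2/v1/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "<client-id>",        # placeholder credential
        "client_secret": "<client-secret>",  # placeholder credential
    },
).json()["access_token"]

# List Network Edge device types and look for the Infoblox entry.
# The path and the "data"/"deviceTypeCode" field names are assumptions
# from the Network Edge v1 API; confirm against the current docs.
device_types = requests.get(
    f"{BASE}/ne/v1/deviceTypes",
    headers={"Authorization": f"Bearer {token}"},
).json()

for dt in device_types.get("data", []):
    if "infoblox" in dt.get("name", "").lower():
        print(dt.get("deviceTypeCode"), "-", dt.get("name"))
```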
Do not notify via email after scheduled report is generated
Hi, I'm using the reports v1 API to generate ONE_TIME reports, and every time a report is finished, I get an email telling me as much. However, I want to prevent this email from going out every time. I've figured out that I can use the parameters array with the name notifyEmails to add additional emails that should be notified, but I can't seem to remove the email tied to my account. Is there any way to achieve what I want?
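For concreteness, here's a hedged sketch of the kind of request being described. The endpoint path and payload shape below are assumptions reconstructed from the thread (the parameters array with name notifyEmails), not confirmed documentation, and the open question remains whether the requesting account's own address can be suppressed at all.

```python
import requests

TOKEN = "<oauth-bearer-token>"  # obtained separately, e.g. via /oauth2/v1/token
BASE = "https://api.equinix.com"

payload = {
    "type": "ONE_TIME",
    # The parameters array is the mechanism from the thread; passing an
    # empty list is the behavior to test, since it adds no extra
    # recipients but may not unsubscribe the requesting account.
    "parameters": [
        {"name": "notifyEmails", "value": []},
    ],
}

resp = requests.post(
    f"{BASE}/reports/v1/reports",  # hypothetical path for the reports v1 API
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```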
Layer-2 Networking with Interconnection and AWS
Those already using Ansible can now take advantage of templates that demonstrate configuring Layer 2 connectivity to AWS S3. You can also follow the prerequisites in the related GitHub repo to test this as a new user.

Step 1: Use the initial template to rapidly create a project, VLAN, and VRF, and to prep for BGP peering on the virtual circuit.
Step 2: Finish setting up the interconnection manually in the Fabric console and accept the Direct Connect request in AWS.
Step 3: Use the final playbook, which deploys the VPC, the S3 VPC endpoint, and the Virtual Private Gateway attached to your Direct Connect, and then configures the end-to-end BGP peering.

This playbook has been added to the examples section of the Ansible Collection Equinix page.
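The templates themselves are Ansible, but for orientation, here's roughly what Step 1 provisions, expressed as raw Equinix Metal API calls in Python. This is a sketch under assumptions, not the playbook: the metro, VXLAN ID, ASN, and IP range values are placeholders, and the VRF payload field names are worth checking against the Metal API reference.

```python
import requests

BASE = "https://api.equinix.com/metal/v1"
HEADERS = {"X-Auth-Token": "<metal-api-token>"}  # placeholder credential

# Create a project to hold the interconnection resources
project = requests.post(
    f"{BASE}/projects", headers=HEADERS, json={"name": "l2-aws-demo"}
).json()
pid = project["id"]

# Create the VLAN that the virtual circuit will ride on
vlan = requests.post(
    f"{BASE}/projects/{pid}/virtual-networks",
    headers=HEADERS,
    json={"metro": "da", "vxlan": 1000, "description": "to-aws"},
).json()

# Create a VRF to hold the BGP configuration for the circuit
# (field names here are assumptions; see the Metal API reference)
vrf = requests.post(
    f"{BASE}/projects/{pid}/vrfs",
    headers=HEADERS,
    json={
        "name": "aws-vrf",
        "metro": "da",
        "local_asn": 65000,
        "ip_ranges": ["192.168.100.0/25"],
    },
).json()

print(pid, vlan["id"], vrf["id"])
```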
Nutanix Examples: Protection Policy with VM Migration & Active Directory Authentication
Those looking to explore Nutanix on Equinix Metal are likely to have two concerns in mind: ease of migration and security. Thankfully, two examples have recently been added to the Equinix Terraform directory that demonstrate exactly how a user can address both.

- Nutanix Clusters Setup and Protection Policy: walks a user through rapidly creating two Nutanix clusters on Equinix Metal, creating a protection policy between them, and then creating a VM in one cluster and migrating it to the other.
- Nutanix on Equinix Metal with Active Directory Authentication: helps a user create a cluster on Equinix Metal, add an AD server VM, configure AD authentication, and map a few sample roles to AD.

Both examples use a combination of Terraform and manual Prism console steps, promoting understanding while deploying with speed and convenience. Consider walking through these examples if you're interested in exploring Nutanix on Equinix Metal or learning more about making your infrastructure more reliable and secure.
New Workshop: Load Balanced Kubernetes Cluster with Cluster API
Did you know that you can take advantage of Equinix Metal Load Balancers while quickly provisioning a Kubernetes cluster? You can try this hands-on in our latest workshop, which takes advantage of two Equinix integrations with the Kubernetes API: Cloud Provider Equinix Metal (CPEM) and Cluster API Provider Packet (CAPP).

The workshop takes a user step by step through gathering configuration details from the Metal Console, setting up the launch environment, and deploying a sample application on the load-balanced cluster. You'll use CAPP to deploy the cluster on Equinix Metal machines, including three control plane nodes and two worker nodes to demonstrate the load balancing functionality. A configuration change in CPEM enables it to set up the load balancing service in front of the control plane nodes. Later, the user deploys and verifies nginx as a sample application. This produces a website that can be accessed via an external IP managed by a dynamically provisioned Equinix Metal Load Balancer, which can be reviewed in the Equinix Metal console.

Make sure to permanently delete the cluster using the workshop instructions, since Cluster API clusters will attempt to repair themselves if servers and load balancers are deleted manually. This workshop is a wonderful way to gain comfort with a diverse set of Kubernetes-related tools as well as Equinix load balancing and bare metal.
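As a quick check after the deployment step, here's a short sketch (not part of the workshop itself) that reads the sample application's external IP with the Kubernetes Python client. The Service name and namespace are assumptions, so adjust them to match what you actually deployed.

```python
from kubernetes import client, config

# Uses the current kubeconfig context, which should point at the
# workload cluster created by CAPP
config.load_kube_config()
v1 = client.CoreV1Api()

# "nginx" in "default" is an assumption; use your Service's name/namespace
svc = v1.read_namespaced_service(name="nginx", namespace="default")

# The external address assigned by the Equinix Metal Load Balancer shows
# up under the Service's load balancer ingress status
for entry in svc.status.load_balancer.ingress or []:
    print("External IP:", entry.ip or entry.hostname)
```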
Running Terraform from a restricted environment
When running Terraform to provision and manage Equinix Fabric, Metal, and Network Edge, you may want to run Terraform from a restricted environment. Network filtering ACLs will need a predictable set of IP ranges to permit. This discussion will help you discover the IP services, ports, and address ranges your Terraform runner environment will need access to. We'll also discuss alternative ways to run Terraform configuration.

If your ACLs permit the Terraform runner environment outbound HTTPS (TCP 443) and responses, that would cover everything Terraform needs to start provisioning infrastructure on Equinix. We'll assume we don't have unrestricted access and dig in a little further.

Upon running `terraform init`, Terraform will attempt to use DNS (UDP/TCP 53) and HTTPS services to download provider plugins, such as the Equinix Terraform provider. The default host for fetching these plugins is registry.terraform.io, managed by HashiCorp. This is the de facto hub for public providers and published Terraform modules, although you may run your own local registry service. DNS for the Terraform registry points to CloudFront, a CDN whose addresses may change. If this presents a problem, there are options to download (or mirror) the necessary plugins in advance and use locally distributed copies: https://developer.hashicorp.com/terraform/cli/plugins

Similarly, DNS for api.equinix.com, the one base domain that the Terraform Equinix provider needs for API access, resolves to Akamai, another CDN whose addresses may change or depend on where the request originates.

As a Terraform configuration grows, you'll likely want to enable SSH access to the Metal and NE nodes being provisioned to automate OS provisioning. The SSH addresses will vary depending on the metro where services are deployed. One way to ensure that the addresses are predictable in Metal is to provision the servers using Elastic IP addresses. A good follow-up question to this discussion is which ranges are assigned to NE devices and whether those addresses can be drawn from a predefined pool like Metal's Elastic IP addresses.

Terraform configurations typically include resources from multiple cloud providers. The node where the configuration is run would need to permit access to the APIs of these other providers. We'll leave the network filters needed by the provisioned nodes to another discussion.

Depending on your needs, cloud service providers offer managed services for Terraform or OpenTofu (a fork of Terraform preserving the original open-source license). These services can run your Terraform configuration predictably and reliably from a central location. HashiCorp provides the HCP service: https://developer.hashicorp.com/terraform/cloud-docs/run/run-environment

Alternatives include:
- https://spacelift.io/
- https://upbound.io
- https://www.env0.com/
- https://www.scalr.com/

You can run similar CI/CD Terraform configuration control planes in your own backend with open-source tools such as:
- https://argoproj.github.io/cd/
- https://www.crossplane.io/
- https://docs.tofutf.io/

These SaaS providers or local solutions will also need access to the cloud provider APIs and nodes. With these providers you have full control of the configuration that is run, and you can work them into a GitOps workflow. There are even more alternatives outside of the Terraform ecosystem. However, the Terraform ecosystem is your best option for the richest IaC integration experience with Equinix digital services.
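To turn the hostname discussion above into concrete ACL input, here's a small helper that samples the addresses currently behind the two endpoints. Because both names sit behind CDNs (CloudFront and Akamai respectively), treat the output as a point-in-time snapshot to refresh periodically, not a stable allowlist.

```python
import socket

# The two endpoints discussed above: plugin downloads and Equinix API calls
ENDPOINTS = [
    ("registry.terraform.io", 443),  # provider/module downloads
    ("api.equinix.com", 443),        # Equinix Terraform provider API access
]

for host, port in ENDPOINTS:
    # Collect the unique addresses the resolver returns right now
    addrs = sorted(
        {info[4][0] for info in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
    )
    print(f"{host}:{port} -> {', '.join(addrs)}")
```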
Equinix provides several Terraform modules to make it easy to get started. That extended ecosystem also includes IaC tools that take advantage of the robust Equinix Terraform provider, such as Pulumi and Crossplane.

TL;DR: You'll want to expose select DNS, HTTPS, and SSH access from your Terraform runners.

What alternative deployment strategies did I miss? What other network restrictions should be considered?
Multi-cloud Routing via Pulumi Templates
Quickly spin up a Fabric Cloud Router instance for routing between GCP and AWS with the available Pulumi templates (programs) and these step-by-step workshops:

- Equinix FCR to Google Cloud Platform with Pulumi (Workshop)
- Equinix FCR Multi-cloud with Pulumi (Workshop)

By the end, you'll have seamless routing between GCP (via Partner Interconnect and GCP Cloud Router) and AWS (via Direct Connect). You can also find this information on the deploy site here.
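If you'd like to see the shape of the code before opening the workshops, here's a minimal Pulumi (Python) sketch of the Fabric Cloud Router at the center of both. The argument names mirror the underlying Terraform schema (equinix_fabric_cloud_router); the metro, project ID, account number, and email are placeholders, so check the pulumi-equinix docs for your version before use.

```python
import pulumi
import pulumi_equinix as equinix

# Fabric Cloud Router: the routing hub the workshops attach GCP and AWS
# connections to. All concrete values below are placeholders.
fcr = equinix.fabric.CloudRouter(
    "multicloud-router",
    name="multicloud-router",
    type="XF_ROUTER",
    location=equinix.fabric.CloudRouterLocationArgs(metro_code="SV"),
    package=equinix.fabric.CloudRouterPackageArgs(code="STANDARD"),
    project=equinix.fabric.CloudRouterProjectArgs(project_id="<project-id>"),
    account=equinix.fabric.CloudRouterAccountArgs(account_number=123456),
    notifications=[
        equinix.fabric.CloudRouterNotificationArgs(
            type="ALL", emails=["ops@example.com"]
        )
    ],
)

pulumi.export("fcr_id", fcr.id)
```

Connections to GCP Partner Interconnect and AWS Direct Connect are then attached to this router; the workshops linked above walk through both ends, including the BGP sessions.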