User Profile
Marques
Equinix Employee
Joined 3 years ago
Contributions
Re: Crossplane - failure to delete port resource in Claim
From your configuration, it looks like you want eth0 to remain in bond0 in layer 3 mode and eth1 broken out of the bond in layer 2 mode. This is referred to as a Hybrid Unbonded configuration: https://deploy.equinix.com/developers/docs/metal/layer2-networking/hybrid-unbonded-mode/. Using PortVlanAttachment in a Hybrid Unbonded configuration will also require a DeviceNetworkType with `type: "hybrid"`. In the Device's default `hybrid-bonded` port mode, it makes sense that attaching a VLAN to eth1 reports "still bonded". If you don't actually need `hybrid` (unbonded) mode, you could change the target portName in your PortVlanAttachment to bond0. The Hybrid Unbonded section of the Terraform network types guide shows how this can be done with Terraform; the Crossplane configuration would apply similarly: https://registry.terraform.io/providers/equinix/equinix/1.20.1/docs/guides/network_types#hybrid-unbonded-device-with-a-vlan https://registry.terraform.io/providers/equinix/equinix/1.20.1/docs/resources/equinix_metal_device_network_type
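In Crossplane terms, that pairing might look like the sketch below. This assumes the provider's upjet-converted (camelCase) field names; the device UUID and VLAN vnid are hypothetical placeholders.

```yaml
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: DeviceNetworkType
metadata:
  name: example-hybrid
spec:
  forProvider:
    deviceId: "7d8f1a2b-..." # hypothetical Device UUID
    type: hybrid             # breaks eth1 out of the bond (Hybrid Unbonded)
---
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: PortVlanAttachment
metadata:
  name: example-eth1-vlan
spec:
  forProvider:
    deviceId: "7d8f1a2b-..." # same hypothetical Device
    portName: eth1           # use bond0 instead if hybrid-bonded mode is enough
    vlanVnid: 1000           # hypothetical VLAN vnid
```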
Re: Crossplane - Metal Vlan description not updated in UI
I would also expect the description of the managed resource to be updated if it is added to the spec after creation. Please open an issue for this in the community provider at https://github.com/crossplane-contrib/provider-jet-equinix/issues
Re: Crossplane - failure to delete port resource in Claim
I created a separate issue, https://github.com/crossplane-contrib/provider-jet-equinix/issues/54, to track what sounds like a conflicting LateInitialization definition of the vlanIds and vxlanIds fields in Port. I'm not sure how this would prevent deletion; based on other resources with a similar conflict, I would expect it to prevent the resource from reaching `Ready`.
Re: Crossplane - failure to delete port resource in Claim
> What I really need is to create a device that is unbonded and add a vlan to the eth1 port and ensure that traffic is tagged to this port. We found that adding a second vlan tagged "turned on" the traffic tagging.

Is this still the case? See the general overview in the docs for the network mode of your port configuration (https://deploy.equinix.com/developers/docs/metal/layer2-networking/). Generally, if only one VLAN exists, it will be made the native (untagged) VLAN. Adding additional VLANs and denoting one of them as the `nativeVlanId` (or using the Ref/Selector fields) will ensure that the other VLANs are tagged (https://marketplace.upbound.io/providers/equinix/provider-jet-equinix/v0.6.1/resources/metal.equinix.jet.crossplane.io/Port/v1alpha1). If your additional VLANs may be deleted later, note that deleting them would toggle the last remaining VLAN back to being the native VLAN. You may want to include an extra VLAN to act as a persistent native VLAN to prevent that. Editing the Port spec to remove a Fabric-connected VLAN from the list should be composable: removing VLANs updates the resource in place and will not destroy the resource or change the network mode unless the mode is explicitly changed. The Port resource's VLAN fields are vlanIds and vxlanIds. Unlike nativeVlanId, the provider doesn't offer Ref and Selector variations for the vlanIds and vxlanIds fields. That could certainly be improved. For full automation, where the device may also be destroyed, you may want to use the resetOnDelete flag so that VLANs and Ports can be deleted without dependency issues (preventing cases where a VLAN cannot be deleted because a port is still bound to it).
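Here is a minimal sketch of a Port spec that pins a persistent native VLAN and enables resetOnDelete. The field names follow the Port schema linked above; all UUIDs are hypothetical placeholders.

```yaml
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: Port
metadata:
  name: eth1-layer2
spec:
  forProvider:
    portId: "bb3f0d14-..."       # hypothetical port UUID
    layer2: true                 # layer 2 mode for this port
    bonded: false                # taken out of the bond
    vlanIds:                     # Vlan resource .id UUIDs (hypothetical)
      - "0aa11bb2-..."           # persistent native VLAN
      - "1cc22dd3-..."           # workload VLAN; stays tagged because a native VLAN is pinned
    nativeVlanId: "0aa11bb2-..." # pin the native VLAN so later deletions don't toggle tagging
    resetOnDelete: true          # restore port defaults on delete, releasing VLAN bindings
```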
Re: Crossplane - failure to delete port resource in Claim
`PortVlanAttachment` shouldn't be needed if you have a `Port` resource. `Port` is preferable because it can toggle the port's layer 2/3 settings, bonding, and VLAN attachments in one shot. In the `Port` resource, you should only define `vlanIds`, referencing the `.id` (UUID) of the `Vlan` resources. The `vxlanIds` field (and its upstream API equivalent) is a convenience that lets you reference the VLANs by logical number (1000, 1001). Specifying both would trigger a conflict. If you are already doing that, have removed any `PortVlanAttachment` resources made redundant by `Port`, and are still getting this error, then we may have another example of https://github.com/crossplane-contrib/provider-jet-equinix/issues/50#issuecomment-2223332204, where conflicting parameters are LateInitialized from their computed values (the `Port` `vlanIds`, as read from the Equinix Metal API response, could be getting set automatically in `spec` by the provider, creating the conflict). If so, please open an issue on the repo.
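To illustrate the UUID-versus-number distinction, a `Vlan` and the `Port` that consumes it might look like the following (names, metro, and UUIDs are hypothetical; since `vlanIds` has no Ref/Selector variant, the UUID must be copied from the Vlan's `status.atProvider.id`, e.g. by a Composition patch):

```yaml
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: Vlan
metadata:
  name: app-vlan
spec:
  forProvider:
    projectId: "9e8d7c6b-..." # hypothetical project UUID
    metro: sv                 # hypothetical metro
    vxlan: 1000               # logical number; do not repeat this in the Port's vxlanIds
---
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: Port
metadata:
  name: bond0-hybrid
spec:
  forProvider:
    portId: "bb3f0d14-..."    # hypothetical port UUID
    vlanIds:
      - "2ee33ff4-..."        # the Vlan's .id (status.atProvider.id), not its vxlan number
                              # defining vxlanIds as well would trigger the conflict
```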
Re: Crossplane Metal provider - lifecycle.prevent_destroy error
Are you using a management policy or a non-default deletion policy? https://docs.crossplane.io/latest/concepts/managed-resources/#interaction-with-management-policies I see "has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed" in a number of provider GitHub issues, for AWS, Azure, and others. I'm curious whether the deletion is being triggered by some detected drift and a need to update based on the plan. I would expect the resource to successfully delete and recreate in that case (perhaps in a loop, which wouldn't be desirable). It would help to have a minimal example XRD that reproduces this problem. Debug output from the provider may also reveal some helpful details: https://docs.crossplane.io/latest/guides/troubleshoot-crossplane/#provider-logs
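For reference, these are the two policy knobs in question; they sit on any managed resource spec, and their interaction is covered by the doc linked above. The resource kind and values here are only an illustration.

```yaml
apiVersion: metal.equinix.jet.crossplane.io/v1alpha1
kind: Vlan
metadata:
  name: example-vlan
spec:
  managementPolicies: ["Observe"] # observe-only: the provider never creates, updates, or deletes
  deletionPolicy: Orphan          # non-default: deleting the MR leaves the external VLAN in place
  forProvider:
    projectId: "9e8d7c6b-..."     # hypothetical project UUID
    metro: sv
```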
Re: Trouble updating Crossplane Claim status with Equinix Provider
I'll try this myself with the Equinix Crossplane provider's 0.6.1 release and the latest Crossplane and see where I end up. For a now-deprecated Equinix Metal-specific provider, I built a conformance-testing XRD: https://github.com/cncf/crossplane-conformance/pull/22/files#diff-21d692ae5f5d0069c68cd54654e79d61dfc36eda5a7858cb681a7f772def70da It took advantage of cross-resource selectors and references, features whose semantics have changed in newer Crossplane versions.
Running Terraform from a restricted environment
When running Terraform to provision and manage Equinix Fabric, Metal, and Network Edge, you may want to run Terraform from a restricted environment. Network filtering ACLs will need a predictable set of IP ranges to permit. This discussion will help you discover the services, ports, and address ranges your Terraform runner environment will need access to. We'll also discuss alternative ways to run Terraform configurations.

If your ACLs permit the Terraform runner environment outbound HTTPS (TCP 443) and the responses, that would cover everything Terraform needs to start provisioning infrastructure on Equinix. We'll assume we don't have unrestricted access and dig in a little further.

Upon running `terraform init`, Terraform will attempt to use DNS (UDP/TCP 53) and HTTPS services to download provider plugins, such as the Equinix Terraform provider. The default host for fetching these plugins is registry.terraform.io, managed by HashiCorp. This is the de facto hub for public providers and published Terraform modules, although you may run your own local registry service. DNS for the Terraform registry points to CloudFront, a CDN whose addresses may change. If this presents a problem, there are options to download (or mirror) the necessary plugins in advance and use locally distributed copies: https://developer.hashicorp.com/terraform/cli/plugins

Similarly, api.equinix.com, the one base domain the Terraform Equinix provider needs for API access, resolves to Akamai, another CDN whose addresses may change or depend on where the request originates.

As a Terraform configuration grows, you'll likely want to enable SSH access to the Metal and NE nodes being provisioned to automate OS provisioning. The SSH addresses will vary depending on the Metro where services are deployed. One way to ensure that the addresses are predictable in Metal is to provision the servers using Elastic IP addresses. A good follow-up question to this discussion is which ranges are assigned to NE devices and whether those addresses can be drawn from a predefined pool like Metal's Elastic IP addresses.

Terraform configurations typically include resources from multiple cloud providers. The node where the configuration runs will also need access to the APIs of those other providers. We'll leave the network filters needed by the provisioned nodes themselves to another discussion.

Depending on your needs, cloud service providers offer managed services for Terraform or OpenTofu (a fork of Terraform preserving the original open-source license). These services can run your Terraform configuration predictably and reliably from a central location. HashiCorp provides the HCP service: https://developer.hashicorp.com/terraform/cloud-docs/run/run-environment

Alternatives include:
- https://spacelift.io/
- https://upbound.io
- https://www.env0.com/
- https://www.scalr.com/

You can run similar CI/CD Terraform control planes in your own backend with open-source tools such as:
- https://argoproj.github.io/cd/
- https://www.crossplane.io/
- https://docs.tofutf.io/

Whether SaaS or self-hosted, these solutions will also need access to the cloud provider APIs and nodes. With them, you have full control of the configuration that is run, and you can work them into a GitOps workflow. There are even more alternatives outside of the Terraform ecosystem. However, the Terraform ecosystem is your best option for the richest IaC integration experience with Equinix digital services.
Equinix provides several Terraform modules to make it easy to get started. That extended ecosystem includes IaC tools that take advantage of the robust Equinix Terraform provider, such as Pulumi and Crossplane.

TL;DR: You'll want to allow select DNS, HTTPS, and SSH access from your Terraform runners.

What alternative deployment strategies did I miss? What other network restrictions should be considered?
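P.S. To make the TL;DR concrete: if your Terraform runner happened to be a pod in a Kubernetes cluster, the egress rules above could be expressed as a NetworkPolicy like this purely illustrative sketch (the labels are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: terraform-runner-egress
spec:
  podSelector:
    matchLabels:
      app: terraform-runner # hypothetical label
  policyTypes: ["Egress"]
  egress:
    - ports: # DNS lookups (registry.terraform.io, api.equinix.com)
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - ports: # HTTPS to the Terraform registry and the Equinix API
        - protocol: TCP
          port: 443
    - ports: # SSH to provisioned Metal/NE nodes (scope to Elastic IPs where possible)
        - protocol: TCP
          port: 22
```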
Re: Are there any quick start guides to run Kubernetes on Equinix Metal?
Yes. There are various third-party managed Kubernetes providers, Equinix Labs Terraform modules and Ansible playbooks, and CNCF projects (Cluster API, Cluster Autoscaler, Kubespray). https://deploy.equinix.com/solutions https://deploy.equinix.com/labs/kubernetes/ You may also search "kubernetes" or "k3s" in the Equinix GitHub organizations to find additional projects: https://github.com/equinix-labs/ https://github.com/equinix If Terraform is your preferred approach, there are some examples in https://github.com/equinix-labs/terraform-equinix-labs/tree/main/examples. These examples can be deployed in a workshop setting where collaborators get limited-time access to the PoC clusters.
What are your plans for the Equinix Metal Load Balancer service?
Tyler Auerbeck introduced us to the new Equinix Metal Load Balancer (EMLB) service during his Equinix Demo Day Winter 2023 session: https://deploy.equinix.com/events/demo-day-2/. Now that all Equinix Metal users have access to this feature through the console and API, what are you planning to do with it? The service is documented at https://deploy.equinix.com/developers/docs/metal/networking/load-balancers/. One of the first integrations to be released is Cloud Provider Equinix Metal (CPEM) support for managing Kubernetes Service resources of the LoadBalancer type. The integration provisions EMLB load balancers and pools and configures ports to map to the Kubernetes nodes and ports where the service is available. The dozens of API calls all happen automatically when CPEM is used in a Kubernetes cluster running on Equinix Metal! The CPEM docs for this integration are in the project README: https://github.com/kubernetes-sigs/cloud-provider-equinix-metal#service-load-balancers. More API documentation is available at: https://github.com/equinix/lbaas-api-docs
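On the cluster side, all CPEM needs to drive EMLB is a standard Service of type LoadBalancer, as in this generic illustration (the app name and ports are hypothetical; EMLB-specific CPEM configuration is covered in the README linked above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical service name
spec:
  type: LoadBalancer   # CPEM watches Services of this type and provisions an EMLB
  selector:
    app: web           # hypothetical pod label
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 8080 # container port behind the service
```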