Equinix Metal deploys infrastructure across nearly 20 regions, such as New York, Amsterdam, Silicon Valley, and Frankfurt. At Equinix, we call each of those a "metro."
Within each metro we may operate just one physical data center, but increasingly we're deployed across several facilities. A great example is New York, where we have a legacy facility in Parsippany, NJ (EWR), a deployment in Equinix NY5, and soon another expansion in Equinix NY7.
To make all of this easier to consume, we've been hard at work on a number of adjustments that launched today. Here are the highlights; detailed FAQs follow below and in our documentation.
- New deploys through our portal will feature a metro name (e.g. Silicon Valley) instead of a data center name (e.g. SJC1 or SV15).
- Our system will choose which data center to deploy your server into based on capacity and other factors. In metros with both legacy and new Equinix facilities, we will generally prefer our shiny, new Equinix facilities.
- Everything you would expect (BGP, backend transfer, Elastic IPs) will work across facilities within a metro. Magic!
- There are no bandwidth charges between facilities within a metro.
- Existing VLANs will need to be recreated to span across facilities in a metro, but new ones will work by default.
- We have a new "metros" endpoint in our API. Over time you'll want to use this instead of the "facilities" endpoint. However, there are no plans to deprecate the facilities endpoint, so you don't have to change anything you do today.
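As a minimal sketch of what the new endpoint looks like from code (the base URL and `X-Auth-Token` header follow our public API docs; verify against the documentation before relying on them):

```python
# Sketch: building a request for the new /metros endpoint.
import json  # used when decoding the response, as shown below
import urllib.request

API_BASE = "https://api.equinix.com/metal/v1"

def build_metros_request(token: str) -> urllib.request.Request:
    """Build (but don't send) a GET request for the /metros endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/metros",
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )

# To actually send it:
# with urllib.request.urlopen(build_metros_request("YOUR_TOKEN")) as resp:
#     metros = json.load(resp)["metros"]  # entries carry an id, name, and code
```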
- We'll release a new version of our Terraform provider shortly, but you'll need to pay close attention to this upgrade if you have existing infrastructure. We'll include full details in the release notes, and are happy to chat with you about it.
In short, Metros is a big change to our backend systems, but for users it should just work.
Let's See the FAQs!
What is a Metro?
A Metro is an Equinix-wide concept for data centers that are grouped together geographically. At Equinix Metal, data centers within a metro now share capacity and networking features.
What is new or different about Metros?
The biggest difference is that servers are provisioned at the Metro level and not the Facility level. When you provision a server you select the Metro where you want your server to live. Our API will then determine which specific facility within the Metro your server physically resides in (based on capacity and other factors).
Metro-based provisioning is supported through all the deployment options: On-Demand, Spot Market, Reserved Hardware, and Batch Deployments.
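For illustration, here is roughly what a metro-based device-create body looks like (field names follow our public API docs for `POST /projects/{project_id}/devices`; treat the exact values as examples):

```python
# Sketch: assembling the JSON body for a metro-based device provision.
def device_payload(hostname: str, metro: str, plan: str, operating_system: str) -> dict:
    """Build a device-create payload that targets a metro, not a facility."""
    return {
        "hostname": hostname,
        "metro": metro,              # e.g. "sv" -- the API picks the facility for you
        "plan": plan,                # e.g. "c3.small.x86"
        "operating_system": operating_system,  # e.g. "ubuntu_20_04"
    }

payload = device_payload("web-01", "sv", "c3.small.x86", "ubuntu_20_04")
# If you really do need a specific facility, the pre-existing
# facility-based request shape still works instead of "metro".
```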
What if I really need a server in a specific facility?
If your use case requires servers in a specific facility, you can still use the "facilities" endpoint in our API. The API remains backward compatible, and you can continue to use any existing automation integrations.
You can also contact us with details about your deployment or application needs and we can assist you, probably with the help of a Hardware Reservation.
What other features do Metros have?
The facilities within each Metro are interconnected by high-speed links with an average latency of 5 milliseconds or less. There is no billing for traffic between facilities within a Metro. Additionally, many of our Networking features are designed to take advantage of this:
- VLANs - When you provision a VLAN in a Metro, all servers in that Metro are able to connect to it.
- Elastic Public IPv4 Addresses - When you request an Elastic Public IPv4 address, you will be able to assign it to any server in the Metro where you requested it.
- Private IPv4 Addresses - While the blocks of private IPv4 addresses are facility-based, all the servers within a project in the same Metro can use them to connect to each other.
- Backend Transfer and Local BGP will work across facilities in a Metro.
I already have existing servers and VLANs. Will these continue to work?
Existing VLANs will continue to work, but traffic will be limited to servers within a single facility. To enable traffic between servers across all facilities in a Metro, you will need to create new Metro-level VLANs and add any servers to the new VLANs.
What is the impact if I’m a Terraform user?
Our 2.0.0 Terraform provider will expose the new metro concept, which is also available via the /metros endpoint in our API. You can continue using Terraform to manage your existing devices in the facilities model and begin to manage new deployments with the newer Metro model. For a detailed changelog, please follow our discussion on GitHub.
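As a sketch of the shape of the change (attribute names based on the provider's documentation; check the release notes and changelog for the authoritative upgrade path), a device that previously pinned facilities can instead specify a metro:

```hcl
# Hypothetical example: a metro-based device with the 2.x provider.
resource "metal_device" "node" {
  hostname         = "node-01"
  plan             = "c3.small.x86"
  metro            = "ny"            # replaces facilities = ["ewr1"]
  operating_system = "ubuntu_20_04"
  billing_cycle    = "hourly"
  project_id       = var.project_id
}
```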
Is the Metros concept the same as Availability Zones in AWS, Azure or GCP?
Equinix Metal does not provide a public cloud-style high availability construct, and our Metros feature is not intended to provide this. If you are looking for high availability or disaster recovery at the regional level, we recommend deploying across distinct metros. Diversity within a metro can often be achieved through Hardware Reservations. Please contact our team for options!
Of all the "football cities" of the internet, Frankfurt may just be the most underrated, what with Amsterdam a few hundred miles up and to the left.
But honestly, y'all, Germany is amazing and Equinix's Frankfurt campus is at the center of it — at least when it comes to interconnection! That's why today we're super stoked to open our latest core metro in the land of beer and late-night döner kebab.
With FR2 online, you can deploy our latest Gen3 configurations on-demand in minutes and interconnect with all of the important networks, cloud providers, and enterprises in Germany.
Love Equinix Metal? Wish it was closer to your users? Well, today you can pull the trigger on either stock or custom infrastructure in the following expansion metros:
- AMER: Los Angeles, Chicago, New York, and Toronto
- EMEA: Paris, London, and Madrid
- AP: Hong Kong, Seoul, and Tokyo
This is in addition to our core metros of Silicon Valley, Dallas, Washington DC, Amsterdam, Frankfurt, Singapore, Sydney (live later this month), and Sao Paulo (live in Q2).
Some notes about ordering in expansion metros:
- Servers are available on a contracted term of at least 12 months.
- There is no minimum order quantity or dollar amount.
- Deployment time is approximately 45 days or less, but ya know, call us!
Interested in going somewhere special? Contact our customer success or sales team (live chat will get you there) and we'll get you all the details.
I guess it is finally time to explore the far edges of the internet, eh?
It's always exciting to put a new pin on the map, but some are more special than others.
I mean, we've all deployed something to Ashburn, VA (US-East anyone?) but very few of us have fired off an API call to Seoul, Korea. Well, wait no longer!
Today we're live with our newest expansion metro, right in the heart of one of Asia's most dynamic and most wired markets. Just a reminder: expansion metros are available for reserved hardware only at this time.
Interested? Let's talk!
Meet SL1, Equinix's first data center in Seoul!
This is a big one, friends.
We’ve added a new network interface configuration called “hybrid bonded” on your server page, which enables both L3 (Internet) and L2 (Private VLANs) on a single LACP bond.
This means you can now run a highly available mixed network setup on Metal! Firewall? Router? Interconnected Ingress Controller? Yup.
No More Marketing Speak, Give Me the Details!
Hybrid Bonded mode enables a highly available "bonded" setup of two networking interfaces that supports both Layer 2 and Layer 3 modes at the same time. You keep the flexibility of running both layers, while maintaining a bonded networking interface that spans two diverse upstream switches.
Heads up: Hybrid Bonded mode is available in all Equinix IBX locations on 3rd generation servers. How do you know if you're deploying into an Equinix IBX? Look for a two-letter facility code (e.g. DC13 or SV15). Legacy sites leverage three-letter airport-style codes (e.g. EWR1 or SJC1).
Don't worry, legacy servers in legacy data centers can still use Hybrid Unbonded mode.
Doorman is a customer-facing VPN that provides access to the private subnets within your Equinix Metal environments. At its core, Doorman is a Go service that manages an OpenVPN instance and integrates it with our API (including users, organizations, 2FA, etc.) for easy access control.
Up until now, we've offered Doorman as a platform service. However, to enhance security and help customers operate more confidently, we've decided to move to a "self-service" option that each customer controls. To support that we've open-sourced Doorman and made it super easy to run.
We'll be working with current "hosted" Doorman users to transition them over. Interested in making interactions with your infrastructure more secure? Check out the repo here.
We love automation and we love Terraform, so we’ve been busy filling our Equinix registry with new modules.
- Multi-architecture OpenStack - Use Terraform to quickly create an OpenStack cloud powered by Armv8 and/or x86 bare metal servers at Equinix Metal. Check out our OpenStack Terraform module or watch our DevRel team walk through the repo.
- Distributed MinIO - MinIO is a great option for Equinix Metal users who want easily accessible, S3-compatible object storage, pairing all manner of SSDs, NVMe SSDs, and even high-capacity SATA HDDs. Our Distributed MinIO Terraform module makes it easy to get going.
- Platform9 Kubernetes - Platform9 Managed Kubernetes (PMK) is a SaaS-managed offering, providing a simple interface to manage all your complex Kubernetes needs. This Terraform module provides an end-to-end deployment experience that includes PMK-ready nodes, Equinix Metal network configuration, and Kubernetes cluster spin-up. Want to learn more? Watch as our DevRel team walks you through it.
If none of these modules get you out of your WFH rhythm, maybe take some time to watch Mitchell Hashimoto fly an airplane instead?
Please give a hearty welcome to v3.0.0 of our Kubernetes Cloud Controller Manager (CCM) for Equinix Metal!
In this release we've made some notable leaps forward, especially relating to load balancing options. With CCM v3.0.0 you can now:
- Use a choice of load balancer, with Kube-vip and MetalLB as first-class citizens.
- Deploy your load balancer independently with CCM managing the configuration.
- Set an empty load balancer (which does not configure any load balancer at all, but handles all of the Equinix Metal API steps, including BGP on project and nodes, Elastic IPs, and node annotations).
- Use multiple clusters in a single project.
- Use dedicated ingress nodes when using BGP.
- Select the BGP source address and set it via annotations.
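For illustration, the load balancer choice is typically driven by a `loadbalancer` setting in the CCM's configuration (key names here are based on the project's README and should be treated as an assumption; consult the release docs for the authoritative schema):

```json
{
  "apiKey": "YOUR_API_TOKEN",
  "projectID": "YOUR_PROJECT_UUID",
  "loadbalancer": "metallb:///metallb-system"
}
```

A `kube-vip://`-style value would select Kube-vip instead, and an "empty" value covers the mode described above, where the CCM handles the Equinix Metal API steps but configures no load balancer itself.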
We've also implemented a complete rebrand from Packet to Equinix Metal, including the repository and organization on GitHub and Docker Hub, configuration names, environment variable names, resource names, and documentation. Whew!
And last but not least: bugfixes, of course. :-)
Go forth and conquer! And say hello to the Kube-vip mascot:
We're excited to share a new Technical Guide that will help you explore the Equinix Metal API using Postman.
Postman is a collaboration platform for API development — as such it's a great tool for getting to know an unfamiliar API. Using an API tool like Postman will allow you to see what's possible, and to quickly make error-free requests.
The guide covers downloading, installing, and configuring Postman to use the Equinix Metal API, then making requests and visualizing the responses.
We’ve always loved working with the Anthos team and were excited to join their announcement of Anthos for Bare Metal. To make it easier and faster to deploy Anthos on Equinix Metal without needing vSphere, we’ve created a shiny new Terraform Provider.
Check out our Terraform Registry where you will also find all the info you need to get started.
Want to learn more?
- Ask any questions by joining our community Slack. Simply join the #google-anthos channel.
- Check out Brian Wong's blog: What’s Next for Google Cloud’s Anthos, Equinix and Bare Metal?