The Metal API currently has a hardware_reservation_id field (https://metal.equinix.com/developers/api/hardwarereservations/) that accepts either a reservation UUID or the "next-available" value, which selects any reservation that is provisionable and matches the instance type/location criteria. This approach has limitations when using 3rd party provisioning tools such as Terraform: a request can fail when there is not enough reserved hardware available, and converting a cluster from on-demand to reserved requires redeploying the entire cluster.
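For context, here is a minimal sketch of how the field is passed today when creating a device. It assumes the standard device-create endpoint and the payload shape described in the docs linked above; the base URL, plan, metro, and OS slugs are illustrative, not a definitive client.

```go
// Sketch of today's behavior: hardware_reservation_id takes a UUID or
// "next-available", and the request fails if no matching reservation is free.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	projectID := os.Getenv("METAL_PROJECT_ID")
	token := os.Getenv("METAL_AUTH_TOKEN")

	// Illustrative payload; plan/metro/OS values are placeholders.
	body := []byte(`{
		"plan": "c3.small.x86",
		"metro": "da",
		"operating_system": "ubuntu_22_04",
		"hardware_reservation_id": "next-available"
	}`)

	url := fmt.Sprintf("https://api.equinix.com/metal/v1/projects/%s/devices", projectID)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Auth-Token", token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```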
Two improvements would make the Reserved Hardware capability easier to use:
  1. Allow a new string value such as "hardware_reservation_id": "prefer", so that the API uses hardware reservations that match the criteria and fulfills the rest of the capacity requirement with on-demand instances. Every user could set this value regardless of whether the instance is on-demand or reserved, allowing seamless instance conversions from on-demand to reserved and vice versa (see the first sketch after this list).
  2. Allow hardware reservations to be used organization-wide (perhaps from a "Default" project) instead of having to move the hardware between projects before using it. Any project could then draw on a shared pool of hardware reservations instead of reservations being tied to specific projects. We can still offer the ability to lock reservations to specific projects as is done today (see the second sketch after this list).
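A minimal sketch of what proposal 1 could look like from the client side. The "prefer" value is hypothetical and the fallback behavior described in the comments is the proposed semantics, not current API behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// deviceCreateBody builds a device-create payload under the proposed
// semantics: "prefer" (hypothetical) would consume a matching provisionable
// reservation when one exists and otherwise fall back to on-demand capacity,
// so the same request works before and after a conversion.
func deviceCreateBody(plan, metro, osSlug string) ([]byte, error) {
	return json.Marshal(map[string]string{
		"plan":                    plan,
		"metro":                   metro,
		"operating_system":        osSlug,
		"hardware_reservation_id": "prefer", // proposed value; not accepted by the API today
	})
}

func main() {
	body, err := deviceCreateBody("c3.small.x86", "da", "ubuntu_22_04")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```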
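And a rough sketch of what an organization-scoped reservation object might look like under proposal 2. The organization_id scope and the optional project_id lock are hypothetical fields, not part of today's API; only the general shape mirrors the existing reservation objects:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// HardwareReservation sketches an organization-scoped reservation.
type HardwareReservation struct {
	ID             string `json:"id"`
	Plan           string `json:"plan"`
	Metro          string `json:"metro"`
	OrganizationID string `json:"organization_id"`      // hypothetical: pool visible to every project in the org
	ProjectID      string `json:"project_id,omitempty"` // hypothetical: set only when locked to one project, as today
	Provisionable  bool   `json:"provisionable"`
}

func main() {
	r := HardwareReservation{
		ID:             "00000000-0000-0000-0000-000000000000",
		Plan:           "c3.small.x86",
		Metro:          "da",
		OrganizationID: "example-org-uuid",
		Provisionable:  true,
	}
	out, _ := json.MarshalIndent(r, "", "  ")
	fmt.Println(string(out))
}
```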