Add Support for CAPI IPAM Contract #671
Comments
As I had a hunch that it wasn't that simple when I had a look last time, I double-checked, and it isn't. For the more relevant parts, I'd propose something simple.
/triage accepted
/assign @schrej
As discussed in the last community meeting, I'm +1 for the change.
+1 for this. @schrej Thanks a lot for working on this.
The current IPAM contract does not support specifying DNS servers. What was the reason to specify that on the IPPool/IPAddress resources? Do we still need that?

From what I can see, the current implementation would allow referencing a pool only to get a Gateway from it, even if the machine has no interface that references the same pool. An IP would still be allocated from the pool, and only the Gateway would be used. The same goes for DNS.
Maybe to make this a little clearer: practically, it only makes sense to allocate an IP address from a pool if the address is actually referenced from the network data or metadata. My proposal is to change that behaviour to only allocate addresses when the address will be used; any other references are only valid when the address is actually in use. A workaround for people abusing the ip-address-manager to only manage DNS or Gateways would be to allocate an address for metadata, and then just not use it.
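To make the proposed rule concrete, here is a minimal Go sketch; the `Interface`/`NetworkData` types and field names are hypothetical simplifications for illustration, not the real Metal3DataTemplate schema:

```go
package main

import "fmt"

// Hypothetical, simplified types for illustration only.
type Interface struct {
	AddressFromPool string // pool this interface allocates its IP from
	GatewayFromPool string // pool referenced only for its gateway
}

type NetworkData struct {
	Interfaces []Interface
}

// poolsToAllocate returns only the pools that some interface actually
// allocates an address from. Pools referenced solely for gateway or
// DNS metadata get no allocation under the proposed behaviour.
func poolsToAllocate(nd NetworkData) map[string]bool {
	used := map[string]bool{}
	for _, iface := range nd.Interfaces {
		if iface.AddressFromPool != "" {
			used[iface.AddressFromPool] = true
		}
	}
	return used
}

func main() {
	nd := NetworkData{Interfaces: []Interface{
		{AddressFromPool: "pool-a", GatewayFromPool: "pool-a"},
		{GatewayFromPool: "pool-b"}, // gateway only: nothing allocated
	}}
	fmt.Println(poolsToAllocate(nd)) // map[pool-a:true]
}
```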
+1 for changing the behavior to only allocate addresses when the address will be used. Perhaps a bit premature, but any thoughts on how one would "upgrade" an existing cluster to the CAPI IPAM?
I had a discussion about this with the person who mostly worked on IPAM (they have since left the project). There seems to be no specific reason for it; it was mostly to be as close as possible to a DHCP answer. We also don't think we need it.
Makes sense to me, +1
Regarding the behaviour change for IP allocation: rough thoughts on upgrades (these would require a node roll).
We are planning to discuss these in the upcoming community meeting (14/09/22) in a bit more detail and also get feedback from the community.
Decision from the community meeting on 14 Sep 2022:
Maybe for reference, and so we don't forget about it: here is 4d271bc, the feature implementation using preAllocation, where the claim name is constructed from the BMH name + IPPool name if the feature is enabled; otherwise it keeps the current behaviour (data name + IPPool name).
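For clarity, a tiny Go sketch of that naming scheme; the function name, signature, and `-` separator are assumptions for illustration, not the actual implementation:

```go
package main

import "fmt"

// claimName derives the claim name. With the preAllocation feature
// enabled it is based on the BareMetalHost name; otherwise the
// current scheme (data name + IPPool name) is kept.
func claimName(preAllocation bool, bmhName, dataName, poolName string) string {
	if preAllocation {
		return bmhName + "-" + poolName // BMH name + IPPool name
	}
	return dataName + "-" + poolName // current behaviour
}

func main() {
	fmt.Println(claimName(true, "bmh-0", "data-0", "pool-a"))  // bmh-0-pool-a
	fmt.Println(claimName(false, "bmh-0", "data-0", "pool-a")) // data-0-pool-a
}
```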
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
User Story
As an operator I would like to use the new CAPI IPAM contract with metal3 in order to integrate with different IPAM solutions.
Detailed Description
The CAPI IPAM contract is now implemented and released as part of CAPI 1.2.0-rc.0.
In order to support it, we'll need to be able to reference IP Pools implemented by various providers. Looking at the current API, I think we can easily do so by extending `FromPool` in the DataTemplate with `apiGroup` and `kind` parameters. The new fields should either default to the metal3-ip-address-manager types, or just stay empty. A new reference would then look like the sketch below. Since only optional fields are added, the change would be fully backwards compatible and doesn't require a new API version.
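As a sketch only (the `json` tags, the package name, and every field name beyond `Name` are assumptions based on the description above, not settled API), the extended type could look like:

```go
package api // illustrative package name

// FromPool references a pool to allocate an IP address from.
type FromPool struct {
	// Name of the pool object to allocate from.
	Name string `json:"name"`
	// APIGroup of the pool type. Empty, or the metal3 IPAM group,
	// keeps today's metal3-ip-address-manager behaviour.
	APIGroup string `json:"apiGroup,omitempty"`
	// Kind of the pool type, e.g. IPPool.
	Kind string `json:"kind,omitempty"`
}
```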
In code we can then differentiate based on `apiGroup` and `kind`, and either create a metal3-ip-address-manager `IPAddressClaim`, or a CAPI one (`ipam.cluster.x-k8s.io`); see the sketch below.
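A rough Go sketch of that branch (the group/version/kind values and function names are assumptions based on this description, not the actual controller code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// FromPool is the sketched extended reference from above, re-declared
// here so the example is self-contained.
type FromPool struct {
	Name     string
	APIGroup string
	Kind     string
}

// claimGVK picks which claim type to create for a pool reference:
// the metal3-ip-address-manager one by default, or the CAPI IPAM
// contract's IPAddressClaim for any other apiGroup.
func claimGVK(pool FromPool) schema.GroupVersionKind {
	if pool.APIGroup == "" || pool.APIGroup == "ipam.metal3.io" {
		return schema.GroupVersionKind{Group: "ipam.metal3.io", Version: "v1alpha1", Kind: "IPClaim"}
	}
	return schema.GroupVersionKind{Group: "ipam.cluster.x-k8s.io", Version: "v1alpha1", Kind: "IPAddressClaim"}
}

func main() {
	fmt.Println(claimGVK(FromPool{Name: "pool-a"}))
	fmt.Println(claimGVK(FromPool{Name: "pool-b", APIGroup: "ipam.cluster.x-k8s.io", Kind: "InClusterIPPool"}))
}
```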
Anything else you would like to add:

I'm happy to work on this.
/kind feature