I see that you are currently using the "official" Kubernetes client (Swagger Codegen).
I forked the old pykube to https://github.com/hjacobs/pykube as I'm rather unhappy about the complexity and size of the official Kubernetes client.
See also hjacobs/pykube#12
Not sure if this would even work, or whether you use something specific to the Kubernetes Python client.
After #71, almost all Kubernetes-related code is consolidated in one package: `kopf.k8s`, where it can easily be replaced by any other implementation. The only thing outside of this package is authentication (`kopf.config.login()`).
However, the whole codebase of Kopf assumes that the objects are manipulated as plain dicts, not as the Kubernetes client's "models". Lines like `.get('metadata', {}).get('something')` and `.setdefault('status', {}).setdefault('kopf', {})` are all around. It would be difficult to change that and to support both dicts and the client's classes; it is better not to do so.
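Concretely, the dict-style access pattern looks like this (a minimal standalone sketch; the body content is invented for illustration):

```python
# How Kopf treats Kubernetes objects: plain nested dicts, never client
# "models". Reads use defensive .get() chains, writes use .setdefault()
# chains, so missing intermediate keys never raise KeyError.
body = {"metadata": {"name": "kopfexample-1", "namespace": "default"}}

name = body.get("metadata", {}).get("name")     # -> "kopfexample-1"
phase = body.get("status", {}).get("phase")     # -> None, no KeyError

# Create-on-write: materialise the nested structure only when needed.
body.setdefault("status", {}).setdefault("kopf", {})["progress"] = "done"
```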
In addition, Kopf promises to hide implementation details from the user ⟹ which means that the user should not know which Kubernetes client is used under the hood ⟹ which means that the internal models/classes must not be exposed ⟹ which means they have to be converted to dicts on arrival.
Another tricky part will be watch-streaming. The official Kubernetes client does that in a `while True` cycle, and reconnects all the time. Pykube-ng exits after the first disconnection, which happens after roughly ~5s, or as specified by a `timeout` query arg. The connection cannot stay open forever, so either the client or Kopf has to handle the reconnections. Some partial workarounds can be found in #96 (pre-listing and `resourceVersion` usage).
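Kopf would therefore need its own reconnection loop around pykube-ng's finite watch, resuming from the last seen `resourceVersion`. A minimal sketch (`watch_once` is a hypothetical callable standing in for whatever pykube-ng call opens one watch stream):

```python
import time

def watch_forever(watch_once, initial_version=None, backoff=1.0):
    """Wrap a finite watch stream in an eternal loop -- what the official
    client does internally, and what pykube-ng leaves to the caller.
    `watch_once` is a hypothetical callable that opens one watch stream
    and yields event dicts until the connection drops."""
    version = initial_version
    while True:
        try:
            for event in watch_once(since=version):
                # Track the last seen resourceVersion so a reconnect
                # resumes instead of replaying the whole history.
                version = event["object"]["metadata"].get("resourceVersion", version)
                yield event
        except OSError:
            time.sleep(backoff)  # transient disconnect: back off, then retry
```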
So far, a list of issues detected while trying to switch to pykube:

- A `pykube.HTTPClient` object is needed on every call; there is no implicit config (as in the official client). It has to be stored globally and reused all over the code.
- `pykube.HTTPClient` has a default timeout of 10s for any connection, including watching. It can be overridden explicitly with `timeout=None`, but that requires a separate `pykube.HTTPClient` instance.
- `pykube.HTTPClient` raises exceptions from `requests` on timeouts, not its own.
- A watch-call terminates after the connection is lost for any reason; there is no internal reconnection or `while True`. It has to be caught and repeated. In the official K8s client, this is done internally: the watch is eternal.
- `object_factory()` prints a list of discovered resources to stdout, which is visual garbage.
- `object_factory()` assumes that the resource always exists, and fails on `resource['namespaced']` when `resource` is `None`.
- `object_factory()` requires a kind, not a plural; it would be better if plural, singular, kind, and all aliases were accepted.
- `apiextensions.k8s.io/v1beta1/customresourcedefinitions` does not appear in the cluster's listing of resources, though it is accessible. Pykube should assume that the developer knows what they are doing, and create the classes properly (but: only with the plural name).
- Patching is implemented as `obj.update()`, where the whole body of the object is used as the patch, and this involves `resourceVersion` checking for non-conflicting patches. We need partial patches of the `status` field only (or finalizers, or annotations), not of the whole body, and we need no conflict resolution.
On the good side:

Pykube was able to handle a custom resource `KopfExample` and a built-in resource `Pod` via the same code; no {resource=>classes+methods} mapping was needed. Both the event-spy-handler and the regular cause-handlers worked on pods.
Basically, the whole trick is achieved by this snippet (not doable in the official K8s client library):
This alone justifies the effort to continue switching.

Preview branch: https://github.com/nolar/kopf/tree/pykube (based on the not-yet-merged "resume-handlers" branch).

Diff: wip/master/20190606...pykube
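A reconstruction of that trick, assuming it relies on pykube-ng's `object_factory()` discussed above (the resource names and namespace are illustrative, and this requires a live cluster to run):

```python
from pykube import HTTPClient, KubeConfig
from pykube.objects import object_factory

api = HTTPClient(KubeConfig.from_env())

# One code path for any resource, custom or built-in: the class is built
# dynamically from API discovery, so no {resource => classes+methods}
# mapping is needed.
KopfExample = object_factory(api, "zalando.org/v1", "KopfExample")
Pod = object_factory(api, "v1", "Pod")

for cls in (KopfExample, Pod):
    for obj in cls.objects(api, namespace="default"):
        print(cls.kind, obj.name)
```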
So far so good. The switch to pykube-ng is now fully implemented. The legacy kubernetes official client library is supported optionally (if installed) for auto-authentication, but is not used anywhere else — and I consider removing it completely.
The missing pykube-ng parts are simulated inside `kopf.k8s.classes` (e.g. the `obj.patch()` method), and should eventually move into pykube-ng itself.
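For illustration, the simulated `obj.patch()` could look roughly like this mixin. This is a sketch, not the actual `kopf.k8s.classes` code; the `api_kwargs()`, `raise_for_status()`, and `set_obj()` calls assume pykube-ng's `APIObject`/`HTTPClient` internals:

```python
import json

class PatchableMixin:
    """A sketch of the obj.patch() that kopf.k8s.classes simulates on top
    of pykube-ng's APIObject. Unlike obj.update(), it sends only the given
    fields as a merge-patch: no resourceVersion checks, no conflict
    resolution, no full-body round-trip."""

    def patch(self, patch: dict):
        resp = self.api.patch(**self.api_kwargs(
            headers={"Content-Type": "application/merge-patch+json"},
            data=json.dumps(patch),
        ))
        self.api.raise_for_status(resp)
        self.set_obj(resp.json())  # refresh the local object from the response
```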
The codebase seems functional, and clean. Arbitrary K8s resources (custom and built-in) are supported transparently, as prototyped above. The K8s events are sent; all is fine.
What is left: all the preceding PRs on which it is based (all pending review); some general cleanup plus the remaining TODO marks (to be sure nothing is forgotten); and maybe a test-drive for a few days in our testing infrastructure with real tasks.
Diff (still the same): wip/master/20190606...pykube — The diff is huge mostly because of tests (massive changes).