kind should GC unused images #735
This one is tricky: how do we actually know when an image should be removed? If I have long-running tests and I side-load the images, how do I ensure they don't get GCed midway through? How is this distinguished from repeatedly loading an app? I'm thinking we could maybe specifically GC image layers that are not referenced by a current tag, but that may not be sufficient, especially for workflows that generate unique automatic tags for every build. |
Is this not a problem of Kubernetes in general? |
Kubernetes has image GC, but it's tied to how much disk is left and isn't particularly sophisticated (basically: evict pods when disk is low, and evict images not used by pods). On a non-shared host the cluster administrator sets a threshold and the disk is dedicated to Kubernetes. With kind the disk is just the user's host disk and isn't dedicated. Users boot with low disk all the time with kind because we turned off evicting pods ... |
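For reference, kubelet's image GC is configured through disk-usage thresholds in its KubeletConfiguration. A minimal sketch of the relevant fields (from the kubelet.config.k8s.io/v1beta1 API; the values shown are the upstream defaults, used here only for illustration):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# GC starts when image disk usage exceeds the high threshold
imageGCHighThresholdPercent: 85
# GC then deletes least-recently-used images until usage falls below the low threshold
imageGCLowThresholdPercent: 80
# an image must be unused at least this long before it is eligible for deletion
imageMinimumGCAge: 2m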
This problem is specific to long lived kind clusters that load / pull lots of images. |
Then it seems like the same problem that docker has :)
docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] |
A subset of docker prune is basically what kubelet does. The difference is the user manually triggers docker prune from the host. With Kubernetes you expect kubelet to handle this, but for kind we have it disabled.
|
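In the meantime, the same kind of manual prune can be run inside a kind node from the host. A sketch, assuming a default single-node cluster whose node container is named kind-control-plane:

# list the images containerd's CRI plugin knows about inside the node
docker exec kind-control-plane crictl images
# remove every image not referenced by a running container
docker exec kind-control-plane crictl rmi --prune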
With the image GC flags deprecated, the last option feels like the most reasonable one. Thoughts on either of these solutions?
|
kubernetes/enhancements#1007. I think we want this: whitelist the system images somehow and then leave GC enabled. |
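As a sketch of what that whitelisting could look like: containerd's CRI plugin later grew support for marking images as pinned via a label, which image GC is expected to skip. The label name comes from containerd's CRI implementation; the image reference and exact invocation below are illustrative assumptions:

# inside the node container, label a system image as pinned in containerd's k8s.io namespace
ctr -n k8s.io images label registry.k8s.io/pause:3.9 io.cri-containerd.pinned=true
# inspect the image; recent crictl/containerd versions report a pinned field
crictl inspecti registry.k8s.io/pause:3.9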
That KEP seems to be abandoned, but the concept is generally accepted. I will try to file an updated version sometime. |
reviving the KEP kubernetes/enhancements#1717 |
@BenTheElder, the image GC flags were to be deprecated in favour of disk eviction, but why are they not deprecated yet? |
I couldn't say, you'd have to check in with the owners in SIG Node. |
This one remains relevant, see also considerations outlined in #2865 (comment) |
@BenTheElder Thanks Ben, I believe I came across this issue in my searching for possible fixes, but it didn't have the solution. On your point mentioned in my issue, I understand kind wasn't meant for long-lived deployments, but that's really a shame. It's been rock solid for some months now and I hope it continues to be. I looked into k3s and k8s, but as I'm running on Unraid OS, kind has been fantastic for my use case, sort of exactly what I was looking for. Hope the information in my issue is helpful and that a remedy can be found. Otherwise, I expect a simple periodic prune will do the trick. Cheers. |
+1 for this. I have a tiny development instance in GCP where I installed kind, and it quickly fills up the disk with the constant stream of new images I ship for my app. |
Please use the thumbs up button unless you have a comment about how we might accomplish this. Unfortunately it's not simple to solve this generally, because kubelet's GC behavior is based on disk percentage, which isn't aimed at development machines. In your specific case you can use a config patch to enable it, or you can exec into the node container and use crictl rmi. We have made some very small progress recently by adopting the recently available containerd feature to mark core images as pinned and not deletable, which is one part of the problem, but that work is incomplete. |
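For the config-patch route, a sketch of what enabling kubelet image GC in a kind cluster might look like; this assumes kind's kubeadmConfigPatches accept a KubeletConfiguration document (as kubeadm's config file format does), and the threshold values are illustrative, not recommendations:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  # re-enable image GC with thresholds suited to a shared development disk
  imageGCHighThresholdPercent: 70
  imageGCLowThresholdPercent: 60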
Are there any docs showing how to do this for this specific config? (I did try using |
xref #3441 |
TL;DR:
Possible options:
I spoke to @Random-Liu about this offline; the "don't GC these images" option might make sense upstream, but the deprecation of the image GC flags suggests the last option might be best.
https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#deprecation-of-existing-feature-flags-to-reclaim-disk
/priority important-longterm
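For completeness, the eviction-based replacement that the linked deprecation notice points to lives in the same KubeletConfiguration; a sketch using the upstream default disk-pressure signals (illustrative only):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# disk-pressure signals that trigger reclaim (including image deletion)
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"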