WSL2 Debian Required System D #3304
Open · treatmesubj opened this issue Jul 11, 2023 · 12 comments
Labels: kind/documentation (Categorizes issue or PR as related to documentation.)

@treatmesubj commented Jul 11, 2023:

What would you like to be documented:
On WSL2, systemd is not enabled by default, but it can be easily enabled (see the sketch at the end of this comment).
Docker doesn't require systemd, but kind appears to rely on it.
Without systemd enabled, the kind node's serial logs will show:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

Why is this needed:
kind has WSL2 docs, but they don't mention this as an easy fix/requirement.
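
For reference, a minimal sketch of one way to enable systemd on a WSL2 distro (assuming WSL 0.67.6 or newer, where the [boot] section of wsl.conf is supported):

# Inside the WSL2 distro: enable systemd as the init in /etc/wsl.conf
$ printf '[boot]\nsystemd=true\n' | sudo tee -a /etc/wsl.conf

# Shut the distro down so the setting takes effect (works from inside WSL via interop, or from PowerShell)
$ wsl.exe --shutdown

# After restarting the distro, PID 1 should now be systemd
$ ps -p 1 -o comm=
systemd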

@treatmesubj treatmesubj added the kind/documentation Categorizes issue or PR as related to documentation. label Jul 11, 2023
@aojea (Contributor) commented Jul 12, 2023:

/assign @BenTheElder

@BenTheElder (Member) commented:

It would certainly be easier to just suggest users enable systemd. The container ecosystem is largely tested with systemd, and we do not have WSL2 CI available (#1529).

@wallrj commented Nov 28, 2023:

Thanks for creating this issue and for the link to the "enabling systemd" documentation.
It fixed the problem for me.
The problem only started occurring today, after I carelessly ran apt update && apt upgrade, which must have upgraded Docker.
kind had previously worked very well without systemd, with me manually running sudo dockerd & in a separate terminal.

$ docker version
Client:
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.3
 Git commit:        24.0.5-0ubuntu1~22.04.1
 Built:             Mon Aug 21 19:50:14 2023
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.3
  Git commit:       24.0.5-0ubuntu1~22.04.1
  Built:            Mon Aug 21 19:50:14 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.2
  GitCommit:
 runc:
  Version:          1.1.7-0ubuntu1~22.04.1
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:
$ wsl.exe --version
WSL version: 2.0.11.0
Kernel version: 5.15.133.1-1
WSLg version: 1.0.59
MSRDC version: 1.2.4677
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22621.2715

@BenTheElder (Member) commented:

Can you share more about the VM environment: what init is running, and what does "docker info" output (e.g. we want to know about the cgroups)?

kind has code to ensure this path exists if it is not created by the host, but I guess even a privileged container is not getting permission in this environment.

@wallrj commented Nov 29, 2023:

@BenTheElder I disabled systemd again and got the following info. Hope it helps.

Debug info
$ ps fauxwww
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   2456  1604 hvc0     Sl+  09:19   0:00 /init
root         5  0.0  0.0   2624   216 hvc0     Sl+  09:19   0:00 plan9 --control-socket 5 --log-level 4 --server-fd 6 --pipe-fd 8 --log-truncate
root        12  0.0  0.0   2460   120 ?        Ss   09:19   0:00 /init
root        13  0.0  0.0   2476   124 ?        S    09:19   0:00  \_ /init
richard     14  0.1  0.1  10436  9536 pts/0    Ss   09:19   0:00      \_ -bash
richard    245  0.0  0.0   7480  3100 pts/0    R+   09:20   0:00          \_ ps fauxwww
$ file /init
/init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$ sudo dockerd
INFO[2023-11-29T09:22:24.446829617Z] Starting up
INFO[2023-11-29T09:22:24.448455872Z] containerd not running, starting managed containerd
INFO[2023-11-29T09:22:24.454145861Z] started new containerd process                address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=391
INFO[2023-11-29T09:22:24.579381929Z] starting containerd                           revision= version=1.7.2
INFO[2023-11-29T09:22:24.595508863Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.606976980Z] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.133.1-microsoft-standard-WSL2\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.607042002Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.607257521Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.607300385Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2023-11-29T09:22:24.607642630Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.607908834Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.608414956Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2023-11-29T09:22:24.608459408Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2023-11-29T09:22:24.608485026Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.608654156Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2023-11-29T09:22:24.608691620Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2023-11-29T09:22:24.608730691Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2023-11-29T09:22:24.608746581Z] metadata content store policy set             policy=shared
INFO[2023-11-29T09:22:24.618177552Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2023-11-29T09:22:24.618225947Z] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2023-11-29T09:22:24.618251685Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2023-11-29T09:22:24.618288315Z] loading plugin "io.containerd.lease.v1.manager"...  type=io.containerd.lease.v1
INFO[2023-11-29T09:22:24.618314942Z] loading plugin "io.containerd.nri.v1.nri"...  type=io.containerd.nri.v1
INFO[2023-11-29T09:22:24.618329297Z] NRI interface is disabled by configuration.
INFO[2023-11-29T09:22:24.618338684Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2023-11-29T09:22:24.618415448Z] loading plugin "io.containerd.runtime.v2.shim"...  type=io.containerd.runtime.v2
INFO[2023-11-29T09:22:24.618426484Z] loading plugin "io.containerd.sandbox.store.v1.local"...  type=io.containerd.sandbox.store.v1
INFO[2023-11-29T09:22:24.618439112Z] loading plugin "io.containerd.sandbox.controller.v1.local"...  type=io.containerd.sandbox.controller.v1
INFO[2023-11-29T09:22:24.618451941Z] loading plugin "io.containerd.streaming.v1.manager"...  type=io.containerd.streaming.v1
INFO[2023-11-29T09:22:24.618481927Z] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618523350Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618553054Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618577367Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618599892Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618612992Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618639412Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.618649200Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2023-11-29T09:22:24.618697828Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2023-11-29T09:22:24.619925442Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2023-11-29T09:22:24.619976443Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.620005518Z] loading plugin "io.containerd.transfer.v1.local"...  type=io.containerd.transfer.v1
INFO[2023-11-29T09:22:24.620048171Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2023-11-29T09:22:24.621121668Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621157761Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621182027Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621203990Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621223465Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621245505Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621253852Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621261270Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.621284763Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2023-11-29T09:22:24.622006647Z] loading plugin "io.containerd.grpc.v1.sandbox-controllers"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622041915Z] loading plugin "io.containerd.grpc.v1.sandboxes"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622051183Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622062977Z] loading plugin "io.containerd.grpc.v1.streaming"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622072626Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622082884Z] loading plugin "io.containerd.grpc.v1.transfer"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622090663Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2023-11-29T09:22:24.622097796Z] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2023-11-29T09:22:24.622120342Z] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2023-11-29T09:22:24.622138345Z] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
INFO[2023-11-29T09:22:24.622146064Z] skipping tracing processor initialization (no tracing plugin)  error="no OpenTelemetry endpoint: skip plugin"
INFO[2023-11-29T09:22:24.622838779Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2023-11-29T09:22:24.622926188Z] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2023-11-29T09:22:24.622949047Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2023-11-29T09:22:24.622967555Z] containerd successfully booted in 0.045567s
INFO[2023-11-29T09:22:24.705437188Z] [graphdriver] using prior storage driver: overlay2
INFO[2023-11-29T09:22:25.068625843Z] Loading containers: start.
ERRO[2023-11-29T09:22:25.235685626Z] Could not add route to IPv6 network fc00:f853:ccd:e793::1/64 via device br-b8007e1e9ae1: network is down
INFO[2023-11-29T09:22:25.617737299Z] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address
INFO[2023-11-29T09:22:25.706055018Z] Loading containers: done.
WARN[2023-11-29T09:22:25.752297901Z] WARNING: No blkio throttle.read_bps_device support
WARN[2023-11-29T09:22:25.752337056Z] WARNING: No blkio throttle.write_bps_device support
WARN[2023-11-29T09:22:25.752340795Z] WARNING: No blkio throttle.read_iops_device support
WARN[2023-11-29T09:22:25.752342975Z] WARNING: No blkio throttle.write_iops_device support
INFO[2023-11-29T09:22:25.752357683Z] Docker daemon                                 commit="24.0.5-0ubuntu1~22.04.1" graphdriver=overlay2 version=24.0.5
INFO[2023-11-29T09:22:25.752803797Z] Daemon has completed initialization
INFO[2023-11-29T09:22:26.353865491Z] API listen on /var/run/docker.sock
^[2time="2023-11-29T09:26:22.052569923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-11-29T09:26:22.052630401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-11-29T09:26:22.052641359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-11-29T09:26:22.052647822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
ERRO[2023-11-29T09:26:22.503746165Z] stream copy error: reading from a closed fifo
ERRO[2023-11-29T09:26:22.503758031Z] stream copy error: reading from a closed fifo
ERRO[2023-11-29T09:26:22.505655077Z] Error running exec 519258e878734179cc7fdc16fc7c5db109bf6cc3246f57a056994f1f813c2f78 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown
ERRO[2023-11-29T09:26:22.506276624Z] failed to close container stdin               container=4adda0a3fffa12e8b8fd21c44a85746464ed2fa011bbe97415cd259f7b92f7bc error="process does not exist 519258e878734179cc7fdc16fc7c5db109bf6cc3246f57a056994f1f813c2f78: not found" module=libcontainerd namespace=moby
INFO[2023-11-29T09:26:22.510417481Z] ignoring event                                container=4adda0a3fffa12e8b8fd21c44a85746464ed2fa011bbe97415cd259f7b92f7bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2023-11-29T09:26:22.510521136Z] shim disconnected                             id=4adda0a3fffa12e8b8fd21c44a85746464ed2fa011bbe97415cd259f7b92f7bc namespace=moby
WARN[2023-11-29T09:26:22.510640201Z] cleaning up after shim disconnected           id=4adda0a3fffa12e8b8fd21c44a85746464ed2fa011bbe97415cd259f7b92f7bc namespace=moby
INFO[2023-11-29T09:26:22.510647611Z] cleaning up dead shim                         namespace=moby
ERRO[2023-11-29T09:26:22.917372732Z] restartmanger wait error: container is marked for removal and cannot be started
time="2023-11-29T09:26:35.203100301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-11-29T09:26:35.203154956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-11-29T09:26:35.203165403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-11-29T09:26:35.203172015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
INFO[2023-11-29T09:26:35.803190199Z] shim disconnected                             id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
WARN[2023-11-29T09:26:35.803259439Z] cleaning up after shim disconnected           id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
INFO[2023-11-29T09:26:35.803266158Z] cleaning up dead shim                         namespace=moby
INFO[2023-11-29T09:26:35.803341851Z] ignoring event                                container=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2023-11-29T09:26:36.297186557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-11-29T09:26:36.297633451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-11-29T09:26:36.297646826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-11-29T09:26:36.297653617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
INFO[2023-11-29T09:26:36.715871930Z] ignoring event                                container=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2023-11-29T09:26:36.715966041Z] shim disconnected                             id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
WARN[2023-11-29T09:26:36.716027395Z] cleaning up after shim disconnected           id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
INFO[2023-11-29T09:26:36.716032325Z] cleaning up dead shim                         namespace=moby
time="2023-11-29T09:27:22.455790810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-11-29T09:27:22.456533256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-11-29T09:27:22.456553647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-11-29T09:27:22.456564023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
INFO[2023-11-29T09:27:22.788567103Z] shim disconnected                             id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
INFO[2023-11-29T09:27:22.788587937Z] ignoring event                                container=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
WARN[2023-11-29T09:27:22.788645015Z] cleaning up after shim disconnected           id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
INFO[2023-11-29T09:27:22.788652155Z] cleaning up dead shim                         namespace=moby
time="2023-11-29T09:27:23.375892623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-11-29T09:27:23.375959853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-11-29T09:27:23.375982836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-11-29T09:27:23.375992267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
INFO[2023-11-29T09:27:23.660576655Z] shim disconnected                             id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
WARN[2023-11-29T09:27:23.660651370Z] cleaning up after shim disconnected           id=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 namespace=moby
INFO[2023-11-29T09:27:23.660657998Z] cleaning up dead shim                         namespace=moby
INFO[2023-11-29T09:27:23.660755569Z] ignoring event                                container=28157d6b06159ecedbb55a59a28d75cc4580460e800220b2bebdac682f615a06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
^CINFO[2023-11-29T09:28:27.534928399Z] Processing signal 'interrupt'
INFO[2023-11-29T09:28:27.535885558Z] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2023-11-29T09:28:27.536041188Z] Daemon shutdown complete
INFO[2023-11-29T09:28:27.536179524Z] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2023-11-29T09:28:27.536251641Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
$ docker info
Client:
 Version:    24.0.5
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /home/richard/.docker/cli-plugins/docker-buildx

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 25
 Server Version: 24.0.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 5.15.133.1-microsoft-standard-WSL2
 Operating System: Ubuntu 22.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 7.612GiB
 Name: LAPTOP-HJEQ9V9G
 ID: ba6b58f8-c4d0-44c2-b8c5-ecf3f4a558eb
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
$ kind create cluster --retain -v 6
Creating cluster "kind" ...
DEBUG: docker/images.go:58] Image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72 present locally
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.27.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 172.18.0.2
    node-labels: ""
    provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 ✓ Writing configuration 📜
DEBUG: kubeadminit/init.go:82]
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 137
Command Output:
Stack Trace:
sigs.k8s.io/kind/pkg/errors.WithStack
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:59
sigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/local.go:124
sigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*nodeCmd).Run
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/providers/docker/node.go:146
sigs.k8s.io/kind/pkg/exec.CombinedOutputLines
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/helpers.go:67
sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/actions/kubeadminit/init.go:81
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/create.go:135
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/provider.go:181
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cmd/kind/create/cluster/createcluster.go:110
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cmd/kind/create/cluster/createcluster.go:54
github.com/spf13/cobra.(*Command).execute
        /home/richard/go/pkg/mod/github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        /home/richard/go/pkg/mod/github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
        /home/richard/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
sigs.k8s.io/kind/cmd/kind/app.Run
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/cmd/kind/app/main.go:35
main.main
        /home/richard/go/pkg/mod/sigs.k8s.io/[email protected]/main.go:25
runtime.main
        /home/richard/sdk/go1.21.3/src/runtime/proc.go:267
runtime.goexit
        /home/richard/sdk/go1.21.3/src/runtime/asm_amd64.s:1650
$ docker start --attach kind-control-plane
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v1
INFO: detected cgroupns
INFO: removing misc controller
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: setting iptables to detected mode: nft
INFO: detected IPv4 address: 172.18.0.2
INFO: detected old IPv4 address: 172.18.0.2
INFO: detected IPv6 address: fc00:f853:ccd:e793::2
INFO: detected old IPv6 address: fc00:f853:ccd:e793::2
INFO: starting init
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

@BenTheElder (Member) commented:

Thanks, I don't understand why this is happening. We should be forcing cgroupns=private with kind v0.20.0+ and therefore /sys/fs/cgroup should be namespaced and we should be able to manipulate it as a privileged container.

I suspect that's a bad assumption versus privileged + hostns and instead we're back to relying on the host systemd to ensure the top level mounts exist and #3277 may be ~the same root cause ...?

Unfortunately still not going to get a chance to dig in for a bit and probably not on windows but maybe on alpine
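
For anyone else debugging this, a quick way to check what the node container actually got (assuming the default node name kind-control-plane and a recent Docker):

# cgroup namespace mode Docker gave the node container ("private" vs "host")
$ docker inspect --format '{{.HostConfig.CgroupnsMode}}' kind-control-plane

# cgroup driver and version as reported by the daemon
$ docker info --format 'driver={{.CgroupDriver}} version={{.CgroupVersion}}'

# filesystem type mounted at /sys/fs/cgroup on the host: cgroup2fs => v2, tmpfs => v1
$ stat -fc %T /sys/fs/cgroup/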

@syNack0 commented Jul 16, 2024:

Any news about this?

@BenTheElder (Member) commented:

@syNack0 no, or else there would be a comment here.

The maintainers do not use Windows. If someone with this issue can upload the logs as per the bug template, we can look at those:

kind create cluster --retain
kind export logs
kind delete cluster

@BenTheElder (Member) commented:

(Also, nobody has worked on solving the Windows CI problem (#1529), so we rely entirely on Windows-interested contributors.)

@wallrj commented Jul 17, 2024:

In #3304 (comment) @BenTheElder wrote:

Thanks, I don't understand why this is happening. We should be forcing cgroupns=private with kind v0.20.0+ and therefore /sys/fs/cgroup should be namespaced and we should be able to manipulate it as a privileged container.

I stumbled across the following answer in https://stackoverflow.com/a/73376219 by @Domest0s and thought it might be relevant:

By default, cgroup2 will likely be mounted at /sys/fs/cgroup/unified. Some apps might not like it (docker in particular). Move it to the conventional place with:
$ mount --move /sys/fs/cgroup/unified /sys/fs/cgroup

@BenTheElder (Member) commented:

By default, cgroup2 will likely be mounted at /sys/fs/cgroup/unified. Some apps might not like it (docker in particular). Move it to the conventional place with:
$ mount --move /sys/fs/cgroup/unified /sys/fs/cgroup

Ok yeah, that is weird; usually you only see that path on a systemd host that has BOTH v1 and v2 mounted ("hybrid" mode, which ... I don't recommend). /sys/fs/cgroup is the normal path in either "unified" (v2) or "legacy" (v1) mode.

It's possible that may fix things, but are we sure it's not systemd in "hybrid" mode? Or is this the default WSL2 init system (I don't know much about that one yet)?
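
A rough way to tell the three setups apart (see the systemd cgroup delegation doc linked below) is to list the cgroup mounts on the host:

# unified: a single cgroup2 mount on /sys/fs/cgroup
# legacy:  only v1 controller mounts under a tmpfs at /sys/fs/cgroup
# hybrid:  v1 controllers plus a cgroup2 mount on /sys/fs/cgroup/unified
$ mount -t cgroup,cgroup2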

@BenTheElder (Member) commented:

https://systemd.io/CGROUP_DELEGATION/#three-different-tree-setups-
