[Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Podman currently doesn't support cgroup v1 + v2 mixed systems.] [Update: FIXED in Podman 5.3.0] #23990
Comments
Please provide the full journal log for this service. |
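For reference, a minimal way to capture that full journal for the user unit (a sketch, not a command from the thread; the unit name testquadlet is the one used throughout this issue):
$ journalctl --user -u testquadlet.service -b --no-pager
$ journalctl --user -f -u testquadlet.service    # or follow it live while restarting the unit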
The service above was a minimal test case to show that everything is broken (no matter what service I try to create, they all fail via systemd). But here's the service I was actually trying to run.
Edit: Log was here. It ran successfully.
|
I shared my actual container spec file above and the full journal log for that (the syncthing service). Hope I haven't missed anything there. |
I would be interested in the full logs with the simple sleep example. The systemctl status output is truncated; I expect many more log lines (in journalctl). The conmon /sys/fs/cgroup/memory.events message can be ignored, I think, although I haven't looked too closely at what conmon is doing there.
That looks like something is sending SIGTERM to the container. |
Hmm, interesting. I'm not sure what that could be from. I have a normal Fedora Workstation 40 desktop and haven't installed any task killer. But whatever it is, it gets sent to every container that is started via systemd, but never when I start them manually in the terminal or with the Pods app. My systemd services run at the user level, and I can confirm that they are running as my user because the ExecStart command replaces
Okay good. :)
Good idea. I'll capture those now:
|
So the container is started fine, but then something is causing us to stop it right away. Given that the StopSignal message is a podman log line, it means something is triggering the podman rm -f ... (ExecStop=) to be run. |
That makes sense. And it happens for every podman systemd service, but not for any of my other systemd services, nor for manually executed podman commands. The service is generated as a Type=notify unit. It's too suspicious that every quadlet service stops itself immediately. And I did see some line about a notify watcher failure. PS: In the meantime I am installing Fedora 40 in a VM and will see if it works there. My current system started on Fedora 35 and has been upgraded over the years, so perhaps some core config is broken... |
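A quick way to double-check exactly what quadlet generated and how systemd sees it (a hedged aside, not from the thread; the unit name testquadlet is taken from the rest of this issue):
$ systemctl --user cat testquadlet.service
$ systemctl --user show testquadlet.service -p Type,NotifyAccess,ExecStart,ExecStop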
:/ Should I reinstall the entire operating system? Or is there a way to fix this on my real machine? Here are the test results with a fresh Fedora 40 virtual machine:
|
I did a comparison, but I can't really think of any differences between the VM and the host... I haven't changed anything manually. |
Is this running as a logged-in user session? Might be systemd killing the service for some reason. |
Yeah. It's at the GNOME desktop right now. I will try rebooting, logging in at runlevel 3 with a pure text terminal, and running the container. Good idea for a test. |
No I don't think that will matter. |
I have now done these tests. None of them fixed the problem. Containers still exit immediately when using systemd.
None of it solves the problem. I'm about to do one more test... I'll create a 2nd user and see if that one has more luck. |
Tried two more things, but nothing solves it:
One thing strikes me though. The Fedora virtual machine was made from the Fedora 40 ISO, which uses packages from March 2024. I should install it and do a full system upgrade inside the VM to see if the VM still works on the latest packages, or if having the latest Fedora 40 packages is what breaks it. |
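For completeness, bringing the VM up to the current Fedora 40 package set could look roughly like this (a sketch, not a command from the thread; assumes dnf and a reboot afterwards):
$ sudo dnf upgrade --refresh -y
$ sudo systemctl reboot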
Fedora 40 Workstation virtual machine results:
Here's the log from the last run, where I had upgraded the VM to the same kernel as the host. As mentioned, this is what a SUCCESSFUL run looks like:
I am soon out of ideas. |
Everything points to:
|
I generated lists of all systemd system units for both the VM and the host to compare and see if there's something here that could cause the conflict. But I am more and more thinking that Fedora is just broken. It's not the first time something breaks permanently during OS-to-OS upgrades in this distro. At least we've narrowed it down to the fact that, on the host, the containers receive a SIGTERM as soon as their systemd podman units start. Any last advice or ideas? Maybe some idea for how to see which process sends the SIGTERM? (One possible approach is sketched right after the diff below.) I have Pods and Podman Desktop on the machine. I might try removing those if they somehow interfere, but that's unlikely, especially since they are Flatpaks. Generated via the command:
--- system-units-vm.txt 2024-09-17 22:11:16.043517907 +0200
+++ system-units-host.txt 2024-09-17 22:11:25.525331678 +0200
@@ -1,4 +1,5 @@
-553 unit files listed.
+
+594 unit files listed.
abrtd.service enabled enabled
abrt-journal-core.service enabled enabled
abrt-oops.service enabled enabled
@@ -6,6 +7,12 @@
abrt-vmcore.service enabled enabled
abrt-xorg.service enabled enabled
accounts-daemon.service enabled enabled
+adb.service disabled disabled
+[email protected] disabled disabled
+akmods-keygen.target static -
+[email protected] disabled disabled
+akmods.service enabled enabled
+akmods-shutdown.service disabled disabled
alsa-restore.service static -
alsa-state.service static -
anaconda-direct.service static -
@@ -20,6 +27,7 @@
anaconda.target static -
[email protected] static -
arp-ethers.service disabled disabled
+atd.service enabled enabled
auditd.service enabled enabled
audit-rules.service enabled enabled
auth-rpcgss-module.service static -
@@ -43,10 +51,15 @@
chronyd-restricted.service disabled disabled
chronyd.service enabled enabled
chrony-wait.service disabled disabled
+cni-dhcp.service disabled disabled
+cni-dhcp.socket disabled disabled
colord.service static -
[email protected] static -
console-getty.service disabled disabled
[email protected] static -
+coolercontrold.service enabled disabled
+coolercontrol-liqctld.service static -
+crond.service enabled enabled
cryptsetup-pre.target static -
cryptsetup.target static -
ctrl-alt-del.target alias -
@@ -83,6 +96,9 @@
display-manager.service alias -
dm-event.service static -
dm-event.socket enabled enabled
+dmraid-activation.service disabled enabled
+dnf5-offline-transaction-cleanup.service static -
+dnf5-offline-transaction.service disabled disabled
dnf-makecache.service static -
dnf-makecache.timer enabled enabled
dnf-system-upgrade-cleanup.service static -
@@ -101,6 +117,9 @@
emergency.target static -
exit.target disabled disabled
factory-reset.target static -
+fancontrol.service disabled disabled
+fcoemon.socket disabled disabled
+fcoe.service disabled disabled
fedora-third-party-refresh.service disabled disabled
final.target static -
firewalld.service enabled enabled
@@ -127,6 +146,7 @@
grub-boot-indeterminate.service static -
gssproxy.service disabled disabled
halt.target disabled disabled
+hddtemp.service disabled disabled
hibernate.target static -
home.mount generated -
htcacheclean.service disabled disabled
@@ -167,14 +187,17 @@
kmod-static-nodes.service static -
ldconfig.service static -
libvirtd-admin.socket disabled disabled
-libvirtd-ro.socket disabled disabled
-libvirtd.service disabled disabled
-libvirtd.socket disabled disabled
+libvirtd-ro.socket enabled disabled
+libvirtd.service enabled disabled
+libvirtd.socket enabled disabled
libvirtd-tcp.socket disabled disabled
libvirtd-tls.socket disabled disabled
libvirt-guests.service disabled disabled
-livesys-late.service enabled enabled
-livesys.service enabled enabled
+livesys-late.service generated -
+livesys.service generated -
+lldpad.service disabled disabled
+lldpad.socket disabled disabled
+lm_sensors.service enabled enabled
loadmodules.service disabled disabled
local-fs-pre.target static -
local-fs.target static -
@@ -185,6 +208,7 @@
lvm2-lvmpolld.service static -
lvm2-lvmpolld.socket enabled enabled
lvm2-monitor.service enabled enabled
+machine-qemu\x2d2\x2dfedora40.scope transient -
machine.slice static -
machines.target disabled enabled
man-db-cache-update.service static -
@@ -201,9 +225,13 @@
mdmonitor-oneshot.timer disabled disabled
mdmonitor.service enabled enabled
[email protected] static -
+mnt-entertainment.mount generated -
+mnt-media_storage.mount generated -
ModemManager.service enabled enabled
[email protected] static -
-.mount generated -
+multipathd.service enabled enabled
+multipathd.socket enabled disabled
multi-user.target static -
ndctl-monitor.service disabled disabled
netavark-dhcp-proxy.service disabled disabled
@@ -228,6 +256,12 @@
nss-lookup.target static -
nss-user-lookup.target static -
numad.service disabled disabled
+nvidia-fallback.service disabled disabled
+nvidia-hibernate.service enabled enabled
+nvidia-persistenced.service disabled disabled
+nvidia-powerd.service enabled enabled
+nvidia-resume.service enabled enabled
+nvidia-suspend.service enabled enabled
nvmefc-boot-connections.service disabled disabled
nvmf-autoconnect.service disabled disabled
nvmf-connect-nbft.service static -
@@ -242,6 +276,8 @@
paths.target static -
pcscd.service indirect disabled
pcscd.socket enabled enabled
+piavpn.service enabled disabled
+plexmediaserver.service enabled disabled
plocate-updatedb.service static -
plocate-updatedb.timer enabled enabled
plymouth-halt.service static -
@@ -275,10 +311,11 @@
quotaon.service static -
raid-check.service static -
raid-check.timer enabled enabled
+ratbagd.service disabled disabled
rc-local.service static -
realmd.service static -
reboot.target enabled enabled
-remote-cryptsetup.target enabled enabled
+remote-cryptsetup.target disabled enabled
remote-fs-pre.target static -
remote-fs.target enabled enabled
remote-veritysetup.target disabled disabled
@@ -291,8 +328,8 @@
rpc_pipefs.target static -
rpc-statd-notify.service static -
rpc-statd.service static -
-rpmdb-migrate.service disabled enabled
-rpmdb-rebuild.service disabled enabled
+rpmdb-migrate.service enabled enabled
+rpmdb-rebuild.service enabled enabled
rtkit-daemon.service enabled enabled
runlevel0.target alias -
runlevel1.target alias -
@@ -314,6 +351,7 @@
sleep.target static -
slices.target static -
smartcard.target static -
+smartd.service enabled enabled
sockets.target static -
soft-reboot.target static -
sound.target static -
@@ -323,7 +361,7 @@
spice-webdavd.service static -
[email protected] disabled disabled
sshd-keygen.target static -
-sshd.service disabled disabled
+sshd.service enabled disabled
[email protected] static -
sshd.socket disabled disabled
ssh-host-keys-migration.service disabled disabled
@@ -368,9 +406,10 @@
systemd-boot-random-seed.service static -
systemd-boot-update.service disabled disabled
systemd-bsod.service static -
-systemd-confext.service enabled enabled
+systemd-confext.service disabled enabled
[email protected] static -
systemd-coredump.socket static -
+systemd-cryptsetup@luks\x2d25bc83ad\x2dfefb\x2d42fd\x2d89d2\x2df5201b6ce248.service generated -
systemd-exit.service static -
systemd-firstboot.service static -
systemd-fsck-root.service static -
@@ -407,7 +446,7 @@
systemd-networkd.socket disabled disabled
systemd-networkd-wait-online.service disabled disabled
[email protected] disabled disabled
-systemd-network-generator.service enabled enabled
+systemd-network-generator.service disabled enabled
[email protected] disabled disabled
systemd-oomd.service enabled enabled
systemd-oomd.socket disabled disabled
@@ -428,7 +467,7 @@
systemd-pcrphase-sysinit.service static -
systemd-portabled.service static -
systemd-poweroff.service static -
-systemd-pstore.service enabled enabled
+systemd-pstore.service disabled enabled
systemd-quotacheck.service static -
systemd-random-seed.service static -
systemd-reboot.service static -
@@ -442,7 +481,7 @@
systemd-suspend.service static -
systemd-suspend-then-hibernate.service static -
systemd-sysctl.service static -
-systemd-sysext.service enabled enabled
+systemd-sysext.service disabled enabled
[email protected] static -
systemd-sysext.socket disabled disabled
systemd-sysupdate-reboot.service indirect disabled
@@ -479,6 +518,7 @@
system-update-cleanup.service static -
system-update-pre.target static -
system-update.target static -
+tcsd.service disabled disabled
[email protected] static -
thermald.service enabled enabled
timers.target static -
@@ -500,36 +540,37 @@
user.slice static -
var-lib-machines.mount static -
var-lib-nfs-rpc_pipefs.mount static -
+var-lib-plexmediaserver-media.mount generated -
vboxclient.service static -
vboxservice.service enabled enabled
veritysetup-pre.target static -
veritysetup.target static -
vgauthd.service enabled disabled
virt-guest-shutdown.target static -
-virtinterfaced-admin.socket enabled enabled
-virtinterfaced-ro.socket enabled enabled
+virtinterfaced-admin.socket disabled enabled
+virtinterfaced-ro.socket disabled enabled
virtinterfaced.service disabled disabled
virtinterfaced.socket enabled enabled
-virtlockd-admin.socket enabled enabled
+virtlockd-admin.socket disabled enabled
virtlockd.service disabled disabled
virtlockd.socket enabled enabled
-virtlogd-admin.socket enabled enabled
+virtlogd-admin.socket disabled enabled
virtlogd.service disabled disabled
virtlogd.socket enabled enabled
-virtnetworkd-admin.socket enabled enabled
-virtnetworkd-ro.socket enabled enabled
+virtnetworkd-admin.socket disabled enabled
+virtnetworkd-ro.socket disabled enabled
virtnetworkd.service disabled disabled
virtnetworkd.socket enabled enabled
-virtnodedevd-admin.socket enabled enabled
-virtnodedevd-ro.socket enabled enabled
+virtnodedevd-admin.socket disabled enabled
+virtnodedevd-ro.socket disabled enabled
virtnodedevd.service disabled disabled
virtnodedevd.socket enabled enabled
-virtnwfilterd-admin.socket enabled enabled
-virtnwfilterd-ro.socket enabled enabled
+virtnwfilterd-admin.socket disabled enabled
+virtnwfilterd-ro.socket disabled enabled
virtnwfilterd.service disabled disabled
virtnwfilterd.socket enabled enabled
-virtproxyd-admin.socket enabled enabled
-virtproxyd-ro.socket enabled enabled
+virtproxyd-admin.socket disabled enabled
+virtproxyd-ro.socket disabled enabled
virtproxyd.service disabled disabled
virtproxyd.socket enabled enabled
virtproxyd-tcp.socket disabled disabled
@@ -538,15 +579,16 @@
virtqemud-ro.socket enabled enabled
virtqemud.service enabled enabled
virtqemud.socket enabled enabled
-virtsecretd-admin.socket enabled enabled
-virtsecretd-ro.socket enabled enabled
+virtsecretd-admin.socket disabled enabled
+virtsecretd-ro.socket disabled enabled
virtsecretd.service disabled disabled
virtsecretd.socket enabled enabled
-virtstoraged-admin.socket enabled enabled
-virtstoraged-ro.socket enabled enabled
+virtstoraged-admin.socket disabled enabled
+virtstoraged-ro.socket disabled enabled
virtstoraged.service disabled disabled
virtstoraged.socket enabled enabled
vmtoolsd.service enabled enabled
+[email protected] static -
wpa_supplicant.service disabled disabled
wsdd.service disabled disabled
zfs-fuse-scrub.service static - |
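One possible way to answer the "which process sends the SIGTERM" question above is to watch the kernel's signal tracepoint with bpftrace (a sketch, not something used in this thread; needs root and the bpftrace package, and signals generated from kernel context can report a misleading sender):
$ sudo bpftrace -e 'tracepoint:signal:signal_generate /args->sig == 15/ { printf("SIGTERM to pid %d generated by %s (pid %d)\n", args->pid, comm, pid); }'
Leave that running in one terminal and start the quadlet unit in another; the sending process name and PID should show up as soon as the container is terminated.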
Well, I had some ideas with strace. It did capture various signals being sent BY the podman process to terminate its child (the container), and also signals from conmon detecting that podman has exited. But no SIGTERM "towards podman" was ever received or sent by conmon or podman themselves. Which means:
systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; ps aux | grep podman; POD_PID=$(pidof conmon); echo "PID=${POD_PID}"; strace -p $POD_PID -e signal
johnny 19847 0.0 0.0 231328 2068 ? Ss 22:35 0:00 /usr/bin/conmon --api-version 1 -c 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 -u 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata -p /run/user/1000/containers/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9
johnny 19852 0.7 0.0 2605668 43904 ? Ssl 22:35 0:00 /usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid
johnny 19900 0.0 0.0 227924 2304 pts/6 S+ 22:35 0:00 grep --color=auto podman
PID=19847
strace: Process 19847 attached
rt_sigaction(SIGCHLD, {sa_handler=SIG_DFL, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f95c1bf2d00}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
+++ exited with 137 +++
The traced conmon process exited with status 137 almost immediately. I was also quick to capture what the podman rm cleanup process was doing.
Edit: I even simplified the command to cut out the unnecessary parts.
|
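Another angle would be to attach strace to the unit's main process directly instead of guessing with pidof (a sketch, not from the thread; it assumes the unit stays up long enough for MainPID to resolve):
$ MAIN_PID=$(systemctl --user show testquadlet.service -p MainPID --value)
$ strace -f -tt -e trace=signal -p "$MAIN_PID"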
I just had a small breakthrough, so I am not giving up yet.
[Unit]
Wants=network-online.target
After=network-online.target
Description=Sleep Container
SourcePath=/home/johnny/.config/containers/systemd/testquadlet.container
RequiresMountsFor=%t/containers
[X-Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity
[Install]
WantedBy=default.target
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon --log-level=debug docker.io/alpine sleep infinity
Conclusion so far:
Since I got this far, it might be worth investigating more and finding out whether this is a bug in podman on certain systems. Any ideas based on this finding, @Luap99 @rhatdan? |
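One more diagnostic that might help separate "manual run" from "run under systemd": start the same podman command as a transient user unit, which reproduces most of the systemd-managed environment without quadlet (a sketch; the unit and container names podman-manual-test/manualtest are made up for illustration):
$ systemd-run --user --unit=podman-manual-test -p Delegate=yes podman run --name=manualtest --rm docker.io/alpine sleep infinity
$ systemctl --user status podman-manual-test
$ journalctl --user -u podman-manual-test -f
If this transient unit is also torn down immediately, the problem lives in the systemd-managed environment rather than in anything quadlet-specific.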
Since everything is pointing towards a conmon problem, I captured the debug logs from the VM, where it works.
And compared them to the host logs, where it fails.
And the thing that immediately stands out is this line:
I was curious, so I kept digging. So far, there are some pretty strong hints that there's a bug in which conmon on my host is trying to watch a non-existent file (presumably the /sys/fs/cgroup/memory.events path mentioned earlier). This is my strongest theory at the moment. I would have to debug differences in cgroup settings between guest and host. Here's a comparison of host and VM podman info:
I compared them in Meld and the only real differences are:
Any ideas for how to debug this further? |
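For comparing the cgroup situation between the VM and the host, a few read-only checks might help (a sketch; nothing here modifies state, and the slice path just needs to match your UID):
$ mount | grep -i cgroup
$ cat /sys/fs/cgroup/cgroup.controllers
$ cat /proc/self/cgroup
$ systemd-cgls /user.slice/user-1000.slice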
Oh my GOD! I've solved it after nearly 10 hours of troubleshooting and a ruined day. There is a bug in either conmon, podman, systemd or PIA VPN. The issue is extremely easy to reproduce:
[Unit]
Description=Sleep Container
[Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity
[Install]
WantedBy=default.target
$ systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
$ podman ps
$ systemctl --user stop testquadlet
$ cd ~/Downloads
$ chmod +x pia-linux-3.6.1-08339.run
$ ./pia-linux-3.6.1-08339.run
$ systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
$ sudo umount /opt/piavpn/etc/cgroup/net_cls
$ mount | grep -i cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
none on /opt/piavpn/etc/cgroup/net_cls type cgroup (rw,relatime,seclabel,net_cls)
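If the theory is right, the same breakage should be reproducible without installing PIA at all, simply by mounting a v1 net_cls hierarchy somewhere out of the way (a sketch; /tmp/net_cls is an arbitrary path, and it assumes the kernel still exposes the v1 net_cls controller):
$ sudo mkdir -p /tmp/net_cls
$ sudo mount -t cgroup -o net_cls none /tmp/net_cls
$ systemctl --user restart testquadlet
$ sudo umount /tmp/net_cls    # clean up afterwards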
IMPORTANT Edit: I found the kernel documentation for cgroups, which confirms that the "do not break userspace" rule of kernel development is still in effect and that mounting cgroup v1 and cgroup v2 at the same time is 100% supported and valid.
From what I've been able to understand so far, their cgroup v1 mount lives at /opt/piavpn/etc/cgroup/net_cls. While it's regrettable that PIA VPN creates a legacy cgroup v1, it's totally valid to do so, and it places it in a non-conflicting location under /opt/piavpn rather than under /sys/fs/cgroup. Systemd themselves added support for legacy cgroup v1 mounts on cgroup v2 systems in one of the tickets I linked to above, which is their own acknowledgement that this is valid. The kernel docs confirm it too. And then we have tools such as LXC, another container runtime, which supports systems that have cgroup v1 mounts at various locations on a cgroup v2 system, as seen when people ensured that a v1 net_cls mount can coexist with the unified hierarchy.

Therefore it seems pretty clear that the bug is in podman or conmon. I see so many people in other tickets who wasted days on debugging this situation. Would it perhaps be possible to revisit this issue and code cgroup detection into podman so that it adds itself to the system's cgroup v2 rather than getting confused by cgroup v1s from other apps? The issue doesn't seem to exist in systemd, since I have around 1000 services in total between my system and user units, and all of them run and properly use cgroup v2; only podman's quadlets fail to start when the system contains both cgroup v1 and cgroup v2 mounts.

The small test case in this message could be helpful for coding some detection to avoid cgroup v1s in podman. Furthermore, the systemd ticket links to some commits which show the techniques systemd used to properly filter and support mixed cgroup v1 and v2 systems. It would be such a big improvement if Podman detected cgroup v2 and registered itself to that while ignoring backwards-compatible v1 groups created by various other apps. I'm looking forward to hearing your feedback. :) |
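A quick way to see which cgroup mode a machine is actually in, and every cgroup mount that exists (a sketch using standard util-linux/coreutils tools):
$ stat -fc %T /sys/fs/cgroup     # prints cgroup2fs on a unified (v2) setup
$ findmnt -t cgroup,cgroup2      # lists every v1 and v2 cgroup mount, wherever it lives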
I've tried to manually mount a cgroup v1 hierarchy and everything seems to work fine. I don't think it is a podman/conmon/crun issue if the container runs; the cgroup somehow confuses systemd, which terminates the container. Please also try the following: do you get the same behaviour if you stop the VPN service (the cgroup tree is still present but there is no process running)? That service could also be managing the cgroup and terminating processes it does not recognize. |
Just to be clear here regarding the sequence, |
I think I found the reason: we have a wrong ownership check for the current cgroup, which causes the current process to be moved to a different scope. I've opened a PR:
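For anyone curious what "ownership of the current cgroup" looks like in practice, a rough way to inspect it from a shell is shown below (a sketch only; it shows which cgroup the current process is in and who owns that directory, not the exact check the PR changes):
$ CG=$(grep '^0::' /proc/self/cgroup | cut -d: -f3-)
$ stat -c '%U:%G %n' "/sys/fs/cgroup${CG}"
On a rootless setup the user's own session cgroups should be owned by that user; a mismatch there is the kind of condition a wrong ownership check could trip over.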
@giuseppe Thank you so much. All of you are absolutely brilliant. Podman is one of the most important projects in the world and I am grateful to all of you. :) |
Is this fix in 5.2.4, or only in the forthcoming 5.3.0? |
the fix is not in 5.2.4 |
(Edit: The exact issue has been found.)
Issue Description
The quadlet service exits immediately. The same command runs perfectly when executed manually outside of systemd.
Steps to reproduce the issue
I have spent 5 hours trying everything I can think of and doing painstaking research, but I cannot figure out why the service exits immediately when run via systemd, but works perfectly when run manually.
Steps to reproduce the issue:
1. Create ~/.config/containers/systemd/testquadlet.container with the following contents:
2. Verify the generated unit with /usr/libexec/podman/quadlet --user --dryrun
3. Run systemctl --user daemon-reload to generate the service and systemctl --user start testquadlet to start it, followed by systemctl --user status testquadlet:
4. The podman ps output confirms that nothing is running.

$ systemctl --user status testquadlet | grep ExecStart=
Process: 1170019 ExecStart=/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity (code=exited, status=0/SUCCESS)

5. Run podman ps again:

$ systemctl --user status testquadlet | grep ExecStop=
Process: 1170056 ExecStop=/usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid (code=exited, status=0/SUCCESS)
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
No
Additional environment details
Fedora Workstation 40
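For a report like this, the exact component versions can be recorded with something like the following (a sketch; the package names assume Fedora's packaging):
$ rpm -q podman conmon crun systemd
$ podman version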
Additional information
It happens to every container service. I have tried several others and they all fail to start when using the systemd service, but start perfectly when running the command manually.