
[Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Podman currently doesn't support cgroup v1 + v2 mixed systems.] [Update: FIXED in Podman 5.3.0] #23990

Closed
Arcitec opened this issue Sep 17, 2024 · 27 comments · Fixed by containers/common#2162
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@Arcitec

Arcitec commented Sep 17, 2024

(Edit: The exact issue has been found.)


Issue Description

The quadlet service exits immediately. The same command runs perfectly when executed manually outside of systemd.

Steps to reproduce the issue

I have spent 5 hours trying everything I can think of and doing painstaking research, but I cannot figure out why the service exits immediately when run via systemd, but works perfectly when run manually.

Steps to reproduce the issue:

  1. Fedora Workstation 40. Cgroups v2 is enabled on that by default.
  2. ~/.config/containers/systemd/testquadlet.container with the following contents:
[Unit]
Description=Sleep Container

[Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity

[Install]
WantedBy=default.target
  3. Generated systemd service: /usr/libexec/podman/quadlet --user --dryrun
---testquadlet.service---
[Unit]
Wants=network-online.target
After=network-online.target
Description=Sleep Container
SourcePath=/home/johnny/.config/containers/systemd/testquadlet.container
RequiresMountsFor=%t/containers

[X-Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity

[Install]
WantedBy=default.target

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity
  4. Result of running systemctl --user daemon-reload to generate the service and systemctl --user start testquadlet to start it, followed by systemctl --user status testquadlet:
○ testquadlet.service - Sleep Container
     Loaded: loaded (/home/johnny/.config/containers/systemd/testquadlet.container; generated)
    Drop-In: /usr/lib/systemd/user/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead) since Tue 2024-09-17 19:34:25 CEST; 4s ago
   Duration: 8ms
    Process: 1170019 ExecStart=/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity (code=exited, status=0/SUCCESS)
    Process: 1170056 ExecStop=/usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid (code=exited, status=0/SUCCESS)
    Process: 1170115 ExecStopPost=/usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid (code=exited, status=0/SUCCESS)
   Main PID: 1170019 (code=exited, status=0/SUCCESS)
        CPU: 66ms

Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Allocated lock 6 for container c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb"
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Check for idmapped mounts support "
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Created container \"c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb\""
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Container \"c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb\" has work directory \"/home/johnny/.local/share/containers/storage/overlay-containers/c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb/userdata\""
Sep 17 19:34:15 aurora testquadlet[1170019]: time="2024-09-17T19:34:15+02:00" level=debug msg="Container \"c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb\" has run directory \"/run/user/1000/containers/overlay-containers/c68529ebce9c0b8185535904dcaa4cdf2274783161e75558be5dbd64932b8abb/userdata\""
Sep 17 19:34:15 aurora systemd[4838]: Started testquadlet.service - Sleep Container.
  5. podman ps output confirms that nothing is running.
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
  6. Since the systemd service isn't working, I instead extracted the run command manually as follows:
$ systemctl --user status testquadlet | grep ExecStart=
    Process: 1170019 ExecStart=/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity (code=exited, status=0/SUCCESS)
  7. Then I executed that manually in a terminal:
podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity
  8. Now I can check podman ps again:
$ podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED             STATUS             PORTS       NAMES
359b47dec1aa  docker.io/library/alpine:latest  sleep infinity  About a minute ago  Up About a minute              systemd-testquadlet
  9. Now I want to shut it down cleanly, so I extracted the necessary command:
$ systemctl --user status testquadlet | grep ExecStop= 
    Process: 1170056 ExecStop=/usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid (code=exited, status=0/SUCCESS)
  10. And used it to stop the container:
$ podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid
WARN[0010] StopSignal SIGTERM failed to stop container systemd-testquadlet in 10 seconds, resorting to SIGKILL 
359b47dec1aa6104454df717914a45169791edcd93ad123063b6ce09bfa2598e
  11. And now the container is dead as intended:
podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

podman info output

host:
  arch: amd64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-2.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 84.14
    systemPercent: 13.77
    userPercent: 2.09
  cpus: 16
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 2041
  hostname: aurora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.9.12-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 879673344
  memTotal: 67107377152
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.2-2.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.2
    package: netavark-1.12.2-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun-1.15-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240906.g6b38f07-1.fc40.x86_64
    version: |
      pasta 0^20240906.g6b38f07-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 7621832704
  swapTotal: 8589930496
  uptime: 90h 51m 18.00s (Approximately 3.75 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/johnny/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/johnny/.local/share/containers/storage
  graphRootAllocated: 1998678130688
  graphRootUsed: 1930459504640
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 8
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/johnny/.local/share/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 1724198400
  BuiltTime: Wed Aug 21 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/amd64
  Version: 5.2.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

Fedora Workstation 40

Additional information

It happens to every container service. I have tried several others and they all fail to start when using the systemd service, but start perfectly when running the command manually.
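
A minimal way to check whether a host mixes cgroup v1 and v2 (the situation named in the issue title) is to inspect the cgroup mounts, for example:

$ stat -fc %T /sys/fs/cgroup/
# prints "cgroup2fs" on a pure unified (v2) hierarchy
$ grep cgroup /proc/self/mountinfo | grep -v cgroup2
# any remaining lines are legacy v1 controller mounts, i.e. a mixed v1 + v2 setup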

Arcitec added the kind/bug label on Sep 17, 2024
@Luap99
Member

Luap99 commented Sep 17, 2024

Please provide the full journal log for this service.
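
One way to capture the complete log for just this unit (rather than the truncated systemctl status output) is, for example:

$ journalctl --user -u testquadlet.service -b --no-pager
# or follow it live while restarting the service:
$ systemctl --user restart testquadlet; journalctl --user -f -u testquadlet.service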

@Arcitec
Author

Arcitec commented Sep 17, 2024

The service above was a minimal test case to show that everything is broken (no matter what service I try to create, they all fail via systemd). But here's the service I was actually trying to run.

  • Edit: Removed since it was not necessary for debugging this issue.

  • Verifying that it still isn't running:

$ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
  • Checking the file/directory that conmon was complaining about above. I don't have it.
$ ls -al /sys/fs/cgroup/memory.events
ls: cannot access '/sys/fs/cgroup/memory.events': No such file or directory

  • So at this point I decided to yet again execute the EXACT same systemd ExecStart command manually instead:
$ systemctl --user status syncthing | grep ExecStart=
    Process: 1174086 ExecStart=/usr/bin/podman run --name=systemd-syncthing --cidfile=/run/user/1000/syncthing.cid --replace --rm --cgroups=split --sdnotify=conmon -d --security-opt label:disable --userns keep-id -v /home/johnny/.config/syncthing-container:/var/syncthing/config -v /home/johnny/Sync:/var/syncthing/Sync --label io.containers.autoupdate=registry --publish 127.0.0.1:8384:8384 --publish 22000:22000/tcp --publish 22000:22000/udp --publish 21027:21027/udp --env PGID=1000 --env PUID=1000 --hostname st-aurora docker.io/syncthing/syncthing:latest (code=exited, status=0/SUCCESS)
  • And executed it:

Edit: Log was here. It ran successfully.

  • And verifying that it runs when I started it manually:
podman ps
CONTAINER ID  IMAGE                                 COMMAND     CREATED             STATUS             PORTS                                                                                                                                              NAMES
f9ce69238142  docker.io/syncthing/syncthing:latest              About a minute ago  Up About a minute  127.0.0.1:8384->8384/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:21027->21027/udp, 0.0.0.0:22000->22000/udp, 8384/tcp, 21027/udp, 22000/tcp, 22000/udp  systemd-syncthing
  • The server is now running and I'm able to access it on the hosted ports. So why isn't the systemd variant of the exact same podman command able to launch?

@Arcitec
Author

Arcitec commented Sep 17, 2024

Please provide the full journal log for this service.

I shared my actual container spec file above and the full journal log for that (the syncthing service). Hope I haven't missed anything there.

@Luap99
Member

Luap99 commented Sep 17, 2024

I would be interested in the full logs for the simple sleep example. The systemctl status output is truncated; with --log-level=debug there should be many more log lines that I would like to see.

The conmon /sys/fs/cgroup/memory.events warning can be ignored, I think, although I haven't looked too closely at what conmon is doing there.

Sep 17 19:52:13 aurora systemd-syncthing[1174111]: [monitor] 2024/09/17 17:52:13 INFO: Signal 15 received; exiting

That looks like something is sending SIGTERM to the container.
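
One way to see which process actually delivers that SIGTERM would be to trace signal generation while the service starts, for example with bpftrace (assuming it is installed):

$ sudo bpftrace -e 'tracepoint:signal:signal_generate /args->sig == 15/
    { printf("SIGTERM from %s (pid %d) to %s (pid %d)\n", comm, pid, args->comm, args->pid); }'
# comm/pid are the sending process; args->comm/args->pid are the target of the signal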

@Arcitec
Author

Arcitec commented Sep 17, 2024

That looks like something is sending SIGTERM to the container.

Hmm, interesting. I'm not sure what that could be from. I have a normal Fedora Workstation 40 desktop and haven't installed any task killer. But whatever it is, it gets sent to every container that is started via systemd, and never when I start them manually on the terminal or with the Pods app.

My systemd services run at the user level, and I can confirm that they are running as my user because the ExecStart command replaces %h with my home path, so systemd clearly runs it in the user context.

The conmon /sys/fs/cgroup/memory.events can be ignored I think although I haven't looked to closely what conmon is doing there.

Okay good. :)

I would be interested in the full logs with the simple sleep example.

Good idea. I'll capture those now:

systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
Sep 17 20:06:01 aurora gnome-shell[5080]: Can't update stage views actor unnamed [Gjs_dash-to-dock_micxgx_gmail_com_docking_DashSlideContainer] is on because it needs an allocation.
Sep 17 20:06:01 aurora gnome-shell[5080]: Can't update stage views actor dashtodockBox [StBoxLayout] is on because it needs an allocation.
Sep 17 20:06:01 aurora gnome-shell[5080]: Can't update stage views actor bms-dash-backgroundgroup [MetaBackgroundGroup] is on because it needs an allocation.
Sep 17 20:06:01 aurora gnome-shell[5080]: Can't update stage views actor bms-dash-blurred-widget [StWidget] is on because it needs an allocation.
Sep 17 20:07:13 aurora systemd[4838]: Started app-gnome-tilix-1176763.scope - Application launched by gsd-media-keys.
Sep 17 20:07:13 aurora systemd[4838]: Started vte-spawn-d4a76ebc-5255-47fd-8b67-3980dee73e37.scope - VTE child process 1176771 launched by tilix process 85080.
Sep 17 20:07:25 aurora systemd[4838]: Reloading requested from client PID 1177056 ('systemctl') (unit vte-spawn-d4a76ebc-5255-47fd-8b67-3980dee73e37.scope)...
Sep 17 20:07:25 aurora systemd[4838]: Reloading...
Sep 17 20:07:26 aurora systemd[4838]: Reloading finished in 165 ms.
Sep 17 20:07:26 aurora systemd[4838]: Starting testquadlet.service - Sleep Container...
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="/usr/bin/podman filtering at log level debug"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="Using boltdb as database backend"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Initializing boltdb state at /home/johnny/.local/share/containers/storage/libpod/bolt_state.db"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using graph driver overlay"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using graph root /home/johnny/.local/share/containers/storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using static dir /home/johnny/.local/share/containers/storage/libpod"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using volume path /home/johnny/.local/share/containers/storage/volumes"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using transient store: false"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that native-diff is usable"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Initializing event backend journald"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="Setting parallel job count to 49"
Sep 17 20:07:26 aurora systemd[4838]: Started podman-1177076.scope.
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Pulling image docker.io/alpine (policy: missing)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Looking up image \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="using systemd mode: false"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="setting container name systemd-testquadlet"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Allocated lock 6 for container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Check for idmapped mounts support "
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Created container \"799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Container \"799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a\" has work directory \"/home/johnny/.local/share/containers/storage/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Container \"799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a\" has run directory \"/run/user/1000/containers/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata\""
Sep 17 20:07:26 aurora podman[1177076]: 2024-09-17 20:07:26.253369902 +0200 CEST m=+0.027909479 container create 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Cached value indicated that volatile is being used"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="overlay: mount_data=lowerdir=/home/johnny/.local/share/containers/storage/overlay/l/D7ZKOFSHEJBFTRKTWDF5XML27S,upperdir=/home/johnny/.local/share/containers/storage/overlay/3d0ac47e37afd3030d9853d989c204f7c4c6335c6ba82973e4b826c63a60c083/diff,workdir=/home/johnny/.local/share/containers/storage/overlay/3d0ac47e37afd3030d9853d989c204f7c4c6335c6ba82973e4b826c63a60c083/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c201,c637\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-83596a71-dc86-5f05-20f9-f58335d9dad9 for container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="pasta arguments: --config-net --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-83596a71-dc86-5f05-20f9-f58335d9dad9"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Mounted container \"799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a\" at \"/home/johnny/.local/share/containers/storage/overlay/3d0ac47e37afd3030d9853d989c204f7c4c6335c6ba82973e4b826c63a60c083/merged\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Created root filesystem for container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a at /home/johnny/.local/share/containers/storage/overlay/3d0ac47e37afd3030d9853d989c204f7c4c6335c6ba82973e4b826c63a60c083/merged"
Sep 17 20:07:26 aurora pasta[1177098]: Couldn't get any nameserver address
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="pasta logged warnings: \"Couldn't get any nameserver address\\n\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/johnny/.local/share/containers/storage/overlay/3d0ac47e37afd3030d9853d989c204f7c4c6335c6ba82973e4b826c63a60c083/merged\""
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Created OCI spec for container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a at /home/johnny/.local/share/containers/storage/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata/config.json"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a -u 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata -p /run/user/1000/containers/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a]"
Sep 17 20:07:26 aurora testquadlet[1177106]: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 20:07:26 aurora conmon[1177106]: conmon 799bacc6b24d58f6a839 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <ndebug>: terminal_ctrl_fd: 12
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <ndebug>: winsz read side: 15, winsz write side: 16
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <ndebug>: container PID: 1177109
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <nwarn>: Failed to add inotify watch for /sys/fs/cgroup/memory.events
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Received: 1177109"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="Got Conmon PID as 1177107"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Created container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a in OCI runtime"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Sep 17 20:07:26 aurora podman[1177076]: 2024-09-17 20:07:26.288415339 +0200 CEST m=+0.062954936 container init 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Starting container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a with command [sleep infinity]"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Started container 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Notify sent successfully"
Sep 17 20:07:26 aurora podman[1177076]: 2024-09-17 20:07:26.290473174 +0200 CEST m=+0.065012751 container start 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 20:07:26 aurora systemd[4838]: Started testquadlet.service - Sleep Container.
Sep 17 20:07:26 aurora testquadlet[1177076]: 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Called run.PersistentPostRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Shutting down engines"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=info msg="Received shutdown.Stop(), terminating!" PID=1177076
Sep 17 20:07:26 aurora podman[1177076]: 2024-09-17 20:07:26.242690383 +0200 CEST m=+0.017229970 image pull 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b docker.io/alpine
Sep 17 20:07:26 aurora systemd[4838]: Started podman-1177112.scope.
Sep 17 20:07:36 aurora testquadlet[1177112]: time="2024-09-17T20:07:36+02:00" level=warning msg="StopSignal SIGTERM failed to stop container systemd-testquadlet in 10 seconds, resorting to SIGKILL"
Sep 17 20:07:36 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <ninfo>: container 1177109 exited with status 137
Sep 17 20:07:36 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/memory.events
Sep 17 20:07:36 aurora podman[1177112]: 2024-09-17 20:07:36.371777385 +0200 CEST m=+10.035892010 container died 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/johnny/.local/share/containers/storage --runroot /run/user/1000/containers --log-level debug --cgroup-manager systemd --tmpdir /run/user/1000/libpod/tmp --network-config-dir  --network-backend netavark --volumepath /home/johnny/.local/share/containers/storage/volumes --db-backend boltdb --transient-store=false --runtime crun --storage-driver overlay --events-backend journald --syslog container cleanup --rm 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a)"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Setting custom database backend: \"boltdb\""
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=info msg="Using boltdb as database backend"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Initializing boltdb state at /home/johnny/.local/share/containers/storage/libpod/bolt_state.db"
Sep 17 20:07:36 aurora podman[1177112]: 2024-09-17 20:07:36.426971551 +0200 CEST m=+10.091086176 container remove 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 20:07:36 aurora testquadlet[1177112]: 799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using graph driver overlay"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using graph root /home/johnny/.local/share/containers/storage"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using static dir /home/johnny/.local/share/containers/storage/libpod"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using volume path /home/johnny/.local/share/containers/storage/volumes"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using transient store: false"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Cached value indicated that native-diff is usable"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Initializing event backend journald"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=info msg="Setting parallel job count to 49"
Sep 17 20:07:36 aurora systemd[4838]: Started podman-1177146.scope.
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=error msg="Running container cleanup: no container with name or ID \"799bacc6b24d58f6a839fe370736beb575a12be8f95d008905b85af9d2d1454a\" found: no such container"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=debug msg="Shutting down engines"
Sep 17 20:07:36 aurora /usr/bin/podman[1177146]: time="2024-09-17T20:07:36+02:00" level=info msg="Received shutdown.Stop(), terminating!" PID=1177146
Sep 17 20:07:36 aurora systemd[4838]: Started podman-1177163.scope.

@Luap99
Member

Luap99 commented Sep 17, 2024

So the container is started fine, but then something is causing us to stop it right away:

Sep 17 20:07:36 aurora testquadlet[1177112]: time="2024-09-17T20:07:36+02:00" level=warning msg="StopSignal SIGTERM failed to stop container systemd-testquadlet in 10 seconds, resorting to SIGKILL"
Sep 17 20:07:36 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 : container 1177109 exited with status 137

Given that the StopSignal warning is a podman line, it means something is triggering the podman rm -f ... (ExecStop=) command to be run.
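
Since the unit runs with Delegate=yes and --cgroups=split, one way to see where conmon and the container end up, and what systemd records for the unit, would be for example:

$ systemctl --user start testquadlet
$ systemd-cgls --user-unit testquadlet.service
# shows whether conmon and the container payload are (still) inside the unit's cgroup
$ systemctl --user show testquadlet -p MainPID -p ExecMainStatus -p Result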

@Arcitec
Author

Arcitec commented Sep 17, 2024

That makes sense. And it happens for every podman systemd service, but not for any of my other systemd services nor for manually executed podman run commands.

The service is generated as a Type=notify service (the default for podman quadlets). So I wonder if the service itself is breaking in the "notify" watcher? Perhaps the notify detection thinks the container has immediately exited and then tells the service to run ExecStop to clean up?

It's too suspicious that every quadlet service stops itself immediately. And I did see some line about a notify watcher failure.
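
For reference, the effective unit that systemd actually loads (including the notify-related settings) can be dumped with, for example:

$ systemctl --user cat testquadlet.service
$ systemctl --user show testquadlet -p Type -p NotifyAccess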

PS: In the meantime I am installing Fedora 40 in a VM and will see if it works there. My current system started on Fedora 35 and has been upgraded over the years, so perhaps some core config is broken...

@Arcitec
Author

Arcitec commented Sep 17, 2024

:/ Should I reinstall the entire operating system? Or is there a way to fix this on my real machine?

Here are the test results with a fresh Fedora 40 virtual machine:

  • With the virtual machine, the container isn't being killed:
liveuser@localhost-live:~$ podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED        STATUS        PORTS       NAMES
286a7af150d8  docker.io/library/alpine:latest  sleep infinity  8 seconds ago  Up 8 seconds              systemd-testquadlet
liveuser@localhost-live:~$ podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED         STATUS         PORTS       NAMES
286a7af150d8  docker.io/library/alpine:latest  sleep infinity  15 seconds ago  Up 15 seconds              systemd-testquadlet
liveuser@localhost-live:~$ podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED         STATUS         PORTS       NAMES
286a7af150d8  docker.io/library/alpine:latest  sleep infinity  27 seconds ago  Up 27 seconds              systemd-testquadlet
liveuser@localhost-live:~$ podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED        STATUS        PORTS       NAMES
286a7af150d8  docker.io/library/alpine:latest  sleep infinity  9 minutes ago  Up 9 minutes              systemd-testquadlet
liveuser@localhost-live:~$ 
  • System log:
liveuser@localhost-live:~$ systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
Sep 17 14:21:13 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:21:14 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:21:26 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:21:54 localhost-live systemd[1478]: Starting grub-boot-success.service - Mark boot as successful...
Sep 17 14:21:54 localhost-live systemd[1478]: Finished grub-boot-success.service - Mark boot as successful.
Sep 17 14:22:25 localhost-live systemd[1478]: Reloading requested from client PID 3521 ('systemctl') (unit vte-spawn-6181952f-3211-46ec-9583-0aaeeb7765eb.scope)...
Sep 17 14:22:25 localhost-live systemd[1478]: Reloading...
Sep 17 14:22:25 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:22:26 localhost-live systemd[1478]: Reloading finished in 121 ms.
Sep 17 14:22:26 localhost-live systemd[1478]: Starting testquadlet.service - Sleep Container...
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=info msg="/usr/bin/podman filtering at log level debug"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=info msg="Using sqlite as database backend"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using graph driver overlay"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using graph root /home/liveuser/.local/share/containers/storage"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using static dir /home/liveuser/.local/share/containers/storage/libpod"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using volume path /home/liveuser/.local/share/containers/storage/volumes"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using transient store: false"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Cached value indicated that metacopy is not being used"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Cached value indicated that native-diff is usable"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Initializing event backend journald"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=info msg="Setting parallel job count to 7"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Pulling image docker.io/alpine (policy: missing)"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="reference \"[overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]docker.io/library/alpine:latest\" does not resolve to an image ID"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="reference \"[overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]docker.io/library/alpine:latest\" does not resolve to an image ID"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Trying \"docker.io/alpine\" ..."
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/000-shortnames.conf\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Attempting to pull candidate docker.io/library/alpine:latest for docker.io/alpine"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="parsed reference into \"[overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]docker.io/library/alpine:latest\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: Trying to pull docker.io/library/alpine:latest...
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Copying source image //alpine:latest to destination image [overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]docker.io/library/alpine:latest"
Sep 17 14:22:26 localhost-live testquadlet[3533]: Pulling image //alpine:latest inside systemd: setting pull timeout to 5m0s
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Using registries.d directory /etc/containers/registries.d"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Trying to access \"docker.io/library/alpine:latest\""
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="No credentials matching docker.io/library/alpine found in /run/user/1000/containers/auth.json"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="No credentials matching docker.io/library/alpine found in /home/liveuser/.config/containers/auth.json"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="No credentials matching docker.io/library/alpine found in /home/liveuser/.docker/config.json"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="No credentials matching docker.io/library/alpine found in /home/liveuser/.dockercfg"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="No credentials for docker.io/library/alpine found"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg=" No signature storage configuration found for docker.io/library/alpine:latest, using built-in default file:///home/liveuser/.local/share/containers/sigstore"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="GET https://registry-1.docker.io/v2/"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&service=registry.docker.io"
Sep 17 14:22:26 localhost-live testquadlet[3533]: time="2024-09-17T14:22:26-04:00" level=debug msg="GET https://registry-1.docker.io/v2/library/alpine/manifests/latest"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\""
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Using SQLite blob info cache at /home/liveuser/.local/share/containers/cache/blob-info-cache-v1.sqlite"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Source is a manifest list; copying (only) instance sha256:33735bd63cf84d7e388d9f6d297d348c523c044410f553bd878c6d7829612735 for current system"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:33735bd63cf84d7e388d9f6d297d348c523c044410f553bd878c6d7829612735"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\""
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="IsRunningImageAllowed for image docker:docker.io/library/alpine:latest"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg=" Using default policy section"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg=" Requirement 0: allowed"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Overall: allowed"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Downloading /v2/library/alpine/blobs/sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:27 localhost-live testquadlet[3533]: Getting image source signatures
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Reading /home/liveuser/.local/share/containers/sigstore/library/alpine@sha256=33735bd63cf84d7e388d9f6d297d348c523c044410f553bd878c6d7829612735/signature-1"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Not looking for sigstore attachments: disabled by configuration"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="... will first try using the original manifest unmodified"
Sep 17 14:22:27 localhost-live testquadlet[3533]: Copying blob sha256:43c4264eed91be63b206e17d93e75256a6097070ce643c5e8f0379998b44f170
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Checking if we can reuse blob sha256:43c4264eed91be63b206e17d93e75256a6097070ce643c5e8f0379998b44f170: general substitution = true, compression for MIME type \"application/vnd.docker.image.rootfs.diff.tar.gzip\" = true"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Failed to retrieve partial blob: convert_images not configured"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Downloading /v2/library/alpine/blobs/sha256:43c4264eed91be63b206e17d93e75256a6097070ce643c5e8f0379998b44f170"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:43c4264eed91be63b206e17d93e75256a6097070ce643c5e8f0379998b44f170"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Detected compression format gzip"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Using original blob without modification"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Check for idmapped mounts support create mapped mount: operation not permitted"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Applying tar in /home/liveuser/.local/share/containers/storage/overlay/63ca1fbb43ae5034640e5e6cb3e083e05c290072c5366fcaa9d62435a4cced85/diff"
Sep 17 14:22:27 localhost-live testquadlet[3533]: Copying config sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="No compression detected"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Compression change for blob sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b (\"application/vnd.docker.container.image.v1+json\") not supported"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="Using original blob without modification"
Sep 17 14:22:27 localhost-live testquadlet[3533]: Writing manifest to image destination
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="setting image creation date to 2024-09-06 22:20:07.972381771 +0000 UTC"
Sep 17 14:22:27 localhost-live testquadlet[3533]: time="2024-09-17T14:22:27-04:00" level=debug msg="created new image ID \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" with metadata \"{}\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="added name \"docker.io/library/alpine:latest\" to image \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Pulled candidate docker.io/library/alpine:latest successfully"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Looking up image \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" in local containers storage"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Trying \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" ..."
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="parsed reference into \"[overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Found image \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" as \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" in local containers storage"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Found image \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" as \"91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\" in local containers storage ([overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="parsed reference into \"[overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/liveuser/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="using systemd mode: false"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="setting container name systemd-testquadlet"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Allocated lock 0 for container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Created container \"286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Container \"286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011\" has work directory \"/home/liveuser/.local/share/containers/storage/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Container \"286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011\" has run directory \"/run/user/1000/containers/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata\""
Sep 17 14:22:28 localhost-live podman[3533]: 2024-09-17 14:22:28.014037369 -0400 EDT m=+1.925972776 container create 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-66083e96-c4bf-4679-c2d1-d5edbc802b3e for container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="overlay: test mount indicated that volatile is being used"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/liveuser/.local/share/containers/storage/overlay/l/IEH33QJAB5AN6Z5KDMY6DJKRJ3,upperdir=/home/liveuser/.local/share/containers/storage/overlay/2ba96ef703de1f7d9071cd77d0d3656b62fdd98017dd1a81fe4b40f476eb2ee7/diff,workdir=/home/liveuser/.local/share/containers/storage/overlay/2ba96ef703de1f7d9071cd77d0d3656b62fdd98017dd1a81fe4b40f476eb2ee7/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c991,c999\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Mounted container \"286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011\" at \"/home/liveuser/.local/share/containers/storage/overlay/2ba96ef703de1f7d9071cd77d0d3656b62fdd98017dd1a81fe4b40f476eb2ee7/merged\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="pasta arguments: --config-net -t none -u none -T none -U none --no-map-gw --dns none --quiet --netns /run/user/1000/netns/netns-66083e96-c4bf-4679-c2d1-d5edbc802b3e"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Created root filesystem for container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 at /home/liveuser/.local/share/containers/storage/overlay/2ba96ef703de1f7d9071cd77d0d3656b62fdd98017dd1a81fe4b40f476eb2ee7/merged"
Sep 17 14:22:28 localhost-live podman[3533]: 2024-09-17 14:22:28.00155018 -0400 EDT m=+1.913485587 image pull 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b docker.io/alpine
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/liveuser/.local/share/containers/storage/overlay/2ba96ef703de1f7d9071cd77d0d3656b62fdd98017dd1a81fe4b40f476eb2ee7/merged\""
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Created OCI spec for container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 at /home/liveuser/.local/share/containers/storage/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata/config.json"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 -u 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 -r /usr/bin/crun -b /home/liveuser/.local/share/containers/storage/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata -p /run/user/1000/containers/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/liveuser/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/liveuser/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011]"
Sep 17 14:22:28 localhost-live testquadlet[3562]: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 14:22:28 localhost-live conmon[3562]: conmon 286a7af150d850d73060 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 14:22:28 localhost-live conmon[3563]: conmon 286a7af150d850d73060 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Sep 17 14:22:28 localhost-live conmon[3563]: conmon 286a7af150d850d73060 <ndebug>: terminal_ctrl_fd: 12
Sep 17 14:22:28 localhost-live conmon[3563]: conmon 286a7af150d850d73060 <ndebug>: winsz read side: 15, winsz write side: 16
Sep 17 14:22:28 localhost-live conmon[3563]: conmon 286a7af150d850d73060 <ndebug>: container PID: 3565
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Received: 3565"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=info msg="Got Conmon PID as 3563"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Created container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 in OCI runtime"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Sep 17 14:22:28 localhost-live podman[3533]: 2024-09-17 14:22:28.111277311 -0400 EDT m=+2.023212718 container init 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Starting container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 with command [sleep infinity]"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Started container 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Notify sent successfully"
Sep 17 14:22:28 localhost-live podman[3533]: 2024-09-17 14:22:28.114235933 -0400 EDT m=+2.026171340 container start 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011 (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 14:22:28 localhost-live systemd[1478]: Started testquadlet.service - Sleep Container.
Sep 17 14:22:28 localhost-live testquadlet[3533]: 286a7af150d850d730603ca2188abda937a1a72eeeb6007a5f22cffb191d4011
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Called run.PersistentPostRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 14:22:28 localhost-live testquadlet[3533]: time="2024-09-17T14:22:28-04:00" level=debug msg="Shutting down engines"
Sep 17 14:22:34 localhost-live systemd[1478]: Started vte-spawn-f9eeae8b-19e2-4612-b9da-367418fb14e8.scope - VTE child process 3569 launched by gnome-terminal-server process 2705.
Sep 17 14:22:43 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:22:55 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:25:17 localhost-live systemd[1478]: Created slice background.slice - User Background Tasks Slice.
Sep 17 14:25:17 localhost-live systemd[1478]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Sep 17 14:25:17 localhost-live systemd[1478]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Sep 17 14:27:55 localhost-live gnome-shell[1602]: Failed to create new MetaSelectionSourceMemory: Failed to create MetaAnonymousFile
Sep 17 14:27:55 localhost-live gnome-shell[1602]: Failed to create new MetaSelectionSourceMemory: Failed to create MetaAnonymousFile
Sep 17 14:32:21 localhost-live gnome-terminal-[2705]: void terminal_screen_shell_preexec(VteTerminal*): assertion '!priv->between_preexec_and_precmd' failed
Sep 17 14:32:31 localhost-live org.mozilla.firefox.desktop[2915]: MESA: error: ZINK: failed to choose pdev
Sep 17 14:32:31 localhost-live org.mozilla.firefox.desktop[2915]: libEGL warning: egl: failed to create dri2 screen
Sep 17 14:32:32 localhost-live org.mozilla.firefox.desktop[3384]: [Child 3384, MediaDecoderStateMachine #1] WARNING: Decoder=7f70f1024400 Decode error: NS_ERROR_DOM_MEDIA_FATAL_ERR (0x806e0005) - mozilla::MediaResult mozilla::FFmpegDataDecoder<60>::InitDecoder(AVDictionary**): Couldn't open avcodec: file /builddir/build/BUILD/firefox-124.0.1/dom/media/MediaDecoderStateMachineBase.cpp:166

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

I did a systemctl --user list-unit-files | grep -i pod and saw that podman.socket was enabled on the host but not on the VM, so I stopped it on the host to match the VM. Unfortunately, that didn't solve the issue.

I can't really think of any differences between the VM and host... I haven't changed anything manually.
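
For reference, the check and cleanup looked roughly like this (from memory; the disable step is my assumption, added so the socket doesn't come back at the next login):

systemctl --user list-unit-files | grep -i pod   # podman.socket showed as "enabled" on the host only
systemctl --user stop podman.socket              # stop the listening socket
systemctl --user disable podman.socket           # assumed step: keep it from re-activating next login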

@rhatdan
Copy link
Member

rhatdan commented Sep 17, 2024

Is this running in a logged-in user session? It might be systemd killing the service for some reason.

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Yeah, it's at the GNOME desktop right now. I will try rebooting, logging in at runlevel 3 with a pure text terminal, and running the container there. Good idea for a test.
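
For the record, the plan for getting a text-only login is roughly this (assuming the systemd default-target route rather than appending 3 to the kernel command line):

sudo systemctl set-default multi-user.target   # boot to a text console, the old "runlevel 3"
sudo reboot                                    # log in on the console and run the quadlet test
sudo systemctl set-default graphical.target    # switch the default back to the desktop afterwards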

@rhatdan
Copy link
Member

rhatdan commented Sep 17, 2024

No, I don't think that will matter.

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

I have now done these tests. None of them fixed the problem. Containers still exit immediately when using systemd.

  • Running in a text-mode terminal in runlevel 3 (no graphical desktop) after a reboot.
  • podman system reset --force to clear all images and configurations.
  • Adding Network=none to the [Container] section of the .container spec to avoid pasta/slirp (see the snippet below). The container no longer used networking, but the service still exited immediately in exactly the same way.
  • sudo dnf reinstall podman conmon in case reinstalling would reset some system config for me.

None of it solves the problem. I'm about to do one more test... I'll create a 2nd user and see if that one has more luck.
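
For reference, the networking test added only a single line to the [Container] section of the existing .container file; everything else was left untouched:

[Container]
# ...existing keys unchanged...
Network=none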

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Tried two more things, but neither solved it:

  • Created a fresh user account, logged into it instead, and set up the exact same quadlet service there. It dies with the exact same error.
  • Ran a full system update with the package manager.

One thing strikes me, though: the Fedora virtual machine was made from the Fedora 40 ISO, which ships packages from March 2024. I should install it and do a full system upgrade inside the VM to see whether the VM still works on the latest packages, or whether having the latest Fedora 40 packages is what breaks it.

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Fedora 40 Workstation virtual machine results:

  • Live CD (not updated since March 2024): Works.
  • Immediately after a fresh install (no package updates): Works.
  • After doing a full system package update (excluding the kernel via sudo dnf offline-upgrade download --refresh --exclude=kernel*; sequence sketched after this list): Kernel is 6.8.5-301.fc40.x86_64. Works.
  • Upgrading to exactly Kernel 6.9.12-200.fc40.x86_64 (what I use on the host): Works.
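
For completeness, the in-VM update sequence was roughly this (exact invocation from memory; offline-upgrade stages the packages and applies them on the next boot):

sudo dnf offline-upgrade download --refresh --exclude='kernel*'   # stage all updates except the kernel
sudo dnf offline-upgrade reboot                                    # reboot into the staged (offline) update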

Here's the log from the last run, where I had upgraded the VM to the same kernel as the host. As mentioned, this is what a SUCCESSFUL run looks like:

Click for details
systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
Sep 17 21:59:43 localhost-live gnome-shell[1858]: clutter_text_get_editable: assertion 'CLUTTER_IS_TEXT (self)' failed
Sep 17 21:59:43 localhost-live gnome-shell[1858]: clutter_text_get_text: assertion 'CLUTTER_IS_TEXT (self)' failed
Sep 17 21:59:43 localhost-live gnome-shell[1858]: clutter_text_set_text: assertion 'CLUTTER_IS_TEXT (self)' failed
Sep 17 21:59:43 localhost-live gnome-shell[1858]: Object St.Label (0x55fe247fa020), has been already disposed — impossible to set any property on it. This might be caused by the object having been destroyed from C code using something such as destroy(), dispose(), or remove() vfuncs.
                                                  == Stack trace for context 0x55fe1f150630 ==
                                                  #0   55fe1f219cd8 i   resource:///org/gnome/shell/ui/search.js:965 (233fa3aef10 @ 70)
                                                  #1   55fe1f219c48 i   resource:///org/gnome/shell/ui/search.js:314 (233fa3a8ab0 @ 25)
                                                  #2   7fffae660b20 b   resource:///org/gnome/shell/ui/search.js:273 (233fa3a8920 @ 446)
                                                  #3   7fffae660bd0 b   self-hosted:852 (1a53a8ba5830 @ 15)
                                                  #4   55fe1f219bb8 i   resource:///org/gnome/shell/ui/init.js:21 (1b4467770bf0 @ 48)
Sep 17 21:59:43 localhost-live gnome-shell[1858]: Object .Gjs_ui_search_ListSearchResults (0x55fe21a99e80), has been already disposed — impossible to access it. This might be caused by the object having been destroyed from C code using something such as destroy(), dispose(), or remove() vfuncs.
                                                  == Stack trace for context 0x55fe1f150630 ==
                                                  #0   7fffae660b20 b   resource:///org/gnome/shell/ui/search.js:274 (233fa3a8920 @ 462)
                                                  #1   7fffae660bd0 b   self-hosted:852 (1a53a8ba5830 @ 15)
                                                  #2   55fe1f219bb8 i   resource:///org/gnome/shell/ui/init.js:21 (1b4467770bf0 @ 48)
Sep 17 21:59:53 localhost-live gnome-character[2695]: JS LOG: Characters Application exiting
Sep 17 22:00:11 localhost-live systemd[1654]: Reloading requested from client PID 3028 ('systemctl') (unit vte-spawn-bda0d49f-e583-4e34-aed7-6a6f67fe9ed1.scope)...
Sep 17 22:00:11 localhost-live systemd[1654]: Reloading...
Sep 17 22:00:11 localhost-live systemd[1654]: Reloading finished in 100 ms.
Sep 17 22:00:11 localhost-live systemd[1654]: Starting testquadlet.service - Sleep Container...
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=info msg="/usr/bin/podman filtering at log level debug"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=info msg="Using sqlite as database backend"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using graph driver overlay"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using graph root /home/johnny/.local/share/containers/storage"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using static dir /home/johnny/.local/share/containers/storage/libpod"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using volume path /home/johnny/.local/share/containers/storage/volumes"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using transient store: false"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Not configuring container store"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Initializing event backend journald"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=info msg="/usr/bin/podman filtering at log level debug"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=info msg="Using sqlite as database backend"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using graph driver overlay"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using graph root /home/johnny/.local/share/containers/storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using static dir /home/johnny/.local/share/containers/storage/libpod"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using volume path /home/johnny/.local/share/containers/storage/volumes"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using transient store: false"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="overlay: test mount with multiple lowers succeeded"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="overlay: test mount indicated that metacopy is not being used"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Initializing event backend journald"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Initialized SHM lock manager at path /libpod_rootless_lock_1000"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Podman detected system restart - performing state refresh"
Sep 17 22:00:11 localhost-live podman[3047]: 2024-09-17 22:00:11.41011211 +0200 CEST m=+0.101478786 system refresh
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=info msg="Setting parallel job count to 7"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Pulling image docker.io/alpine (policy: missing)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Looking up image \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="using systemd mode: false"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="setting container name systemd-testquadlet"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Allocated lock 0 for container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Check for idmapped mounts support create mapped mount: operation not permitted"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Created container \"fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Container \"fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f\" has work directory \"/home/johnny/.local/share/containers/storage/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Container \"fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f\" has run directory \"/run/user/1000/containers/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata\""
Sep 17 22:00:11 localhost-live podman[3047]: 2024-09-17 22:00:11.443356595 +0200 CEST m=+0.134723271 container create fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-261872a4-7e29-0a16-49a8-e95b9c9626ac for container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="pasta arguments: --config-net --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-261872a4-7e29-0a16-49a8-e95b9c9626ac"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="overlay: test mount indicated that volatile is being used"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="overlay: mount_data=lowerdir=/home/johnny/.local/share/containers/storage/overlay/l/SIBMF2AYCNVSDIC56KDYB5YVXH,upperdir=/home/johnny/.local/share/containers/storage/overlay/e3921301c342a58ae9e5e8e29e2f8a561b3486cefe68eff3fbf004b1d7e1e088/diff,workdir=/home/johnny/.local/share/containers/storage/overlay/e3921301c342a58ae9e5e8e29e2f8a561b3486cefe68eff3fbf004b1d7e1e088/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c240,c557\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Mounted container \"fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f\" at \"/home/johnny/.local/share/containers/storage/overlay/e3921301c342a58ae9e5e8e29e2f8a561b3486cefe68eff3fbf004b1d7e1e088/merged\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Created root filesystem for container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f at /home/johnny/.local/share/containers/storage/overlay/e3921301c342a58ae9e5e8e29e2f8a561b3486cefe68eff3fbf004b1d7e1e088/merged"
Sep 17 22:00:11 localhost-live pasta[3067]: Couldn't get any nameserver address
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=info msg="pasta logged warnings: \"Couldn't get any nameserver address\\n\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/johnny/.local/share/containers/storage/overlay/e3921301c342a58ae9e5e8e29e2f8a561b3486cefe68eff3fbf004b1d7e1e088/merged\""
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Created OCI spec for container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f at /home/johnny/.local/share/containers/storage/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata/config.json"
Sep 17 22:00:11 localhost-live podman[3047]: 2024-09-17 22:00:11.412991042 +0200 CEST m=+0.104357718 image pull 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b docker.io/alpine
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f -u fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata -p /run/user/1000/containers/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f]"
Sep 17 22:00:11 localhost-live testquadlet[3078]: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 22:00:11 localhost-live conmon[3078]: conmon fbd957be368b64ee3a38 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 22:00:11 localhost-live conmon[3079]: conmon fbd957be368b64ee3a38 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Sep 17 22:00:11 localhost-live conmon[3079]: conmon fbd957be368b64ee3a38 <ndebug>: terminal_ctrl_fd: 13
Sep 17 22:00:11 localhost-live conmon[3079]: conmon fbd957be368b64ee3a38 <ndebug>: winsz read side: 16, winsz write side: 17
Sep 17 22:00:11 localhost-live conmon[3079]: conmon fbd957be368b64ee3a38 <ndebug>: container PID: 3081
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Received: 3081"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=info msg="Got Conmon PID as 3079"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Created container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f in OCI runtime"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Sep 17 22:00:11 localhost-live podman[3047]: 2024-09-17 22:00:11.547831259 +0200 CEST m=+0.239197935 container init fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Starting container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f with command [sleep infinity]"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Started container fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Notify sent successfully"
Sep 17 22:00:11 localhost-live systemd[1654]: Started testquadlet.service - Sleep Container.
Sep 17 22:00:11 localhost-live podman[3047]: 2024-09-17 22:00:11.551258092 +0200 CEST m=+0.242624768 container start fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f (image=docker.io/library/alpine:latest, name=systemd-testquadlet, PODMAN_SYSTEMD_UNIT=testquadlet.service)
Sep 17 22:00:11 localhost-live testquadlet[3047]: fbd957be368b64ee3a3879f4725b20edeec9e851d74efbdee701b9e472f4e02f
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPostRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Shutting down engines"
Sep 17 22:00:11 localhost-live systemd[1654]: Created slice user.slice - Slice /user.
Sep 17 22:00:11 localhost-live systemd[1654]: Started podman-pause-42818873.scope.
Sep 17 22:00:23 localhost-live systemd[1654]: Started vte-spawn-b8fe01d2-e52a-4a11-bfd5-7c0f26198a59.scope - VTE child process 3093 launched by gnome-terminal-server process 2848.
Sep 17 22:00:37 localhost-live systemd[1654]: Starting gvfs-metadata.service - Virtual filesystem metadata service...
Sep 17 22:00:37 localhost-live systemd[1654]: Started gvfs-metadata.service - Virtual filesystem metadata service.
Sep 17 22:00:38 localhost-live spice-vdagent[2044]: No SPICE display found for connector Virtual-1
Sep 17 22:00:38 localhost-live spice-vdagent[2044]: vdagent_mutter_get_resolutions: No Spice display ID matching - assuming display ID == Monitor index

I am running out of ideas.

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Everything points to:

  • This may be a bug that appears after upgrading an old Fedora Workstation installation over the years (I started at Fedora 35, upgraded through every release along the way, and am on Fedora 40 now).
  • It's very unlikely that some other application on my system is causing it, since I created a brand-new, fresh user on the host and hit exactly the same issue there.

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

I generated lists of all systemd system units for both the VM and the Host to compare and see if there's something here that could cause the conflict.

But I am more and more thinking that Fedora is just broken. It's not the first time something breaks permanently during OS-to-OS upgrades in this distro.

At least we've narrowed it down to the fact that the host receives a SIGTERM as soon as the systemd podman units start.

Any last advice or ideas? Perhaps a way to see which process sends the SIGTERM? I have Pods and Podman Desktop on the machine. I might try removing those in case they somehow interfere, but that's unlikely, especially since they are Flatpaks.
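One thing I might try: trace the kernel's signal_generate tracepoint system-wide, which should name whichever process raises a SIGTERM no matter how short-lived the target is. A rough sketch, assuming bpftrace is installed (signal 15 is SIGTERM):

sudo bpftrace -e 'tracepoint:signal:signal_generate /args->sig == 15/ { printf("%s (pid %d) -> %s (pid %d)\n", comm, pid, args->comm, args->pid); }'

Leaving that running in one terminal while starting the quadlet in another should catch the sender even if it fires within the first few milliseconds.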


Generated via the command: systemctl --system list-unit-files | sort | sed -E 's@\s+@ @g'

Click for details
--- system-units-vm.txt	2024-09-17 22:11:16.043517907 +0200
+++ system-units-host.txt	2024-09-17 22:11:25.525331678 +0200
@@ -1,4 +1,5 @@
-553 unit files listed.
+
+594 unit files listed.
 abrtd.service enabled enabled
 abrt-journal-core.service enabled enabled
 abrt-oops.service enabled enabled
@@ -6,6 +7,12 @@
 abrt-vmcore.service enabled enabled
 abrt-xorg.service enabled enabled
 accounts-daemon.service enabled enabled
+adb.service disabled disabled
+[email protected] disabled disabled
+akmods-keygen.target static -
+[email protected] disabled disabled
+akmods.service enabled enabled
+akmods-shutdown.service disabled disabled
 alsa-restore.service static -
 alsa-state.service static -
 anaconda-direct.service static -
@@ -20,6 +27,7 @@
 anaconda.target static -
 [email protected] static -
 arp-ethers.service disabled disabled
+atd.service enabled enabled
 auditd.service enabled enabled
 audit-rules.service enabled enabled
 auth-rpcgss-module.service static -
@@ -43,10 +51,15 @@
 chronyd-restricted.service disabled disabled
 chronyd.service enabled enabled
 chrony-wait.service disabled disabled
+cni-dhcp.service disabled disabled
+cni-dhcp.socket disabled disabled
 colord.service static -
 [email protected] static -
 console-getty.service disabled disabled
 [email protected] static -
+coolercontrold.service enabled disabled
+coolercontrol-liqctld.service static -
+crond.service enabled enabled
 cryptsetup-pre.target static -
 cryptsetup.target static -
 ctrl-alt-del.target alias -
@@ -83,6 +96,9 @@
 display-manager.service alias -
 dm-event.service static -
 dm-event.socket enabled enabled
+dmraid-activation.service disabled enabled
+dnf5-offline-transaction-cleanup.service static -
+dnf5-offline-transaction.service disabled disabled
 dnf-makecache.service static -
 dnf-makecache.timer enabled enabled
 dnf-system-upgrade-cleanup.service static -
@@ -101,6 +117,9 @@
 emergency.target static -
 exit.target disabled disabled
 factory-reset.target static -
+fancontrol.service disabled disabled
+fcoemon.socket disabled disabled
+fcoe.service disabled disabled
 fedora-third-party-refresh.service disabled disabled
 final.target static -
 firewalld.service enabled enabled
@@ -127,6 +146,7 @@
 grub-boot-indeterminate.service static -
 gssproxy.service disabled disabled
 halt.target disabled disabled
+hddtemp.service disabled disabled
 hibernate.target static -
 home.mount generated -
 htcacheclean.service disabled disabled
@@ -167,14 +187,17 @@
 kmod-static-nodes.service static -
 ldconfig.service static -
 libvirtd-admin.socket disabled disabled
-libvirtd-ro.socket disabled disabled
-libvirtd.service disabled disabled
-libvirtd.socket disabled disabled
+libvirtd-ro.socket enabled disabled
+libvirtd.service enabled disabled
+libvirtd.socket enabled disabled
 libvirtd-tcp.socket disabled disabled
 libvirtd-tls.socket disabled disabled
 libvirt-guests.service disabled disabled
-livesys-late.service enabled enabled
-livesys.service enabled enabled
+livesys-late.service generated -
+livesys.service generated -
+lldpad.service disabled disabled
+lldpad.socket disabled disabled
+lm_sensors.service enabled enabled
 loadmodules.service disabled disabled
 local-fs-pre.target static -
 local-fs.target static -
@@ -185,6 +208,7 @@
 lvm2-lvmpolld.service static -
 lvm2-lvmpolld.socket enabled enabled
 lvm2-monitor.service enabled enabled
+machine-qemu\x2d2\x2dfedora40.scope transient -
 machine.slice static -
 machines.target disabled enabled
 man-db-cache-update.service static -
@@ -201,9 +225,13 @@
 mdmonitor-oneshot.timer disabled disabled
 mdmonitor.service enabled enabled
 [email protected] static -
+mnt-entertainment.mount generated -
+mnt-media_storage.mount generated -
 ModemManager.service enabled enabled
 [email protected] static -
 -.mount generated -
+multipathd.service enabled enabled
+multipathd.socket enabled disabled
 multi-user.target static -
 ndctl-monitor.service disabled disabled
 netavark-dhcp-proxy.service disabled disabled
@@ -228,6 +256,12 @@
 nss-lookup.target static -
 nss-user-lookup.target static -
 numad.service disabled disabled
+nvidia-fallback.service disabled disabled
+nvidia-hibernate.service enabled enabled
+nvidia-persistenced.service disabled disabled
+nvidia-powerd.service enabled enabled
+nvidia-resume.service enabled enabled
+nvidia-suspend.service enabled enabled
 nvmefc-boot-connections.service disabled disabled
 nvmf-autoconnect.service disabled disabled
 nvmf-connect-nbft.service static -
@@ -242,6 +276,8 @@
 paths.target static -
 pcscd.service indirect disabled
 pcscd.socket enabled enabled
+piavpn.service enabled disabled
+plexmediaserver.service enabled disabled
 plocate-updatedb.service static -
 plocate-updatedb.timer enabled enabled
 plymouth-halt.service static -
@@ -275,10 +311,11 @@
 quotaon.service static -
 raid-check.service static -
 raid-check.timer enabled enabled
+ratbagd.service disabled disabled
 rc-local.service static -
 realmd.service static -
 reboot.target enabled enabled
-remote-cryptsetup.target enabled enabled
+remote-cryptsetup.target disabled enabled
 remote-fs-pre.target static -
 remote-fs.target enabled enabled
 remote-veritysetup.target disabled disabled
@@ -291,8 +328,8 @@
 rpc_pipefs.target static -
 rpc-statd-notify.service static -
 rpc-statd.service static -
-rpmdb-migrate.service disabled enabled
-rpmdb-rebuild.service disabled enabled
+rpmdb-migrate.service enabled enabled
+rpmdb-rebuild.service enabled enabled
 rtkit-daemon.service enabled enabled
 runlevel0.target alias -
 runlevel1.target alias -
@@ -314,6 +351,7 @@
 sleep.target static -
 slices.target static -
 smartcard.target static -
+smartd.service enabled enabled
 sockets.target static -
 soft-reboot.target static -
 sound.target static -
@@ -323,7 +361,7 @@
 spice-webdavd.service static -
 [email protected] disabled disabled
 sshd-keygen.target static -
-sshd.service disabled disabled
+sshd.service enabled disabled
 [email protected] static -
 sshd.socket disabled disabled
 ssh-host-keys-migration.service disabled disabled
@@ -368,9 +406,10 @@
 systemd-boot-random-seed.service static -
 systemd-boot-update.service disabled disabled
 systemd-bsod.service static -
-systemd-confext.service enabled enabled
+systemd-confext.service disabled enabled
 [email protected] static -
 systemd-coredump.socket static -
+systemd-cryptsetup@luks\x2d25bc83ad\x2dfefb\x2d42fd\x2d89d2\x2df5201b6ce248.service generated -
 systemd-exit.service static -
 systemd-firstboot.service static -
 systemd-fsck-root.service static -
@@ -407,7 +446,7 @@
 systemd-networkd.socket disabled disabled
 systemd-networkd-wait-online.service disabled disabled
 [email protected] disabled disabled
-systemd-network-generator.service enabled enabled
+systemd-network-generator.service disabled enabled
 [email protected] disabled disabled
 systemd-oomd.service enabled enabled
 systemd-oomd.socket disabled disabled
@@ -428,7 +467,7 @@
 systemd-pcrphase-sysinit.service static -
 systemd-portabled.service static -
 systemd-poweroff.service static -
-systemd-pstore.service enabled enabled
+systemd-pstore.service disabled enabled
 systemd-quotacheck.service static -
 systemd-random-seed.service static -
 systemd-reboot.service static -
@@ -442,7 +481,7 @@
 systemd-suspend.service static -
 systemd-suspend-then-hibernate.service static -
 systemd-sysctl.service static -
-systemd-sysext.service enabled enabled
+systemd-sysext.service disabled enabled
 [email protected] static -
 systemd-sysext.socket disabled disabled
 systemd-sysupdate-reboot.service indirect disabled
@@ -479,6 +518,7 @@
 system-update-cleanup.service static -
 system-update-pre.target static -
 system-update.target static -
+tcsd.service disabled disabled
 [email protected] static -
 thermald.service enabled enabled
 timers.target static -
@@ -500,36 +540,37 @@
 user.slice static -
 var-lib-machines.mount static -
 var-lib-nfs-rpc_pipefs.mount static -
+var-lib-plexmediaserver-media.mount generated -
 vboxclient.service static -
 vboxservice.service enabled enabled
 veritysetup-pre.target static -
 veritysetup.target static -
 vgauthd.service enabled disabled
 virt-guest-shutdown.target static -
-virtinterfaced-admin.socket enabled enabled
-virtinterfaced-ro.socket enabled enabled
+virtinterfaced-admin.socket disabled enabled
+virtinterfaced-ro.socket disabled enabled
 virtinterfaced.service disabled disabled
 virtinterfaced.socket enabled enabled
-virtlockd-admin.socket enabled enabled
+virtlockd-admin.socket disabled enabled
 virtlockd.service disabled disabled
 virtlockd.socket enabled enabled
-virtlogd-admin.socket enabled enabled
+virtlogd-admin.socket disabled enabled
 virtlogd.service disabled disabled
 virtlogd.socket enabled enabled
-virtnetworkd-admin.socket enabled enabled
-virtnetworkd-ro.socket enabled enabled
+virtnetworkd-admin.socket disabled enabled
+virtnetworkd-ro.socket disabled enabled
 virtnetworkd.service disabled disabled
 virtnetworkd.socket enabled enabled
-virtnodedevd-admin.socket enabled enabled
-virtnodedevd-ro.socket enabled enabled
+virtnodedevd-admin.socket disabled enabled
+virtnodedevd-ro.socket disabled enabled
 virtnodedevd.service disabled disabled
 virtnodedevd.socket enabled enabled
-virtnwfilterd-admin.socket enabled enabled
-virtnwfilterd-ro.socket enabled enabled
+virtnwfilterd-admin.socket disabled enabled
+virtnwfilterd-ro.socket disabled enabled
 virtnwfilterd.service disabled disabled
 virtnwfilterd.socket enabled enabled
-virtproxyd-admin.socket enabled enabled
-virtproxyd-ro.socket enabled enabled
+virtproxyd-admin.socket disabled enabled
+virtproxyd-ro.socket disabled enabled
 virtproxyd.service disabled disabled
 virtproxyd.socket enabled enabled
 virtproxyd-tcp.socket disabled disabled
@@ -538,15 +579,16 @@
 virtqemud-ro.socket enabled enabled
 virtqemud.service enabled enabled
 virtqemud.socket enabled enabled
-virtsecretd-admin.socket enabled enabled
-virtsecretd-ro.socket enabled enabled
+virtsecretd-admin.socket disabled enabled
+virtsecretd-ro.socket disabled enabled
 virtsecretd.service disabled disabled
 virtsecretd.socket enabled enabled
-virtstoraged-admin.socket enabled enabled
-virtstoraged-ro.socket enabled enabled
+virtstoraged-admin.socket disabled enabled
+virtstoraged-ro.socket disabled enabled
 virtstoraged.service disabled disabled
 virtstoraged.socket enabled enabled
 vmtoolsd.service enabled enabled
+[email protected] static -
 wpa_supplicant.service disabled disabled
 wsdd.service disabled disabled
 zfs-fuse-scrub.service static -

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Well, I had an idea: use strace to capture all signals (such as SIGTERM). The trace captures every incoming signal to the watched process, and it shows that nothing actually sends SIGTERM to Podman. I re-ran these commands 30+ times trying to catch at least ONE incoming "terminate yourself" signal, but never captured a single one.

It did capture various signals sent BY the podman process to terminate its child (the container), as well as signals from conmon detecting that podman had exited.

But no SIGTERM directed at podman was ever received, nor sent by conmon or podman themselves.

Which means:

  • Either Podman is internally deciding to SIGTERM itself.
  • Or the SIGTERM arrives so fast that it lands in the fraction of a second before strace has a chance to attach to podman.

  • Monitoring conmon:
systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; ps aux | grep podman; POD_PID=$(pidof conmon); echo "PID=${POD_PID}"; strace -p $POD_PID -e signal
johnny     19847  0.0  0.0 231328  2068 ?        Ss   22:35   0:00 /usr/bin/conmon --api-version 1 -c 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 -u 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata -p /run/user/1000/containers/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9 --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 030c5e181d7d6fbb5d76a50de51349065d694cce0f4582103b037d6aca08f7d9
johnny     19852  0.7  0.0 2605668 43904 ?       Ssl  22:35   0:00 /usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid
johnny     19900  0.0  0.0 227924  2304 pts/6    S+   22:35   0:00 grep --color=auto podman
PID=19847
strace: Process 19847 attached
rt_sigaction(SIGCHLD, {sa_handler=SIG_DFL, sa_mask=[CHLD], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f95c1bf2d00}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
+++ exited with 137 +++
  • Monitoring podman:
systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; ps aux | grep podman; POD_PID=$(pidof podman); echo "PID=${POD_PID}"; strace -p $POD_PID -e signal
johnny     23951  0.0  0.0 231328  2064 ?        Ss   22:42   0:00 /usr/bin/conmon --api-version 1 -c c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214 -u c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214 -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214/userdata -p /run/user/1000/containers/overlay-containers/c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214/userdata/pidfile -n systemd-testquadlet --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214 --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg c52c97b5dd2adba17b67c1de505fb912d05829312a437c131ed38384c4a71214
johnny     23955  2.6  0.0 2458296 43008 ?       Ssl  22:42   0:00 /usr/bin/podman rm -v -f -i --cidfile=/run/user/1000/testquadlet.cid
johnny     24002  0.0  0.0 227924  2048 pts/6    S+   22:42   0:00 grep --color=auto podman
PID=23955
strace: Process 23955 attached
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
kill(23953, 0)                          = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=24030, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
rt_sigreturn({mask=[]})                 = 1
kill(23953, 0)                          = -1 ESRCH (No such process)
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
rt_sigprocmask(SIG_SETMASK, NULL, [], 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[], NULL, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=24045, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
rt_sigreturn({mask=[]})                 = 0
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=23955, si_uid=0} ---
rt_sigreturn({mask=[]})                 = 1
+++ exited with 0 +++

The 23955 PID is podman itself.

I was also quick to check what kill(23953, 0) = 0 referred to; signal 0 doesn't actually deliver anything, it only checks that the target process exists, and it turns out 23953 was just the infinitely sleeping container process that podman was polling:

ps aux | rg 23953
johnny     23953  0.0  0.0   1612   640 ?        Ss   22:42   0:00 sleep infinity

Edit: I even simplified the command to cut out the ps aux | grep delay and minimize the pause between systemd starting the unit and strace attaching; I still wasn't able to see any SIGTERM being delivered to podman:

systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; POD_PID=$(pidof podman); echo "PID=${POD_PID}"; strace -p $POD_PID -e signal
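If there really is a window where the SIGTERM lands before strace can attach, one way to eliminate the race entirely (untested, just a sketch) would be to wrap the podman binary in strace straight from systemd, via a drop-in created with systemctl --user edit testquadlet.service:

[Service]
ExecStart=
ExecStart=/usr/bin/strace -f -ttt -e trace=signal -o /tmp/testquadlet.strace /usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity

(The empty ExecStart= clears the generated command before redefining it, and the /tmp/testquadlet.strace log path is arbitrary.)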

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

I just had a small breakthrough, so I am not giving up yet.

  • I manually created ~/.config/systemd/user/testquadletmanual.service.
  • I used /usr/libexec/podman/quadlet --user --dryrun to get the correct service file contents.
  • I edited the ExecStart= line to remove -d, telling podman to stay resident instead of daemonizing (with -d it immediately exits with status 0).
  • The resulting file is exactly what quadlet generates, minus the -d flag:
[Unit]
Wants=network-online.target
After=network-online.target
Description=Sleep Container
SourcePath=/home/johnny/.config/containers/systemd/testquadlet.container
RequiresMountsFor=%t/containers

[X-Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity

[Install]
WantedBy=default.target

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon --log-level=debug docker.io/alpine sleep infinity
  • Then I started the custom service and watched the system log:
Click for details
$ systemctl --user restart testquadletmanual --no-block; journalctl --user -f
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor unnamed [Gjs_dash-to-dock_micxgx_gmail_com_docking_DashSlideContainer] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor dashtodockBox [StBoxLayout] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor bms-dash-backgroundgroup [MetaBackgroundGroup] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor bms-dash-blurred-widget [StWidget] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor dashtodockContainer [Gjs_dash-to-dock_micxgx_gmail_com_docking_DashToDock] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor unnamed [Gjs_dash-to-dock_micxgx_gmail_com_docking_DashSlideContainer] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor dashtodockBox [StBoxLayout] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor bms-dash-backgroundgroup [MetaBackgroundGroup] is on because it needs an allocation.
Sep 17 23:04:27 aurora gnome-shell[5424]: Can't update stage views actor bms-dash-blurred-widget [StWidget] is on because it needs an allocation.
Sep 17 23:04:27 aurora systemd[5025]: Starting testquadletmanual.service - Sleep Container...
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="/usr/bin/podman filtering at log level debug"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadletmanual --cidfile=/run/user/1000/testquadletmanual.cid --replace --rm --cgroups=split --sdnotify=conmon --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="Using sqlite as database backend"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using graph driver overlay"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using graph root /home/johnny/.local/share/containers/storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using run root /run/user/1000/containers"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using static dir /home/johnny/.local/share/containers/storage/libpod"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using volume path /home/johnny/.local/share/containers/storage/volumes"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using transient store: false"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that overlay is supported"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that native-diff is usable"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Initializing event backend journald"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="Setting parallel job count to 49"
Sep 17 23:04:27 aurora systemd[5025]: Started podman-30394.scope.
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Pulling image docker.io/alpine (policy: missing)"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Looking up image \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/library/alpine:latest\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Looking up image \"docker.io/alpine\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Trying \"docker.io/library/alpine:latest\" ..."
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="parsed reference into \"[overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Found image \"docker.io/alpine\" as \"docker.io/library/alpine:latest\" in local containers storage ([overlay@/home/johnny/.local/share/containers/storage+/run/user/1000/containers]@91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b)"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Inspecting image 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="using systemd mode: false"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="setting container name systemd-testquadletmanual"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Allocated lock 0 for container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="exporting opaque data as blob \"sha256:91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Check for idmapped mounts support "
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Created container \"4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Container \"4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb\" has work directory \"/home/johnny/.local/share/containers/storage/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Container \"4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb\" has run directory \"/run/user/1000/containers/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata\""
Sep 17 23:04:27 aurora podman[30394]: 2024-09-17 23:04:27.83295267 +0200 CEST m=+0.033580016 container create 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb (image=docker.io/library/alpine:latest, name=systemd-testquadletmanual, PODMAN_SYSTEMD_UNIT=testquadletmanual.service)
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Not attaching to stdin"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="Received shutdown.Stop(), terminating!" PID=30394
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Enabling signal proxying"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Cached value indicated that volatile is being used"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="overlay: mount_data=lowerdir=/home/johnny/.local/share/containers/storage/overlay/l/RMGDGIAGZTBD6WW47XCMRD57TR,upperdir=/home/johnny/.local/share/containers/storage/overlay/034c0e17f0002101986aa61fcbafbfe6ae5198d9b22a8762b38527bf7c9a14eb/diff,workdir=/home/johnny/.local/share/containers/storage/overlay/034c0e17f0002101986aa61fcbafbfe6ae5198d9b22a8762b38527bf7c9a14eb/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c521,c622\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-59c96dc8-4d12-27f9-eadf-017afaa7a2ad for container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="pasta arguments: --config-net --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/netns/netns-59c96dc8-4d12-27f9-eadf-017afaa7a2ad"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Mounted container \"4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb\" at \"/home/johnny/.local/share/containers/storage/overlay/034c0e17f0002101986aa61fcbafbfe6ae5198d9b22a8762b38527bf7c9a14eb/merged\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Created root filesystem for container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb at /home/johnny/.local/share/containers/storage/overlay/034c0e17f0002101986aa61fcbafbfe6ae5198d9b22a8762b38527bf7c9a14eb/merged"
Sep 17 23:04:27 aurora pasta[30420]: Couldn't get any nameserver address
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="pasta logged warnings: \"Couldn't get any nameserver address\\n\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/johnny/.local/share/containers/storage/overlay/034c0e17f0002101986aa61fcbafbfe6ae5198d9b22a8762b38527bf7c9a14eb/merged\""
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Created OCI spec for container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb at /home/johnny/.local/share/containers/storage/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata/config.json"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb -u 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb -r /usr/bin/crun -b /home/johnny/.local/share/containers/storage/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata -p /run/user/1000/containers/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata/pidfile -n systemd-testquadletmanual --exit-dir /run/user/1000/libpod/tmp/exits --persist-dir /run/user/1000/libpod/tmp/persist/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb --full-attach -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/johnny/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/johnny/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb]"
Sep 17 23:04:27 aurora testquadletmanual[30431]: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 23:04:27 aurora conmon[30431]: conmon 4bb29585c3dd878377d6 <ndebug>: failed to write to /proc/self/oom_score_adj: Permission denied
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <ndebug>: terminal_ctrl_fd: 12
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <ndebug>: winsz read side: 15, winsz write side: 16
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <ndebug>: container PID: 30435
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <nwarn>: Failed to add inotify watch for /sys/fs/cgroup/memory.events
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Received: 30435"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=info msg="Got Conmon PID as 30433"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Created container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb in OCI runtime"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
Sep 17 23:04:27 aurora podman[30394]: 2024-09-17 23:04:27.893203359 +0200 CEST m=+0.093830705 container init 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb (image=docker.io/library/alpine:latest, name=systemd-testquadletmanual, PODMAN_SYSTEMD_UNIT=testquadletmanual.service)
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Attaching to container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Starting container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb with command [sleep infinity]"
Sep 17 23:04:27 aurora conmon[30433]: conmon 4bb29585c3dd878377d6 <ndebug>: Accepted console connection 9
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Started container 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb"
Sep 17 23:04:27 aurora testquadletmanual[30394]: time="2024-09-17T23:04:27+02:00" level=debug msg="Notify sent successfully"
Sep 17 23:04:27 aurora podman[30394]: 2024-09-17 23:04:27.895656448 +0200 CEST m=+0.096283784 container start 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb (image=docker.io/library/alpine:latest, name=systemd-testquadletmanual, PODMAN_SYSTEMD_UNIT=testquadletmanual.service)
Sep 17 23:04:27 aurora systemd[5025]: Started testquadletmanual.service - Sleep Container.
Sep 17 23:04:27 aurora podman[30394]: 2024-09-17 23:04:27.90602109 +0200 CEST m=+0.106648436 container attach 4bb29585c3dd878377d6d36be64065ff193273f0931e9ab8fbfc12efa4b564bb (image=docker.io/library/alpine:latest, name=systemd-testquadletmanual, PODMAN_SYSTEMD_UNIT=testquadletmanual.service)
Sep 17 23:04:27 aurora podman[30394]: 2024-09-17 23:04:27.816963524 +0200 CEST m=+0.017590860 image pull 91ef0af61f39ece4d6710e465df5ed6ca12112358344fd51ae6a3b886634148b docker.io/alpine
  • And checked podman ps (success):
podman ps
CONTAINER ID  IMAGE                            COMMAND         CREATED        STATUS        PORTS       NAMES
4bb29585c3dd  docker.io/library/alpine:latest  sleep infinity  8 minutes ago  Up 8 minutes              systemd-testquadletmanual

Conclusion so far:

  • Nothing was ever sending SIGTERM.
  • When podman exits immediately because of -d, the systemd service assumes the service itself has stopped. Why does that happen here, when it doesn't happen in a fresh VM? I used a difftool to verify that the ONLY difference between the working and failing services is the -d flag. Why doesn't -d work on the host?
  • Why does podman terminate the container when the -d flag is used? Is it caused by conmon, or by podman itself?
  • My strongest theory is that systemd executes ExecStart=, sees it exit with code 0, doesn't realize this is a notify-based daemon service, and then automatically runs ExecStop= to clean up (which is what terminated things via "sigterm" in the earlier logs). A quick check for this is sketched right below.
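One way to sanity-check that theory (just an idea, not something I have run yet) is to ask systemd how it classified the run right after the unit goes inactive:

systemctl --user show testquadlet.service -p Type -p NotifyAccess -p MainPID -p ExecMainStatus -p Result

If it reports Result=success and ExecMainStatus=0 even though the unit is already dead, that matches "systemd saw ExecStart exit cleanly and went straight to ExecStop" rather than any external SIGTERM.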

Since I got this far, it might be worth investigating further to find out whether this is a podman bug on certain systems. Any ideas based on this finding, @Luap99 @rhatdan ?

@Arcitec
Copy link
Author

Arcitec commented Sep 17, 2024

Since everything is pointing towards a notify-related bug, I took a quick look at the VM logs where -d succeeds, and searched for all lines containing the word "notify":

#23990 (comment)

Sep 17 22:00:11 localhost-live testquadlet[3040]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Notify sent successfully"
Sep 17 22:00:11 localhost-live testquadlet[3047]: time="2024-09-17T22:00:11+02:00" level=debug msg="Called run.PersistentPostRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"

And compared it to the host logs where -d fails:

#23990 (comment)

Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Called run.PersistentPreRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"
Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <nwarn>: Failed to add inotify watch for /sys/fs/cgroup/memory.events
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Notify sent successfully"
Sep 17 20:07:26 aurora testquadlet[1177076]: time="2024-09-17T20:07:26+02:00" level=debug msg="Called run.PersistentPostRunE(/usr/bin/podman run --name=systemd-testquadlet --cidfile=/run/user/1000/testquadlet.cid --replace --rm --cgroups=split --sdnotify=conmon -d --log-level=debug docker.io/alpine sleep infinity)"

And the thing that immediately stands out is this line:

Sep 17 20:07:26 aurora conmon[1177107]: conmon 799bacc6b24d58f6a839 <nwarn>: Failed to add inotify watch for /sys/fs/cgroup/memory.events

I was curious, so I tried ls /sys/fs/cgroup/memory.events on the host and inside the VM, and that file does NOT exist on either of them (it's a file, not a directory). So it looks like conmon ONLY tries to access that non-existent file on the HOST.

So far there are some pretty strong hints of a bug: conmon on my host tries to watch a non-existent file (/sys/fs/cgroup/memory.events), whereas on the VM (a fresh Fedora 40 installation) conmon doesn't try to watch that file at all. And when the host fails to set up that watch, conmon apparently tears everything down.

This is my strongest theory at the moment. I would have to debug differences in cgroup settings between guest and host.
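For reference, the standard checks for spotting a mixed cgroup v1+v2 setup (to run on both the host and the VM) would be something like:

stat -fc %T /sys/fs/cgroup/             # cgroup2fs = pure v2, tmpfs = v1 or hybrid
mount | grep cgroup                     # any legacy v1 hierarchies still mounted?
cat /sys/fs/cgroup/cgroup.controllers   # controllers available on the unified hierarchy
cat /proc/self/cgroup                   # one "0::/..." line = pure v2; more lines = mixed v1+v2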

Here's a comparison of host and VM podman info:

Click for host podman info
host:
  arch: amd64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-2.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 97.19
    systemPercent: 0.9
    userPercent: 1.91
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 2047
  hostname: aurora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.9.12-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 32950640640
  memTotal: 67107381248
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.2-2.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.2
    package: netavark-1.12.2-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun-1.17-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.17
      commit: 000fa0d4eeed8938301f3bcf8206405315bc1017
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240906.g6b38f07-1.fc40.x86_64
    version: |
      pasta 0^20240906.g6b38f07-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 1h 55m 57.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/johnny/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/johnny/.local/share/containers/storage
  graphRootAllocated: 1998678130688
  graphRootUsed: 1932992999424
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/johnny/.local/share/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 1724198400
  BuiltTime: Wed Aug 21 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/amd64
  Version: 5.2.2
VM podman info:
host:
  arch: amd64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-2.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 99.58
    systemPercent: 0.15
    userPercent: 0.27
  cpus: 2
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 2047
  hostname: localhost-live
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.9.12-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1364869120
  memTotal: 4078301184
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.2-2.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.2
    package: netavark-1.12.2-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.2
  ociRuntime:
    name: crun
    package: crun-1.17-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.17
      commit: 000fa0d4eeed8938301f3bcf8206405315bc1017
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240906.g6b38f07-1.fc40.x86_64
    version: |
      pasta 0^20240906.g6b38f07-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4077907968
  swapTotal: 4077907968
  uptime: 1h 25m 54.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/johnny/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/johnny/.local/share/containers/storage
  graphRootAllocated: 19769851904
  graphRootUsed: 4667817984
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/johnny/.local/share/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 1724198400
  BuiltTime: Wed Aug 21 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/amd64
  Version: 5.2.2

I compared them in Meld and the only real differences are:

  • Remote Socket (/run/user/1000/podman/podman.sock) exists: Host=yes, VM=no.
  • slirp4netns: Details are known on the host. Empty settings on the VM.
  • Registries: Host has a custom registry on localhost. VM doesn't.

Any ideas for how to debug the "Failed to add inotify watch for /sys/fs/cgroup/memory.events" error? There could be differences in the cgroup settings between a fresh Fedora Workstation 40 and one that has been upgraded, but I don't know how to check that. All I know so far is that the file doesn't exist on either the VM or the host, that conmon only tries to access it on the host, and that this is the most likely reason the podman quadlet service is terminating.
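In the meantime, the closest I've gotten to observing it is checking where memory.events files actually exist and tracing the inotify calls during a manual run (a rough sketch; it won't reproduce the exact systemd environment, and it assumes strace is installed):

# Where do memory.events files actually exist on this machine?
$ find /sys/fs/cgroup -maxdepth 4 -name memory.events 2>/dev/null | head
# Which paths podman/conmon try to watch when starting a throwaway container
$ strace -f -e trace=inotify_init1,inotify_add_watch -o /tmp/inotify.log \
    podman run --rm docker.io/alpine sleep 5
$ grep memory.events /tmp/inotify.log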

@Arcitec
Author

Arcitec commented Sep 17, 2024

Oh My GOD! I've solved it after nearly 10 hours of troubleshooting and a ruined day.

@Luap99 @rhatdan @giuseppe

There is a bug in either conmon, podman, systemd or pia-vpn.

The issue is extremely easy to reproduce:

  • Make a virtual machine with Fedora Workstation 40.
  • Create ~/.config/containers/systemd/testquadlet.container with the following contents:
[Unit]
Description=Sleep Container

[Container]
Image=docker.io/alpine
PodmanArgs=--log-level=debug
Exec=sleep infinity

[Install]
WantedBy=default.target
  • Generate the updated service and run it with system logging visible:
$ systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
  • Verify that it's running (it is):
$ podman ps
  • Stop it, then install PIA VPN:
$ systemctl --user stop testquadlet
$ cd ~/Downloads
$ chmod +x pia-linux-3.6.1-08339.run
$ ./pia-linux-3.6.1-08339.run
  • Generate the updated systemd service and run it again with system logging visible:
$ systemctl --user daemon-reload; systemctl --user restart testquadlet --no-block; journalctl --user -f
  • Now it fails to start, and it will keep failing until you either uninstall PIA VPN and restart the whole system, or unmount its cgroup v1 hierarchy (then Podman starts working again immediately):
$ sudo umount /opt/piavpn/etc/cgroup/net_cls
  • In terms of debugging so far, I've checked the cgroup mounts and found the following, which shows that PIA VPN adds a legacy cgroup v1 mount (a sketch for reproducing this mount without the VPN follows below):
$ mount | grep -i cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
none on /opt/piavpn/etc/cgroup/net_cls type cgroup (rw,relatime,seclabel,net_cls)
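
For anyone who wants to reproduce the mixed layout without installing the VPN, manually mounting an extra v1 hierarchy at some out-of-the-way path creates an equivalent setup (a sketch; the path is arbitrary, and this alone may or may not be enough to trigger the failure, since the VPN also actively manages its cgroup):

$ sudo mkdir -p /opt/test-cgroup/net_cls
$ sudo mount -t cgroup -o net_cls none /opt/test-cgroup/net_cls
# ...retry the quadlet service, then clean up:
$ sudo umount /opt/test-cgroup/net_cls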

IMPORTANT Edit: I found the kernel documentation for cgroups, which confirms that the "do not break userspace" rule of kernel development is still in effect and means that mounting cgroup v1 and cgroup v2 at the same time is fully supported and valid:

"All controllers which support v2 and are not bound to a v1 hierarchy are automatically bound to the v2 hierarchy and show up at the root. Controllers which are not in active use in the v2 hierarchy can be bound to other hierarchies. This allows mixing v2 hierarchy with the legacy v1 multiple hierarchies in a fully backward compatible way."

From what I've been able to understand so far, their cgroup v1 mount at /opt/piavpn/etc/cgroup/net_cls is totally fine since it lives at a separate path and neither the kernel nor systemd minds at all. So this looks like a bug in podman, which misdetects the cgroup filesystems when multiple kinds of cgroups are mounted.

While it's regrettable that PIA VPN creates a legacy Cgroup v1, it's totally valid to do so, and it places it in a non-conflicting location at /opt/piavpn/etc/cgroup/net_cls where there's no mixing with the core system cgroups.

Systemd themselves added support for legacy Cgroup v1s on Cgroup v2 systems in one of the tickets I linked to above, which is their own acknowledgement that this is valid. The kernel docs confirm it too.

And then there are tools such as LXC, another container runtime, which supports systems that have cgroup v1 mounts at various locations on top of a cgroup v2 system, as seen when people ensured that the v1 net_cls controller simply wasn't mounted under the v2 path: here and another reference here

Therefore it seems pretty clear that the bug is in podman or conmon. I see so many people in other tickets who have wasted days debugging this situation. Would it perhaps be possible to revisit this issue and add cgroup detection to podman, so that it attaches itself to the system's cgroup v2 hierarchy rather than getting confused by cgroup v1 mounts from other apps?

The issue doesn't seem to exist in systemd itself: I have around 1000 services in total between my system and user units, and all of them run and properly use cgroup v2, except for podman's quadlets, which fail to start when the system contains both cgroup v1 and cgroup v2 mounts.

The small test case in this message could be helpful for coding such detection into podman. Furthermore, the systemd ticket links to some commits that show the techniques systemd used for properly filtering and supporting mixed cgroup v1 and v2 systems.

It would be a big improvement if Podman detected cgroup v2 and registered itself there, ignoring the backwards-compatible v1 hierarchies created by other apps.
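
For reference, the unified mount is easy to identify no matter how many extra v1 hierarchies exist; a minimal sketch of the kind of check I mean (not Podman's actual logic):

$ stat -f -c %T /sys/fs/cgroup    # prints "cgroup2fs" on a unified v2 system, regardless of other mounts
$ mount -t cgroup                 # lists only the legacy v1 mounts, which can simply be ignored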

I'm looking forward to hearing your feedback. :)

@Arcitec Arcitec changed the title [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Update: It gets confused by extra cgroups and fails startup, see last replies!] Sep 17, 2024
@Arcitec Arcitec changed the title [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Update: It gets confused by extra cgroups and fails startup, see last replies!] [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Update: Podman currently doesn't support cgroup v1 + v2 mixed systems. See last reply!] Sep 17, 2024
@giuseppe
Member

I've tried to manually mount a cgroup v1 hierarchy and everything seems to work fine. I don't think it is a podman/conmon/crun issue if the container runs; the cgroup somehow confuses systemd, which terminates the container.

Please try replacing the ExecStop directive with /usr/bin/sleep 600 and restart the service. While the ExecStop command runs, can you check what systemctl status reports for the service?
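For example, a drop-in along these lines should do it without touching the .container file (a sketch; since the unit is generated, create the drop-in by hand and run systemctl --user daemon-reload afterwards):

# ~/.config/systemd/user/testquadlet.service.d/override.conf
[Service]
# clear the generated ExecStop, then replace it so the stop phase hangs long enough to inspect the unit
ExecStop=
ExecStop=/usr/bin/sleep 600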

Do you get the same behaviour if you stop the VPN service (so the cgroup tree is still present but no process is running)? The VPN service could also be managing the cgroup and terminating processes it doesn't recognize.

@Luap99
Member

Luap99 commented Sep 18, 2024

Just to be clear about the sequence here:
podman starts the container fine, writes conmon's main PID to the systemd notify socket, and signals the ready status, which also seems to work fine.
Then something causes systemd to stop the unit, since ExecStop= is triggered. One way this happens is if the main PID (conmon) exited, which isn't the case here, because then the container would already have died and no stop signal would be delivered by podman. So something else is confused by the cgroup setup and triggers the ExecStop= run.
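
One way to watch that sequence from the outside (a sketch) is to poll the unit's main PID and state while it is briefly up, and compare it with the conmon process:

$ watch -n1 'systemctl --user show testquadlet.service -p MainPID,ActiveState,SubState'
$ pgrep -a conmon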

@giuseppe
Member

giuseppe commented Sep 18, 2024

I think I found the reason: we have a wrong check for the ownership of the current cgroup, which causes the current process to be moved to a different scope. I've opened a PR:

@Arcitec
Author

Arcitec commented Sep 18, 2024

@giuseppe Thank you so much. All of you are absolutely brilliant. Podman is one of the most important projects in the world and I am grateful to all of you. :)

@Arcitec Arcitec changed the title [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Update: Podman currently doesn't support cgroup v1 + v2 mixed systems. See last reply!] [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Podman currently doesn't support cgroup v1 + v2 mixed systems.] [Update: FIXED IN Podman Podman 5.3.0] Sep 19, 2024
@Arcitec Arcitec changed the title [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Podman currently doesn't support cgroup v1 + v2 mixed systems.] [Update: FIXED IN Podman Podman 5.3.0] [Quadlet][Rootless] The generated systemd service exits immediately with success code 0 [Podman currently doesn't support cgroup v1 + v2 mixed systems.] [Update: FIXED in Podman 5.3.0] Sep 19, 2024
@atagen

atagen commented Oct 9, 2024

Is this fix in 5.2.4, or only in the forthcoming 5.3.0?

@giuseppe
Member

giuseppe commented Oct 9, 2024

The fix is not in 5.2.4.
