API not compatible with Docker when pulling image with tag in both fromImage and tag #23938

Open
vlk-charles opened this issue Sep 11, 2024 · 2 comments · May be fixed by #24184
Labels: jira, kind/bug (Categorizes issue or PR as related to a bug.), stale-issue

Comments

vlk-charles commented Sep 11, 2024

Issue Description

The Podman API daemon handles the /images/create endpoint differently from Docker. The documentation specifies two parameters in which a tag can appear when pulling an image:

  • fromImage - Name of the image to pull. The name may include a tag or digest.
  • tag - Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled.

It is not clear to me whether including the tag in both places violates the API or, if it does not, what the expected behavior is. When the tag is specified in both places, Podman fails to normalize the name into a valid format, whereas Docker is more forgiving: it simply takes the value from tag, overriding any tag in fromImage (treating ?fromImage=image%3Aanything&tag=latest the same as ?fromImage=image&tag=latest).
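
For illustration, here is a minimal Go sketch of the merge behavior Docker appears to apply. This is hypothetical code, not taken from Docker or Podman, and the helper name mergeImageAndTag is made up; digest handling is omitted for brevity:

package main

import (
	"fmt"
	"strings"
)

// mergeImageAndTag mirrors the observed Docker behavior: a non-empty tag
// replaces any tag already present in fromImage.
func mergeImageAndTag(fromImage, tag string) string {
	if tag == "" {
		return fromImage // per the docs, an empty tag means "pull all tags"
	}
	// Strip an existing ":tag" suffix without touching a registry port
	// such as "registry:5000/image".
	if i := strings.LastIndex(fromImage, ":"); i != -1 && !strings.Contains(fromImage[i:], "/") {
		fromImage = fromImage[:i]
	}
	return fromImage + ":" + tag
}

func main() {
	fmt.Println(mergeImageAndTag("hello-world:anything", "latest")) // hello-world:latest
	fmt.Println(mergeImageAndTag("hello-world", "latest"))          // hello-world:latest
}

Under a scheme like this, Podman would reproduce Docker's result for the query above instead of building the invalid reference hello-world:latest:latest seen in the daemon log below.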

Steps to reproduce the issue

curl -vd '' --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" 'http://localhost/v1.40/images/create?fromImage=hello-world%3Alatest&tag=latest'

Describe the results you received

Nothing was pulled because the image name got normalized to an invalid format:

*   Trying /run/user/987/podman/podman.sock:0...
* Connected to localhost (/run/user/987/podman/podman.sock) port 80 (#0)
> POST /v1.40/images/create?fromImage=hello-world%3Alatest&tag=latest HTTP/1.1
> Host: localhost
> User-Agent: curl/7.76.1
> Accept: */*
> Content-Length: 0
> Content-Type: application/x-www-form-urlencoded
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 4.9.4-rhel
< Server: Libpod/4.9.4-rhel (linux)
< X-Reference-Id: 0xc000542f08
< Date: Wed, 11 Sep 2024 22:31:51 GMT
< Content-Length: 174
< 
{"cause":"normalizing name for compat API: invalid reference format","message":"normalizing image: normalizing name for compat API: invalid reference format","response":500}

The daemon logs the following:

time="2024-09-11T21:18:53Z" level=debug msg="Looking up image \"hello-world:latest:latest\" in local containers storage"
time="2024-09-11T21:18:53Z" level=info msg="Request Failed(Internal Server Error): normalizing image: normalizing name for compat API: invalid reference format"
@ - - [11/Sep/2024:21:18:53 +0000] "POST /images/create?fromImage=hello-world%3Alatest&tag=latest HTTP/1.1" 500 174 "" "curl/7.76.1"

Describe the results you expected

I would expect this unusual use of the API to be handled more gracefully, as Docker does:

# curl -vd '' --unix-socket /run/docker.sock 'http://localhost/v1.40/images/create?fromImage=hello-world%3Alatest&tag=latest'
*   Trying /run/docker.sock:0...
* Connected to localhost (/var/run/docker.sock) port 80 (#0)
> POST /v1.40/images/create?fromImage=hello-world%3Alatest&tag=latest HTTP/1.1
> Host: localhost
> User-Agent: curl/7.65.3
> Accept: */*
> Content-Length: 0
> Content-Type: application/x-www-form-urlencoded
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Api-Version: 1.40
< Content-Type: application/json
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/19.03.12 (linux)
< Date: Wed, 11 Sep 2024 22:54:07 GMT
< Transfer-Encoding: chunked
< 
{"status":"Pulling from library/hello-world","id":"latest"}
{"status":"Pulling fs layer","progressDetail":{},"id":"c1ec31eb5944"}
{"status":"Downloading","progressDetail":{"current":719,"total":2459},"progress":"[==============\u003e                                    ]     719B/2.459kB","id":"c1ec31eb5944"}
{"status":"Downloading","progressDetail":{"current":2459,"total":2459},"progress":"[==================================================\u003e]  2.459kB/2.459kB","id":"c1ec31eb5944"}
{"status":"Download complete","progressDetail":{},"id":"c1ec31eb5944"}
{"status":"Extracting","progressDetail":{"current":2459,"total":2459},"progress":"[==================================================\u003e]  2.459kB/2.459kB","id":"c1ec31eb5944"}
{"status":"Extracting","progressDetail":{"current":2459,"total":2459},"progress":"[==================================================\u003e]  2.459kB/2.459kB","id":"c1ec31eb5944"}
{"status":"Pull complete","progressDetail":{},"id":"c1ec31eb5944"}
{"status":"Digest: sha256:91fb4b041da273d5a3273b6d587d62d518300a6ad268b28628f74997b93171b2"}
{"status":"Status: Downloaded newer image for hello-world:latest"}
* Connection #0 to host localhost left intact
# docker images hello-world
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              d2c94e258dcb        16 months ago       13.3kB

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.8
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.el9.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: 3ea3d7f99779af0fcd69ec16c211a7dc3b4efb60'
  cpuUtilization:
    idlePercent: 92.86
    systemPercent: 2.64
    userPercent: 4.49
  cpus: 2
  databaseBackend: sqlite
  distribution:
    distribution: rocky
    version: "9.4"
  eventLogger: file
  freeLocks: 2034
  hostname: <hostname>
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 987
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 987
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.14.0-427.20.1.el9_4.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 589791232
  memTotal: 5944541184
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-3.el9_4.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.el9.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.3-1.el9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.3
      commit: 1961d211ba98f532ea52d2e80f4c20359f241a98
      rundir: /run/user/987/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/987/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.2.3-1.el9.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 2162h 57m 53.00s (Approximately 90.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/<username>/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 7
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/<username>/.local/share/containers/storage
  graphRootAllocated: 21407727616
  graphRootUsed: 16723095552
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 53
  runRoot: /tmp/containers-user-987/containers
  transientStore: false
  volumePath: /home/<username>/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4-rhel
  Built: 1725442483
  BuiltTime: Wed Sep  4 09:34:43 2024
  GitCommit: ""
  GoVersion: go1.21.11 (Red Hat 1.21.11-1.el9_4)
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4-rhel

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

Fully updated Rocky Linux 9.4 on an ESXi-hosted VM.

Additional information

The issue manifests with Kestra (issue 4845).

@vlk-charles added the kind/bug (Categorizes issue or PR as related to a bug.) label Sep 11, 2024
baude (Member) commented Sep 13, 2024

@inknos could you take a look at this?

@inknos inknos self-assigned this Sep 19, 2024
@inknos inknos added the jira label Sep 19, 2024
@inknos inknos linked a pull request Oct 7, 2024 that will close this issue
inknos added a commit to inknos/podman that referenced this issue Oct 9, 2024
Podman handles /images/create better when fromImage and Tag are
specified. Now the tag/digest value provided in Tag will replace the one
in fromImage

Fixes: containers#23938

Signed-off-by: Nicola Sella <[email protected]>

A friendly reminder that this issue had no activity for 30 days.
