feat(ppc64le) codeserver workbench image build on power #2317
Conversation
Signed-off-by: Md. Shafi Hussain <[email protected]>

# Conflicts:
#	codeserver/ubi9-python-3.12/Dockerfile.cpu
Signed-off-by: Md. Shafi Hussain <[email protected]>
Walkthrough

Adds ppc64le to Tekton PipelineRun build platforms and extends arm64 inclusion for the matching component in the PR generator. Overhauls the UBI9 Python 3.12 CPU Dockerfile with multi-stage builds, RPM-based code-server installation, caching, and ppc64le-specific handling. Introduces new scripts to set up the ppc64le dev env and build/fetch code-server RPMs.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

✨ Finishing Touches
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@jiridanek: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)
93-99: oc download uses uname -m; mirror expects arch aliases

Known issue: mirror path differs (arm64 vs aarch64). Use TARGETARCH mapping.

-# Install the oc client begin
-RUN curl -L https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz \
+# Install the oc client begin
+ARG TARGETARCH
+RUN set -eux; \
+    case "$TARGETARCH" in \
+      amd64) OC_ARCH=x86_64 ;; \
+      arm64) OC_ARCH=arm64 ;; \
+      ppc64le) OC_ARCH=ppc64le ;; \
+      s390x) OC_ARCH=s390x ;; \
+      *) echo "Unsupported TARGETARCH=$TARGETARCH" >&2; exit 1 ;; \
+    esac; \
+    curl -L "https://mirror.openshift.com/pub/openshift-v4/${OC_ARCH}/clients/ocp/stable/openshift-client-linux.tar.gz" \
         -o /tmp/openshift-client-linux.tar.gz && \
     tar -xzvf /tmp/openshift-client-linux.tar.gz oc && \
     rm -f /tmp/openshift-client-linux.tar.gz
♻️ Duplicate comments (1)
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (1)
65-66: Patches handling is brittle but tracked elsewhere

The unconditional application of patches assumes patches/series exists. Not blocking here; issue is already being tracked per previous discussion.
If you want, I can send a small guard to no-op when patches/series is absent; a minimal sketch follows.
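A possible shape for such a guard, assuming a quilt-style patches/series file and plain patch -p1 application (the actual apply command in the script may differ):

if [[ -f patches/series ]]; then
    while IFS= read -r p; do
        # skip blank lines and comments in the series file
        [[ -z "$p" || "$p" == \#* ]] && continue
        patch -p1 < "patches/${p}"
    done < patches/series
else
    echo "patches/series not found; skipping patch application"
fi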
🧹 Nitpick comments (6)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)
35-38: Avoid expensive global filesystem scans; fix ShellCheck SC2155
- PKG_CONFIG_PATH via find / is slow and nondeterministic.
- Exporting command substitution directly triggers SC2155.
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/OpenBLAS/lib/
-export PKG_CONFIG_PATH=$(find / -type d -name "pkgconfig" 2>/dev/null | tr '\n' ':')
+export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/opt/OpenBLAS/lib"
+PKG_PATHS="/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig:/opt/OpenBLAS/lib/pkgconfig"
+export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}${PKG_PATHS}"

scripts/generate_pull_request_pipelineruns.py (1)
144-151: Platform matrix growth: consider centralizing per-component arch policy

The component lists are diverging. A single dict mapping component → [arches] avoids duplication and mismatches with push pipelines.
-        if component in [
-            "odh-workbench-codeserver-datascience-cpu-py312-ubi9",
-            "odh-workbench-jupyter-minimal-cpu-py312-ubi9",
-            "odh-pipeline-runtime-minimal-cpu-py312-ubi9",
-            "odh-pipeline-runtime-datascience-cpu-py312-ubi9",
-        ]:
-            build_platforms.extend(["linux/ppc64le"])
+        component_arch_overrides = {
+            "odh-workbench-codeserver-datascience-cpu-py312-ubi9": ["linux/ppc64le"],
+            "odh-workbench-jupyter-minimal-cpu-py312-ubi9": ["linux/ppc64le"],
+            "odh-pipeline-runtime-minimal-cpu-py312-ubi9": ["linux/ppc64le", "linux/s390x", "linux/arm64"],
+            "odh-pipeline-runtime-datascience-cpu-py312-ubi9": ["linux/ppc64le", "linux/s390x"],
+        }
+        build_platforms.extend(component_arch_overrides.get(component, []))

codeserver/ubi9-python-3.12/get_code_server_rpm.sh (2)
48-49: PKG_CONFIG_PATH via global find is heavy; prefer known prefixes

Minor, but speeds builds and reduces noise.
-export PKG_CONFIG_PATH=$(find / -type d -name "pkgconfig" 2>/dev/null | tr '\n' ':')
+PKG_PATHS="/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig"
+export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:+${PKG_CONFIG_PATH}:}${PKG_PATHS}"
51-53: ShellCheck SC2046: quote command substitution and arch

Robust against spaces and ensures correct rpm arch.
-NFPM_VERSION=$(curl -s "https://api.github.com/repos/goreleaser/nfpm/releases/latest" | jq -r '.tag_name') \
-    && dnf install -y https://github.com/goreleaser/nfpm/releases/download/${NFPM_VERSION}/nfpm-${NFPM_VERSION:1}-1.$(uname -m).rpm
+NFPM_VERSION="$(curl -s https://api.github.com/repos/goreleaser/nfpm/releases/latest | jq -r '.tag_name')" \
+    && dnf install -y "https://github.com/goreleaser/nfpm/releases/download/${NFPM_VERSION}/nfpm-${NFPM_VERSION:1}-1.$(uname -m).rpm"

codeserver/ubi9-python-3.12/Dockerfile.cpu (2)
77-85: OpenBLAS install on ppc64le stage: align with builder PREFIX and avoid misspelling the mount path
- Mount target is /OpenBlas (capitalization differs from /OpenBLAS in setup script). Ensure consistency.
- If the builder installed to /opt/OpenBLAS, pass PREFIX=/opt/OpenBLAS here too.
-RUN --mount=type=cache,from=whl-cache,source=/root/OpenBLAS,target=/OpenBlas,rw \
+RUN --mount=type=cache,from=whl-cache,source=/root/OpenBLAS,target=/OpenBLAS,rw \
     bash -c ' \
     if [[ $(uname -m) == "ppc64le" ]]; then \
         dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm; \
         dnf install -y lcms2 libraqm libimagequant openjpeg2; \
-        PREFIX=/usr/ make install -C /OpenBlas; \
+        PREFIX=/opt/OpenBLAS make -C /OpenBLAS install; \
     fi '

If you intend to install into /usr, also reflect that in devel_env_setup.sh.
215-229: uv offline + debugpy from source: OK; ensure cache reuse

Looks good. After installing as root, you chown /opt/app-root/lib; also consider chowning /root/.cache/uv, but only if it gets persisted across layers (see the sketch below).
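A rough shape for that ownership cleanup, as a sketch only; the UID/GID (1001:0) and paths are assumptions, not taken from the Dockerfile:

# Sketch: adjust ownership only for content that actually persists in the image.
chown -R 1001:0 /opt/app-root/lib
# The uv cache is only worth chowning if it survives this layer (i.e. it is not a cache mount).
if [ -d /root/.cache/uv ]; then
    chown -R 1001:0 /root/.cache/uv
fi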
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml (1 hunks)
codeserver/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1 hunks)
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (1 hunks)
scripts/generate_pull_request_pipelineruns.py (1 hunks)
🧰 Additional context used
🧠 Learnings (14)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.022Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.696Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.313Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.786Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks, continuing the established pattern of systematic infrastructure improvements through comprehensive issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T17:35:29.175Z
Learning: jiridanek requested GitHub issue creation for adding pytorch+llmcompressor images to Makefile build targets during PR #1519 review. Issue #1598 was successfully created with comprehensive problem description covering missing build targets for both jupyter workbench and runtime images, detailed solution with specific Makefile code examples following established patterns, thorough acceptance criteria covering individual targets, BASE_DIRS variable inclusion, and all-images target integration, implementation notes about handling '+' characters in paths, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.840Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.840Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.313Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
📚 Learning: 2025-09-05T12:10:50.840Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.840Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.840Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.840Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/devel_env_setup.sh
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-04T09:23:45.728Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#2271
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-pull-request.yaml:43-44
Timestamp: 2025-09-04T09:23:45.728Z
Learning: atheo89 corrected CodeRabbit's analysis error regarding the build-args-file parameter in the generate_pull_request_pipelineruns.py script during PR #2271 review. The script was updated with dynamic parameter handling that automatically extracts parameters (including build-args-file) from push pipelines and includes them in generated PR pipelines, contradicting CodeRabbit's assessment that the parameter was missing from the generator. This demonstrates the importance of understanding the full context of script modifications before flagging parameter generation issues.
Applied to files:
scripts/generate_pull_request_pipelineruns.py
📚 Learning: 2025-08-28T12:42:23.404Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: .tekton/odh-base-image-cuda-py312-ubi9-pull-request.yaml:36-39
Timestamp: 2025-08-28T12:42:23.404Z
Learning: Konflux build system in opendatahub-io/notebooks repository requires "linux/x86_64" platform identifier format for Tekton PipelineRun build-platforms parameter, not the OCI-standard "linux/amd64". This is consistently used across all .tekton/ pipeline files and enforced by scripts/generate_pull_request_pipelineruns.py type definitions. Docker/buildah contexts use "linux/amd64" while Konflux/Tekton contexts use "linux/x86_64".
Applied to files:
scripts/generate_pull_request_pipelineruns.py
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
📚 Learning: 2025-08-05T17:24:08.616Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Applied to files:
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
📚 Learning: 2025-09-05T12:25:09.696Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.696Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.022Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.022Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
📚 Learning: 2025-08-27T15:33:28.871Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/rocm-tensorflow/ubi9-python-3.12/pyproject.toml:75-77
Timestamp: 2025-08-27T15:33:28.871Z
Learning: jiridanek requested GitHub issue creation for multi-architecture environment exploration during PR #2145 review, specifically to investigate architecture-specific environments for ROCm TensorFlow wheel optimization across x86_64, aarch64, ppc64le, and s390x architectures, noting uncertainty about implementation benefits but wanting systematic exploration through issue tracking.
Applied to files:
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml
📚 Learning: 2025-09-05T12:34:48.347Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:65-66
Timestamp: 2025-09-05T12:34:48.347Z
Learning: jiridanek requested GitHub issue creation for patches mechanism improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. GitHub issue #2318 was created addressing fragile patches application that assumes patches/series always exists, proposing conditional patch handling with proper validation, error handling, and documentation, assigned to jiridanek, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
📚 Learning: 2025-09-05T12:25:58.313Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.313Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:29:07.786Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.786Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.313Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.313Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
🪛 Shellcheck (0.10.0)
codeserver/ubi9-python-3.12/devel_env_setup.sh
[warning] 37-37: Declare and assign separately to avoid masking return values.
(SC2155)
codeserver/ubi9-python-3.12/get_code_server_rpm.sh
[warning] 48-48: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 52-52: Quote this to prevent word splitting.
(SC2046)
🔇 Additional comments (4)
codeserver/ubi9-python-3.12/devel_env_setup.sh (1)
22-24: Threading knobs: make NUM_THREADS track available cores

Hard-coding NUM_THREADS=120 can degrade perf on small VMs. Tie it to MAX_JOBS.

-export MAX_JOBS=${MAX_JOBS:-$(nproc)}
+export MAX_JOBS=${MAX_JOBS:-$(nproc)}

(See follow-up diff below at the build site.)
Likely an incorrect or invalid review comment.
.tekton/odh-workbench-codeserver-datascience-cpu-py312-ubi9-pull-request.yaml (1)
37-40: Add ppc64le to build-platforms: LGTM

Matches Konflux platform naming ("linux/ppc64le"). No other YAML hazards spotted.
codeserver/ubi9-python-3.12/get_code_server_rpm.sh (1)
79-82: Download path depends on version formatting: ensure leading 'v' handling is consistent

You strip the leading 'v' in the filename; release tags use a leading 'v'. With the global default fixed, this should work, but please verify the URL pattern for all arches.
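One way to keep both forms of the version available regardless of how it was supplied; the variable names here are illustrative, not necessarily the ones the script uses:

# Sketch: normalize once, then use whichever form a URL or filename needs.
CODESERVER_VERSION="${CODESERVER_VERSION:-v4.98.0}"
VERSION_TAG="v${CODESERVER_VERSION#v}"      # always carries the leading 'v' (release tag form)
VERSION_PLAIN="${CODESERVER_VERSION#v}"     # never carries the leading 'v' (RPM filename form)
echo "tag=${VERSION_TAG} rpm=${VERSION_PLAIN}"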
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)
127-134: RPM filename suffix must match TARGETARCH (GOARCH)

This assumes code-server RPMs are named with GOARCH (amd64/arm64). Your get_code_server_rpm.sh produces ppc64le and downloads GOARCH-named RPMs for others. Verify amd64/arm64 names at runtime.
If needed, map TARGETARCH→rpm arch like you did for oc; a sketch follows.
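If that mapping turns out to be needed, it could mirror the oc handling above; this is a sketch with an assumed RPM path and naming scheme, not verified against the actual release assets:

# Sketch: derive the RPM arch suffix from BuildKit's TARGETARCH.
case "$TARGETARCH" in
    amd64)   RPM_ARCH=amd64 ;;     # upstream code-server RPMs use GOARCH names
    arm64)   RPM_ARCH=arm64 ;;
    ppc64le) RPM_ARCH=ppc64le ;;   # locally built RPM for Power
    *) echo "Unsupported TARGETARCH=$TARGETARCH" >&2; exit 1 ;;
esac
dnf install -y "/tmp/code-server-${CODESERVER_VERSION#v}-${RPM_ARCH}.rpm"   # hypothetical path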
dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel \
    libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel

# install rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

source /opt/rh/gcc-toolset-13/enable
source "$HOME/.cargo/env"
🛠️ Refactor suggestion
Ensure required toolchains are actually available and sourced safely
- gcc-toolset-13 is enabled, but rustup isn’t needed here unless something downstream compiles Rust; if truly required, keep it, otherwise drop to reduce host mutations.
- Guard the source lines with existence checks to avoid non-zero exits on hosts without the SCL path or cargo env.
Apply:
- source /opt/rh/gcc-toolset-13/enable
- source "$HOME/.cargo/env"
+ if [[ -f /opt/rh/gcc-toolset-13/enable ]]; then source /opt/rh/gcc-toolset-13/enable; fi
+ if [[ -f "$HOME/.cargo/env" ]]; then source "$HOME/.cargo/env"; fi
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
dnf install -y cmake gcc-toolset-13 fribidi-devel lcms2-devel \
    libimagequant-devel libraqm-devel openjpeg2-devel tcl-devel tk-devel

# install rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y

if [[ -f /opt/rh/gcc-toolset-13/enable ]]; then source /opt/rh/gcc-toolset-13/enable; fi
if [[ -f "$HOME/.cargo/env" ]]; then source "$HOME/.cargo/env"; fi
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 13 to 21, the
script always installs and sources rustup and blindly sources the gcc-toolset
and cargo env which can fail on hosts where those files/paths are absent; remove
the rustup installation if no downstream Rust compilation is required, otherwise
keep it but only install when needed, and protect both source lines with
existence checks (test the SCL enable script and the $HOME/.cargo/env file
before sourcing) so the script does not exit with an error on hosts that lack
those files.
# Install OpenBlas
# IMPORTANT: Ensure Openblas is installed in the final image
curl -L https://github.com/OpenMathLib/OpenBLAS/releases/download/v${OPENBLAS_VERSION}/OpenBLAS-${OPENBLAS_VERSION}.tar.gz | tar xz
# rename directory for mounting (without knowing version numbers) in multistage builds
mv OpenBLAS-${OPENBLAS_VERSION}/ OpenBLAS/
cd OpenBLAS/
make -j${MAX_JOBS} TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0
make install
cd ..
🛠️ Refactor suggestion
OpenBLAS install: prefer PREFIX + threads from MAX_JOBS
- Use PREFIX to control install root explicitly.
- Don’t force NUM_THREADS=120; use MAX_JOBS.
- make -j${MAX_JOBS} TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0
- make install
+ make -j"${MAX_JOBS}" TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS="${MAX_JOBS}" DYNAMIC_ARCH=1 INTERFACE64=0
+ make install PREFIX=/opt/OpenBLAS
And adjust the runtime path below to match /opt/OpenBLAS if changed elsewhere.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Install OpenBlas
# IMPORTANT: Ensure Openblas is installed in the final image
curl -L https://github.com/OpenMathLib/OpenBLAS/releases/download/v${OPENBLAS_VERSION}/OpenBLAS-${OPENBLAS_VERSION}.tar.gz | tar xz
# rename directory for mounting (without knowing version numbers) in multistage builds
mv OpenBLAS-${OPENBLAS_VERSION}/ OpenBLAS/
cd OpenBLAS/
make -j"${MAX_JOBS}" TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS="${MAX_JOBS}" DYNAMIC_ARCH=1 INTERFACE64=0
make install PREFIX=/opt/OpenBLAS
cd ..
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/devel_env_setup.sh around lines 25 to 33, the
OpenBLAS build/install forces NUM_THREADS=120 and does not set an explicit
install prefix; change the make/install invocation to pass PREFIX (e.g.
/opt/OpenBLAS) to control install root and replace NUM_THREADS=120 with
NUM_THREADS=${MAX_JOBS} (or remove NUM_THREADS and rely on -j${MAX_JOBS}) so
threading follows the build parallelism; update subsequent runtime path
references in the script to match the chosen PREFIX (/opt/OpenBLAS) to keep
build and runtime consistent.
export CODESERVER_VERSION=${CODESERVER_VERSION:-v4.98.0}

export NVM_DIR=/root/.nvm VENV=/opt/.venv
export PATH=${VENV}/bin:$PATH

export ELECTRON_SKIP_BINARY_DOWNLOAD=1 PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1
Defaults should apply for all arches
Else-branch uses CODESERVER_VERSION but it’s only defaulted inside the ppc64le branch.
- export NODE_VERSION=${NODE_VERSION:-20}
- export CODESERVER_VERSION=${CODESERVER_VERSION:-v4.98.0}
+ : "${NODE_VERSION:=20}"
+ : "${CODESERVER_VERSION:=v4.98.0}"
+ export NODE_VERSION CODESERVER_VERSION
Place this above the if-block so both branches see it.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
: "${NODE_VERSION:=20}"
: "${CODESERVER_VERSION:=v4.98.0}"
export NODE_VERSION CODESERVER_VERSION

export NVM_DIR=/root/.nvm VENV=/opt/.venv
export PATH=${VENV}/bin:$PATH

export ELECTRON_SKIP_BINARY_DOWNLOAD=1 PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/get_code_server_rpm.sh around lines 24 to 29, the
export of CODESERVER_VERSION is currently inside the ppc64le branch so the
else-branch doesn’t see the default; move the line "export
CODESERVER_VERSION=${CODESERVER_VERSION:-v4.98.0}" out of the if-block and place
it above the if-block so the default applies to all architectures, leaving the
remaining exports (NVM_DIR, VENV, PATH, ELECTRON_SKIP_BINARY_DOWNLOAD,
PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD) unchanged.
# install build dependencies
# dnf install -y \
#     git gcc-toolset-13 automake libtool rsync krb5-devel libX11-devel gettext jq patch
dnf install -y jq libtool gcc-toolset-13

. /opt/rh/gcc-toolset-13/enable
ppc64le build will fail: missing essential build deps (git, autoconf/automake, make, patch, etc.)
You commented out the correct set and replaced it with a minimal subset. libxkbfile’s autogen + code-server build need these.
- dnf install -y jq libtool gcc-toolset-13
+ dnf install -y \
+ git gcc-toolset-13 automake autoconf make libtool rsync \
+ krb5-devel libX11-devel gettext jq patch
Also consider “dnf groupinstall -y 'Development Tools'” if acceptable.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# install build dependencies
# dnf install -y \
#     git gcc-toolset-13 automake libtool rsync krb5-devel libX11-devel gettext jq patch
dnf install -y \
    git gcc-toolset-13 automake autoconf make libtool rsync \
    krb5-devel libX11-devel gettext jq patch

. /opt/rh/gcc-toolset-13/enable
🤖 Prompt for AI Agents
In codeserver/ubi9-python-3.12/get_code_server_rpm.sh around lines 31 to 37, the
reduced dnf install removed essential build dependencies (git,
autoconf/automake, make, patch, libX11-devel, krb5-devel, etc.) causing ppc64le
builds to fail; restore the original full dependency list (or use dnf
groupinstall -y 'Development Tools' plus the extra packages like libX11-devel,
krb5-devel, gettext, jq, patch) and keep enabling gcc-toolset-13 afterwards so
autogen and code-server build steps have all required tools.
it seems to be building just fine, so far, but @coderabbitai please create github issue about unifying the build tools install for builder image
🧩 Analysis chain
@jiridanek I'll create a GitHub issue to address the build tools installation standardization across builder images.
The issue has been created to address the systematic unification of build tools installation across builder images. This aligns with your preference for comprehensive infrastructure improvements and will help standardize the build process while maintaining the flexibility needed for different architectures and use cases.
🏁 Script executed:
#!/bin/bash
# Create GitHub issue for build tools unification
gh issue create \
--title "Unify build tools installation across builder images" \
--body "## Problem Description
Currently, build tools installation varies across different builder images in the repository, leading to inconsistencies and potential maintenance issues. This was highlighted during PR #2317 review where the code-server ppc64le build dependencies were reduced but still functioned correctly.
**Current State:**
- Different builder images use varying approaches for installing build dependencies
- Some use individual package installations (e.g., \`git gcc-toolset-13 automake libtool\`)
- Inconsistent dependency sets across architectures and use cases
- Potential for missing or redundant dependencies
**Specific Example:**
In \`codeserver/ubi9-python-3.12/get_code_server_rpm.sh\` lines 31-37, build dependencies were streamlined but this raises questions about standardization across all builder images.
## Proposed Solution
**Option 1: Standardized Development Tools Group**
- Use \`dnf groupinstall -y 'Development Tools'\` as base
- Add architecture/use-case specific packages as needed
- Create a common script or function for build tool installation
**Option 2: Centralized Build Dependencies Configuration**
- Define build dependency sets in a central configuration file
- Create helper scripts that consume this configuration
- Support architecture-specific overrides
**Option 3: Layered Approach**
- Base layer with essential build tools (make, gcc, git, autotools)
- Specialized layers for specific requirements (Node.js builds, Python builds, etc.)
- Clear documentation of what each layer provides
## Acceptance Criteria
- [ ] Audit all builder images and document current build tool installation patterns
- [ ] Define standardized build tool sets for common use cases
- [ ] Implement unified approach across affected builder images
- [ ] Ensure no regression in build functionality
- [ ] Update documentation to reflect standardized approach
- [ ] Consider impact on build times and image sizes
## Implementation Considerations
- **Backward Compatibility:** Ensure existing builds continue to work
- **Architecture Support:** Handle differences between x86_64, ppc64le, aarch64, s390x
- **Maintenance:** Reduce duplication while maintaining flexibility
- **Performance:** Balance between comprehensive toolsets and build efficiency
## Context
- **PR:** #2317 (feat(ppc64le) codeserver workbench image build on power)
- **Review Comment:** https://github.com/opendatahub-io/notebooks/pull/2317#discussion_r2324987474
- **File:** \`codeserver/ubi9-python-3.12/get_code_server_rpm.sh\`
This issue continues the established pattern of systematic infrastructure improvements through comprehensive issue tracking." \
--assignee jiridanek
Length of output: 2825
Perfect! I've successfully created GitHub issue #2322 to address the unification of build tools installation across builder images. The issue provides a comprehensive analysis of the current inconsistencies and proposes multiple solution approaches while maintaining your established pattern of systematic infrastructure improvements.
The issue covers:
- Problem analysis: Current inconsistencies in build dependency management
- Multiple solution options: From Development Tools groups to centralized configuration
- Clear acceptance criteria: Including audit, standardization, and regression prevention
- Implementation considerations: Backward compatibility and multi-architecture support
- Proper context: Links to PR feat(ppc64le) codeserver workbench image build on power #2317 and the specific review comment
This will help ensure a systematic approach to standardizing build tools across all builder images while maintaining the flexibility needed for different architectures and use cases.
🐰✨
(\ /)
( ^.^ )
o_(")(")
thoughtfully organizing build tools for systematic improvement
✏️ Learnings added
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2317
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:31-37
Timestamp: 2025-09-05T13:16:48.732Z
Learning: jiridanek requested GitHub issue creation for build tools installation unification across builder images during PR #2317 review. Issue #2322 was created addressing inconsistent build dependency management patterns across different builder images, proposing multiple solution approaches including Development Tools group installation, centralized configuration, and layered approaches, with comprehensive acceptance criteria covering auditing, standardization, regression prevention, and multi-architecture support (x86_64, ppc64le, aarch64, s390x), continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.022Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.963Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. The issue addresses missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, proposing defensive programming with clear error messages, supported architecture listing, and proper exit codes, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:18-19
Timestamp: 2025-09-05T12:35:44.963Z
Learning: jiridanek requested GitHub issue creation for architecture validation guard in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. Issue #2320 was successfully created addressing missing validation for unknown architectures in UNAME_TO_GOARCH mapping lookup where empty ARCH values could cause silent failures, with comprehensive problem description, detailed proposed solution with code example, specific acceptance criteria, implementation considerations, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:48-48
Timestamp: 2025-09-05T12:35:05.084Z
Learning: jiridanek requested GitHub issue creation for shell script quality improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review, specifically for PKG_CONFIG_PATH assignment issues including SC2155 shellcheck warning and expensive global find operation. Issue was created with comprehensive problem description, three solution options with pkg-config built-in functionality as recommended approach, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:48-48
Timestamp: 2025-09-05T12:35:05.084Z
Learning: jiridanek requested GitHub issue creation for shell script quality improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review, specifically for PKG_CONFIG_PATH assignment issues including SC2155 shellcheck warning and expensive global find operation. Issue #2319 was created with comprehensive problem description, three solution options with pkg-config built-in functionality as recommended approach, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks, continuing the established pattern of systematic infrastructure improvements through comprehensive issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.696Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.313Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1179
File: jupyter/utils/install_pandoc.sh:1-1
Timestamp: 2025-09-05T07:46:50.781Z
Learning: jiridanek requested GitHub issue creation during PR #1179 review to explore installing Pandoc from EPEL repository for ppc64le architecture as an alternative to building from source, noting that EPEL packages are acceptable unlike CentOS Stream in red-hat-data-services/notebooks. Issue #2281 was successfully created with comprehensive problem description covering build complexity concerns, multiple solution options, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1320
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:42-52
Timestamp: 2025-07-09T12:29:56.162Z
Learning: jiridanek requested GitHub issue creation for OpenShift client architecture mapping problem affecting 29 Dockerfiles during PR #1320 review. Issue was created with comprehensive analysis covering all affected files using $(uname -m) returning 'aarch64' but OpenShift mirror expecting 'arm64', systematic solution using BuildKit TARGETARCH mapping with proper amd64→x86_64 and arm64→arm64 conversion, detailed acceptance criteria, and implementation guidance, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:65-66
Timestamp: 2025-09-05T12:34:48.347Z
Learning: jiridanek requested GitHub issue creation for patches mechanism improvement in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2227 review. GitHub issue #2318 was created addressing fragile patches application that assumes patches/series always exists, proposing conditional patch handling with proper validation, error handling, and documentation, assigned to jiridanek, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/nginx/root/opt/app-root/etc/generate_container_user:4-9
Timestamp: 2025-07-03T16:05:35.448Z
Learning: jiridanek requested GitHub issue creation for shell script error handling improvements in codeserver/ubi9-python-3.12/nginx/root/opt/app-root/etc/generate_container_user during PR #1269 review. A comprehensive issue was created covering silent failures, unquoted variable expansions, missing template validation, and strict mode implementation with detailed problem descriptions, phased acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/run-code-server.sh:17-17
Timestamp: 2025-07-03T12:26:24.084Z
Learning: jiridanek requests GitHub issue creation for shell script quality improvements identified during PR #1269 review, specifically for unquoted command substitution in codeserver/ubi9-python-3.12/run-code-server.sh. Issue #1283 was created with comprehensive problem descriptions, acceptance criteria, implementation guidance, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/nginx/root/usr/share/container-scripts/nginx/common.sh:4-11
Timestamp: 2025-07-03T16:04:22.695Z
Learning: jiridanek requested GitHub issue creation for shell script quality improvements in codeserver/ubi9-python-3.12/nginx/root/usr/share/container-scripts/nginx/common.sh during PR #1269 review. Issue #1307 was created with comprehensive problem description covering variable scoping issues, POSIX compliance concerns, multiple solution options, acceptance criteria, implementation guidance with code examples, testing approaches, and proper context linking, continuing the established pattern of systematic code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1269
File: codeserver/ubi9-python-3.12/run-code-server.sh:1-2
Timestamp: 2025-07-03T16:08:47.251Z
Learning: jiridanek requested GitHub issue creation for shell strict mode improvement in codeserver/ubi9-python-3.12/run-code-server.sh during PR #1269 review. Issue #1310 was created with comprehensive problem description covering silent failures, production risks, implementation guidance with code examples, acceptance criteria, testing approach, and proper context linking, continuing the established pattern of systematic code quality improvements for the codeserver image entrypoint script.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.786Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
failed file perms checks and dockerfile linter, never mind, will fix later
Summary by CodeRabbit
New Features
Security
Chores