RHAIENG-287: fix(ci): hadolint warnings #2547
Conversation
Skipping CI for Draft Pull Request.
Walkthrough: adds a new multi-stage build stage alias.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches: ❌ 1 failed check (warning), ✅ 2 checks passed
Force-pushed 06e5d7f to d379559 (Compare)

@coderabbitai full review

✅ Actions performed: full review triggered.
Actionable comments posted: 3
🧹 Nitpick comments (1)
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1)
50-61: Build-only env in /etc/profile.d risks having no effect during RUN

OPENBLAS/ONNX/PYARROW variables and PATH stored in profile.d are not sourced by non-interactive RUN steps, which can cause subtle build-time issues and duplication across stages.

Recommend centralizing these via ARG/ENV at stage scope and passing inline exports only where needed. This reduces drift and aligns with prior repo guidance. Based on learnings.
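The recommended centralization can be sketched as a Dockerfile fragment; the stage name, base image, and version value below are placeholders for illustration, not taken from the repository's actual files:

```dockerfile
# Hypothetical sketch of the centralized ARG/ENV approach; stage name,
# base image, and version value are placeholders.
FROM base AS cpu-base

# Declared once at stage scope, these are visible to every subsequent RUN
# instruction (and, for ENV, to stages built FROM this one), with no
# reliance on /etc/profile.d being sourced.
ARG OPENBLAS_VERSION=0.3.30
ENV GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1

RUN echo "would build OpenBLAS ${OPENBLAS_VERSION} here"
```

Build-only values stay as ARG (not persisted into the final image), while values runtime code depends on become ENV.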
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)

- ci/hadolint-config.yaml (1 hunks)
- codeserver/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (9 hunks)
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (9 hunks)
🧰 Additional context used
🧠 Learnings (17)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-push.yaml:14-17
Timestamp: 2025-07-11T11:15:47.424Z
Learning: jiridanek requested GitHub issue creation for CEL filter problem in datascience workbench Tekton pipelines during PR #1379 review. Issue #1383 was successfully created with comprehensive problem description covering both Python 3.11 and 3.12 pipelines incorrectly watching jupyter/minimal directories instead of jupyter/datascience directories, detailed impact analysis of pipeline execution failures, complete solution with before/after code examples, thorough acceptance criteria for path updates and pipeline triggering verification, implementation notes about repository structure alignment, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:54:33.532Z
Learning: jiridanek requested GitHub issue creation for comment typo fix during PR #2145 review, specifically fixing "hhttps" to "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139 NVIDIA CUDA Dockerfile reference. Issue #2164 was created with comprehensive problem description, clear before/after solution, and acceptance criteria, continuing the established pattern of systematic code quality improvements through detailed issue tracking for even minor documentation fixes.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. Issue #2310 was successfully created addressing copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. The issue includes comprehensive problem description covering specific irrelevant paths, detailed solution with before/after YAML code examples, clear acceptance criteria for implementation and testing, repository-wide scope consideration for similar issues, and proper context linking to PR #2265 review comment, assigned to jiridanek.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/test/test_notebook.ipynb:112-126
Timestamp: 2025-07-08T13:21:09.150Z
Learning: jiridanek requested GitHub issue creation for notebook linting and formatting improvements during PR #1306 review, specifically to address inconsistent metadata across .ipynb files and implement systematic quality standards. This continues the established pattern of comprehensive issue creation for code quality improvements with detailed problem descriptions, multiple solution options, phased acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb:22-29
Timestamp: 2025-07-02T18:27:51.097Z
Learning: jiridanek consistently creates comprehensive follow-up GitHub issues from PR review comments in opendatahub-io/notebooks, turning specific code quality concerns into systematic improvements tracked with proper context, acceptance criteria, and cross-references. Issue #1266 demonstrates this pattern by expanding a specific error handling concern in load_expected_versions() into a repository-wide improvement initiative.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-16T15:46:55.366Z
Learning: jiridanek requested GitHub issue creation for TensorFlow test notebook improvements during PR #1975 review, consolidating multiple review nitpicks into a single comprehensive issue covering code organization, error handling, documentation consistency, testing methodology, and notebook standards alignment. Issue #2491 was successfully created with detailed problem descriptions, three solution options, comprehensive acceptance criteria, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:58:27.554Z
Learning: jiridanek requested GitHub issue creation for URL typo fix during PR #2145 review, identifying "hhttps" instead of "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139. Issue #2166 was created with specific line reference, before/after diff, and acceptance criteria for this minor but important documentation accuracy fix, demonstrating attention to detail in code review process.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-10T21:24:07.914Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:24:07.914Z
Learning: jiridanek requested GitHub issue creation for Docker chown optimization in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2356 review. Issue #2403 was created addressing performance impact of broad recursive chown on entire /opt/app-root directory (line 235), proposing three solution approaches: scoped chown targeting specific changed paths, root cause fix during file creation, and test modification for permission validation, with detailed benefits analysis covering layer size reduction and build time optimization, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-16T10:39:23.295Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2460
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:206-221
Timestamp: 2025-09-16T10:39:23.295Z
Learning: jiridanek requested GitHub issue creation for OpenBLAS installation staging during ppc64le builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2460 review. Issue #2466 was created addressing permission errors where OpenBLAS make install fails when attempting to write to /usr/local system paths from USER 1001 context in final stage, proposing DESTDIR staging pattern to build and install OpenBLAS artifacts within openblas-builder stage then COPY pre-installed files to final stage, with comprehensive problem description covering specific permission denied errors, detailed technical solution with code examples, clear acceptance criteria for build reliability and multi-architecture compatibility, and proper context linking to PR #2460 review comment, continuing the systematic infrastructure improvement tracking methodology for Power architecture support.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:29:07.819Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-09T08:07:30.628Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1333
File: runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda:17-25
Timestamp: 2025-07-09T08:07:30.628Z
Learning: jiridanek requested GitHub issue creation for oc client installation permission problem in PyTorch CUDA runtime Dockerfile during PR #1333 review. Issue #1356 was created with comprehensive problem description covering USER 1001 permission conflicts with root-owned /opt/app-root/bin directory, detailed impact analysis of build failures and non-executable binaries, current problematic code snippet, complete solution with user switching approach, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T10:41:13.061Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.
Applied to files:
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.040Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:24:58.328Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:66-79
Timestamp: 2025-09-12T08:24:58.328Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain PATH availability problem during PR #2432 review. Issue addresses environment variables written to /etc/profile.d/cargo.sh not being available during Docker RUN commands on s390x builds, potentially causing uv pip install failures when compiling Rust-based Python packages. The issue includes comprehensive problem description covering lines 66-79 and 232-249, detailed impact analysis of build reliability concerns, three solution options (ENV declarations, inline variables, hybrid approach), clear acceptance criteria for cargo availability verification, and specific implementation guidance with build testing steps, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
🔇 Additional comments (20)
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (2)
32-34: Use of --no-cache-dir with pip is correct

Switching to --no-cache-dir is the right flag (hadolint-friendly) and avoids caching wheels in the layer.
40-42: Non-root switch improves security and satisfies DL3002

Switching back to USER 1001 at the end of the stage is good; no subsequent commands depend on root here.

Confirm that later stages consuming /root/.cache/uv run as root (they do here), so cache permissions remain compatible.
codeserver/ubi9-python-3.12/Dockerfile.cpu (5)
33-35: Non-root user switch after rpm fetch stage

Good DL3002 hygiene; nothing else runs as root in this stage afterward.

56-58: pip --no-cache-dir: correct flag and aligns with hadolint

This avoids persisting the pip cache in the layer.

67-69: Drop privileges in whl-cache stage

Appropriate security posture; the stage's work was already completed as root.
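The DL3002 pattern these comments approve of amounts to ending each root-requiring stage with a USER switch. A generic sketch with a hypothetical stage name and package, not the repository's actual content:

```dockerfile
# Hypothetical stage; base image, stage name, and package are placeholders.
FROM registry.access.redhat.com/ubi9/ubi AS rpm-fetch
USER 0
RUN dnf install -y --setopt=install_weak_deps=0 tar && dnf clean all
# Hadolint DL3002 ("last USER should not be root"): drop privileges once
# the root-only work in this stage is finished.
USER 1001
```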
288-291: Switch to /bin/bash heredoc and explicit 2>&1 redirection is fine

- /bin/bash ensures pipefail support.
- With set -o pipefail, the pipeline will fail if python fails, even with tee.
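The pipefail behavior this comment relies on can be checked in isolation; a minimal sketch independent of the Dockerfile, using `false | tee` as a stand-in for a failing python step piped into a log:

```shell
#!/bin/bash
# Without pipefail, a pipeline's exit status is that of its LAST command,
# so `python ... | tee log` would mask a python failure (tee exits 0).
false | tee /dev/null
without_pipefail=$?

# With pipefail, any failing stage makes the whole pipeline fail.
set -o pipefail
false | tee /dev/null
with_pipefail=$?

echo "without=$without_pipefail with=$with_pipefail"
```

This is why the heredoc must run under /bin/bash: `pipefail` is a bash option, not guaranteed in a plain `sh`.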
293-295: Tests artifact copy into final image

Minimal and clear; preserves the test log for inspection.
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (6)
23-24: Use printf instead of echo -e for dummy script

Robust and portable; avoids echo -e ambiguity.
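The portability point can be illustrated with a small sketch (the file path is arbitrary): `echo -e` behavior is unspecified in POSIX, while printf's escape handling is defined by the standard.

```shell
# printf reliably interprets \n across shells; `echo -e` may print "-e"
# literally or skip escape processing depending on the shell.
printf '#!/bin/bash\nexit 0\n' > /tmp/dummy-demo.sh
first_line=$(head -n1 /tmp/dummy-demo.sh)
line_count=$(wc -l < /tmp/dummy-demo.sh)
echo "first_line=$first_line line_count=$line_count"
rm -f /tmp/dummy-demo.sh
```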
206-209: ONNX builder: pinning CMAKE_ARGS and using --no-cache-dir

Good reproducibility and cache avoidance for the requirements build; the CMAKE_ARGS export is explicit.

226-226: wget with progress indicator

Harmless improvement; does not affect build semantics.

313-321: Power-specific ONNX wheel install with --no-cache-dir

Correctly scoped to ppc64le and avoids cache bloat.

323-331: Cleanup of ONNX wheels directory under root

Reasonable cleanup split; keeps the image tidy.

332-341: OpenBLAS install gated by arch and cleaned up

Installer runs as root with the proper PREFIX; deletes sources after install.
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (7)
84-91: Python alternatives for s390x

Looks correct; keeps the explicit 3.12 mapping.

190-191: Drop privileges after s390x wheel build setup

Good DL3002 posture; no further root ops in this stage.

209-209: OpenBLAS fetch with wget progress

No behavior changes; acceptable.

237-240: ONNX builder: requirements no-cache-dir and CMAKE_ARGS export

Clear and correct for a deterministic build.

265-269: arrow-builder: explicit exports for ARROW_HOME/LD_LIBRARY_PATH

Improves clarity over inline expansions; matches best practice.

307-308: DL3002 non-root switch

Consistent with other stages; good.

337-338: Runtime ppc64le: explicit HOME and --no-cache-dir

Good call to ensure ~/.cache isn't used when running as root.
```diff
 RUN <<EOF
 if [ "$TARGETARCH" = "s390x" ]; then
     # Install Rust and set up environment
-    mkdir -p /opt/.cargo && \
-    export HOME=/root && \
-    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh && \
-    chmod +x rustup-init.sh && \
-    CARGO_HOME=/opt/.cargo HOME=/root ./rustup-init.sh -y --no-modify-path && \
-    rm -f rustup-init.sh && \
-    chown -R 1001:0 /opt/.cargo && \
+    mkdir -p /opt/.cargo
+    export HOME=/root
+    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh
+    chmod +x rustup-init.sh
+    CARGO_HOME=/opt/.cargo HOME=/root ./rustup-init.sh -y --no-modify-path
+    rm -f rustup-init.sh
+    chown -R 1001:0 /opt/.cargo
     # Set environment variables
-    echo 'export PATH=/opt/.cargo/bin:$PATH' >> /etc/profile.d/cargo.sh && \
-    echo 'export CARGO_HOME=/opt/.cargo' >> /etc/profile.d/cargo.sh && \
-    echo 'export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1' >> /etc/profile.d/cargo.sh; \
+    cat > /etc/profile.d/cargo.sh <<'CARGO_EOF'
+export PATH=/opt/.cargo/bin:$PATH
+export CARGO_HOME=/opt/.cargo
+export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
+CARGO_EOF
 fi
 EOF
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Rust toolchain PATH may not be available during RUN steps
Writing PATH/CARGO_HOME to /etc/profile.d/cargo.sh won’t affect non-interactive RUN layers. Later uv pip installs that need cargo may fail on s390x if PATH isn’t set inline or via ENV.
Proposed fix (concise):

- Add an ENV in this stage (and any dependent build stages that compile Rust-based packages):

```diff
+ENV PATH="/opt/.cargo/bin:${PATH}" CARGO_HOME="/opt/.cargo"
```

- Alternatively, prefix commands that need cargo:

```diff
-UV_LINK_MODE=copy uv pip install ...
+PATH=/opt/.cargo/bin:$PATH CARGO_HOME=/opt/.cargo UV_LINK_MODE=copy uv pip install ...
```

Based on learnings
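The profile.d limitation behind this comment can be demonstrated outside Docker; a minimal sketch, where the file path and variable name are illustrative only:

```shell
# A profile.d-style script only takes effect when it is sourced; the plain
# non-login shell that `RUN` uses (`/bin/sh -c`) never reads it.
cat > /tmp/cargo-demo.sh <<'EOF'
export DEMO_CARGO_HOME=/opt/.cargo
EOF

# Non-login shell: the variable is unset, just like in a RUN layer.
sh -c 'echo "${DEMO_CARGO_HOME:-unset}"'

# Sourcing explicitly (or declaring ENV in the Dockerfile) makes it visible.
sh -c '. /tmp/cargo-demo.sh; echo "$DEMO_CARGO_HOME"'
```

The `ENV` fix works because Docker injects `ENV` values into every subsequent layer's environment directly, with no shell sourcing involved.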
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```dockerfile
ENV PATH="/opt/.cargo/bin:${PATH}" CARGO_HOME="/opt/.cargo"
RUN <<EOF
if [ "$TARGETARCH" = "s390x" ]; then
    # Install Rust and set up environment
    mkdir -p /opt/.cargo
    export HOME=/root
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh
    chmod +x rustup-init.sh
    CARGO_HOME=/opt/.cargo HOME=/root ./rustup-init.sh -y --no-modify-path
    rm -f rustup-init.sh
    chown -R 1001:0 /opt/.cargo
    # Set environment variables
    cat > /etc/profile.d/cargo.sh <<'CARGO_EOF'
export PATH=/opt/.cargo/bin:$PATH
export CARGO_HOME=/opt/.cargo
export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
CARGO_EOF
fi
EOF
```
🤖 Prompt for AI Agents
In jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu around lines 69-87, the
rust install writes PATH and CARGO_HOME to /etc/profile.d/cargo.sh which does
not affect non-interactive RUN layers, so subsequent pip installs that invoke
cargo may fail on s390x; fix by adding explicit ENV declarations (e.g., ENV
PATH=/opt/.cargo/bin:$PATH and ENV CARGO_HOME=/opt/.cargo) in this Dockerfile
stage (and any downstream build stages that compile Rust-based packages), or
alternatively ensure any RUN commands that require cargo are prefixed with the
appropriate CARGO_HOME and PATH environment variables inline.
this is no different from what there was in the Dockerfile before this PR
@coderabbitai this is no different from what there was in the Dockerfile before this PR
@jiridanek You're absolutely right - this PATH availability issue existed before this PR. Looking at the retrieved learnings, I can see that this specific Rust toolchain PATH problem was already identified and tracked in issue #2435 during previous reviews.
Since this PR is focused on fixing hadolint warnings rather than functional changes, and the PATH issue is already being tracked separately, I'll withdraw my suggestion. The heredoc conversion you've implemented here maintains the same functionality while addressing the linting warnings, which is exactly the scope of this PR.
Thanks for the clarification!
🧠 Learnings used
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:40-42
Timestamp: 2025-07-04T17:08:02.399Z
Learning: In the opendatahub-io/notebooks repository, when using multi-architecture Dockerfiles with BuildKit, the implicit build argument TARGETARCH is automatically available in the global scope for FROM instructions without explicit declaration. However, if TARGETARCH is used within a build stage, it must be declared explicitly within that stage. The current placement pattern (declaring ARG TARGETARCH after FROM instructions that use it) is correct for modern Docker/Podman/Buildah environments and does not require compatibility with older Docker versions.
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:66-79
Timestamp: 2025-09-12T08:24:58.328Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain PATH availability problem during PR #2432 review. Issue addresses environment variables written to /etc/profile.d/cargo.sh not being available during Docker RUN commands on s390x builds, potentially causing uv pip install failures when compiling Rust-based Python packages. The issue includes comprehensive problem description covering lines 66-79 and 232-249, detailed impact analysis of build reliability concerns, three solution options (ENV declarations, inline variables, hybrid approach), clear acceptance criteria for cargo availability verification, and specific implementation guidance with build testing steps, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:24:07.914Z
Learning: jiridanek requested GitHub issue creation for unused VENV export removal in codeserver/ubi9-python-3.12/get_code_server_rpm.sh during PR #2356 review. Issue #2402 was created addressing PATH pollution from non-existent VENV directory being added to PATH without the virtual environment ever being created, proposing simple cleanup by removing unused variable exports, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2317
File: codeserver/ubi9-python-3.12/get_code_server_rpm.sh:31-37
Timestamp: 2025-09-05T13:16:48.754Z
Learning: jiridanek requested GitHub issue creation for build tools installation unification across builder images during PR #2317 review. Issue #2322 was created addressing inconsistent build dependency management patterns across different builder images, proposing multiple solution approaches including Development Tools group installation, centralized configuration, and layered approaches, with comprehensive acceptance criteria covering auditing, standardization, regression prevention, and multi-architecture support (x86_64, ppc64le, aarch64, s390x), continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2460
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:206-221
Timestamp: 2025-09-16T10:39:23.295Z
Learning: jiridanek requested GitHub issue creation for OpenBLAS installation staging during ppc64le builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2460 review. Issue #2466 was created addressing permission errors where OpenBLAS make install fails when attempting to write to /usr/local system paths from USER 1001 context in final stage, proposing DESTDIR staging pattern to build and install OpenBLAS artifacts within openblas-builder stage then COPY pre-installed files to final stage, with comprehensive problem description covering specific permission denied errors, detailed technical solution with code examples, clear acceptance criteria for build reliability and multi-architecture compatibility, and proper context linking to PR #2460 review comment, continuing the systematic infrastructure improvement tracking methodology for Power architecture support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:54:33.532Z
Learning: jiridanek requested GitHub issue creation for comment typo fix during PR #2145 review, specifically fixing "hhttps" to "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139 NVIDIA CUDA Dockerfile reference. Issue #2164 was created with comprehensive problem description, clear before/after solution, and acceptance criteria, continuing the established pattern of systematic code quality improvements through detailed issue tracking for even minor documentation fixes.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-push.yaml:14-17
Timestamp: 2025-07-11T11:15:47.424Z
Learning: jiridanek requested GitHub issue creation for CEL filter problem in datascience workbench Tekton pipelines during PR #1379 review. Issue #1383 was successfully created with comprehensive problem description covering both Python 3.11 and 3.12 pipelines incorrectly watching jupyter/minimal directories instead of jupyter/datascience directories, detailed impact analysis of pipeline execution failures, complete solution with before/after code examples, thorough acceptance criteria for path updates and pipeline triggering verification, implementation notes about repository structure alignment, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb:22-29
Timestamp: 2025-07-02T18:27:51.097Z
Learning: jiridanek consistently creates comprehensive follow-up GitHub issues from PR review comments in opendatahub-io/notebooks, turning specific code quality concerns into systematic improvements tracked with proper context, acceptance criteria, and cross-references. Issue #1266 demonstrates this pattern by expanding a specific error handling concern in load_expected_versions() into a repository-wide improvement initiative.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. Issue #2310 was successfully created addressing copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. The issue includes comprehensive problem description covering specific irrelevant paths, detailed solution with before/after YAML code examples, clear acceptance criteria for implementation and testing, repository-wide scope consideration for similar issues, and proper context linking to PR #2265 review comment, assigned to jiridanek.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/test/test_notebook.ipynb:112-126
Timestamp: 2025-07-08T13:21:09.150Z
Learning: jiridanek requested GitHub issue creation for notebook linting and formatting improvements during PR #1306 review, specifically to address inconsistent metadata across .ipynb files and implement systematic quality standards. This continues the established pattern of comprehensive issue creation for code quality improvements with detailed problem descriptions, multiple solution options, phased acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-16T15:46:55.366Z
Learning: jiridanek requested GitHub issue creation for TensorFlow test notebook improvements during PR #1975 review, consolidating multiple review nitpicks into a single comprehensive issue covering code organization, error handling, documentation consistency, testing methodology, and notebook standards alignment. Issue #2491 was successfully created with detailed problem descriptions, three solution options, comprehensive acceptance criteria, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2004
File: .tekton/odh-workbench-jupyter-minimal-cuda-py311-ubi9-push.yaml:35-36
Timestamp: 2025-08-19T15:49:45.132Z
Learning: jiridanek requested GitHub issue creation for two PipelineRef compatibility problems during PR #2004 review: missing additional-tags parameter in singlearch-push-pipeline and missing netrc workspace binding in PipelineRuns. Issues were successfully created with comprehensive problem descriptions covering technical details, impact analysis on multiple affected files, clear solutions with code examples, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
d379559 to 5fe8705 (Compare)
|
@coderabbitai full review |
✅ Actions performedFull review triggered. |
Actionable comments posted: 1
🧹 Nitpick comments (10)
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (1)

6-8: ubi-repos stage looks good; consider pinning the base image
Introducing a dedicated ubi-repos stage is clean. For reproducibility and to avoid hadolint DL3006, consider pinning registry.access.redhat.com/ubi9/ubi to a specific tag or digest.
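A sketch of the pinning these nitpicks suggest; the digest below is a placeholder, not a real UBI digest:

```dockerfile
# Pin by digest for reproducibility and to satisfy hadolint DL3006.
# The sha256 value is illustrative only; resolve the real one with e.g.
# `skopeo inspect docker://registry.access.redhat.com/ubi9/ubi:9.4`.
FROM registry.access.redhat.com/ubi9/ubi@sha256:0000000000000000000000000000000000000000000000000000000000000000 AS ubi-repos
```

Tag-only pinning (`ubi9/ubi:9.4`) is more readable but still drifts as the tag is re-pushed; digest pinning trades readability for determinism.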
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)

6-8: ubi-repos stage added: LGTM; pin the image
Pattern is consistent with the PR. Please pin the ubi image to a tag/digest to improve determinism and avoid DL3006.

jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (3)

6-8: ubi-repos stage: LGTM; consider pinning
Please pin registry.access.redhat.com/ubi9/ubi to a tag/digest for reproducibility and to satisfy hadolint DL3006.
195-212: Use /bin/bash for heredoc using bashisms
This block uses set -o pipefail; ensure the heredoc runs under bash explicitly.

```diff
-RUN <<'EOF'
+RUN /bin/bash <<'EOF'
 set -Eeuxo pipefail
 ...
 EOF
```
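Why the explicit bash matters can be shown in isolation; a small sketch, with plain `sh` standing in for the heredoc's default interpreter:

```shell
# Without pipefail, a pipeline's exit status is that of its last command,
# so an upstream failure (e.g. a dead curl feeding tar) is silently masked.
sh -c 'false | true; echo "default status: $?"'

# Under bash with pipefail, the failed left-hand command surfaces as status 1.
bash -c 'set -o pipefail; false | true; echo "pipefail status: $?"'
```

On strict POSIX shells, `set -o pipefail` may even be a syntax error rather than a no-op, which is the hadolint DL4006 motivation for forcing bash.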
223-224: Verify OpenBLAS source URL and consider tarball
The GitHub org is OpenMathLib; verify version availability. Tarballs (.tar.gz) can be faster and avoid the unzip dependency.
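A sketch of the tarball variant; the version is an assumption to check against the Dockerfile's pinned value:

```shell
# Release tarballs need only tar, which the UBI base already ships,
# whereas the .zip archive requires installing unzip.
OPENBLAS_VERSION=0.3.27   # assumed; use the version pinned in the Dockerfile
URL="https://github.com/OpenMathLib/OpenBLAS/archive/refs/tags/v${OPENBLAS_VERSION}.tar.gz"
echo "$URL"
# Fetch and unpack in one step with tools already in the base image:
# wget -q "$URL" -O - | tar -xz
```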
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (1)

8-10: ubi-repos stage introduced: LGTM; consider pinning
Pin registry.access.redhat.com/ubi9/ubi to a specific tag/digest for reproducibility and to silence DL3006.

runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1)

8-10: ubi-repos alias added: LGTM; pin image
Pin the UBI image to a tag/digest to avoid drift and potential hadolint warnings (DL3006).

jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1)

8-10: ubi-repos stage: LGTM; consider pinning
Recommend pinning registry.access.redhat.com/ubi9/ubi to a tag/digest for deterministic builds.

runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (1)

6-8: ubi-repos stage: LGTM; pin to tag/digest
Pinning improves reproducibility and often aligns with hadolint rules.

runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (1)

8-10: ubi-repos alias: LGTM; consider pinning
Please pin to a specific tag/digest to reduce build variance and satisfy DL3006.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
ci/hadolint-config.yaml (1 hunks)
codeserver/ubi9-python-3.12/Dockerfile.cpu (5 hunks)
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (10 hunks)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (3 hunks)
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (8 hunks)
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
🧰 Additional context used
🧠 Learnings (39)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. Issue #2310 was successfully created addressing copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. The issue includes comprehensive problem description covering specific irrelevant paths, detailed solution with before/after YAML code examples, clear acceptance criteria for implementation and testing, repository-wide scope consideration for similar issues, and proper context linking to PR #2265 review comment, assigned to jiridanek.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/test/test_notebook.ipynb:112-126
Timestamp: 2025-07-08T13:21:09.150Z
Learning: jiridanek requested GitHub issue creation for notebook linting and formatting improvements during PR #1306 review, specifically to address inconsistent metadata across .ipynb files and implement systematic quality standards. This continues the established pattern of comprehensive issue creation for code quality improvements with detailed problem descriptions, multiple solution options, phased acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:54:33.532Z
Learning: jiridanek requested GitHub issue creation for comment typo fix during PR #2145 review, specifically fixing "hhttps" to "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139 NVIDIA CUDA Dockerfile reference. Issue #2164 was created with comprehensive problem description, clear before/after solution, and acceptance criteria, continuing the established pattern of systematic code quality improvements through detailed issue tracking for even minor documentation fixes.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-push.yaml:14-17
Timestamp: 2025-07-11T11:15:47.424Z
Learning: jiridanek requested GitHub issue creation for CEL filter problem in datascience workbench Tekton pipelines during PR #1379 review. Issue #1383 was successfully created with comprehensive problem description covering both Python 3.11 and 3.12 pipelines incorrectly watching jupyter/minimal directories instead of jupyter/datascience directories, detailed impact analysis of pipeline execution failures, complete solution with before/after code examples, thorough acceptance criteria for path updates and pipeline triggering verification, implementation notes about repository structure alignment, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb:22-29
Timestamp: 2025-07-02T18:27:51.097Z
Learning: jiridanek consistently creates comprehensive follow-up GitHub issues from PR review comments in opendatahub-io/notebooks, turning specific code quality concerns into systematic improvements tracked with proper context, acceptance criteria, and cross-references. Issue #1266 demonstrates this pattern by expanding a specific error handling concern in load_expected_versions() into a repository-wide improvement initiative.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2367
File: .github/workflows/build-notebooks-pr-aipcc.yaml:68-79
Timestamp: 2025-09-09T18:48:54.058Z
Learning: jiridanek requested GitHub issue creation for pull_request_target workflow security hardening during PR #2367 review, specifically addressing the direct usage of github.event.pull_request.head.ref (untrusted branch names) in shell scripts within .github/workflows/build-notebooks-pr-aipcc.yaml. Issue was created with comprehensive security analysis covering low-medium risk assessment with existing permission check mitigation, detailed solution approach using environment variables and safe ref handling, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic security improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-16T15:46:55.366Z
Learning: jiridanek requested GitHub issue creation for TensorFlow test notebook improvements during PR #1975 review, consolidating multiple review nitpicks into a single comprehensive issue covering code organization, error handling, documentation consistency, testing methodology, and notebook standards alignment. Issue #2491 was successfully created with detailed problem descriptions, three solution options, comprehensive acceptance criteria, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
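The centralized ARG/ENV approach recommended in issue #2311 can be sketched roughly as follows. Stage names, version values, and the base image are illustrative placeholders, not the repository's actual ones.

```dockerfile
# Declare versions once, before the first stage, so every stage can
# re-declare and reuse them without writing to /etc/profile.d/.
ARG OPENBLAS_VERSION=0.3.27
ARG ONNX_VERSION=1.17.0

FROM registry.access.redhat.com/ubi9/python-312:latest AS cpu-base
# Re-declaring inside the stage brings the global ARGs into scope.
ARG OPENBLAS_VERSION
ARG ONNX_VERSION
# Build-only toggle lives in ENV for this stage; it is visible to
# non-login RUN shells and never leaks into a profile script.
ENV GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
RUN echo "would build OpenBLAS ${OPENBLAS_VERSION}, ONNX ${ONNX_VERSION}"
```

Keeping the versions in top-level `ARG`s gives a single source of truth across stages, while build-only flags stay scoped to the stage that needs them.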
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:24:58.328Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:66-79
Timestamp: 2025-09-12T08:24:58.328Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain PATH availability problem during PR #2432 review. Issue addresses environment variables written to /etc/profile.d/cargo.sh not being available during Docker RUN commands on s390x builds, potentially causing uv pip install failures when compiling Rust-based Python packages. The issue includes comprehensive problem description covering lines 66-79 and 232-249, detailed impact analysis of build reliability concerns, three solution options (ENV declarations, inline variables, hybrid approach), clear acceptance criteria for cargo availability verification, and specific implementation guidance with build testing steps, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
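Of the three approaches listed above, the ENV-declaration fix can be sketched roughly as follows. This is a minimal illustration, not the repository's actual Dockerfile: the base image, install prefix, and stage name are assumptions.

```dockerfile
FROM registry.access.redhat.com/ubi9/python-312:latest AS cpu-base

# Install Rust into a fixed, root-owned prefix rather than $HOME.
ENV CARGO_HOME=/opt/.cargo \
    RUSTUP_HOME=/opt/.rustup
RUN curl -fsSL https://sh.rustup.rs | sh -s -- -y --no-modify-path

# ENV (unlike /etc/profile.d/cargo.sh) applies to every subsequent RUN,
# including in stages built FROM this one, so uv can find cargo when
# compiling Rust-based wheels during s390x builds.
ENV PATH=/opt/.cargo/bin:$PATH
RUN cargo --version
```

The key point is that `/etc/profile.d/` scripts are only sourced by login shells, while `RUN` uses a non-login shell; `ENV` is what BuildKit actually propagates between instructions.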
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.040Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
ci/hadolint-config.yaml
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-16T10:39:23.295Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2460
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:206-221
Timestamp: 2025-09-16T10:39:23.295Z
Learning: jiridanek requested GitHub issue creation for OpenBLAS installation staging during ppc64le builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2460 review. Issue #2466 was created addressing permission errors where OpenBLAS make install fails when attempting to write to /usr/local system paths from USER 1001 context in final stage, proposing DESTDIR staging pattern to build and install OpenBLAS artifacts within openblas-builder stage then COPY pre-installed files to final stage, with comprehensive problem description covering specific permission denied errors, detailed technical solution with code examples, clear acceptance criteria for build reliability and multi-architecture compatibility, and proper context linking to PR #2460 review comment, continuing the systematic infrastructure improvement tracking methodology for Power architecture support.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
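The DESTDIR staging pattern from issue #2466 amounts to roughly the following. The source layout, make invocation, and base images are hypothetical; only the staging-then-COPY structure is taken from the description above.

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi AS openblas-builder
# Build and install as root in the builder stage, but into a staging
# root instead of the real /usr/local.
WORKDIR /openblas-src
RUN make -j"$(nproc)" && \
    make install PREFIX=/usr/local DESTDIR=/staged

FROM registry.access.redhat.com/ubi9/python-312:latest AS final
# Copy the pre-installed tree; the final stage never has to write to
# /usr/local from the unprivileged USER 1001 context.
COPY --from=openblas-builder /staged/usr/local/ /usr/local/
USER 1001
```

Because `make install` only ever touches `/staged/...`, the permission-denied failures described above cannot occur in the final stage.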
📚 Learning: 2025-09-05T12:29:07.819Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
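Condensed to its essentials, the three-stage pattern described above looks something like this. The stage contents are simplified sketches; only the `/tmp/control` sentinel, the cache mount, and the `--offline` install are taken from the description.

```dockerfile
FROM registry.access.redhat.com/ubi9/python-312:latest AS base

FROM base AS whl-cache
# Build phase: has internet access and build tools; warms the shared
# uv cache, then drops a sentinel file for stage ordering.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install -r requirements.txt && \
    touch /tmp/control

FROM base AS codeserver
# Copying the sentinel to /dev/null forces BuildKit to schedule this
# stage after whl-cache without adding anything to its filesystem.
COPY --from=whl-cache /tmp/control /dev/null
# Install phase: offline and fast, served entirely from the warmed cache.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install --offline -r requirements.txt
```

The `COPY ... /dev/null` trick is exactly the piece the semantic-improvement issue proposes to replace with descriptive file names and inline documentation.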
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-18T19:01:39.811Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda:192-195
Timestamp: 2025-07-18T19:01:39.811Z
Learning: In the opendatahub-io/notebooks repository, mixing CentOS packages with UBI base images is bad practice that removes supportability and creates "Frankenstein" images according to Red Hat guidance. However, using EPEL packages is acceptable, though it may require extra work with AIPCC for internal Red Hat builds. The official reference is at developers.redhat.com/articles/ubi-faq.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T10:41:13.061Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.
Applied to files:
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-10T21:24:07.914Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:24:07.914Z
Learning: jiridanek requested GitHub issue creation for Docker chown optimization in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2356 review. Issue #2403 was created addressing performance impact of broad recursive chown on entire /opt/app-root directory (line 235), proposing three solution approaches: scoped chown targeting specific changed paths, root cause fix during file creation, and test modification for permission validation, with detailed benefits analysis covering layer size reduction and build time optimization, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
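The scoped-chown option from issue #2403 would replace the broad recursive call with something narrower, along these lines. The specific subpaths are hypothetical stand-ins for whichever directories actually change ownership during the build.

```dockerfile
# Instead of: RUN chown -R 1001:0 /opt/app-root
# target only the directories whose ownership actually changed, which
# keeps the copied-up layer small and the build fast.
RUN chown -R 1001:0 /opt/app-root/src/.config /opt/app-root/src/.local && \
    chmod -R g+w /opt/app-root/src/.config /opt/app-root/src/.local
```

A recursive `chown` over a large tree duplicates every touched file into a new layer, which is why scoping it (or fixing ownership at file-creation time) shrinks the image.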
📚 Learning: 2025-08-07T12:39:01.997Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1720
File: jupyter/tensorflow/ubi9-python-3.12/requirements.txt:1531-1558
Timestamp: 2025-08-07T12:39:01.997Z
Learning: In opendatahub-io/notebooks, the ROCm TensorFlow Python 3.12 UBI9 image (runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm) was missing libxcrypt-compat, which is required for MySQL SASL2 plugin authentication with mysql-connector-python==9.3.0 on Python 3.12 UBI9. Issue #1722 was created to track this, following the established pattern for systematic dependency consistency and runtime compatibility across all Python 3.12 UBI9 images.
Applied to files:
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-28T12:43:09.835Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: base-images/rocm/6.2/ubi9-python-3.12/Dockerfile.rocm:32-34
Timestamp: 2025-08-28T12:43:09.835Z
Learning: Issue #1346 "Multi-architecture support for ROCm TensorFlow runtime image" already covers hardcoded RHEL point release and architecture mapping problems in ROCm Dockerfiles, specifically documenting the hardcoded "rhel/9.4/main/x86_64" pattern in amdgpu repository URLs that breaks multi-architecture builds. This issue should be referenced when encountering similar hardcoded architecture patterns in ROCm base images.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-12T09:51:55.421Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-12T09:51:55.421Z
Learning: jiridanek identified orphaned TensorFlow ROCm Python 3.12 entries in manifests/base/params-latest.env during PR #2103 review. The params file references odh-workbench-jupyter-tensorflow-rocm-py312-ubi9 and odh-pipeline-runtime-tensorflow-rocm-py312-ubi9 images with 2025a-v1.35 tags, but the corresponding source directories (jupyter/rocm/tensorflow/ubi9-python-3.12/ and runtimes/rocm-tensorflow/ubi9-python-3.12/) don't exist in the repository, creating a broken build situation where Tekton pipelines exist but have no source to build from.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-12T09:51:55.421Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-12T09:51:55.421Z
Learning: During PR #2103 review investigation, found that runtimes/rocm-tensorflow/ubi9-python-3.12/ directory DOES exist in the repository, correcting initial assessment that all TensorFlow ROCm Python 3.12 entries in params-latest.env were orphaned. The pipeline runtime component appears to have proper source directory support.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-28T12:43:09.835Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: base-images/rocm/6.2/ubi9-python-3.12/Dockerfile.rocm:32-34
Timestamp: 2025-08-28T12:43:09.835Z
Learning: jiridanek indicated that hardcoded RHEL point release and architecture mapping issues in ROCm Dockerfiles are already tracked in existing GitHub issues, specifically issue #1346 which covers multi-architecture support for ROCm TensorFlow runtime image and includes hardcoded architecture dependencies that break multi-arch support.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-27T15:12:21.684Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T15:12:21.684Z
Learning: jiridanek requested GitHub issue creation for ROCm TensorFlow wheel optimization during PR #2145 review, specifically to move tensorflow-rocm URL from dependencies to tool.uv.sources pattern and add required-environments for x86_64 architecture validation. This follows uv-native patterns for cleaner dependency management and better lock-time guarantees.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-20T12:39:54.827Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2000
File: runtimes/rocm-pytorch/ubi9-python-3.11/requirements.txt:807-809
Timestamp: 2025-08-20T12:39:54.827Z
Learning: Issue #2055 tracks the remaining runtimes/rocm-tensorflow/ubi9-python-3.12/requirements.txt file that stayed at jupyter-core==5.7.2 during PR #2000, which will be addressed separately due to specific challenges with that ROCm TensorFlow image.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-17T12:57:03.745Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2501
File: .dockerignore:1-1
Timestamp: 2025-09-17T12:57:03.745Z
Learning: jiridanek confirmed creation of GitHub issue for standardized .dockerignore template across all image directories during PR #2501 review, requesting comprehensive repository-wide security and performance improvements to prevent accidental secrets inclusion, optimize build contexts, and maintain consistent developer experience patterns across all notebook image directories.
Applied to files:
ci/hadolint-config.yaml
📚 Learning: 2025-08-27T14:49:24.112Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue #2148 was created addressing the mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. The issue includes comprehensive problem description covering affected files (runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda and pylock.toml), detailed solution with PyTorch index URL update from cu124 to cu126, lock regeneration steps using uv, clear acceptance criteria for wheel alignment verification, and proper context linking to PR #2145 review comment, assigned to jiridanek.
Applied to files:
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T14:49:24.112Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue addresses mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. Comprehensive issue created with detailed problem description covering affected files, current state analysis, step-by-step solution including PyTorch index URL update and lock regeneration, clear acceptance criteria for cu126 wheel verification, and proper context linking to PR #2145 review comment.
Applied to files:
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-01T17:35:29.175Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T17:35:29.175Z
Learning: jiridanek requested GitHub issue creation for adding pytorch+llmcompressor images to Makefile build targets during PR #1519 review. Issue #1598 was successfully created with comprehensive problem description covering missing build targets for both jupyter workbench and runtime images, detailed solution with specific Makefile code examples following established patterns, thorough acceptance criteria covering individual targets, BASE_DIRS variable inclusion, and all-images target integration, implementation notes about handling '+' characters in paths, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-15T07:35:57.126Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#2455
File: base-images/cuda/12.8/ubi9-python-3.12/Dockerfile.cuda:103-105
Timestamp: 2025-09-15T07:35:57.126Z
Learning: In the opendatahub-io/notebooks repository, cuda-toolkit-12-8 package is not available for download under ubi9 base images, which explains why c9s CUDA variants include cuda-toolkit-12-8 installation while ubi9 CUDA variants do not. This divergence is expected due to package availability constraints between different base image ecosystems.
Applied to files:
runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-07-01T10:41:56.419Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-01T10:41:56.419Z
Learning: In the opendatahub-io/notebooks repository, TensorFlow packages with `extras = ["and-cuda"]` can cause build conflicts on macOS due to platform-specific CUDA packages. When the Dockerfile installs CUDA system-wide, removing the extras and letting TensorFlow find CUDA at runtime resolves these conflicts.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Issue covers duplicate yum/dnf upgrade commands in cuda-base stages that execute after base stages already performed comprehensive upgrades, causing unnecessary CI latency and build inefficiency across multiple CUDA Dockerfiles. The solution requires investigating NVIDIA upstream documentation requirements before removing redundant upgrades, with systematic testing of all CUDA variants and performance measurement. Issue follows established pattern of comprehensive problem analysis, multiple solution options, detailed acceptance criteria, and proper context linking.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T19:00:57.755Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T19:00:57.755Z
Learning: jiridanek requested GitHub issue creation for tensorflow[and-cuda] extra removal investigation during PR #2145 review, addressing the concern that this extra forces CUDA-specific wheels causing cross-platform conflicts while Docker images already supply CUDA via base layer. Issue #2168 was created with comprehensive investigation framework covering dependency differences, runtime verification, performance testing, cross-platform compatibility, and clear acceptance criteria for deciding whether to remove the extra in favor of base tensorflow package.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-05T12:10:28.916Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. The issue addresses copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. Solution involves removing unrelated paths and keeping only pytorch+llmcompressor-specific paths, build-args/cuda.conf, jupyter/utils, and the pipeline YAML itself, with comprehensive acceptance criteria and proper context linking.
Applied to files:
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-19T11:45:12.501Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1998
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:159-161
Timestamp: 2025-08-19T11:45:12.501Z
Learning: jiridanek requested GitHub issue creation for duplicated micropipenv installation cleanup in pytorch+llmcompressor images during PR #1998 review. Analysis confirmed duplication exists in both pytorch+llmcompressor Dockerfiles with micropipenv installed twice: unpinned early install (lines 23/36) for Pipfile.lock deployment and pinned later install (lines 160/248) in requirements.txt block. Issue #1999 created with comprehensive problem analysis covering exact line numbers and affected files, three solution options (remove early install, consolidate installations, conditional logic), detailed acceptance criteria covering build testing and functionality verification, implementation notes for coordination with version pinning efforts, and proper context linking to PR #1998 review comment.
Applied to files:
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-06-16T11:32:09.203Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-06-16T11:32:09.203Z
Learning: In the opendatahub-io/notebooks repository, there is a known issue with missing `runtimes/rocm/pytorch/ubi9-python-3.11/kustomize/base/kustomization.yaml` file that causes rocm runtime tests to fail with "no such file or directory" error. This is tracked in JIRA RHOAIENG-22044 and was intended to be fixed in PR #1015.
Applied to files:
runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-07-23T16:18:42.922Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-23T16:18:42.922Z
Learning: The TensorFlow ROCm Python 3.12 compatibility issue in opendatahub-io/notebooks PR #1259 was caused by using tensorflow-rocm==2.14.0.600 in Pipfile.lock which lacks Python 3.12 wheels, while the Pipfile specifies tensorflow_rocm=~=2.18.1. The solution requires updating Pipfile sources to include https://repo.radeon.com/rocm/manylinux/rocm-rel-6.4/ repository which contains tensorflow_rocm-2.18.1-cp312-cp312-manylinux_2_28_x86_64.whl and regenerating Pipfile.lock using the piplock-refresh GitHub Action.
Applied to files:
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-27T15:45:10.946Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch/ubi9-python-3.12/pyproject.toml:6-11
Timestamp: 2025-08-27T15:45:10.946Z
Learning: jiridanek requested GitHub issue creation for missing CUDA runtime support in pytorch runtime during PR #2145 review. Issue #2161 was created addressing the missing CUDA 12.6 runtime libraries required by cu126 PyTorch wheels in runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda. The issue includes comprehensive problem description covering current CPU-only base image vs CUDA wheel requirements, three solution options (CUDA-enabled base image, install CUDA libraries, CPU wheels), clear acceptance criteria for GPU acceleration verification, reference to related issue #2148 for CUDA alignment context, and proper context linking to PR #2145 review comment, assigned to jiridanek.
Applied to files:
runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-08T19:09:48.746Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/kustomize/base/kustomization.yaml:8-12
Timestamp: 2025-07-08T19:09:48.746Z
Learning: jiridanek requested GitHub issue creation for misleading CUDA prefix in TrustyAI image tags during PR #1306 review, affecting both Python 3.11 and 3.12 versions. Issue #1338 was created with comprehensive problem description covering both affected images, repository pattern analysis comparing correct vs incorrect naming conventions, clear solution with code examples, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-15T07:36:41.522Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#2455
File: base-images/cuda/12.8/cuda-repos/cuda.repo-amd64:1-1
Timestamp: 2025-09-15T07:36:41.522Z
Learning: atheo89 requested GitHub issue creation for cuda-repos directory reorganization during PR #2455 review. Issue #2456 was created addressing the redundant cuda-repos directories under version-specific paths (12.6 and 12.8), proposing consolidation to base-images/cuda/cuda-repos with comprehensive analysis covering both CUDA versions, affected Dockerfiles requiring ARG CUDA_REPOS updates, clear solution with directory move and reference updates, detailed acceptance criteria, and proper context linking to PR #2455 review comment, assigned to atheo89.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (45)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
5fe8705 to
e81a794
Compare
|
@coderabbitai review |
✅ Actions performed

Review triggered.
|
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)
331-339: OpenBLAS install likely fails on ppc64le: make not present in final stage

The final jupyter-datascience stage does not install make for ppc64le, yet runs `make -C /openblas install`. This step will fail when the final stage builds on Power.
Two options:
- Preferred (DESTDIR staging): Perform “make install” in openblas-builder into a staging dir and COPY into final image. No make needed in final stage.
```diff
*** In openblas-builder (within the existing RUN block after build) ***
- make -j$(nproc) TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0
+ make -j$(nproc) TARGET=POWER9 BINARY=64 USE_OPENMP=1 USE_THREAD=1 NUM_THREADS=120 DYNAMIC_ARCH=1 INTERFACE64=0
+ make install DESTDIR=/tmp/openblas-install PREFIX=/usr/local
+ chmod -R 755 /tmp/openblas-install

*** In jupyter-datascience (replace the RUN install block) ***
-RUN /bin/bash <<'EOF'
-set -Eeuxo pipefail
-if [ "${TARGETARCH}" = "ppc64le" ]; then
-    PREFIX=/usr/local make -C /openblas install
-    rm -rf /openblas
-else
-    echo "Skipping ONNX/OpenBLAS install on non-Power"
-fi
-EOF
+COPY --from=openblas-builder /tmp/openblas-install/ /
+# Cleanup unpacked sources if still present
+RUN rm -rf /openblas
```
- Quick fix (temporary make): Install make for the single step, then remove it. Less clean and may bloat layers.
```diff
-    PREFIX=/usr/local make -C /openblas install
+    dnf install -y make && \
+    PREFIX=/usr/local make -C /openblas install && \
+    dnf remove -y make && dnf clean all && rm -rf /var/cache/dnf
```

Based on learnings
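The preferred option is an instance of the standard DESTDIR staging pattern for multi-stage builds. A minimal, illustrative sketch (stage names, base images, and paths here are hypothetical, not taken from the actual Dockerfile):

```dockerfile
# Builder stage: compile, then "install" into a staging root via DESTDIR,
# so the final image never needs make or a compiler.
FROM registry.access.redhat.com/ubi9/ubi AS builder
RUN dnf install -y make gcc && dnf clean all
COPY demo-src/ /src
RUN make -C /src && \
    make -C /src install DESTDIR=/tmp/staged PREFIX=/usr/local

# Final stage: copy only the staged artifacts into place.
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY --from=builder /tmp/staged/ /
```

Because `make install` writes into `$(DESTDIR)$(PREFIX)`, the staged tree mirrors the final filesystem layout, and a single `COPY --from` lands the files at their intended paths.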
🧹 Nitpick comments (1)
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)
22-34: Map TARGETARCH/uname to valid GOARCH values

Fallback to `uname -m` may yield x86_64, which is not a valid GOARCH. Add a small mapping for robustness.
Apply this diff:
```diff
-RUN arch="${TARGETARCH:-$(uname -m)}" && \
-    arch=$(echo "$arch" | cut -d- -f1) && \
+RUN arch="${TARGETARCH:-$(uname -m)}" && \
+    arch=$(echo "$arch" | cut -d- -f1) && \
+    case "$arch" in \
+        x86_64) arch=amd64 ;; \
+        aarch64) arch=arm64 ;; \
+    esac && \
```
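The mapping this diff adds can be exercised on its own outside Docker; a small shell sketch (the helper name `to_goarch` is illustrative, not part of the Dockerfile):

```shell
# Map a machine string (from TARGETARCH or `uname -m`) to a valid GOARCH value.
# Unknown or already-valid values (ppc64le, s390x) pass through unchanged.
to_goarch() {
    case "$1" in
        x86_64)  echo amd64 ;;
        aarch64) echo arm64 ;;
        *)       echo "$1" ;;
    esac
}

to_goarch x86_64    # -> amd64
to_goarch aarch64   # -> arm64
to_goarch ppc64le   # -> ppc64le (already a valid GOARCH)
```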
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- codeserver/ubi9-python-3.12/Dockerfile.cpu (6 hunks)
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (11 hunks)
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
🧰 Additional context used
🧠 Learnings (19)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/kustomize/base/service.yaml:5-15
Timestamp: 2025-07-02T18:59:15.788Z
Learning: jiridanek creates targeted GitHub issues for specific test quality improvements identified during PR reviews in opendatahub-io/notebooks. Issue #1268 demonstrates this by converting a review comment about insufficient tf2onnx conversion test validation into a comprehensive improvement plan with clear acceptance criteria, code examples, and ROCm-specific context.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:54:33.532Z
Learning: jiridanek requested GitHub issue creation for comment typo fix during PR #2145 review, specifically fixing "hhttps" to "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139 NVIDIA CUDA Dockerfile reference. Issue #2164 was created with comprehensive problem description, clear before/after solution, and acceptance criteria, continuing the established pattern of systematic code quality improvements through detailed issue tracking for even minor documentation fixes.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-push.yaml:14-17
Timestamp: 2025-07-11T11:15:47.424Z
Learning: jiridanek requested GitHub issue creation for CEL filter problem in datascience workbench Tekton pipelines during PR #1379 review. Issue #1383 was successfully created with comprehensive problem description covering both Python 3.11 and 3.12 pipelines incorrectly watching jupyter/minimal directories instead of jupyter/datascience directories, detailed impact analysis of pipeline execution failures, complete solution with before/after code examples, thorough acceptance criteria for path updates and pipeline triggering verification, implementation notes about repository structure alignment, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/test/test_notebook.ipynb:22-29
Timestamp: 2025-07-02T18:27:51.097Z
Learning: jiridanek consistently creates comprehensive follow-up GitHub issues from PR review comments in opendatahub-io/notebooks, turning specific code quality concerns into systematic improvements tracked with proper context, acceptance criteria, and cross-references. Issue #1266 demonstrates this pattern by expanding a specific error handling concern in load_expected_versions() into a repository-wide improvement initiative.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. Issue #2310 was successfully created addressing copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. The issue includes comprehensive problem description covering specific irrelevant paths, detailed solution with before/after YAML code examples, clear acceptance criteria for implementation and testing, repository-wide scope consideration for similar issues, and proper context linking to PR #2265 review comment, assigned to jiridanek.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1306
File: jupyter/trustyai/ubi9-python-3.12/test/test_notebook.ipynb:112-126
Timestamp: 2025-07-08T13:21:09.150Z
Learning: jiridanek requested GitHub issue creation for notebook linting and formatting improvements during PR #1306 review, specifically to address inconsistent metadata across .ipynb files and implement systematic quality standards. This continues the established pattern of comprehensive issue creation for code quality improvements with detailed problem descriptions, multiple solution options, phased acceptance criteria, and proper context linking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-16T15:46:55.366Z
Learning: jiridanek requested GitHub issue creation for TensorFlow test notebook improvements during PR #1975 review, consolidating multiple review nitpicks into a single comprehensive issue covering code organization, error handling, documentation consistency, testing methodology, and notebook standards alignment. Issue #2491 was successfully created with detailed problem descriptions, three solution options, comprehensive acceptance criteria, phased implementation guidance, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2004
File: .tekton/odh-workbench-jupyter-minimal-cuda-py311-ubi9-push.yaml:35-36
Timestamp: 2025-08-19T15:49:45.132Z
Learning: jiridanek requested GitHub issue creation for two PipelineRef compatibility problems during PR #2004 review: missing additional-tags parameter in singlearch-push-pipeline and missing netrc workspace binding in PipelineRuns. Issues were successfully created with comprehensive problem descriptions covering technical details, impact analysis on multiple affected files, clear solutions with code examples, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-16T10:39:23.295Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2460
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:206-221
Timestamp: 2025-09-16T10:39:23.295Z
Learning: jiridanek requested GitHub issue creation for OpenBLAS installation staging during ppc64le builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2460 review. Issue #2466 was created addressing permission errors where OpenBLAS make install fails when attempting to write to /usr/local system paths from USER 1001 context in final stage, proposing DESTDIR staging pattern to build and install OpenBLAS artifacts within openblas-builder stage then COPY pre-installed files to final stage, with comprehensive problem description covering specific permission denied errors, detailed technical solution with code examples, clear acceptance criteria for build reliability and multi-architecture compatibility, and proper context linking to PR #2460 review comment, continuing the systematic infrastructure improvement tracking methodology for Power architecture support.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T10:41:13.061Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:29:07.819Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:24:58.328Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:66-79
Timestamp: 2025-09-12T08:24:58.328Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain PATH availability problem during PR #2432 review. Issue addresses environment variables written to /etc/profile.d/cargo.sh not being available during Docker RUN commands on s390x builds, potentially causing uv pip install failures when compiling Rust-based Python packages. The issue includes comprehensive problem description covering lines 66-79 and 232-249, detailed impact analysis of build reliability concerns, three solution options (ENV declarations, inline variables, hybrid approach), clear acceptance criteria for cargo availability verification, and specific implementation guidance with build testing steps, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.040Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-18T19:01:39.811Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda:192-195
Timestamp: 2025-07-18T19:01:39.811Z
Learning: In the opendatahub-io/notebooks repository, mixing CentOS packages with UBI base images is bad practice that removes supportability and creates "Frankenstein" images according to Red Hat guidance. However, using EPEL packages is acceptable, though it may require extra work with AIPCC for internal Red Hat builds. The official reference is at developers.redhat.com/articles/ubi-faq.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T17:08:02.399Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:40-42
Timestamp: 2025-07-04T17:08:02.399Z
Learning: In the opendatahub-io/notebooks repository, when using multi-architecture Dockerfiles with BuildKit, the implicit build argument TARGETARCH is automatically available in the global scope for FROM instructions without explicit declaration. However, if TARGETARCH is used within a build stage, it must be declared explicitly within that stage. The current placement pattern (declaring ARG TARGETARCH after FROM instructions that use it) is correct for modern Docker/Podman/Buildah environments and does not require compatibility with older Docker versions.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T17:07:52.656Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda:29-38
Timestamp: 2025-07-04T17:07:52.656Z
Learning: In the opendatahub-io/notebooks repository, modern Docker with BuildKit automatically provides build arguments like TARGETARCH in the global scope for FROM instructions, but these arguments must be explicitly declared with ARG statements inside build stages where they will be used. The ARG declaration should be placed within the stage that uses it, not moved to the global scope, as this is the correct pattern for modern Docker/Podman/Buildah environments.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-10T21:24:07.914Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:24:07.914Z
Learning: jiridanek requested GitHub issue creation for Docker chown optimization in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2356 review. Issue #2403 was created addressing performance impact of broad recursive chown on entire /opt/app-root directory (line 235), proposing three solution approaches: scoped chown targeting specific changed paths, root cause fix during file creation, and test modification for permission validation, with detailed benefits analysis covering layer size reduction and build time optimization, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them. GitHub issue #2315 was created addressing the architectural complexity with comprehensive problem description, four solution options, clear acceptance criteria, and implementation considerations, assigned to jiridanek, continuing the established pattern of systematic technical improvement tracking.
Applied to files:
codeserver/ubi9-python-3.12/Dockerfile.cpu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (45)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (17)
codeserver/ubi9-python-3.12/Dockerfile.cpu (7)
6-8: Good adoption of `ubi-repos` alias
Using a dedicated stage alias to source `ubi.repo` improves clarity and consistency across Dockerfiles.
15-15: hadolint suppression is appropriate here
Ignoring DL3002 for `USER root` is justified for package installs.
24-24: COPY ubi.repo from ubi-repos is correct
This enables dnf on AIPCC "repo-less" bases without adding external repos.
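The pattern being praised can be sketched roughly as follows; the image names are illustrative placeholders, not the exact references used in the PR:

```dockerfile
# Named stage whose only job is to donate its repo definitions.
FROM registry.access.redhat.com/ubi9/ubi AS ubi-repos

FROM quay.io/example/repo-less-base AS rpm-base
# DL3022 is satisfied: `ubi-repos` is a previously defined FROM alias,
# and dnf works on the repo-less base without adding external repos.
COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo /etc/yum.repos.d/ubi.repo
```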
42-60: whl-cache improvements look good
- DL3002 suppression acknowledged.
- Switching to `pip --no-cache-dir` for uv reduces layer size.
82-82: Second ubi.repo injection is consistent
Mirrors the rpm-base approach; OK.
287-291: Bash heredoc with pipefail in tests is correct
RUN uses /bin/bash and sets `-Eeuxo pipefail`; tee preserves logs without masking failures.
292-294: Final stage capitalization consistency
No behavioral change; fine to standardize.
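The effect of `pipefail` when piping through `tee` can be demonstrated in plain shell (the commands here are stand-ins for the real build steps):

```shell
# Without pipefail, the pipeline's status is tee's (0), masking the failure.
status_plain="$(bash -c 'false | tee /dev/null; echo $?')"

# With pipefail, the rightmost non-zero status (false's 1) propagates.
status_pipefail="$(bash -c 'set -o pipefail; false | tee /dev/null; echo $?')"

echo "plain=${status_plain} pipefail=${status_pipefail}"
# → plain=0 pipefail=1
```

This is why hadolint's DL4006 asks for `SHELL ["/bin/bash", "-o", "pipefail", "-c"]` (or an equivalent `set -o pipefail`) whenever a RUN step pipes output.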
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (10)
6-8: Good: ubi-repos stage for repo injection
Consistent with the broader change.
50-50: COPY ubi.repo is correct
Enables package management on repo-less bases.
72-90: Rust toolchain PATH/CARGO_HOME not exported for RUN layers
Writing to /etc/profile.d/cargo.sh won't affect non-interactive RUN steps; cargo may be missing during later uv/pip builds on s390x.
Recommend adding persistent ENV in this stage (and any downstream stage that compiles Rust-based deps):

```diff
+ENV PATH="/opt/.cargo/bin:${PATH}" CARGO_HOME="/opt/.cargo"
```

Alternatively, prefix commands that need cargo:

```diff
-uv pip install ...
+PATH=/opt/.cargo/bin:$PATH CARGO_HOME=/opt/.cargo uv pip install ...
```

Based on learnings
92-99: Python alternatives for s390x
Looks reasonable and constrained to s390x only.
121-123: hadolint suppression is fine
Root needed in builder; suppression is acceptable.
179-189: Builder heredoc with bash and pipefail is solid
Pattern addresses DL4006 concerns.
205-208: ONNX build flags look good
Pinning the Python executable via CMAKE_ARGS and no-cache installs are fine.
225-233: OpenBLAS source fetch is OK
Direct GitHub fetch with progress is acceptable for a builder-only stage.
312-319: ONNX wheel install gated by arch is correct
Installs only on Power; fine.
322-330: Cleanup of onnx wheels
Good hygiene.
@coderabbitai check the purely refactoring-focused PR for inadvertently changed functionality
…arning

```
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:50 SC2016 info: Expressions don't expand in single quotes, use double quotes for that.
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:61 SC2016 info: Expressions don't expand in single quotes, use double quotes for that.
```
…Hadolint DL3047 warning

```
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:192 DL3047 info: Avoid use of wget without progress bar. Use `wget --progress=dot:giga <url>`. Or consider using `-q` or `-nv` (shorthands for `--quiet` or `--no-verbose`).
```
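The DL3047-clean invocation looks like the sketch below; the command is assembled as a string so the example runs without network access, and the URL is a placeholder:

```shell
# wget's default progress bar floods CI logs with carriage-return updates;
# dot:giga prints a compact dot-per-chunk trail instead.
url="https://example.com/archive.tar.gz"
cmd="wget --progress=dot:giga ${url}"
echo "$cmd"
# → wget --progress=dot:giga https://example.com/archive.tar.gz
```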
```
./codeserver/ubi9-python-3.12/Dockerfile.cpu:282 SC3029 warning: In POSIX sh, |& in place of 2>&1 | is undefined.
```
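The portable rewrite for SC3029 is quick to verify; `tr` here just stands in for whatever consumes the combined stream:

```shell
# `|&` is a bash/zsh shorthand; POSIX sh only defines the explicit
# `2>&1 |` form for piping stderr along with stdout.
combined="$(sh -c 'echo oops >&2' 2>&1 | tr '[:lower:]' '[:upper:]')"
echo "$combined"
# → OOPS
```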
…SC2155 warning

```
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:214 SC2155 warning: Declare and assign separately to avoid masking return values.
```
…d Hadolint DL3042 warning

```
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:214 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:239 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:308 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
./codeserver/ubi9-python-3.12/Dockerfile.cpu:53 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
./jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:32 DL3042 warning: Avoid use of cache directory with pip. Use `pip install --no-cache-dir <package>`
```
… DL3022 warning

```
`FROM` alias
./runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda:23 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu:19 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:21 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm:19 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda:21 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:19 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./codeserver/ubi9-python-3.12/Dockerfile.cpu:20 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
./codeserver/ubi9-python-3.12/Dockerfile.cpu:77 DL3022 warning: `COPY --from` should reference a previously defined `FROM` alias
```
…id Hadolint DL3002 warning

```
./jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:23 DL3002 warning: Last USER should not be root
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:104 DL3002 warning: Last USER should not be root
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:182 DL3002 warning: Last USER should not be root
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:206 DL3002 warning: Last USER should not be root
./runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:231 DL3002 warning: Last USER should not be root
./codeserver/ubi9-python-3.12/Dockerfile.cpu:12 DL3002 warning: Last USER should not be root
./codeserver/ubi9-python-3.12/Dockerfile.cpu:38 DL3002 warning: Last USER should not be root
```
…C3037,SC3041 warning

```
./jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:19 SC3037 warning: In POSIX sh, echo flags are undefined.
./jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:178 SC3041 warning: In POSIX sh, set flag -E is undefined.
./jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:312 SC3041 warning: In POSIX sh, set flag -E is undefined.
./jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:322 SC3041 warning: In POSIX sh, set flag -E is undefined.
./jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:331 SC3041 warning: In POSIX sh, set flag -E is undefined.
./codeserver/ubi9-python-3.12/Dockerfile.cpu:288 SC3041 warning: In POSIX sh, set flag -E is undefined.
```
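A minimal illustration of the SC3037 fix: `printf` gives the escape handling that `echo -e` only provides in some shells (the column names are made up for the example):

```shell
# POSIX sh leaves `echo -e` undefined; printf interprets \t portably.
line="$(printf 'col1\tcol2')"

# Verify a literal tab was emitted, not the two characters backslash-t.
case "$line" in
  *"$(printf '\t')"*) echo "tab present" ;;
  *) echo "no tab" ;;
esac
# → tab present
```

SC3041 is the same class of problem: `set -E` is a bash extension, so RUN steps relying on it must use an explicit bash shell rather than the default `/bin/sh`.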
…he Hadolint fixes
e81a794 to 7d5ad39
Actionable comments posted: 0
🧹 Nitpick comments (1)
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)
26-26: Consider consolidating `printf` usage with `echo -e`
Line 26 uses `printf` instead of `echo -e` to avoid shell-specific behavior, which aligns with the shell portability improvements mentioned in the PR objectives.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- codeserver/ubi9-python-3.12/Dockerfile.cpu (6 hunks)
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (11 hunks)
- jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
- jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
- jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
- jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (4 hunks)
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (12 hunks)
- runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
- runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
- runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
- runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (2 hunks)
- runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
- jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
- runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
- runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
- codeserver/ubi9-python-3.12/Dockerfile.cpu
- runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
- runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
- jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
- jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
🧰 Additional context used
🧠 Learnings (45)
📓 Common learnings
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-26T10:52:59.415Z
Learning: jiridanek corrected CodeRabbit's incorrect analysis claiming environment variable timing changes in PR #2547. After re-examination, the refactoring PR only contained ubi-repos alias introduction, pip flag corrections (--no-cache to --no-cache-dir), and minor shell improvements, with NO environment variable timing changes or profile.d modifications. This demonstrates the importance of examining actual code diffs rather than relying on AI summary interpretations.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1259
File: jupyter/rocm/tensorflow/ubi9-python-3.12/Pipfile:31-36
Timestamp: 2025-08-20T17:51:39.525Z
Learning: jiridanek consistently escalates point issues identified during PR reviews into systematic repository-wide improvements. When CodeRabbit flagged missing libxcrypt-compat in a single ROCm TensorFlow Python 3.12 image during PR #1259 review, jiridanek requested creation of issue #2075 for systematic review and fixing of all Python 3.12 UBI9 images, demonstrating his comprehensive approach to infrastructure consistency and code quality.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T18:54:33.532Z
Learning: jiridanek requested GitHub issue creation for comment typo fix during PR #2145 review, specifically fixing "hhttps" to "https" in rstudio/rhel9-python-3.11/Dockerfile.cuda line 139 NVIDIA CUDA Dockerfile reference. Issue #2164 was created with comprehensive problem description, clear before/after solution, and acceptance criteria, continuing the established pattern of systematic code quality improvements through detailed issue tracking for even minor documentation fixes.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T15:20:35.737Z
Learning: jiridanek requested GitHub issue creation for DNF consistency during PR #2145 review, identifying inconsistent yum vs dnf usage across UBI9 Dockerfiles. Issue #2157 was created with comprehensive repository-wide audit strategy, systematic replacement approach, benefits analysis, and clear acceptance criteria for standardizing package manager usage across all UBI9 images, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-05T17:24:08.616Z
Learning: jiridanek requested PR review for #1521 covering s390x architecture support improvements, demonstrating continued focus on systematic multi-architecture compatibility enhancements in the opendatahub-io/notebooks repository through clean implementation with centralized configuration, proper CI integration, and architecture-aware testing patterns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
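A minimal sketch of that three-stage pattern, with stage names taken from the learning and everything else (package, cache id) assumed for illustration:

```dockerfile
FROM registry.access.redhat.com/ubi9/python-312 AS whl-cache
# Build phase: internet access; wheels land in a shared BuildKit cache.
RUN --mount=type=cache,id=uv-cache,target=/root/.cache/uv \
    uv pip install numpy && touch /tmp/control

FROM registry.access.redhat.com/ubi9/python-312 AS base
# OS setup only; no wheel building in this stage.

FROM base AS codeserver
# Sentinel copy forces BuildKit to finish whl-cache before this stage.
COPY --from=whl-cache /tmp/control /dev/null
# Install phase: offline, served entirely from the shared cache mount.
RUN --mount=type=cache,id=uv-cache,target=/root/.cache/uv \
    uv pip install --offline numpy
```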
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
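The `${TARGETARCH}` substitution and OCI labeling described in issue #1332 look roughly like this (base image and label text are assumptions):

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi AS example
# Fragile under cross-building: $(uname -m) reports the BUILD HOST arch.
# RUN curl -LO "https://example.com/tool-$(uname -m).tar.gz"

# Cross-arch safe: BuildKit sets TARGETARCH to the TARGET platform
# (amd64, arm64, ppc64le, s390x); declare it inside the stage using it.
ARG TARGETARCH
RUN echo "building for ${TARGETARCH}"

# OCI-compliant metadata label.
LABEL org.opencontainers.image.description="example multi-arch image"
```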
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2185
File: jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda:109-117
Timestamp: 2025-08-29T08:48:55.985Z
Learning: jiridanek prefers to implement systematic cleanup improvements through dedicated GitHub issues (like #2076) rather than applying individual point fixes during PR reviews, maintaining consistency with their established pattern of comprehensive code quality improvements.
📚 Learning: 2025-08-07T12:39:01.997Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1720
File: jupyter/tensorflow/ubi9-python-3.12/requirements.txt:1531-1558
Timestamp: 2025-08-07T12:39:01.997Z
Learning: In opendatahub-io/notebooks, the ROCm TensorFlow Python 3.12 UBI9 image (runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm) was missing libxcrypt-compat, which is required for MySQL SASL2 plugin authentication with mysql-connector-python==9.3.0 on Python 3.12 UBI9. Issue #1722 was created to track this, following the established pattern for systematic dependency consistency and runtime compatibility across all Python 3.12 UBI9 images.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T09:51:55.421Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-12T09:51:55.421Z
Learning: jiridanek identified orphaned TensorFlow ROCm Python 3.12 entries in manifests/base/params-latest.env during PR #2103 review. The params file references odh-workbench-jupyter-tensorflow-rocm-py312-ubi9 and odh-pipeline-runtime-tensorflow-rocm-py312-ubi9 images with 2025a-v1.35 tags, but the corresponding source directories (jupyter/rocm/tensorflow/ubi9-python-3.12/ and runtimes/rocm-tensorflow/ubi9-python-3.12/) don't exist in the repository, creating a broken build situation where Tekton pipelines exist but have no source to build from.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-28T12:43:09.835Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: base-images/rocm/6.2/ubi9-python-3.12/Dockerfile.rocm:32-34
Timestamp: 2025-08-28T12:43:09.835Z
Learning: Issue #1346 "Multi-architecture support for ROCm TensorFlow runtime image" already covers hardcoded RHEL point release and architecture mapping problems in ROCm Dockerfiles, specifically documenting the hardcoded "rhel/9.4/main/x86_64" pattern in amdgpu repository URLs that breaks multi-architecture builds. This issue should be referenced when encountering similar hardcoded architecture patterns in ROCm base images.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-12T09:51:55.421Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-12T09:51:55.421Z
Learning: During PR #2103 review investigation, found that runtimes/rocm-tensorflow/ubi9-python-3.12/ directory DOES exist in the repository, correcting initial assessment that all TensorFlow ROCm Python 3.12 entries in params-latest.env were orphaned. The pipeline runtime component appears to have proper source directory support.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-08-28T12:43:09.835Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2180
File: base-images/rocm/6.2/ubi9-python-3.12/Dockerfile.rocm:32-34
Timestamp: 2025-08-28T12:43:09.835Z
Learning: jiridanek indicated that hardcoded RHEL point release and architecture mapping issues in ROCm Dockerfiles are already tracked in existing GitHub issues, specifically issue #1346 which covers multi-architecture support for ROCm TensorFlow runtime image and includes hardcoded architecture dependencies that break multi-arch support.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue addresses build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-05T12:25:09.719Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:122-123
Timestamp: 2025-09-05T12:25:09.719Z
Learning: jiridanek requested GitHub issue creation for Docker multi-stage synchronization improvement in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review. The issue addresses sentinel file pattern using /tmp/control copied to /dev/null for stage coordination between rpm-base, whl-cache, and codeserver stages, proposing semantic improvements with descriptive file names, inline documentation, and elimination of /dev/null hack while maintaining multi-architecture build functionality for ppc64le support.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T15:12:21.684Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T15:12:21.684Z
Learning: jiridanek requested GitHub issue creation for ROCm TensorFlow wheel optimization during PR #2145 review, specifically to move tensorflow-rocm URL from dependencies to tool.uv.sources pattern and add required-environments for x86_64 architecture validation. This follows uv-native patterns for cleaner dependency management and better lock-time guarantees.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-09-05T12:10:50.856Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T12:10:50.856Z
Learning: jiridanek requested GitHub issue creation for Dockerfile environment variable refactoring during PR #2215 review. Issue #2311 was created addressing build-only variables (OPENBLAS_VERSION, ONNX_VERSION, GRPC_PYTHON_BUILD_SYSTEM_OPENSSL) being unnecessarily written to /etc/profile.d/ppc64le.sh in runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, causing variable duplication across stages, unreliable sourcing in non-login build contexts, and violation of DRY principles. The issue includes comprehensive problem description covering affected lines 30-37, detailed impact analysis of build reliability and maintenance overhead, three solution options with centralized ARG/ENV approach as recommended, clear acceptance criteria for version centralization and build-only variable cleanup, and specific implementation guidance with code examples, assigned to jiridanek, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-20T12:39:54.827Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2000
File: runtimes/rocm-pytorch/ubi9-python-3.11/requirements.txt:807-809
Timestamp: 2025-08-20T12:39:54.827Z
Learning: Issue #2055 tracks the remaining runtimes/rocm-tensorflow/ubi9-python-3.12/requirements.txt file that stayed at jupyter-core==5.7.2 during PR #2000, which will be addressed separately due to specific challenges with that ROCm TensorFlow image.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
📚 Learning: 2025-07-18T19:01:39.811Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1396
File: jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda:192-195
Timestamp: 2025-07-18T19:01:39.811Z
Learning: In the opendatahub-io/notebooks repository, mixing CentOS packages with UBI base images is bad practice that removes supportability and creates "Frankenstein" images according to Red Hat guidance. However, using EPEL packages is acceptable, though it may require extra work with AIPCC for internal Red Hat builds. The official reference is at developers.redhat.com/articles/ubi-faq.
Applied to files:
runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. Issue #2435 was created addressing PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-12T08:27:00.439Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:232-249
Timestamp: 2025-09-12T08:27:00.439Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain availability during s390x builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2432 review. The issue addresses PATH availability problems where Rust/cargo installed in cpu-base stage at /opt/.cargo/bin may not be accessible during uv pip install step in jupyter-datascience stage, proposing three solution approaches: immediate environment variable fix, builder stage pattern following codeserver approach, and ENV declaration fix, with comprehensive acceptance criteria covering build reliability, multi-architecture compatibility, and alignment with established patterns, continuing the systematic infrastructure improvement tracking methodology.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T11:27:31.040Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2215
File: runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu:0-0
Timestamp: 2025-09-05T11:27:31.040Z
Learning: jiridanek requested GitHub issue creation for build toolchain optimization in datascience runtime during PR #2215 review. Issue #2308 was created addressing unnecessary build dependencies (gcc-toolset-13, cmake, ninja-build, rust, cargo) in final runtime image for ppc64le architecture, covering comprehensive problem analysis with specific line numbers, multiple solution options for builder-only toolchains, clear acceptance criteria for size reduction and security improvement, detailed implementation guidance for package segregation, and proper context linking to PR #2215 review comment, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:29:07.819Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:218-218
Timestamp: 2025-09-05T12:29:07.819Z
Learning: jiridanek requested GitHub issue creation for uv multi-stage Docker build architectural investigation during PR #2227 review. The current implementation uses a three-stage build with whl-cache stage for wheel building/caching, base stage for OS setup, and final codeserver stage for offline installation using --offline flag and cache mounts. The pattern separates build phase (internet access, build tools) from install phase (offline, faster) while supporting multi-architecture builds (x86_64, ppc64le) with sentinel file coordination using /tmp/control files.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-16T10:39:23.295Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2460
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:206-221
Timestamp: 2025-09-16T10:39:23.295Z
Learning: jiridanek requested GitHub issue creation for OpenBLAS installation staging during ppc64le builds in jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu during PR #2460 review. Issue #2466 was created addressing permission errors where OpenBLAS make install fails when attempting to write to /usr/local system paths from USER 1001 context in final stage, proposing DESTDIR staging pattern to build and install OpenBLAS artifacts within openblas-builder stage then COPY pre-installed files to final stage, with comprehensive problem description covering specific permission denied errors, detailed technical solution with code examples, clear acceptance criteria for build reliability and multi-architecture compatibility, and proper context linking to PR #2460 review comment, continuing the systematic infrastructure improvement tracking methodology for Power architecture support.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
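The DESTDIR staging pattern proposed in issue #2466 can be sketched like this; the OpenBLAS source location, stage names, and package list are assumptions:

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi AS openblas-builder
RUN dnf install -y gcc gfortran make && dnf clean all
# Build as root in the builder and install into a staging root,
# instead of writing to /usr/local from an unprivileged final stage.
RUN make -C /root/OpenBLAS -j"$(nproc)" && \
    make -C /root/OpenBLAS install PREFIX=/usr/local DESTDIR=/staging

FROM registry.access.redhat.com/ubi9/python-312 AS runtime
# COPY the pre-installed artifacts; no root-owned writes are attempted
# under USER 1001, avoiding the permission-denied failures described.
COPY --from=openblas-builder /staging/usr/local/ /usr/local/
USER 1001
```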
📚 Learning: 2025-07-07T11:08:48.524Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#1258
File: codeserver/ubi9-python-3.11/Dockerfile.cpu:32-32
Timestamp: 2025-07-07T11:08:48.524Z
Learning: atheo89 requested GitHub issue creation for multi-architecture Dockerfile improvements during PR #1258 review, specifically for enhancing structural consistency across Docker stages, replacing $(uname -m) with ${TARGETARCH} for cross-architecture builds, and adding OCI-compliant metadata labels. Issue #1332 was created with comprehensive problem description, phased implementation approach, detailed acceptance criteria, implementation guidance with code examples, and proper context linking, continuing the established pattern of systematic code quality improvements.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T10:41:13.061Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:34-35
Timestamp: 2025-07-04T10:41:13.061Z
Learning: In the opendatahub-io/notebooks repository, when adapting NVIDIA CUDA Dockerfiles, the project intentionally maintains consistency with upstream NVIDIA patterns even when it might involve potential risks like empty variable expansions in package installation commands. This is considered acceptable because the containers only run on RHEL 9 with known yum/dnf behavior, and upstream consistency is prioritized over defensive coding practices.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-12T08:24:58.328Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2432
File: jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu:66-79
Timestamp: 2025-09-12T08:24:58.328Z
Learning: jiridanek requested GitHub issue creation for Rust toolchain PATH availability problem during PR #2432 review. Issue addresses environment variables written to /etc/profile.d/cargo.sh not being available during Docker RUN commands on s390x builds, potentially causing uv pip install failures when compiling Rust-based Python packages. The issue includes comprehensive problem description covering lines 66-79 and 232-249, detailed impact analysis of build reliability concerns, three solution options (ENV declarations, inline variables, hybrid approach), clear acceptance criteria for cargo availability verification, and specific implementation guidance with build testing steps, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
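The ENV-declaration option from that issue looks roughly as follows; the `/opt/.cargo/bin` path comes from the learning, while the base image and rustup invocation are assumptions:

```dockerfile
FROM registry.access.redhat.com/ubi9/python-312 AS cpu-base
# Unlike /etc/profile.d/cargo.sh, ENV values apply to every RUN step —
# including the non-login shells Docker uses — and are inherited by
# later stages built FROM this one.
ENV CARGO_HOME=/opt/.cargo \
    PATH=/opt/.cargo/bin:$PATH
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path
# cargo is now resolvable in any subsequent RUN, e.g. during
# `uv pip install` of Rust-backed Python packages on s390x.
```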
📚 Learning: 2025-09-26T10:52:59.415Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-26T10:52:59.415Z
Learning: jiridanek corrected CodeRabbit's incorrect analysis claiming environment variable timing changes in PR #2547. After re-examination, the refactoring PR only contained ubi-repos alias introduction, pip flag corrections (--no-cache to --no-cache-dir), and minor shell improvements, with NO environment variable timing changes or profile.d modifications. This demonstrates the importance of examining actual code diffs rather than relying on AI summary interpretations.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
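The pip flag correction mentioned in that learning is the change hadolint rule DL3042 checks for; a minimal before/after sketch (base image and package are assumptions):

```dockerfile
FROM registry.access.redhat.com/ubi9/python-312
# Before (incorrect): `--no-cache` is not a pip option.
# RUN pip install --no-cache wheel
# After: `--no-cache-dir` disables pip's wheel/HTTP cache, keeping the
# image layer smaller and satisfying hadolint DL3042.
RUN pip install --no-cache-dir wheel
```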
📚 Learning: 2025-08-19T15:49:45.132Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2004
File: .tekton/odh-workbench-jupyter-minimal-cuda-py311-ubi9-push.yaml:35-36
Timestamp: 2025-08-19T15:49:45.132Z
Learning: jiridanek requested GitHub issue creation for two PipelineRef compatibility problems during PR #2004 review: missing additional-tags parameter in singlearch-push-pipeline and missing netrc workspace binding in PipelineRuns. Issues were successfully created with comprehensive problem descriptions covering technical details, impact analysis on multiple affected files, clear solutions with code examples, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-11T11:16:05.131Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for RStudio py311 Tekton push pipelines during PR #1379 review. Issue #1384 was successfully created covering two RStudio variants (CPU and CUDA) found in manifests/base/params-latest.env, with comprehensive problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-11T11:15:47.424Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-datascience-cpu-py312-ubi9-push.yaml:14-17
Timestamp: 2025-07-11T11:15:47.424Z
Learning: jiridanek requested GitHub issue creation for CEL filter problem in datascience workbench Tekton pipelines during PR #1379 review. Issue #1383 was successfully created with comprehensive problem description covering both Python 3.11 and 3.12 pipelines incorrectly watching jupyter/minimal directories instead of jupyter/datascience directories, detailed impact analysis of pipeline execution failures, complete solution with before/after code examples, thorough acceptance criteria for path updates and pipeline triggering verification, implementation notes about repository structure alignment, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-11T11:15:25.572Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1379
File: .tekton/odh-workbench-jupyter-pytorch-cuda-py312-ubi9-push.yaml:40-44
Timestamp: 2025-07-11T11:15:25.572Z
Learning: jiridanek requested GitHub issue creation for critical Tekton array parameter syntax error during PR #1379 review, affecting all 32 pipeline files with additional-tags parameter. Issue #1382 was created with comprehensive problem description covering both Python 3.11 and 3.12 pipeline files (16 each), detailed root cause analysis of value: vs values: syntax error, critical impact assessment of complete pipeline execution failure, complete list of all affected files across workbench and runtime types, implementation commands with sed scripts, detailed acceptance criteria, implementation notes about systematic copy-paste error propagation, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-11T11:16:05.131Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-11T11:16:05.131Z
Learning: jiridanek requested GitHub issue creation for adding RStudio py311 Tekton push pipelines during PR #1379 review, referencing existing registry entries in manifests/base/params-latest.env but missing corresponding .tekton pipeline files. A comprehensive issue was created with detailed problem description, implementation requirements following the same pattern as other workbench pipelines, clear acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-09-05T12:10:28.916Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. Issue #2310 was successfully created addressing copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. The issue includes comprehensive problem description covering specific irrelevant paths, detailed solution with before/after YAML code examples, clear acceptance criteria for implementation and testing, repository-wide scope consideration for similar issues, and proper context linking to PR #2265 review comment, assigned to jiridanek.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-01T14:46:03.215Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T14:46:03.215Z
Learning: jiridanek requested GitHub issue creation for two nitpicks during PR #1588 review: comment wording improvement in ROCm TensorFlow Dockerfile and typo fix in Jupyter DataScience Dockerfile stage header. Issues #1589 and #1590 were successfully created with comprehensive problem descriptions, specific file locations and line numbers, clear before/after solutions, detailed acceptance criteria, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-07-04T17:08:02.399Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: rstudio/rhel9-python-3.11/Dockerfile.cuda:40-42
Timestamp: 2025-07-04T17:08:02.399Z
Learning: In the opendatahub-io/notebooks repository, when using multi-architecture Dockerfiles with BuildKit, the implicit build argument TARGETARCH is automatically available in the global scope for FROM instructions without explicit declaration. However, if TARGETARCH is used within a build stage, it must be declared explicitly within that stage. The current placement pattern (declaring ARG TARGETARCH after FROM instructions that use it) is correct for modern Docker/Podman/Buildah environments and does not require compatibility with older Docker versions.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
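The TARGETARCH pattern this learning describes can be sketched as follows; the base image tag and the RUN command are illustrative, not taken from the repository's Dockerfiles:

```dockerfile
# BuildKit supplies TARGETARCH implicitly for FROM instructions, but any
# stage that consumes it in a RUN/ENV must re-declare it with ARG.
FROM registry.access.redhat.com/ubi9/python-312:latest AS base

# Re-declared inside the stage that uses it; placing ARG after FROM is
# correct for modern Docker/Podman/Buildah environments.
ARG TARGETARCH
RUN echo "building for ${TARGETARCH}"
```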
📚 Learning: 2025-07-04T17:07:52.656Z
Learnt from: grdryn
PR: opendatahub-io/notebooks#1320
File: jupyter/minimal/ubi9-python-3.11/Dockerfile.cuda:29-38
Timestamp: 2025-07-04T17:07:52.656Z
Learning: In the opendatahub-io/notebooks repository, modern Docker with BuildKit automatically provides build arguments like TARGETARCH in the global scope for FROM instructions, but these arguments must be explicitly declared with ARG statements inside build stages where they will be used. The ARG declaration should be placed within the stage that uses it, not moved to the global scope, as this is the correct pattern for modern Docker/Podman/Buildah environments.
Applied to files:
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Analysis revealed all 14 CUDA Dockerfiles contain redundant `yum upgrade -y` commands in cuda-base stages that execute after base stages already performed comprehensive `dnf upgrade` via pre-upgrade blocks, causing unnecessary CI latency and build inefficiency. Issue includes complete scope analysis with specific line numbers, investigation framework requiring NVIDIA upstream documentation review, multiple solution options, comprehensive acceptance criteria covering systematic testing and performance measurement, and proper context linking to PR #1909 review comment.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
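A minimal sketch of the redundancy flagged above, with placeholder image and package names rather than the repository's actual stages:

```dockerfile
# The base stage already performs a comprehensive upgrade...
FROM registry.access.redhat.com/ubi9/ubi:latest AS base
RUN dnf upgrade -y && dnf clean all

# ...so repeating `yum upgrade -y` in the cuda-base stage is the
# redundant work the issue proposes removing (pending verification of
# NVIDIA's upstream documentation requirements).
FROM base AS cuda-base
RUN dnf install -y cuda-libraries-12-6 && dnf clean all  # package name illustrative
```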
📚 Learning: 2025-07-01T10:41:56.419Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-07-01T10:41:56.419Z
Learning: In the opendatahub-io/notebooks repository, TensorFlow packages with `extras = ["and-cuda"]` can cause build conflicts on macOS due to platform-specific CUDA packages. When the Dockerfile installs CUDA system-wide, removing the extras and letting TensorFlow find CUDA at runtime resolves these conflicts.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
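As a hedged illustration of the fix described in this learning, the dependency change might look like the following pyproject.toml fragment; the version specifier is a placeholder, not the repository's actual pin:

```toml
[project]
dependencies = [
    # before: "tensorflow[and-cuda]==2.17.*"  # pulls platform-specific CUDA wheels
    "tensorflow==2.17.*",  # CUDA is installed system-wide in the image; TensorFlow finds it at runtime
]
```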
📚 Learning: 2025-08-12T08:40:55.286Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1909
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:11-15
Timestamp: 2025-08-12T08:40:55.286Z
Learning: jiridanek requested GitHub issue creation for redundant CUDA upgrade optimization during PR #1909 review. Issue covers duplicate yum/dnf upgrade commands in cuda-base stages that execute after base stages already performed comprehensive upgrades, causing unnecessary CI latency and build inefficiency across multiple CUDA Dockerfiles. The solution requires investigating NVIDIA upstream documentation requirements before removing redundant upgrades, with systematic testing of all CUDA variants and performance measurement. Issue follows established pattern of comprehensive problem analysis, multiple solution options, detailed acceptance criteria, and proper context linking.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T19:00:57.755Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-27T19:00:57.755Z
Learning: jiridanek requested GitHub issue creation for tensorflow[and-cuda] extra removal investigation during PR #2145 review, addressing the concern that this extra forces CUDA-specific wheels causing cross-platform conflicts while Docker images already supply CUDA via base layer. Issue #2168 was created with comprehensive investigation framework covering dependency differences, runtime verification, performance testing, cross-platform compatibility, and clear acceptance criteria for deciding whether to remove the extra in favor of base tensorflow package.
Applied to files:
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-10T21:24:07.914Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-09-10T21:24:07.914Z
Learning: jiridanek requested GitHub issue creation for Docker chown optimization in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2356 review. Issue #2403 was created addressing performance impact of broad recursive chown on entire /opt/app-root directory (line 235), proposing three solution approaches: scoped chown targeting specific changed paths, root cause fix during file creation, and test modification for permission validation, with detailed benefits analysis covering layer size reduction and build time optimization, continuing the established pattern of systematic infrastructure improvements through detailed issue tracking.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
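The scoped-chown approach proposed in issue #2403 can be sketched as follows; the paths and UID/GID are assumptions for illustration, not the actual Dockerfile contents:

```dockerfile
# Instead of one broad recursive pass over the whole tree:
#   RUN chown -R 1001:0 /opt/app-root
# set ownership where files are created, so only the changed paths
# contribute to the layer:
COPY --chown=1001:0 utils/ /opt/app-root/bin/utils/
RUN mkdir -p /opt/app-root/src/.config \
 && chown 1001:0 /opt/app-root/src/.config
```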
📚 Learning: 2025-09-05T12:25:58.344Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2227
File: codeserver/ubi9-python-3.12/Dockerfile.cpu:126-128
Timestamp: 2025-09-05T12:25:58.344Z
Learning: jiridanek expressed concern about code-server installation complexity in codeserver/ubi9-python-3.12/Dockerfile.cpu during PR #2227 review, describing the multi-stage cache mount approach as a "hack" that worries them, leading to GitHub issue creation for systematic architectural improvement addressing maintainability concerns.
Applied to files:
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
📚 Learning: 2025-08-06T15:33:47.175Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#968
File: runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm:17-18
Timestamp: 2025-08-06T15:33:47.175Z
Learning: During PR #968 review, CodeRabbit initially incorrectly identified 1 legitimate micropipenv usage in jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda when analyzing Python 3.12 images for unused dependencies. Upon jiridanek's request for re-verification, comprehensive analysis revealed all 15 Python 3.12 Dockerfiles install micropipenv but none actually use it, making the cleanup scope 100% unnecessary installations with no exceptions to handle.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-15T07:35:57.126Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#2455
File: base-images/cuda/12.8/ubi9-python-3.12/Dockerfile.cuda:103-105
Timestamp: 2025-09-15T07:35:57.126Z
Learning: In the opendatahub-io/notebooks repository, cuda-toolkit-12-8 package is not available for download under ubi9 base images, which explains why c9s CUDA variants include cuda-toolkit-12-8 installation while ubi9 CUDA variants do not. This divergence is expected due to package availability constraints between different base image ecosystems.
Applied to files:
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T14:49:24.112Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue #2148 was created addressing the mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. The issue includes comprehensive problem description covering affected files (runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda and pylock.toml), detailed solution with PyTorch index URL update from cu124 to cu126, lock regeneration steps using uv, clear acceptance criteria for wheel alignment verification, and proper context linking to PR #2145 review comment, assigned to jiridanek.
Applied to files:
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T15:05:31.656Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu:134-140
Timestamp: 2025-08-27T15:05:31.656Z
Learning: jiridanek requested GitHub issue creation for CUDA wheel optimization in TrustyAI CPU image during PR #2145 review. The CPU Dockerfile currently uses pylock.toml with CUDA-enabled PyTorch wheels (torch==2.6.0+cu126) which was previously discussed with harshad16 and grdryn but deferred. Issue created with comprehensive problem analysis covering unnecessary CUDA wheels in CPU-only image, multiple solution options including lock regeneration and separate CPU/CUDA files, clear acceptance criteria for wheel optimization verification, and proper context linking to PR #2145 review comment.
Applied to files:
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-27T14:49:24.112Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2145
File: runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda:152-159
Timestamp: 2025-08-27T14:49:24.112Z
Learning: jiridanek requested GitHub issue creation for CUDA version alignment in pytorch+llmcompressor runtime during PR #2145 review. Issue addresses mismatch between Dockerfile CUDA 12.6 and pylock.toml cu124 PyTorch wheels. Comprehensive issue created with detailed problem description covering affected files, current state analysis, step-by-step solution including PyTorch index URL update and lock regeneration, clear acceptance criteria for cu126 wheel verification, and proper context linking to PR #2145 review comment.
Applied to files:
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-15T07:36:41.522Z
Learnt from: atheo89
PR: opendatahub-io/notebooks#2455
File: base-images/cuda/12.8/cuda-repos/cuda.repo-amd64:1-1
Timestamp: 2025-09-15T07:36:41.522Z
Learning: atheo89 requested GitHub issue creation for cuda-repos directory reorganization during PR #2455 review. Issue #2456 was created addressing the redundant cuda-repos directories under version-specific paths (12.6 and 12.8), proposing consolidation to base-images/cuda/cuda-repos with comprehensive analysis covering both CUDA versions, affected Dockerfiles requiring ARG CUDA_REPOS updates, clear solution with directory move and reference updates, detailed acceptance criteria, and proper context linking to PR #2455 review comment, assigned to atheo89.
Applied to files:
jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-08-01T17:35:29.175Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#0
File: :0-0
Timestamp: 2025-08-01T17:35:29.175Z
Learning: jiridanek requested GitHub issue creation for adding pytorch+llmcompressor images to Makefile build targets during PR #1519 review. Issue #1598 was successfully created with comprehensive problem description covering missing build targets for both jupyter workbench and runtime images, detailed solution with specific Makefile code examples following established patterns, thorough acceptance criteria covering individual targets, BASE_DIRS variable inclusion, and all-images target integration, implementation notes about handling '+' characters in paths, and proper context linking, continuing the established pattern of systematic code quality improvements through detailed issue tracking.
Applied to files:
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
📚 Learning: 2025-09-05T12:10:28.916Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#2265
File: .tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml:16-16
Timestamp: 2025-09-05T12:10:28.916Z
Learning: jiridanek requested GitHub issue creation for trigger path cleanup in pytorch+llmcompressor pipeline during PR #2265 review. The issue addresses copy-paste errors where irrelevant Minimal/DataScience trigger paths were included in the pytorch+llmcompressor pipeline on-cel-expression, causing unnecessary pipeline triggers. Solution involves removing unrelated paths and keeping only pytorch+llmcompressor-specific paths, build-args/cuda.conf, jupyter/utils, and the pipeline YAML itself, with comprehensive acceptance criteria and proper context linking.
Applied to files:
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
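A sketch of the trimmed trigger expression under the assumptions above; the annotation key is the standard Pipelines-as-Code one, but the exact path list is illustrative:

```yaml
metadata:
  annotations:
    pipelinesascode.tekton.dev/on-cel-expression: |
      event == "pull_request" && target_branch == "main" && (
        "jupyter/pytorch+llmcompressor/***".pathChanged() ||
        "build-args/cuda.conf".pathChanged() ||
        "jupyter/utils/***".pathChanged() ||
        ".tekton/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-pull-request.yaml".pathChanged()
      )
```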
📚 Learning: 2025-08-19T11:45:12.501Z
Learnt from: jiridanek
PR: opendatahub-io/notebooks#1998
File: runtimes/pytorch+llmcompressor/ubi9-python-3.11/Dockerfile.cuda:159-161
Timestamp: 2025-08-19T11:45:12.501Z
Learning: jiridanek requested GitHub issue creation for duplicated micropipenv installation cleanup in pytorch+llmcompressor images during PR #1998 review. Analysis confirmed duplication exists in both pytorch+llmcompressor Dockerfiles with micropipenv installed twice: unpinned early install (lines 23/36) for Pipfile.lock deployment and pinned later install (lines 160/248) in requirements.txt block. Issue #1999 created with comprehensive problem analysis covering exact line numbers and affected files, three solution options (remove early install, consolidate installations, conditional logic), detailed acceptance criteria covering build testing and functionality verification, implementation notes for coordination with version pinning efforts, and proper context linking to PR #1998 review comment.
Applied to files:
jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
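One of the consolidation options from issue #1999 (a single pinned install replacing both the early unpinned install and the later duplicate) might be sketched like this; the version pin is a placeholder:

```dockerfile
# Install micropipenv once, pinned, then deploy from Pipfile.lock.
# Version is illustrative, not the repository's actual pin.
RUN pip install --no-cache-dir micropipenv==1.8.0 \
 && micropipenv install
```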
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (35)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: code-static-analysis
/kfbuild odh-workbench-codeserver-datascience-cpu-py312-ubi9

/kfbuild odh-workbench-jupyter-datascience-cpu-py312-ubi9
atheo89
left a comment
Thanks @jiridanek for addressing all these docker lint warnings. Builds look good.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: atheo89, ide-developer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@jiridanek: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
https://issues.redhat.com/browse/RHAIENG-287
Description
How Has This Been Tested?
Self checklist (all need to be checked):
- make test (gmake on macOS) before asking for review
- Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit
New Features
Refactor
Chores
Tests