
Investigate what's up with building codegate with llama-cpp-python 0.3.6 #579

Open · jhrozek opened this issue Jan 14, 2025 · 2 comments

jhrozek (Contributor) commented Jan 14, 2025

I don't have more details, but llama-cpp-python 0.3.6 broke our image builds. We just reverted the dep bump without investigating further, but we should investigate so that we can stay up to date on the package.

lukehinds (Contributor) commented

Interesting. My bet would be a missing C library or a version mismatch.
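
If it helps narrow that down: the poetry traceback path in the log below (/usr/local/lib/python3.12/site-packages/...) suggests the builder is based on the official python:3.12 image, so the toolchain hypothesis is quick to check. A sketch, assuming that base image (the image name is a guess, not confirmed from the Dockerfile), run on the same ARM host where make image-build fails:

  # Which gcc does the builder image ship? Whether it accepts feature
  # modifiers appended to -mcpu=native is exactly what the probe tests.
  docker run --rm python:3.12 gcc --version

  # Probe the exact flag from the failing build against that gcc.
  echo 'int main(void){return 0;}' > /tmp/probe.c
  docker run --rm -v /tmp/probe.c:/probe.c python:3.12 \
    gcc -mcpu=native+nodotprod+noi8mm+nosve -c /probe.c -o /dev/null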

aponcedeleonch (Contributor) commented

More context:

Strangely, building the image locally with make image-build failed, while CI (our image-build action) succeeded. The culprit was llama-cpp-python 0.3.6. Failure logs below.

Steps to Reproduce

1. Change the version of llama_cpp_python in pyproject.toml from

   llama_cpp_python = "==0.3.5"

   to

   llama_cpp_python = "==0.3.6"

2. Run make image-build locally. The build fails with the log below:
#0 25.77   
#0 25.77   -- Configuring done (0.6s)
#0 25.77   -- Generating done (0.0s)
#0 25.77   -- Build files have been written to: /tmp/tmp0d6vwh24/build
#0 25.77   *** Building project with Ninja...
#0 25.77   Change Dir: '/tmp/tmp0d6vwh24/build'
#0 25.77   
#0 25.77   Run Build Command(s): /tmp/tmpy46tdodh/.venv/bin/ninja -v
#0 25.77   [1/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-threading.cpp
#0 25.77   [2/60] /usr/bin/gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-alloc.c
#0 25.77   [3/60] /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/.. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mcpu=native+nodotprod+noi8mm+nosve -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c
#0 25.77   FAILED: vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o 
#0 25.77   /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/.. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -mcpu=native+nodotprod+noi8mm+nosve -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c
#0 25.77   cc1: error: unknown value 'native+nodotprod+noi8mm+nosve' for '-mcpu'
#0 25.77   cc1: note: valid arguments are: cortex-a34 cortex-a35 cortex-a53 cortex-a57 cortex-a72 cortex-a73 thunderx thunderxt88p1 thunderxt88 octeontx octeontx81 octeontx83 thunderxt81 thunderxt83 ampere1 emag xgene1 falkor qdf24xx exynos-m1 phecda thunderx2t99p1 vulcan thunderx2t99 cortex-a55 cortex-a75 cortex-a76 cortex-a76ae cortex-a77 cortex-a78 cortex-a78ae cortex-a78c cortex-a65 cortex-a65ae cortex-x1 ares neoverse-n1 neoverse-e1 octeontx2 octeontx2t98 octeontx2t96 octeontx2t93 octeontx2f95 octeontx2f95n octeontx2f95mm a64fx tsv110 thunderx3t110 zeus neoverse-v1 neoverse-512tvb saphira cortex-a57.cortex-a53 cortex-a72.cortex-a53 cortex-a73.cortex-a35 cortex-a73.cortex-a53 cortex-a75.cortex-a55 cortex-a76.cortex-a55 cortex-r82 cortex-a510 cortex-a710 cortex-x2 neoverse-n2 demeter generic
#0 25.77   [4/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-backend.cpp
#0 25.77   [5/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-opt.cpp
#0 25.77   [6/60] /usr/bin/gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml.c
#0 25.77   [7/60] /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-backend-reg.cpp
#0 25.77   [8/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/gguf.cpp
#0 25.77   [9/60] /usr/bin/gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/. -I/tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /tmp/tmpqqg7v28w/llama_cpp_python-0.3.6/vendor/llama.cpp/ggml/src/ggml-quants.c
#0 25.77   ninja: build stopped: subcommand failed.
#0 25.77   
#0 25.77   
#0 25.77   *** CMake build failed
#0 25.77   
#0 25.77 
#0 25.77   at /usr/local/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
#0 25.77       160│ 
#0 25.77       161│                 error = ChefBuildError("\n\n".join(message_parts))
#0 25.77       162│ 
#0 25.77       163│             if error is not None:
#0 25.77     → 164│                 raise error from None
#0 25.77       165│ 
#0 25.77       166│             return path
#0 25.77       167│ 
#0 25.77       168│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
#0 25.77 
#0 25.77 Note: This error originates from the build backend, and is likely not a problem with poetry but with llama-cpp-python (0.3.6) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "llama-cpp-python (==0.3.6)"'.
#0 25.77 
------
Dockerfile:18
--------------------
  17 |     # Configure Poetry and install dependencies
  18 | >>> RUN poetry config virtualenvs.create false && \
  19 | >>>     poetry install --no-dev
  20 |     
--------------------
ERROR: failed to solve: process "/bin/sh -c poetry config virtualenvs.create false &&     poetry install --no-dev" did not complete successfully: exit code: 1
make: *** [image-build] Error 1
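
The actual failure is step [3/60] above: the vendored llama.cpp CMake appends AArch64 feature modifiers to -mcpu=native (native+nodotprod+noi8mm+nosve), and the gcc in the builder rejects that combined value. That would also explain the local-vs-CI split: if CI builds on x86_64 runners while the local build runs on an ARM host (e.g. Apple Silicon), only the local build ever reaches this code path.

A minimal sketch for confirming and working around it, assuming the llama-cpp-python sdist honors the CMAKE_ARGS environment variable (its documented build knob) and that ggml's GGML_NATIVE CMake option gates the -mcpu=native detection; neither is tested in our image yet:

  # Reproduce outside poetry, as the note at the end of the log suggests:
  pip wheel --no-cache-dir --use-pep517 "llama-cpp-python==0.3.6"

  # Candidate workaround: turn off native CPU detection so ggml falls back
  # to a generic target instead of emitting the unsupported modifiers.
  CMAKE_ARGS="-DGGML_NATIVE=OFF" \
    pip wheel --no-cache-dir --use-pep517 "llama-cpp-python==0.3.6"

If the second command builds, the fix might be as small as exporting CMAKE_ARGS in the Dockerfile before the poetry install step, instead of staying pinned to 0.3.5.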
