
Compile bug: issue compiling on Ubuntu (desktop and server) using VirtualBox #12164

sandboxyer opened this issue Mar 3, 2025 · 0 comments


Git commit

Build Issue: Compilation Errors with AVX2 Intrinsics and Const Correctness

Problem Summary
Compilation fails due to three problems: an invalid conversion from const ggml_fp16_t* to ggml_fp16_t* in the AVX helper functions of ggml-cpu-aarch64.cpp, AVX2/FMA compiler flags not being enabled for x86_64 targets, and a missing intrinsics header for the SIMD code in ggml.c.

Detailed Errors & Solutions

1. Const Correctness in AARCH64 SIMD Functions
Error Message: error: invalid conversion from 'const ggml_fp16_t*' to 'ggml_fp16_t*'

Affected Files: llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Problem locations (original signatures):
static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x)
static inline __m256 __avx_repeat_f32cx8_load(ggml_fp16_t *x)
static inline __m256 __avx_rearranged_f32cx8_load(ggml_fp16_t *x, __m128i arrangeMask)

Solution: Add const qualifier to pointer parameters:
static inline __m256 __avx_f32cx8_load(const ggml_fp16_t *x)
static inline __m256 __avx_repeat_f32cx8_load(const ggml_fp16_t *x)
static inline __m256 __avx_rearranged_f32cx8_load(const ggml_fp16_t *x, __m128i arrangeMask)
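
For context, a minimal self-contained sketch of why the const qualifier matters; blk, first_as_float, and demo are hypothetical stand-ins, not the real ggml definitions:

// The block scales are reached through a const pointer, so the member
// decays to const ggml_fp16_t*, which C++ refuses to pass as ggml_fp16_t*.
typedef unsigned short ggml_fp16_t;

struct blk { ggml_fp16_t d[8]; };

// With the const qualifier the call below compiles; without it, GCC reports
// "invalid conversion from 'const ggml_fp16_t*' to 'ggml_fp16_t*'".
static inline float first_as_float(const ggml_fp16_t *x) {
    return (float) x[0];  // placeholder for the real fp16 -> fp32 conversion
}

float demo(const struct blk *b_ptr) {
    return first_as_float(b_ptr->d);  // b_ptr->d is const ggml_fp16_t*
}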

2. Missing AVX2/FMA Compiler Flags
Error Message: error: inlining failed in call to 'always_inline' '__m256 _mm256_fmadd_ps(__m256, __m256, __m256)': target specific option mismatch
(The compiler was not invoked with -mavx2/-mfma, so the always_inline FMA intrinsic cannot be emitted.)

Affected File: llama.cpp/CMakeLists.txt

Problem: AVX2/FMA flags not enabled for x86_64 targets

Solution: Add conditional compile flags after the project() declaration:
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
add_compile_options(-mfma -mavx2)
endif()
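
To make the build fail fast when the flags are not picked up (instead of failing later with an opaque inlining error), a compile-time guard such as the following could be placed near the top of the affected translation unit; this is a suggestion beyond the original fix:

// GCC and Clang define __AVX2__ and __FMA__ when -mavx2/-mfma are in effect,
// so this turns "target specific option mismatch" into a clear message.
#if defined(__x86_64__) && (!defined(__AVX2__) || !defined(__FMA__))
#error "AVX2/FMA flags missing: check add_compile_options() in CMakeLists.txt"
#endif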

3. Missing Intrinsic Headers
Error Message: error: implicit declaration of function '_mm256_storeu_ps' [-Werror=implicit-function-declaration]
error: unknown type name '__m128i'

Affected File: llama.cpp/ggml/src/ggml.c

Solution: Include the Intel intrinsics header near the top of the file, after the existing includes:
#include <immintrin.h>
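
Note that immintrin.h exists only on x86 toolchains, so if the same ggml.c is ever compiled for ARM64, a guarded include (an extra precaution on my part, not part of the original report) keeps those builds working:

// Only pull in the x86 intrinsics header when targeting x86.
#if defined(__x86_64__) || defined(__i386__) || defined(_M_X64) || defined(_M_IX86)
#include <immintrin.h>
#endif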

Expected Outcome

  • Successful compilation without const pointer conversion errors
  • AVX2/FMA instructions properly recognized
  • All SIMD types (__m256, __m128i) correctly defined

Notes

  • Required for x86_64 architectures using AVX2 extensions
  • Maintains ARM64 compatibility through conditional compilation
  • Preserves const-correctness for memory safety

The Node.js script below applies all three fixes:

const fs = require('fs');
const path = require('path');

// Verify these paths match your actual project structure
const PATHS = {
  aarch64: path.join('llama.cpp', 'ggml', 'src', 'ggml-cpu', 'ggml-cpu-aarch64.cpp'),
  ggml_c: path.join('llama.cpp', 'ggml', 'src', 'ggml.c'),
  cmake: path.join('llama.cpp', 'CMakeLists.txt')
};

function modifyAarch64File() {
  const filePath = PATHS.aarch64;
  let content = fs.readFileSync(filePath, 'utf-8');

  // Precise function signature replacements with exact parameter matching
  const replacements = [
    {
      search: /static inline __m256 __avx_f32cx8_load\(ggml_fp16_t \*x\)/g,
      replace: 'static inline __m256 __avx_f32cx8_load(const ggml_fp16_t *x)'
    },
    {
      search: /static inline __m256 __avx_repeat_f32cx8_load\(ggml_fp16_t \*x\)/g,
      replace: 'static inline __m256 __avx_repeat_f32cx8_load(const ggml_fp16_t *x)'
    },
    {
      search: /static inline __m256 __avx_rearranged_f32cx8_load\(ggml_fp16_t \*x, __m128i arrangeMask\)/g,
      replace: 'static inline __m256 __avx_rearranged_f32cx8_load(const ggml_fp16_t *x, __m128i arrangeMask)'
    }
  ];

  replacements.forEach(({search, replace}) => {
    if (!search.test(content)) {
      throw new Error(`Pattern not found: ${search}`);
    }
    content = content.replace(search, replace);
  });

  fs.writeFileSync(filePath, content);
}

function addImmintrinInclude() {
  const filePath = PATHS.ggml_c;
  let content = fs.readFileSync(filePath, 'utf-8');
  
  // Add include only if not present, after the first #include
  if (!content.includes('#include <immintrin.h>')) {
    content = content.replace(
      /(#include .+)\n/,
      `$1\n#include <immintrin.h>\n`
    );
  }
  
  fs.writeFileSync(filePath, content);
}

function updateCMakeLists() {
  const filePath = PATHS.cmake;
  let content = fs.readFileSync(filePath, 'utf-8');
  
  // Insert flags right after project() declaration
  const cmakeFix = `
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
    add_compile_options(-mfma -mavx2)
endif()
`;

  if (!content.includes('add_compile_options(-mfma -mavx2)')) {
    content = content.replace(
      /(project\(.*?\)\s*)/,
      `$1\n${cmakeFix}\n`
    );
  }

  fs.writeFileSync(filePath, content);
}

function main() {
  try {
    modifyAarch64File();
    addImmintrinInclude();
    updateCMakeLists();
    console.log('All modifications applied successfully');
  } catch (error) {
    console.error('Error:', error.message);
    console.log('Please verify:');
    console.log('1. File paths are correct');
    console.log('2. You have write permissions');
    console.log('3. The files match expected content');
    process.exit(1);
  }
}

main();
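
To apply the fixes, save the script (for example as fix-build.js; the name is arbitrary) in the directory that contains the llama.cpp checkout, run node fix-build.js, and then re-run the CMake build.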

Operating systems

Linux

GGML backends

CPU

Problem description & steps to reproduce

Build fails on Ubuntu server and desktop running inside VirtualBox (CPU backend).

First Bad Commit

No response

Compile command

cmake
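
(The report only says "cmake"; the usual llama.cpp invocation is along the lines of cmake -B build followed by cmake --build build --config Release, though the exact flags used here are not given.)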

Relevant log output

1° (initial build): const-conversion errors in ggml-cpu-aarch64.cpp

-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- Adding CPU backend variant ggml-cpu: -march=native 
-- Configuring done (1.3s)
-- Generating done (0.9s)
-- Build files have been written to: /home/ai/llama.cpp/build
[  4%] Built target ggml-base
[  4%] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp: In function ‘void ggml_gemv_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:877:82: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
  877 |                 const __m256 col_scale_f32 = GGML_F32Cx8_REARRANGE_LOAD(b_ptr[b].d, changemask);
      |                                                                         ~~~~~~~~~^
      |                                                                                  |
      |                                                                                  const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:150:85: note: in definition of macro ‘GGML_F32Cx8_REARRANGE_LOAD’
  150 | #define GGML_F32Cx8_REARRANGE_LOAD(x, arrangeMask)     __avx_rearranged_f32cx8_load(x, arrangeMask)
      |                                                                                     ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:136:64: note:   initializing argument 1 of ‘__m256 __avx_rearranged_f32cx8_load(ggml_fp16_t*, __m128i)’
  136 | static inline __m256 __avx_rearranged_f32cx8_load(ggml_fp16_t *x, __m128i arrangeMask) {
      |                                                   ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp: In function ‘void ggml_gemm_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:3005:76: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
 3005 |                     const __m256 col_scale_f32 = GGML_F32Cx8_LOAD(b_ptr[b].d);
      |                                                                   ~~~~~~~~~^
      |                                                                            |
      |                                                                            const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:148:51: note: in definition of macro ‘GGML_F32Cx8_LOAD’
  148 | #define GGML_F32Cx8_LOAD(x)     __avx_f32cx8_load(x)
      |                                                   ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:117:53: note:   initializing argument 1 of ‘__m256 __avx_f32cx8_load(ggml_fp16_t*)’
  117 | static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x) {
      |                                        ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:3082:92: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
 3082 |                         const __m256 row_scale_f32 = GGML_F32Cx8_REPEAT_LOAD(a_ptrs[rp][b].d, loadMask);
      |                                                                              ~~~~~~~~~~~~~~^
      |                                                                                            |
      |                                                                                            const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:149:75: note: in definition of macro ‘GGML_F32Cx8_REPEAT_LOAD’
  149 | #define GGML_F32Cx8_REPEAT_LOAD(x, loadMask)     __avx_repeat_f32cx8_load(x)
      |                                                                           ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:126:60: note:   initializing argument 1 of ‘__m256 __avx_repeat_f32cx8_load(ggml_fp16_t*)’
  126 | static inline __m256 __avx_repeat_f32cx8_load(ggml_fp16_t *x) {
      |                                               ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:3169:76: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
 3169 |                     const __m256 col_scale_f32 = GGML_F32Cx8_LOAD(b_ptr[b].d);
      |                                                                   ~~~~~~~~~^
      |                                                                            |
      |                                                                            const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:148:51: note: in definition of macro ‘GGML_F32Cx8_LOAD’
  148 | #define GGML_F32Cx8_LOAD(x)     __avx_f32cx8_load(x)
      |                                                   ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:117:53: note:   initializing argument 1 of ‘__m256 __avx_f32cx8_load(ggml_fp16_t*)’
  117 | static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x) {
      |                                        ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:3247:83: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
 3247 |                     const __m256 row_scale_f32 = GGML_F32Cx8_REPEAT_LOAD(a_ptr[b].d, loadMask);
      |                                                                          ~~~~~~~~~^
      |                                                                                   |
      |                                                                                   const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:149:75: note: in definition of macro ‘GGML_F32Cx8_REPEAT_LOAD’
  149 | #define GGML_F32Cx8_REPEAT_LOAD(x, loadMask)     __avx_repeat_f32cx8_load(x)
      |                                                                           ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:126:60: note:   initializing argument 1 of ‘__m256 __avx_repeat_f32cx8_load(ggml_fp16_t*)’
  126 | static inline __m256 __avx_repeat_f32cx8_load(ggml_fp16_t *x) {
      |                                               ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:2521:23: warning: unused variable ‘loadMask’ [-Wunused-variable]
 2521 |         const __m128i loadMask = _mm_blend_epi32(_mm_setzero_si128(), _mm_set1_epi32(0xFFFFFFFF), 3);
      |                       ^~~~~~~~
gmake[2]: *** [ggml/src/CMakeFiles/ggml-cpu.dir/build.make:104: ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o] Erro 1
gmake[1]: *** [CMakeFiles/Makefile2:1699: ggml/src/CMakeFiles/ggml-cpu.dir/all] Erro 2
gmake: *** [Makefile:146: all] Erro 2
An error occurred: Error: cmake build process exited with code 2
    at ChildProcess.<anonymous> (file:///usr/local/etc/EasyAI/core/Llama/LlamaCPP.js:236:22)
    at ChildProcess.emit (node:events:517:28)
    at ChildProcess._handle.onexit (node:internal/child_process:292:12)


2° (after a partial fix): the const-conversion error remains in __avx_rearranged_f32cx8_load

/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp: In function ‘void ggml_gemv_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:877:82: error: invalid conversion from ‘const ggml_half*’ {aka ‘const short unsigned int*’} to ‘ggml_fp16_t*’ {aka ‘short unsigned int*’} [-fpermissive]
  877 |                 const __m256 col_scale_f32 = GGML_F32Cx8_REARRANGE_LOAD(b_ptr[b].d, changemask);
      |                                                                         ~~~~~~~~~^
      |                                                                                  |
      |                                                                                  const ggml_half* {aka const short unsigned int*}
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:150:85: note: in definition of macro ‘GGML_F32Cx8_REARRANGE_LOAD’
  150 | #define GGML_F32Cx8_REARRANGE_LOAD(x, arrangeMask)     __avx_rearranged_f32cx8_load(x, arrangeMask)
      |                                                                                     ^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:136:64: note:   initializing argument 1 of ‘__m256 __avx_rearranged_f32cx8_load(ggml_fp16_t*, __m128i)’
  136 | static inline __m256 __avx_rearranged_f32cx8_load(ggml_fp16_t *x, __m128i arrangeMask) {
      |                                                   ~~~~~~~~~~~~~^
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp: In function ‘void ggml_gemm_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:2521:23: warning: unused variable ‘loadMask’ [-Wunused-variable]
 2521 |         const __m128i loadMask = _mm_blend_epi32(_mm_setzero_si128(), _mm_set1_epi32(0xFFFFFFFF), 3);
      |                       ^~~~~~~~
gmake[2]: *** [ggml/src/CMakeFiles/ggml-cpu.dir/build.make:104: ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o] Erro 1
gmake[1]: *** [CMakeFiles/Makefile2:1699: ggml/src/CMakeFiles/ggml-cpu.dir/all] Erro 2
gmake: *** [Makefile:146: all] Erro 2
An error occurred: Error: cmake build process exited with code 2
    at ChildProcess.<anonymous> (file:///usr/local/etc/EasyAI/core/Llama/LlamaCPP.js:236:22)
    at ChildProcess.emit (node:events:517:28)
    at ChildProcess._handle.onexit (node:internal/child_process:292:12)

3° (const fixes applied): _mm256_fmadd_ps fails to inline because the AVX2/FMA flags are missing

/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp: In function ‘void ggml_gemm_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:2521:23: warning: unused variable ‘loadMask’ [-Wunused-variable]
 2521 |         const __m128i loadMask = _mm_blend_epi32(_mm_setzero_si128(), _mm_set1_epi32(0xFFFFFFFF), 3);
      |                       ^~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/13/include/immintrin.h:109,
                 from /home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:342,
                 from /home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:9:
/usr/lib/gcc/x86_64-linux-gnu/13/include/fmaintrin.h: In function ‘void ggml_gemv_q4_0_8x8_q8_0(int, float*, size_t, const void*, const void*, int, int)’:
/usr/lib/gcc/x86_64-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘__m256 _mm256_fmadd_ps(__m256, __m256, __m256)’: target specific option mismatch
   63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
      | ^~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:910:42: note: called from here
  910 |                 acc_row = _mm256_fmadd_ps(_mm256_cvtepi32_ps(iacc), _mm256_mul_ps(col_scale_f32, row_scale_f32), acc_row);
      |                           ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘__m256 _mm256_fmadd_ps(__m256, __m256, __m256)’: target specific option mismatch
   63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
      | ^~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:910:42: note: called from here
  910 |                 acc_row = _mm256_fmadd_ps(_mm256_cvtepi32_ps(iacc), _mm256_mul_ps(col_scale_f32, row_scale_f32), acc_row);
      |                           ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘__m256 _mm256_fmadd_ps(__m256, __m256, __m256)’: target specific option mismatch
   63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
      | ^~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp:910:42: note: called from here
  910 |                 acc_row = _mm256_fmadd_ps(_mm256_cvtepi32_ps(iacc), _mm256_mul_ps(col_scale_f32, row_scale_f32), acc_row);
      |                           ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gmake[2]: *** [ggml/src/CMakeFiles/ggml-cpu.dir/build.make:104: ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o] Erro 1
gmake[1]: *** [CMakeFiles/Makefile2:1699: ggml/src/CMakeFiles/ggml-cpu.dir/all] Erro 2
gmake: *** [Makefile:146: all] Erro 2
An error occurred: Error: cmake build process exited with code 2
    at ChildProcess.<anonymous> (file:///usr/local/etc/EasyAI/core/Llama/LlamaCPP.js:236:22)
    at ChildProcess.emit (node:events:517:28)
    at ChildProcess._handle.onexit (node:internal/child_process:292:12)

4° (flags applied): AVX intrinsics are undeclared in ggml.c because immintrin.h is not included

[  0%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
/home/ai/llama.cpp/ggml/src/ggml.c: In function ‘ggml_bf16_to_fp32_row’:
/home/ai/llama.cpp/ggml/src/ggml.c:418:13: error: implicit declaration of function ‘_mm256_storeu_ps’ [-Werror=implicit-function-declaration]
  418 |             _mm256_storeu_ps(y + i,
      |             ^~~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml.c:419:29: error: implicit declaration of function ‘_mm256_castsi256_ps’ [-Werror=implicit-function-declaration]
  419 |                             _mm256_castsi256_ps(
      |                             ^~~~~~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml.c:420:33: error: implicit declaration of function ‘_mm256_slli_epi32’ [-Werror=implicit-function-declaration]
  420 |                                 _mm256_slli_epi32(
      |                                 ^~~~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml.c:421:37: error: implicit declaration of function ‘_mm256_cvtepu16_epi32’ [-Werror=implicit-function-declaration]
  421 |                                     _mm256_cvtepu16_epi32(
      |                                     ^~~~~~~~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml.c:422:41: error: implicit declaration of function ‘_mm_loadu_si128’ [-Werror=implicit-function-declaration]
  422 |                                         _mm_loadu_si128(
      |                                         ^~~~~~~~~~~~~~~
/home/ai/llama.cpp/ggml/src/ggml.c:423:52: error: unknown type name ‘__m128i’
  423 |                                             (const __m128i *)(x + i))),
      |                                                    ^~~~~~~
cc1: some warnings being treated as errors
gmake[2]: *** [ggml/src/CMakeFiles/ggml-base.dir/build.make:76: ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o] Erro 1
gmake[1]: *** [CMakeFiles/Makefile2:1646: ggml/src/CMakeFiles/ggml-base.dir/all] Erro 2
gmake: *** [Makefile:146: all] Erro 2
An error occurred: Error: cmake build process exited with code 2
    at ChildProcess.<anonymous> (file:///usr/local/etc/EasyAI/core/Llama/LlamaCPP.js:236:22)
    at ChildProcess.emit (node:events:517:28)
    at ChildProcess._handle.onexit (node:internal/child_process:292:12)