
build: fix build error when building source code on Windows #12157

Open
wants to merge 1 commit into master

Conversation

zhouwg (Contributor) commented Mar 3, 2025

I know nothing about Windows programming, but it seems there is a minor build error when building the latest source code on Windows; I verified this twice:

Screenshot from 2025-03-03 13-19-00

Please close this PR accordingly if this is a misunderstanding. Thanks.

@@ -148,7 +148,7 @@ struct lora_merge_ctx {

     ctx_out = gguf_init_empty();
     struct ggml_init_params params = {
-        /*.mem_size   =*/ gguf_get_n_tensors(base_model.ctx_gguf)*ggml_tensor_overhead(),
+        /*.mem_size   =*/ static_cast<size_t>(gguf_get_n_tensors(base_model.ctx_gguf)*ggml_tensor_overhead()),
Member
Suggested change
- /*.mem_size =*/ static_cast<size_t>(gguf_get_n_tensors(base_model.ctx_gguf)*ggml_tensor_overhead()),
+ /*.mem_size =*/ ggml_tensor_overhead()*gguf_get_n_tensors(base_model.ctx_gguf),

zhouwg (Contributor, Author) commented Mar 3, 2025

Thanks for your comment. This modification seems more elegant (it relies on implicit type conversion), and validation passed on Linux and Android, but it still failed on Windows (this Windows 10 ISO was downloaded from Microsoft's official website), and the compiler suggests an explicit cast is needed:

cmake --preset x64-windows-llvm-release -DGGML_OPENMP=OFF

cmake --build build-x64-windows-llvm-release

Screenshot from 2025-03-03 22-18-09

Screenshot from 2025-03-03 22-19-30

After checking the details:

struct ggml_init_params {
    // memory pool
    size_t mem_size;   // bytes
    void * mem_buffer; // if NULL, memory will be allocated internally
    bool   no_alloc;   // don't allocate memory for the tensor data
};

size_t ggml_tensor_overhead(void) {
    return GGML_OBJECT_SIZE + GGML_TENSOR_SIZE;
}

int64_t gguf_get_n_tensors(const struct gguf_context * ctx) {
    return ctx->info.size();
}

Should I change it to:

 /*.mem_size   =*/ static_cast<size_t>(ggml_tensor_overhead()*gguf_get_n_tensors(base_model.ctx_gguf)),

based on your suggestion,

or keep my original modification:

 /*.mem_size   =*/ static_cast<size_t>(gguf_get_n_tensors(base_model.ctx_gguf)*ggml_tensor_overhead()),

I guess this is a compiler issue because your suggestion works fine on Linux and Android.

Member

I just noticed that, for some reason, on your system size_t is a 32-bit unsigned int. This likely means it is a 32-bit Windows. It's better to switch to a 64-bit version, because llama.cpp will very likely not run correctly on this system.

zhouwg (Contributor, Author) commented Mar 3, 2025

I can confirm that this is a 64-bit Windows running in VMware Player; please refer to:
Screenshot from 2025-03-03 23-32-49

I guess this is a compiler issue:

  • the suggested code builds fine with the x86-64 Linux toolchain and Android's NDK
  • the suggested code contains an implicit type conversion, which should work fine
    Screenshot from 2025-03-03 23-42-06

Or is this issue caused by the Windows VM?
