Bug: duplicate Vulkan devices being detected on Windows #9516

Open
tempstudio opened this issue Sep 17, 2024 · 0 comments
Labels: bug-unconfirmed, low severity


tempstudio commented Sep 17, 2024

What happened?

When running llama-cli.exe with the Vulkan backend on Windows, the same graphics card is detected twice: once through the AMD proprietary driver and once through Microsoft's Direct3D 12 mapping layer (Dozen). This means it does not work out of the box: by default it tries to use "both" devices, and the run fails with the cryptic error below (33792 bytes of thread group shared memory exceeds the Direct3D 12 layer's 32768-byte limit):

MESA: error: == VALIDATION ERROR =============================================
error: Total Thread Group Shared Memory storage is 33792, exceeded 32768.
Validation failed.

Setting

$env:GGML_VK_VISIBLE_DEVICES=0

is necessary and works, but that variable is not really documented anywhere.
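For reference, a minimal PowerShell sketch of the workaround, assuming GGML_VK_VISIBLE_DEVICES takes a comma-separated list of device indices from the enumeration in the log output below (the model path is a placeholder):

# Make only device 0 (the AMD proprietary driver entry) visible to ggml_vulkan
# for this PowerShell session; "0,1" would expose both devices again (assumption).
$env:GGML_VK_VISIBLE_DEVICES = "0"
# Run as usual; the model path here is a placeholder.
./llama-cli.exe -m .\models\your-model.gguf -p "Hello"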

Name and Version

./llama-cli.exe --version
version: 3772 (23e0d70)
built with MSVC 19.29.30154.0 for x64

What operating system are you seeing the problem on?

Windows

Relevant log output

ggml_vulkan: Found 2 Vulkan devices:
Vulkan0: AMD Radeon RX 6800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64
Vulkan1: Microsoft Direct3D12 (AMD Radeon RX 6800 XT) (Dozen) | uma: 0 | fp16: 1 | warp size: 32