According to the CUDA documentation, the memory of the cudaKernelNodeParams struct returned by cudaGraphKernelNodeGetParams is owned by the associated node. In the case here

llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu
Line 2546 in 14dec0c

we later modify this struct by replacing one of its pointer members with an address inside a block of memory we own:

llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu
Line 2565 in 14dec0c

We have thus modified the node by replacing a pointer to runtime-owned memory with a pointer to memory we own. There is no way this results in well-defined behavior, and indeed the CUDA documentation prohibits this; see the link to the documentation above:
This memory remains valid until the node is destroyed or its parameters are modified, and should not be modified directly.
Presumably this happens to work on CUDA right now either because the runtime allocates the storage behind the pointer we are updating as part of a larger, separately malloc'ed block and that pointer happens not to be the first address in the block, or because the runtime is simply leaking this memory.
On HIP, the runtime malloc()s the memory for each kernel pointer separately and then free()s it when the node is destroyed. This of course causes an invalid free() when the runtime encounters the pointer we changed to point at our memory.
We could avoid this by not repointing the member at memory we own, but instead simply updating the value it holds:
I have verified, by reading the HIP runtime code and in discussion with an AMD engineer, that this is fine to do on HIP, but it still violates the CUDA documentation's provision not to modify the parameters, and I have no idea whether it is safe to do there. The only way to not violate the constraints given by the documentation would be to assemble a cudaKernelNodeParams struct by hand from scratch at this position:

llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu
Line 2564 in 14dec0c
I agree that the above change is better, and confirmed it also works for CUDA. I'll check with our CUDA graphs team to get their comments on it. Note that we can bypass this altogether with #9017 which still needs some work as per comment #9017 (comment) - if there is a desire for this I can resurrect it.
Maybe you can submit feedback to them that this interface seems poorly thought out. Either:
cudaGraphKernelNodeGetParams should return a const struct
a helper function to make a deep copy of the params should be provided
a helper function to free the copy should be provided
or:
cudaGraphKernelNodeGetParams should return a deep copy
a helper function to free it should be provided
Otherwise this "here is a non-const pointer to our internal memory structure, but please don't modify anything" behavior of cudaGraphKernelNodeGetParams seems like an unnecessary footgun.