Zstd Codec on the GPU #2863
Conversation
force-pushed from d6608e7 to d548adc
Thanks for opening this PR! At the moment we do not have any codecs implemented in the …
@dstansby My understanding was that …
Looks nice overall. I think the async side of things ended up in a pretty good spot. The code itself is pretty easy to follow (a normal stream synchronize). Having to do that on another host thread is a bit unfortunate, but there's only one synchronize per batch so this should be fine.
I left comments on a few things to clean up that I can help out with if you want @akshaysubr.
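For readers following along, here is a minimal sketch of the pattern being described (not the PR's actual code; the nvCOMP call is stood in by a placeholder):

```python
# Enqueue GPU work on a CUDA stream, then run the single blocking
# synchronize on a worker thread so the asyncio event loop stays free.
import asyncio

import cupy as cp


async def decode_batch(chunks: list[cp.ndarray]) -> list[cp.ndarray]:
    stream = cp.cuda.Stream(non_blocking=True)
    with stream:
        # a real implementation would call into nvCOMP here
        out = [cp.ascontiguousarray(c) for c in chunks]
    # one synchronize per batch, pushed off the event-loop thread
    await asyncio.to_thread(stream.synchronize)
    return out
```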
```python
checksum: bool = False

def __init__(self, *, level: int = 0, checksum: bool = False) -> None:
    # TODO: Set CUDA device appropriately here and also set CUDA stream
```
Agreed with leaving devices / streams as a TODO for now.
I want to enable users to overlap host-to-device memcpys with compute operations (like decode, but their own compute operations as well), but I'm not sure yet what that API will look like.
If you have any thoughts on how best to do this I'd love to hear them, and write them up as an issue.
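As a strawman for that discussion, one way the overlap could look with plain cupy primitives (illustrative only, not a proposed zarr API): copies go on a dedicated stream, and the compute stream waits on an event instead of blocking the host.

```python
# Overlap host-to-device copies with compute via two CUDA streams.
import cupy as cp
import numpy as np

copy_stream = cp.cuda.Stream(non_blocking=True)
compute_stream = cp.cuda.Stream(non_blocking=True)


def h2d_async(host: np.ndarray) -> cp.ndarray:
    # truly asynchronous w.r.t. the host only when `host` is pinned memory
    dev = cp.empty(host.shape, host.dtype)
    with copy_stream:
        dev.set(host)
    return dev


chunk = h2d_async(np.ones(1 << 20, dtype="u1"))
# compute waits for the copy without a host-side synchronize
compute_stream.wait_event(copy_stream.record())
with compute_stream:
    result = chunk.sum()  # stand-in for decode work
```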
Opened #3271 for planning on devices and streams.
src/zarr/codecs/gpu.py (outdated)

```python
    chunks_and_specs: Iterable[tuple[Buffer | None, ArraySpec]],
) -> Iterable[Buffer | None]:
    return [
        spec.prototype.buffer.from_array_like(cp.array(a, dtype=np.dtype("b"), copy=False))
```
This is one spot where @weiji14's idea to use dlpack in #2658 (comment) would help. If `NDBuffer` knew how to consume objects implementing the dlpack protocol, we could (maybe) get rid of the `cp.array` call.
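Roughly what that would buy (a sketch; `from_array_like` here is a stand-in, not zarr's actual method):

```python
# Consume anything implementing the dlpack protocol zero-copy, falling
# back to cp.asarray for everything else.
import cupy as cp


def from_array_like(a) -> cp.ndarray:
    if hasattr(a, "__dlpack__"):
        return cp.from_dlpack(a)  # zero-copy when devices match
    return cp.asarray(a)
```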
Fixed some merge conflicts and changed the …
```diff
 if array_like.ndim != 1:
     raise ValueError("array_like: only 1-dim allowed")
-if array_like.dtype != np.dtype("B"):
+if array_like.dtype.itemsize != 1:
```
The new tests in `test_nvcomp.py` were failing without this change.

I'd like to get us to a point where we don't care as much about the details of the buffer passed in here. This is an OK start, I think.
what exactly does this check for? It's not clear to me why any numpy array that can be viewed as bytes wouldn't be allowed in here
and same for the dimensionality check, since any N-dimensional numpy array can be viewed as a 1D array.
Yeah, I'm not really sure...

I agree that the actual data we store internally here needs to be a byte dtype. Just doing `cp.asarray(input).view("b")` seems pretty reasonable to me.
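Spelled out, the suggestion is roughly this (a sketch; a reshape is added because `view` needs a contiguous layout):

```python
# Accept any array-like and reinterpret its storage as a flat byte array,
# instead of requiring 1-D uint8 input up front.
import cupy as cp


def as_byte_view(array_like) -> cp.ndarray:
    arr = cp.ascontiguousarray(cp.asarray(array_like))
    return arr.reshape(-1).view("b")
```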
i'm not even convinced that we need `Buffer` / `NDBuffer`, when `Buffer` is just a special case of `NDBuffer` where there's 1 dimension and the data type is bytes
we could even express this formally by:

- making `NDBuffer` generic with two type parameters (number of dimensions and dtype)
- having APIs that currently insist on consuming a `Buffer` instead insist on consuming `NDBuffer[Literal[1], np.uint8]` (sketched below)
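A rough sketch of what that typing could look like (purely illustrative; these generics don't exist in zarr today):

```python
# NDBuffer generic over dimensionality and dtype, with Buffer as the
# 1-D bytes special case.
from typing import Generic, Literal, TypeVar

import numpy as np

NDim = TypeVar("NDim")    # e.g. Literal[1]
DType = TypeVar("DType")  # e.g. np.uint8


class NDBuffer(Generic[NDim, DType]):
    def __init__(self, data: np.ndarray) -> None:
        self._data = data


# APIs that used to require Buffer would annotate this instead:
Buffer = NDBuffer[Literal[1], np.uint8]
```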
(super out of scope for this PR ofc)
> It's not clear to me why any numpy array that can be viewed as bytes wouldn't be allowed in here

I think this is mainly because `NDBuffer` objects don't need to be contiguous, but `Buffer` objects must be contiguous in memory, which might be important when we send those out to codecs that expect a contiguous memory slice.

But I agree that we can probably merge those two and make `Buffer` a specialization of `NDBuffer`.
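A tiny numpy illustration of the contiguity point:

```python
# An NDBuffer may wrap a strided view, but a codec expecting a contiguous
# memory slice needs a contiguous 1-D byte view like `flat` below.
import numpy as np

nd = np.arange(16, dtype="i4").reshape(4, 4)[:, ::2]  # strided view
assert not nd.flags["C_CONTIGUOUS"]

flat = np.ascontiguousarray(nd).reshape(-1).view("B")  # copy, then byte view
assert flat.flags["C_CONTIGUOUS"]
```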
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

```diff
@@           Coverage Diff            @@
##            main    #2863     +/-  ##
=========================================
- Coverage   61.25%   61.24%   -0.01%
=========================================
  Files          84       85       +1
  Lines        9949    10017      +68
=========================================
+ Hits         6094     6135      +41
- Misses       3855     3882      +27
```
Is that you speaking as a user of zarr or a maintainer? From the user perspective, this will only be used if you activate it. From the maintainer's perspective, I'd hope that through documentation and clear error messages we can pinpoint the issue for users.
Sorry, I missed that. And sorry for the slow reply - I am not paid to work on zarr-python, and review stuff on a best-efforts basis.

re. where stuff lives, I did not realise that we had codec classes implemented directly in `zarr-python`. Given that codec classes are in `zarr-python`, …
Am I right in thinking that GPU arrays and now decompression are a global option, not a per-array or per-operation configuration? If so it would be good to clarify in gpu.rst that a) this is only a global option, b) whether it can be changed during a Python session, and c) how decompression of GPU buffers is handled for codecs without a GPU implementation (I presume it just falls back to the CPU implementation).

re. @d-v-b's point about skew across zstd implementations, I think this is a very important point. I'm personally on the fence about depending on a closed-source library, but as a minimum I think there should be tests that compare the compression/decompression with the (open source, I presume?) CPU implementation and the new GPU implementation, and make sure the results are the same.
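Something along these lines would cover that check (a sketch; `gpu_zstd` is a hypothetical fixture wrapping this PR's codec, with `zstandard` standing in as the open-source CPU reference):

```python
# Parity test: bytes compressed by either implementation must round-trip
# through the other one unchanged.
import numpy as np
import zstandard  # CPU reference implementation


def test_zstd_gpu_cpu_parity(gpu_zstd):  # gpu_zstd: hypothetical fixture
    data = np.random.default_rng(0).bytes(1 << 20)
    # GPU-compressed frames must decode with the CPU implementation...
    assert zstandard.ZstdDecompressor().decompress(gpu_zstd.compress(data)) == data
    # ...and CPU-compressed frames must decode on the GPU
    assert gpu_zstd.decompress(zstandard.ZstdCompressor().compress(data)) == data
```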
Both? Zarr users and maintainers routinely expose bugs in libraries we depend on, and we routinely forward issues to those libraries, and occasionally contribute patches. If users report a bug or problem in nvcomp, what do we do? Suppose you or akshay leave nvidia, should I email [email protected] when we have problems?
Likewise :/
I'm not sure... I think that https://zarr.readthedocs.io/en/stable/user-guide/config.html is the relevant documentation here:

This isn't adding any new concepts: just a new implementation that uses the existing configuration system (which I think is fair to summarize as a global setting managed through our configuration system).
Sure... It's the same as any other configuration option. We don't explicitly show using any option in a context manager, but we do link to https://github.com/pytroll/donfig, which documents that behavior. I can duplicate that in our documentation if you want.
Yeah, it's following the documented behavior: the codec class associated with a given codec ID is used. We can repeat that in the GPU documentation and cross-reference the configuration docs. So from a generic codec side of things, it'll just use whatever codec implementation is associated with that codec name. From a GPU-specific side of things, we'll want to document which codecs are currently implemented (I imagine the remaining codecs will be a relatively straightforward refactor based on top of what this branch implements).
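For reference, that mechanism looks roughly like this (the key and class paths are illustrative, not necessarily the exact ones in this PR; see the config docs linked above):

```python
# donfig-backed config: set globally, or temporarily via a context manager.
import zarr

# globally route the "zstd" codec id to a (hypothetical) GPU implementation
zarr.config.set({"codecs.zstd": "zarr.codecs.gpu.NvcompZstdCodec"})

# or override temporarily, as donfig documents
with zarr.config.set({"codecs.zstd": "zarr.codecs.zstd.ZstdCodec"}):
    ...  # the CPU implementation is used only inside this block
```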
An issue filed at https://github.com/NVIDIA/CUDALibrarySamples would be best (I personally would ask the reporter to submit that rather than filing on their behalf, but you're probably more patient than me 😄).
I'll add those to this PR.
I believe that all the comments have been addressed now but let me know if I missed anything. I'm unsure why codecov is reporting that the coverage dropped here. Locally I see 100% coverage for …
I confess that I'm worried about retracing the steps that led to a large number of poorly-maintained storage backends in zarr-python 2. I'm not saying that this functionality will be poorly maintained, but it is a risk, and I think we need to make a decision as a project about how we weigh that risk. For this particular feature, what would it look like if we spun it off into a separate repo under the zarr-developers org?
Let's see:

For users, I think it could in principle be mostly seamless. We have entry points for loading a codec class without requiring users to import it. If this were in a separate package, I'd push for the … We'd need some system to ensure that …

For maintainers, it'd be additional overhead from a separate project (CI, packaging, docs), which is minor but non-negligible.

My main question: is zarr-python really willing to commit to this as a stable interface, to the degree that third parties can build on it without worrying about things breaking? I worry that we don't know enough about the codec interface and its interaction with buffers yet to really commit to that (e.g. all your suggestions about buffers earlier in the thread).
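For what it's worth, the registration side already has a public hook; a separate package could do something like this at import time (package and class names are hypothetical):

```python
# An external package registering its codec with zarr-python's registry.
from zarr.registry import register_codec

from zarr_gpu_codecs import NvcompZstdCodec  # hypothetical package


def register() -> None:
    # afterwards, chunks whose metadata names "zstd" can resolve to the
    # GPU implementation through zarr's codec registry / config
    register_codec("zstd", NvcompZstdCodec)
```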
I am personally not convinced that our codec API is as good as it could be, and I think the process of developing new codecs is a really important part of finding out how we can improve it. In particular, I'm worried that we are missing out on really simple performance optimizations, like making the basic encoding and decoding routines take an optional buffer.

But this is a separate question from whether we have the will to change this API any time soon. If we optimistically assume that the codec API might evolve in the future, then it does make things easiest if the functionality in this PR is defined inside zarr-python. So that's a solid point in favor of keeping it in this repo.

As a counter-point, the other day I was doing development on my mac laptop and I made changes that caused failures in the GPU tests. My mac doesn't have an nvidia gpu, so I couldn't run any of the cuda-specific tests, and I had to rely on github actions to debug. That's a pretty bad development experience, so I do worry that this will only get worse if we add more cuda-specific code that needs to be updated as part of codebase-wide refactoring efforts, but can only be developed on a system with an nvidia gpu. The development friction this adds might actually cancel out the friction reduced by keeping the gpu-zstd code close to the codec definitions.

So in the end I'm really not sure where the cuda zstd codec should live. I don't want this effort to be stalled, and I think the functionality here is really important. It would be great to get input on this from other @zarr-developers/python-core-devs
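(For the optimization mentioned above, the signature change itself would be small; a hypothetical sketch:)

```python
# An "optional output buffer" decode variant: callers that already own a
# destination buffer avoid one allocation per chunk.
from typing import Protocol


class BytesBytesDecoder(Protocol):
    def decode(self, data: bytes, out: bytearray | None = None) -> bytes | bytearray:
        """Decode `data`; if `out` is given, write into it and return it."""
```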
Most OSS devs I know don't have access to an NVIDIA GPU on their machine, so having NVIDIA-specific code in here does greatly increase the barrier to development (basically, I'm echoing the above comment). GPU development also requires provision of a non-standard GitHub Actions runner, which I might be right in thinking we're paying for somehow? There's a risk that this goes away in the future.

So I'd be +1 to moving code designed for non-CPU hardware outside of zarr-python, perhaps into zarr-python-gpu or similar? The other advantage is you could move faster and break stuff (and not rely on zarr-python core dev reviews), compared to developing in zarr-python, which has to stay very stable for its wide user base.
I'm not that deep in the existing GPU code, so practically I'm not sure how feasible moving it out would be. But at least for codecs, it seems like it should definitely be possible to implement custom codecs in other packages?
We talked about this a bit at last week's dev call. I think there's general agreement that native support for GPUs is important, and that at a technical level developing GPU-specific bits here or in some other repo is feasible.

For better or worse, we've defined public interfaces for many parts of Zarr-python. Having …, we can split that out to another repo that contains the (NVIDIA) GPU-specific bits (and likewise for MLX and other accelerators that want their own memory types, codecs, store, etc.).

But I do think it's important that the zarr-python project commits to supporting an interface. That involves a higher commitment to stability, and might mean some kind of (informal) policy on reverting commits that happen to break downstream libraries, assuming the issue is noticed and reported quickly. If we pair a strong commitment to stability from zarr-python core with "champions" from any of the downstream projects who commit to reporting issues and helping debug problems at the boundary, then I think splitting this out to a third-party repo and building on the extension points should work.
we talked about this PR (and non-CPU-device-dependent extensions to zarr-python in general) at the recent zarr-python dev meeting. While asymptotically I think we would be happy seeing a stand-alone …
Thanks for the update. I can get the merge conflicts and any remaining issues ironed out this week.
This PR adds a Zstd codec that runs on the GPU using the nvCOMP 4.2 Python APIs.
TODO:

- `docs/user-guide/*.rst`
- `changes/`