[BUG] Balanced KMeans triggers limiting_resource_adaptor.hpp:152: Exceeded memory limit errors for large datasets #682

Open

csadorf opened this issue Feb 12, 2025 · 0 comments
Labels: bug (Something isn't working)

csadorf commented Feb 12, 2025

Describe the bug

The Balanced KMeans implementation uses a RAFT get_workspace() resource that is sized for arrays on the order of minibatch_size. This resource is passed into the build_fine_clusters() function as device_memory, which then uses it to allocate mc_trainset_buf [mesocluster_size_max x dim]. That buffer is on the order of dataset size / n_clusters, i.e., orders of magnitude larger than the ~1 GB minibatch. This triggers a limiting_resource_adaptor.hpp:152: Exceeded memory limit exception, because the default allocation limit on the workspace resource is set to total device memory / 4.

To avoid this problem for large datasets (on the order of device memory size), the user must increase the number of (meso)clusters. While increasing the number of clusters commensurate with the dataset size is generally advisable, I believe that we should not artificially limit the allocation size when the user explicitly uses managed memory. Even if we do not remove the resource limiter on the workspace resource in general, we should at least remove it specifically for the mc_trainset_buf allocation, since there is no expectation that it is on the order of the minibatch size that is otherwise used to estimate the expected workspace resource needs. The snippet below illustrates the limiter behavior in isolation.
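For illustration only (this is not the reproducer from this issue, and the sizes are arbitrary stand-ins): the sketch below uses RMM's Python API to show that a limiting resource adaptor rejects a single allocation above its configured limit even when the upstream is a managed memory resource that could have backed it. This is the same mechanism that rejects the mc_trainset_buf allocation under the total-memory/4 workspace limit.

```python
# Minimal sketch of the limiter behavior; sizes are made up for illustration.
import rmm

limit_bytes = 2 * 1024**3  # stand-in for the "total device memory / 4" workspace limit
upstream = rmm.mr.ManagedMemoryResource()
limited = rmm.mr.LimitingResourceAdaptor(upstream, limit_bytes)
rmm.mr.set_current_device_resource(limited)

try:
    # A single allocation larger than the limit is rejected outright, even
    # though managed memory could have served it by spilling to host.
    rmm.DeviceBuffer(size=3 * 1024**3)  # stand-in for mc_trainset_buf
except MemoryError as err:
    # RMM typically surfaces the C++ out_of_memory thrown by the limiting
    # adaptor as a Python MemoryError.
    print("rejected by limiting adaptor:", err)
```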

Steps/Code to reproduce bug

The issue can be reproduced with the test script posted in this issue.

Expected behavior

I would expect not to hit a device resource limiter before device memory is actually close to exhausted, and I would expect no OOM or resource-limiter errors at all when using a managed memory allocator (configured e.g. as sketched below).
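For reference, this is roughly how managed memory is opted into from Python via RMM (a sketch, not the test script from this issue). As described above, the workspace limiter inside RAFT still applies on top of whatever resource is configured here, which is why the error persists even in this configuration.

```python
# Sketch: route all RMM allocations through managed (unified) memory.
# Note: per the report, RAFT's workspace resource still enforces its own
# total-memory/4 allocation limit on top of whatever is configured here.
import rmm

managed = rmm.mr.ManagedMemoryResource()
pool = rmm.mr.PoolMemoryResource(managed)  # optional pool to cut allocation overhead
rmm.mr.set_current_device_resource(pool)
```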

Environment details (please complete the following information):

  • Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)]
  • Method of RAFT install: [conda, Docker, or from source]
    • If method of install is [Docker], provide docker pull & docker run commands used

Additional context
Add any other context about the problem here.
