@faran928 (Contributor)

Summary: TorchRec uneven sharding changes

Differential Revision: D79603009

@meta-codesync (Contributor)

meta-codesync bot commented Nov 10, 2025

@faran928 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D79603009.

… pool (meta-pytorch#3533)

Summary:

A few changes in this diff:
1. Support proportionally sharding the tensor pool based on per-rank memory capacity (see sketch 1 below).
2. Use block_bucketize_sparse_features_inference to return a bucket_mapping that can be used during request batching in inference with a custom Sigrid predictor engine (sketch 2).
3. Wrap some operations in fx wrappers to keep them compatible with model split boundaries for DLRM serving, where embeddings are sharded and split across different PyTorch modules (sketch 3).
4. Expose a set_device() API on some modules so that some shards can be placed on CPU while others go to CUDA (sketch 4).
5. Move _get_unbucketize_tensor_via_length_alignment to the common util files.

Differential Revision: D79603009
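
Sketch 1 — proportional sharding by memory capacity. A minimal sketch of the row-splitting arithmetic behind point 1, assuming a hypothetical helper name and that free capacity is given per rank; this is not the actual TorchRec planner code:

```python
from typing import List

def proportional_shard_sizes(total_rows: int, memory_per_rank: List[int]) -> List[int]:
    """Split `total_rows` across ranks in proportion to each rank's memory capacity."""
    total_memory = sum(memory_per_rank)
    sizes = [total_rows * m // total_memory for m in memory_per_rank]
    # Give any rounding remainder to the rank with the most capacity.
    biggest = max(range(len(sizes)), key=lambda r: memory_per_rank[r])
    sizes[biggest] += total_rows - sum(sizes)
    return sizes

# e.g. a 1000-row tensor pool over ranks with 16, 16, and 32 GB free:
assert proportional_shard_sizes(1000, [16, 16, 32]) == [250, 250, 500]
```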
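
Sketch 2 — what bucket_mapping carries. A pure-Python stand-in for the block bucketization performed by the fbgemm_gpu op named in point 2; the helper name, the uniform block size, and the clamped last bucket are assumptions for illustration, not the op's actual semantics:

```python
import torch

def block_bucketize_with_mapping(indices, block_size, my_size):
    """Assign each sparse id to the rank owning its contiguous block of ids."""
    bucket_mapping = torch.div(indices, block_size, rounding_mode="floor")
    bucket_mapping = bucket_mapping.clamp(max=my_size - 1)  # last rank takes the tail
    per_rank = [indices[bucket_mapping == r] - r * block_size for r in range(my_size)]
    return per_rank, bucket_mapping

indices = torch.tensor([1, 7, 4, 9])
per_rank, bucket_mapping = block_bucketize_with_mapping(indices, block_size=4, my_size=2)
# bucket_mapping == tensor([0, 1, 1, 1]): the serving layer can keep this to
# re-merge per-shard lookup results back into the original request order,
# which is what request batching in the predictor engine consumes.
```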
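
Sketch 3 — the fx-wrapper idea. torch.fx.wrap keeps a helper as a single call_function node during symbolic tracing, so a traced model can be split at module boundaries without tracing into the helper; the helper below is an illustrative name, not the actual wrapped op:

```python
import torch
import torch.fx

@torch.fx.wrap  # traced as one opaque call node, a clean split point
def _bucketize_for_split(indices: torch.Tensor, block_size: int) -> torch.Tensor:
    return torch.div(indices, block_size, rounding_mode="floor")

class Lookup(torch.nn.Module):
    def forward(self, indices: torch.Tensor) -> torch.Tensor:
        return _bucketize_for_split(indices, 4)

traced = torch.fx.symbolic_trace(Lookup())
print(traced.graph)  # shows one call to _bucketize_for_split, not its internals
```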
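
Sketch 4 — a set_device() surface. A minimal sketch of what exposing set_device() on a sharded module could look like, so individual shards can live on CPU while others live on CUDA; the class and attribute names are hypothetical:

```python
import torch

class ShardedTensorPoolShard(torch.nn.Module):
    def __init__(self, rows: int, dim: int) -> None:
        super().__init__()
        self._pool = torch.zeros(rows, dim)

    def set_device(self, device: str) -> None:
        """Re-home this shard's storage, e.g. to 'cpu' or 'cuda:0'."""
        self._pool = self._pool.to(torch.device(device))

shard = ShardedTensorPoolShard(rows=128, dim=16)
shard.set_device("cpu")  # cold shard stays in host memory
if torch.cuda.is_available():
    shard.set_device("cuda:0")  # hot shard promoted to device memory
```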