Review thread on `def allocate_stash_buffers(self, stash_buffer_size_factor=1.10):`

Comment: Curious how stash_buffer_size_factor is going to be determined? Is 1.10 reasonable enough?

Reply: Whether 1.1 is enough depends on the distribution of tokens in each layer. One can use the GPU memory remaining after fitting the model and activations as the stash buffer size, or determine it through iterative trials, similar to deciding the best sharding/microbatch size that fits on the GPU under load imbalance.
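A minimal sketch of the first suggestion from the reply, assuming the free GPU memory is queried via PyTorch's `torch.cuda.mem_get_info` after the model and activations are resident; the helper name and the safety margin are hypothetical, not part of this PR:

```python
import torch

def suggest_stash_buffer_bytes(safety_margin: float = 0.9) -> int:
    """Hypothetical helper: size the stash buffer from the GPU memory left
    over after the model and activations have been allocated.

    safety_margin reserves headroom for allocator fragmentation; 0.9 is an
    illustrative guess, not a tuned constant.
    """
    free_bytes, _total_bytes = torch.cuda.mem_get_info()  # (free, total) bytes on current device
    return int(free_bytes * safety_margin)
```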
Get rid of legacy names like "packed offloading"; move the main code body of paged stash to transformer/moe/
Remove unused Triton kernel for dropping tokens in case overflow happens
Resolve accidental change in fused_a2a.py
…SIZE_FACTOR is positive. 2. Fix int32 overflow in some Triton kernels when the token count is large. 3. Fix a problem where a restored activation might get deallocated prematurely.
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
… to use_transformer_engine_op_fuser
Enforce Router padding for paged stashing
Initial commit to enable paged stashing for TE fused op
Enable stashing for 1D shape, colwise_scale_inv tensors
Use moe_paged_stash to enable/disable stashing with fused op
Use use_transformer_engine_op_fuser to enable/disable fused op
Dynamic-shape no-stashing fallback for non-CG
Dynamic-shape no-stashing fallback + full CG
Eliminate sync in MTP loss calculation
Enable 1f1b overlap
Add overflow check back temporarily before changes for PagedStashRunner are ready
nanz/megatron-lm!1 - Paged stashing fallback
This reverts commit 7c7c9e1.
This reverts commit d71009f.
This reverts commit be3eec1.
Main contributors (Equal Contribution, sorted alphabetically): Nan Zheng (@nanz-nv), Vasudevan Rengasamy (@vasunvidia)
Other contributors (sorted alphabetically): Dennis Liu (@Victarry), Hongbin Liu (@lhb8125), Qi Zhang (@QiZhangNV), Robin Zhang (@buptzyb), Tong Liu (@Autumn1998), Zijie Yan (@yanring)
Background
In token-dropless MoE training, the number of tokens received by each expert can vary, resulting in dynamically shaped tensors. PyTorch supports dynamic shapes naturally thanks to its eager-mode nature: a tensor is created lazily once its shape is known at run time. Although this works well in eager mode, dynamically shaped tensors pose a challenge for CUDA graphs, because the size of a tensor cannot be adjusted at replay time without host intervention. To remove the sync and enable CUDA graphs, one solution is to oversize the buffers in the expert part. This, however, causes significantly higher memory consumption than the eager-mode baseline, in the form of memory fragmentation.
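To make the constraint concrete, here is a minimal sketch (not the Megatron-LM code) contrasting eager-mode dynamic allocation with the oversized static buffers a CUDA graph requires; `capacity` is a hypothetical worst-case token bound:

```python
import torch

hidden = 4096
capacity = 16384  # hypothetical worst-case tokens per expert

# Eager mode: the output tensor is created lazily once num_tokens is known,
# so its shape may differ every iteration.
def expert_eager(tokens: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    return tokens @ w  # [num_tokens, hidden], num_tokens varies per step

# CUDA-graph friendly: shapes are fixed at capture time, so compute runs on
# static buffers oversized to `capacity`; only the first num_tokens rows are
# meaningful at replay.
static_in = torch.zeros(capacity, hidden, device="cuda")
static_out = torch.empty(capacity, hidden, device="cuda")

def expert_graphed(w: torch.Tensor) -> None:
    torch.matmul(static_in, w, out=static_out)
```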
Idea overview
To address this problem, paged stashing decouples the oversized buffers needed for compute from the properly sized buffer needed to store activations for the backward pass. It achieves this by adding one level of indirection: stashing and restoring. The stash operation copies the activation from the oversized static buffer into a pre-allocated stashing buffer once the forward pass of that module has finished; the restore operation performs the reverse copy during the backward pass.
The key to saving memory is that the stash operation packs the variable-size activations into a contiguous stashing buffer, reducing memory fragmentation. For simple schedules where activation allocation and deallocation follow a first-in-last-out pattern, stash and restore can be implemented as a simple bump allocator. To accommodate more complicated schedules, e.g. pipeline parallelism, paging is used instead, hence the name paged stashing.
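A minimal sketch of the simple first-in-last-out case, assuming one flat pre-allocated stash buffer; the class and method names are illustrative, not the actual API in this PR:

```python
import torch

class BumpStash:
    """Illustrative FILO stash: pack variable-size activations contiguously
    into one pre-allocated buffer and pop them back in reverse order."""

    def __init__(self, capacity_elems: int, dtype=torch.bfloat16, device="cuda"):
        self.buffer = torch.empty(capacity_elems, dtype=dtype, device=device)
        self.top = 0     # bump pointer into the flat buffer
        self.stack = []  # (numel, shape) per stashed activation

    def stash(self, activation: torch.Tensor) -> None:
        n = activation.numel()
        assert self.top + n <= self.buffer.numel(), "stash buffer overflow"
        self.buffer[self.top:self.top + n].copy_(activation.reshape(-1))
        self.top += n
        self.stack.append((n, activation.shape))
        # The oversized compute buffer can now be reused.

    def restore(self) -> torch.Tensor:
        n, shape = self.stack.pop()
        self.top -= n
        return self.buffer[self.top:self.top + n].reshape(shape).clone()
```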
Page management
To accommodate complex scheduling, such as that needed in pipeline parallelism, activations are partitioned into pages. Pages are managed by lightweight GPU memory-management kernels that can be fused with the stash/restore kernels and that allocate and deallocate pages for stashing. Each kernel maintains a freelist, implemented as a circular buffer, and each freelist keeps track of one type of page.
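A host-side sketch of the freelist idea, assuming one freelist per page type and a circular buffer of page indices; in the actual design this logic runs inside GPU kernels fused with stash/restore:

```python
import torch

class PageFreelist:
    """Illustrative circular-buffer freelist for one page type."""

    def __init__(self, num_pages: int, device: str = "cuda"):
        # Initially every page is free: the ring holds all page indices.
        self.ring = torch.arange(num_pages, dtype=torch.int32, device=device)
        self.head = 0           # next free page to hand out
        self.tail = num_pages   # next slot for a returned page
        self.num_pages = num_pages

    def allocate(self) -> int:
        assert self.tail > self.head, "out of pages"
        page = int(self.ring[self.head % self.num_pages])
        self.head += 1
        return page

    def free(self, page: int) -> None:
        assert self.tail - self.head < self.num_pages, "double free"
        self.ring[self.tail % self.num_pages] = page
        self.tail += 1
```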
CPU offloading
Paged stashing naturally supports offloading: when the stashing buffer is a pinned CPU tensor, activations are offloaded to host memory during the forward pass and reloaded to the GPU during the backward pass.
Furthermore, the paging management system can easily be extended to accommodate partial or on-demand offloading. This feature is currently a work in progress.
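A minimal sketch of the offload path, assuming a pinned host-side stash buffer and a dedicated copy stream; the buffer size, function names, and flat-offset bookkeeping are illustrative:

```python
import math
import torch

copy_stream = torch.cuda.Stream()
# Pinned host memory enables truly asynchronous device<->host copies.
stash_cpu = torch.empty(1 << 26, dtype=torch.bfloat16, pin_memory=True)  # size is illustrative

def offload(activation: torch.Tensor, offset: int) -> None:
    """Forward path: copy the activation to the pinned host stash."""
    n = activation.numel()
    with torch.cuda.stream(copy_stream):
        stash_cpu[offset:offset + n].copy_(activation.reshape(-1), non_blocking=True)

def reload(shape: torch.Size, offset: int) -> torch.Tensor:
    """Backward path: copy the stashed activation back to the GPU."""
    n = math.prod(shape)
    out = torch.empty(n, dtype=stash_cpu.dtype, device="cuda")
    with torch.cuda.stream(copy_stream):
        out.copy_(stash_cpu[offset:offset + n], non_blocking=True)
    return out.reshape(shape)
```

A real implementation must additionally synchronize the copy stream with the compute stream (e.g. via CUDA events) before the source buffer is reused in forward and before the reloaded activation is consumed in backward.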
Scheduling
Stash and restore operations can be overlapped with compute by inserting two autograd functions around the expert compute layer: a pre-scheduler before it and a post-scheduler after it, which schedule the stash and restore operations. The roles of these autograd functions are enumerated below; a sketch follows the list.
Wait for the restore operation for the current layer to complete.
Additionally, in the case of pipeline parallelism, record the pipeline schedule during the first iteration.
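A minimal sketch of this indirection, assuming a hypothetical `runner` object that owns the stash/restore state; the hook names are illustrative. Because autograd reverses execution order, the post-scheduler's forward runs after the expert compute (so it can stash) and its backward runs before the expert's backward (so it can wait for the restore):

```python
import torch

class PreScheduler(torch.autograd.Function):
    """Pass-through placed before the expert compute. Its backward runs
    after the expert's backward, a natural point to release stash pages."""

    @staticmethod
    def forward(ctx, x, runner):
        ctx.runner = runner
        return x

    @staticmethod
    def backward(ctx, grad):
        ctx.runner.on_layer_backward_done()  # hypothetical hook
        return grad, None

class PostScheduler(torch.autograd.Function):
    """Pass-through placed after the expert compute. Its forward stashes the
    just-produced activation; its backward waits for that layer's restore."""

    @staticmethod
    def forward(ctx, x, runner):
        runner.stash_current_layer()  # hypothetical hook
        ctx.runner = runner
        return x

    @staticmethod
    def backward(ctx, grad):
        ctx.runner.wait_restore_current_layer()  # hypothetical hook
        return grad, None
```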