Description
I replaced the `img_backbone` (Swin Transformer) in BEVFusion with Vim, like this:

```python
from .models_mamba import vim_small_patch16_stride8_224_bimambav2_final_pool_mean_abs_pos_embed_with_midclstok_div2 as imb

self.imb = imb(pretrained=False)

def extract_img_feat(
    self,
    x,
    points,
    lidar2image,
    camera_intrinsics,
    camera2lidar,
    img_aug_matrix,
    lidar_aug_matrix,
    img_metas,
) -> torch.Tensor:
    B, N, C, H, W = x.size()
    x = x.view(B * N, C, H, W).contiguous()
    x = self.imb(x)
```
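One detail worth noting about the snippet above: `x.view(B * N, C, H, W)` makes the backbone's effective batch size `B * N`, so with surround-view input every camera of every sample is pushed through Vim at once. A minimal sketch of the size arithmetic (the shapes below are assumptions based on typical BEVFusion nuScenes settings, not values from my config):

```python
# Rough activation-size arithmetic for the flattened camera batch.
# B, N, H, W below are assumptions (typical BEVFusion nuScenes
# settings); the real values come from the dataloader config.
def flat_batch_mib(B, N, C, H, W, bytes_per_el=4):
    """Memory of one (B*N, C, H, W) float32 tensor, in MiB."""
    return B * N * C * H * W * bytes_per_el / 2**20

# 6 surround cameras, per-GPU batch of 4, 256x704 RGB input:
print(flat_batch_mib(4, 6, 3, 256, 704))  # -> 49.5 (input tensor alone)
```

The input tensor itself is small; the point is that every intermediate activation inside the backbone is scaled by the same `B * N` factor, which adds up quickly on a 24 GiB card.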
Then I kept getting `torch.cuda.OutOfMemoryError`:
(The same error was raised on all six GPUs; the interleaved duplicates are collapsed below.)

```
    return fwd(*args, **kwargs)
  File "/HOME/scw6d49/.conda/envs/mit/lib/python3.8/site-packages/mamba_ssm/ops/selective_scan_interface.py", line 213, in forward
    out, scan_intermediates, out_z = selective_scan_cuda.fwd(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 398.00 MiB (GPU 3; 23.65 GiB total capacity; 21.26 GiB already allocated; 265.31 MiB free; 21.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 10872) of binary: /HOME/scw6d49/.conda/envs/mit/bin/python
```
Is this a problem with the model itself, or with the way I am using it?
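For reference, the allocator tuning the error message itself suggests can be tried like this (a minimal sketch; the value `128` is an assumption, and this only mitigates fragmentation when reserved memory far exceeds allocated memory, it does not shrink the model's actual footprint):

```python
import os

# Must be set before torch initializes CUDA (i.e. before the first
# CUDA tensor is created), otherwise the setting is ignored.
# max_split_size_mb:128 is an untested assumption, not a tuned value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, it can be exported in the shell that launches `torchrun`.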