
Correct way of defining the voxel size for custom dataset #1557

Closed
AJAY31797 opened this issue Jan 26, 2024 · 6 comments

@AJAY31797

Hi,

I am trying to train a custom dataset using PointPillar. I have created the data according to what is shown on the custom dataset page. However, during training, I get the following error:
```
Exception has occurred: RuntimeError
The size of tensor a (995400) must match the size of tensor b (1670400) at non-singleton dimension 1
  File "/home/aagr656/OpenPCDet/pcdet/utils/loss_utils.py", line 60, in forward
    pt = target * (1.0 - pred_sigmoid) + (1.0 - target) * pred_sigmoid
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 128, in get_cls_layer_loss
    cls_loss_src = self.cls_loss_func(cls_preds, one_hot_targets, weights=cls_weights)  # [N, M]
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 217, in get_loss
    cls_loss, tb_dict = self.get_cls_layer_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 27, in get_training_loss
    loss_rpn, tb_dict = self.dense_head.get_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 14, in forward
    loss, tb_dict, disp_dict = self.get_training_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/__init__.py", line 44, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 56, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 180, in train_model
    accumulated_iter = train_one_epoch(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 176, in main
    train_model(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 231, in <module>
    main()
RuntimeError: The size of tensor a (995400) must match the size of tensor b (1670400) at non-singleton dimension 1
```

I have read all the existing issues related to this, and one thing I learned is that the error can be resolved by adjusting the voxel size and point cloud range. The point cloud range, in my understanding, depends on the input data, so we can't really do much with it. So what remains is adjusting the voxel size.

Here are the details in the config file I am using:
```yaml
DATA_CONFIG:
    _BASE_CONFIG_: tools/cfgs/dataset_configs/custom_dataset.yaml
    POINT_CLOUD_RANGE: [-172.8, -172.8, -2, 172.8, 172.8, 38]
    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True

        - NAME: shuffle_points
          SHUFFLE_ENABLED: {
            'train': True,
            'test': False
          }

        - NAME: transform_points_to_voxels
          VOXEL_SIZE: [1.44, 1.44, 40]
          MAX_POINTS_PER_VOXEL: 960
          MAX_NUMBER_OF_VOXELS: {
            'train': 32000,
            'test': 40000
          }
```

The voxel size satisfies both requirements I could identify from the existing issues: the point cloud range along the z-axis is 40, and the point cloud range along the x/y-axis divided by the voxel size is a multiple of 16. Still, I am getting the error. I have tried several other values of the point cloud range and voxel size that satisfy the two conditions, but the error persists.
I want to know: what is the correct way of defining the voxel size? And is there anything else I can check to get rid of this error?
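For reference, a minimal sanity check of the divisibility condition for these values (assuming the BEV grid size is simply (max - min) / voxel_size per axis; this is just an illustrative script, not an OpenPCDet utility):

```python
# Sanity check: is (range / voxel_size) a multiple of 16 along x and y for this config?
# round() guards against floating-point noise in the division.
point_cloud_range = [-172.8, -172.8, -2, 172.8, 172.8, 38]
voxel_size = [1.44, 1.44, 40]

for axis, name in enumerate(["x", "y"]):
    extent = point_cloud_range[axis + 3] - point_cloud_range[axis]
    cells = round(extent / voxel_size[axis])
    print(f"{name}: {extent:.1f} / {voxel_size[axis]} = {cells} cells, multiple of 16: {cells % 16 == 0}")
```

Both x and y give 240 cells, which is a multiple of 16, yet the error remains.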

Thanks,
Ajay


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Feb 26, 2024
@Tony-syr

Tony-syr commented Mar 5, 2024

There are several possible reasons for this error. Check the structure of the model you are using and make sure its output size is compatible with the size of your labels.
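One way to see the mismatch directly (a debugging sketch, not something that exists in the repo) is to print the tensor shapes just above the failing line in pcdet/utils/loss_utils.py from the traceback:

```python
# Debugging sketch: add these prints immediately before the failing line
#   pt = target * (1.0 - pred_sigmoid) + (1.0 - target) * pred_sigmoid
# in pcdet/utils/loss_utils.py to see which dimension disagrees.
print('pred_sigmoid:', pred_sigmoid.shape)  # classification predictions from the dense head
print('target:      ', target.shape)        # one-hot anchor targets
# If the second dimension differs (here 995400 vs 1670400), the anchor grid built
# from POINT_CLOUD_RANGE / VOXEL_SIZE does not match the feature-map size the
# backbone / dense head actually produced.
```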

@github-actions github-actions bot removed the stale label Mar 6, 2024

github-actions bot commented Apr 5, 2024

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Apr 5, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@Petros626

Petros626 commented Oct 8, 2024

Rule: the point cloud range along the x/y-axis divided by the voxel size must be a multiple of 16:
range_x = x_max - x_min; (range_x / voxel_size_x) mod 16 = 0
Note that this rule also applies to pillar-based detectors such as PointPillar and CenterPoint-Pillar.

Example with x = [0, 70.4], y = [-40, 40] and VOXEL_SIZE: [0.05, 0.05, 0.1] # [length, width, height] in meters, i.e. 0.05 m = 5 cm cells in x/y:

x-axis: (70.4 - 0) / 0.05 = 1408, and 1408 mod 16 = 0
y-axis: (40 - (-40)) / 0.05 = 1600, and 1600 mod 16 = 0

#253 (comment)
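A compact way to apply this rule programmatically (a sketch; the helper name is made up, not an OpenPCDet function):

```python
def grid_is_multiple_of_16(pc_range, voxel_size):
    """Check the rule above: (max - min) / voxel_size along x and y must be a multiple of 16."""
    for axis in (0, 1):  # x, y only; z is handled separately (e.g. one pillar layer for PointPillar)
        cells = round((pc_range[axis + 3] - pc_range[axis]) / voxel_size[axis])
        if cells % 16 != 0:
            return False
    return True

# KITTI-style example from above: x in [0, 70.4], y in [-40, 40], 0.05 m cells in x/y
# (z bounds are placeholders; the rule only checks x/y)
print(grid_is_multiple_of_16([0, -40, -3, 70.4, 40, 1], [0.05, 0.05, 0.1]))  # True
```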

@Hberto

Hberto commented Feb 5, 2025

@AJAY31797 I am facing the same issue even after reading all the other issues. Any new info/experience on this?
