Correct way of defining the voxel size for custom dataset #1557
There are several possible reasons for this error. Check the structure of the model you are using and make sure its output size is compatible with the size of your labels.
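To make that compatibility concrete, here is a small sketch (not OpenPCDet code; the helper name and the overall downsampling factor of 2 are assumptions based on the stock PointPillars KITTI setup) of how the head's BEV output grid follows from the point cloud range and voxel size:

# Hypothetical helper: the dense head's BEV grid follows from the point
# cloud range, the voxel size, and the backbone's overall downsampling.
def bev_grid_size(pc_range, voxel_size, downsample=2):
    nx = round((pc_range[3] - pc_range[0]) / voxel_size[0])
    ny = round((pc_range[4] - pc_range[1]) / voxel_size[1])
    # Anchors and labels are laid out on this downsampled grid, so predictions
    # and targets only match if both are generated from the same numbers.
    return nx // downsample, ny // downsample

print(bev_grid_size([0, -40, -3, 70.4, 40, 1], [0.1, 0.1, 0.2]))  # (352, 400)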
This issue was closed because it has been inactive for 14 days since being marked as stale.
Rule: the point cloud range along the x and y axes divided by the voxel size must be a multiple of 16. For the default KITTI setting:

x-axis: (70.4 - 0) / 0.1 = 704, and 704 mod 16 = 0
y-axis: (40 - (-40)) / 0.1 = 800, and 800 mod 16 = 0

VOXEL_SIZE is specified as [length, width, height] in meters (so 0.05 means 5 cm):

VOXEL_SIZE: [0.05, 0.05, 0.1]  # in meters
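A minimal sketch of this check in Python (the helper name and the floating-point tolerance are my own, not part of OpenPCDet):

# Sketch: verify that (range / voxel_size) along x and y is an integer
# multiple of 16. Helper name and tolerance are illustrative.
def check_voxel_config(pc_range, voxel_size, multiple=16):
    # pc_range: [x_min, y_min, z_min, x_max, y_max, z_max]
    for axis, low, high, v in [("x", pc_range[0], pc_range[3], voxel_size[0]),
                               ("y", pc_range[1], pc_range[4], voxel_size[1])]:
        cells = (high - low) / v
        n = round(cells)
        ok = abs(cells - n) < 1e-6 and n % multiple == 0
        print(f"{axis}: ({high} - {low}) / {v} = {n}, {n} mod {multiple} = {n % multiple}"
              f" -> {'OK' if ok else 'violates the rule'}")

# KITTI defaults from the comment above:
check_voxel_config([0, -40, -3, 70.4, 40, 1], [0.1, 0.1, 0.2])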
@AJAY31797 I am facing the same issue even though I have read all the other issues. Any new information or experience with this?
Hi,
I am trying to train a custom dataset using PointPillar. I have created the data according to what is shown on the custom dataset page. However, on training, I am getting the following error:
Exception has occurred: RuntimeError
The size of tensor a (995400) must match the size of tensor b (1670400) at non-singleton dimension 1
  File "/home/aagr656/OpenPCDet/pcdet/utils/loss_utils.py", line 60, in forward
    pt = target * (1.0 - pred_sigmoid) + (1.0 - target) * pred_sigmoid
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 128, in get_cls_layer_loss
    cls_loss_src = self.cls_loss_func(cls_preds, one_hot_targets, weights=cls_weights)  # [N, M]
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 217, in get_loss
    cls_loss, tb_dict = self.get_cls_layer_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 27, in get_training_loss
    loss_rpn, tb_dict = self.dense_head.get_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 14, in forward
    loss, tb_dict, disp_dict = self.get_training_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/__init__.py", line 44, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 56, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 180, in train_model
    accumulated_iter = train_one_epoch(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 176, in main
    train_model(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 231, in <module>
    main()
RuntimeError: The size of tensor a (995400) must match the size of tensor b (1670400) at non-singleton dimension 1
I have read all the existing issues related to it and one thing I came to know is that the issue can be solved by adjusting the voxel size and point cloud range. The point cloud range, in my understanding, depends upon the input data, so we can't really do much with it. So what remains is adjusting the voxel size.
Here are the details in the config file I am using:
DATA_CONFIG:
    _BASE_CONFIG_: tools/cfgs/dataset_configs/custom_dataset.yaml
    POINT_CLOUD_RANGE: [-172.8, -172.8, -2, 172.8, 172.8, 38]

    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True
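(For reference, the snippet above cuts off before the voxelization step. In OpenPCDet's stock PointPillars config the corresponding DATA_PROCESSOR entry looks roughly like the following; the values shown are the KITTI defaults, not the ones from this issue:)

        - NAME: transform_points_to_voxels
          VOXEL_SIZE: [0.16, 0.16, 4]   # KITTI defaults, for illustration only
          MAX_POINTS_PER_VOXEL: 32
          MAX_NUMBER_OF_VOXELS: {
            'train': 16000,
            'test': 40000
          }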
The voxel size satisfies both of the requirements I could identify from the existing issues: the point cloud extent along the z-axis is 40, and the point cloud range along the X-Y axes divided by the voxel size is a multiple of 16. Still, I am getting the error. I have tried several other values of the point cloud range and voxel size that satisfy the two conditions, but the error persists.
I want to know: what is the correct way of defining the voxel size? And is there anything else I can check to get rid of this error?
Thanks,
Ajay
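As a quick sanity check on the range from this issue, the sketch below tests a few candidate voxel sizes against the multiple-of-16 rule quoted earlier in the thread. The candidates are illustrative guesses, not the values from the original config:

# Illustrative check of candidate voxel sizes for the range in this issue.
pc_range = [-172.8, -172.8, -2, 172.8, 172.8, 38]
extent_x = pc_range[3] - pc_range[0]   # 345.6 m (the y extent is the same)

for v in (0.1, 0.15, 0.16, 0.2, 0.25):
    n = round(extent_x / v)
    ok = abs(extent_x / v - n) < 1e-6 and n % 16 == 0
    print(f"voxel {v}: {extent_x} / {v} = {n} -> {'OK' if ok else 'fails'}")

# For PointPillars the z voxel size usually spans the whole z extent
# (here 38 - (-2) = 40), so each x-y cell is a single pillar.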