
run on our dataset, but not convergence #12

Open
liangxuejingjing opened this issue Feb 26, 2024 · 5 comments

@liangxuejingjing

@MengyuWang826 hi~

When I ran your code on our dataset (floorplans), we got white outputs... I don't know the reason.
[Screenshot attached: QQ截图20240226093514]

The total loss also kept rising; here is my training log:
20240221_075018.log.json

How can we resolve this problem? Should we retrain your model on our dataset, or generate a new coarse-mask dataset that covers only the edges?
Please reply to me ASAP.

@yanrihong

Can I check how you load your custom dataset?

@liangxuejingjing

> Can I check how you load your custom dataset?

I generated the coarse masks following the authors' released approach and then collected them into a JSON file.

```python
_base_ = [
    './segrefiner_lr.py'
]

object_size = 256
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=False, with_label=False, with_mask=True),
    dict(type='LoadPatchData', object_size=object_size, patch_size=object_size),
    dict(type='Resize', img_scale=(object_size, object_size), keep_ratio=False),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['object_img', 'object_gt_masks', 'object_coarse_masks',
                               'patch_img', 'patch_gt_masks', 'patch_coarse_masks'])]

dataset_type = 'HRCollectionDataset'
data_root = '/mmdetection/SegRefiner-main/data/'
train_dataloader = dict(
    samples_per_gpu=6,
    workers_per_gpu=1)
data = dict(
    _delete_=True,
    train=dict(
        type=dataset_type,
        pipeline=train_pipeline,
        data_root=data_root,
        collection_datasets=['vanyi'],
        collection_json=data_root + 'collection_vanyi.json'),
    train_dataloader=train_dataloader,
    val=dict(),
    test=dict())
```
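
For context on the "generated the coarse masks" step above, here is a minimal, hypothetical sketch of one common way to synthesize coarse masks by randomly dilating or eroding the ground-truth masks. It only illustrates the general idea and is not the authors' released generation script; `perturb_mask` and the demo mask are made up for this example.

```python
import cv2
import numpy as np

def perturb_mask(gt_mask: np.ndarray, max_iters: int = 5) -> np.ndarray:
    """Randomly dilate or erode a binary GT mask (values 0/1) to imitate a coarse prediction."""
    kernel = np.ones((3, 3), np.uint8)
    iters = np.random.randint(1, max_iters + 1)
    if np.random.rand() < 0.5:
        return cv2.dilate(gt_mask, kernel, iterations=iters)
    return cv2.erode(gt_mask, kernel, iterations=iters)

# Tiny demo on a synthetic square mask.
gt = np.zeros((64, 64), np.uint8)
gt[16:48, 16:48] = 1
coarse = perturb_mask(gt)
print('GT area:', int(gt.sum()), 'coarse area:', int(coarse.sum()))
```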

@yanrihong

Great! I just want to test the performance of SegRefiner. Since I used mmsegmentation before, data loading was a really big problem for me, so thank you for sharing how you did it on your custom dataset. Now I'm going to check the model and see if it runs into the same problem you had. Thanks!

@yanrihong

I ran into the same problem: by the end, the IoU drops straight to 0.

@wfantastic

Hello, I encountered the same problem during training: the IoU became 0. Have you found a good way to solve this?
