
How about two-stage training: should I add GHM loss in both rpn_head and two_stage_head? #15

Open
nemonameless opened this issue Mar 23, 2019 · 1 comment

Comments

@nemonameless

Or just in rpn_head?
Also, should I use the same momentum, mu, and bins as for the one-stage detector? And if I use cascade_mask_rcnn, what about these hyperparameters?

Thanks a lot.

@libuyu (Owner) commented Mar 25, 2019

In our early experiments it works on the classification branch of the RPN, and the hyperparameters need a little finetuning.

However, although the proposals from the RPN are improved, the final performance after the second stage improves only slightly. So we focus on one-stage detectors.

This loss also does not help on the two_stage_head, because two-stage detectors use a sampling strategy that already avoids the imbalanced distribution of examples. A detailed explanation can be found in this paper: https://arxiv.org/abs/1708.02002.
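For reference, below is a minimal sketch of how GHM-C could be applied to the RPN classification logits, following the binning and momentum scheme from the GHM paper (https://arxiv.org/abs/1811.05181). The class name, forward signature, and default values (bins=10, momentum=0.75) are illustrative assumptions, not the exact API of this repo; check the loss implementation in the code for the real interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GHMCSketch(nn.Module):
    """Illustrative GHM-C loss for a sigmoid classification branch (e.g. RPN)."""

    def __init__(self, bins=10, momentum=0.75):
        super().__init__()
        self.bins = bins
        self.momentum = momentum
        # Bin edges over the gradient-norm range [0, 1].
        edges = torch.linspace(0, 1, bins + 1)
        edges[-1] += 1e-6  # include g == 1 in the last bin
        self.register_buffer('edges', edges)
        # Running (EMA) count of examples per bin, used when momentum > 0.
        self.register_buffer('acc_sum', torch.zeros(bins))

    def forward(self, logits, targets, valid_mask):
        # logits, targets: flattened float tensors over anchors (targets are 0/1).
        # valid_mask: bool tensor marking anchors that contribute to the loss.
        # Gradient norm of the sigmoid cross-entropy w.r.t. the logits.
        g = (torch.sigmoid(logits).detach() - targets).abs()

        weights = torch.zeros_like(logits)
        num_valid = valid_mask.float().sum().clamp(min=1.0)
        num_nonempty_bins = 0
        for i in range(self.bins):
            in_bin = (g >= self.edges[i]) & (g < self.edges[i + 1]) & valid_mask
            count = in_bin.float().sum()
            if count < 1:
                continue
            num_nonempty_bins += 1
            if self.momentum > 0:
                # EMA of the per-bin population; examples in crowded bins get small weights.
                self.acc_sum[i] = (self.momentum * self.acc_sum[i]
                                   + (1 - self.momentum) * count)
                weights[in_bin] = num_valid / self.acc_sum[i]
            else:
                weights[in_bin] = num_valid / count
        if num_nonempty_bins > 0:
            weights = weights / num_nonempty_bins

        # Weighted BCE, normalized by the number of valid anchors.
        loss = F.binary_cross_entropy_with_logits(
            logits, targets, weight=weights, reduction='sum')
        return loss / num_valid
```

With this kind of weighting, easy negatives (g near 0) fall into densely populated bins and are down-weighted automatically, which is roughly the effect the sampling strategy in a two-stage head already provides; that is consistent with the limited gain observed there.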
