
strange normalization in Rand error #3

Open
thouis opened this issue Aug 16, 2016 · 6 comments

thouis commented Aug 16, 2016

At these lines, the two terms in each of the two sums are at very different scales: the first is O(number of pixels squared), while the second is in the range 0–1. Perhaps there are missing parentheses?

https://github.com/cremi/cremi_python/blob/master/cremi/evaluation/rand.py#L60-L61

(cc @iarganda - similar code appears in http://brainiac2.mit.edu/SNEMI3D/sites/default/files/SNEMI3D_metrics.m)
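For reference, here is a minimal sketch of what I believe the intended computation is, following the usual adapted Rand F-score definition (error = 1 − F-score). Variable names are illustrative, not taken from rand.py, and background labels get no special treatment here:

```python
import numpy as np

def adapted_rand_error(seg, gt):
    """Adapted Rand error: 1 - F-score of the Rand index.

    A sketch of the intended computation; names are illustrative
    and background labels are not treated specially.
    """
    seg, gt = np.ravel(seg), np.ravel(gt)
    n = seg.size

    # Joint distribution p_ij: fraction of pixels with ground-truth
    # label i and segmentation label j.
    width = seg.max() + 1
    cont = np.bincount(gt * width + seg,
                       minlength=(gt.max() + 1) * width)
    p = cont.reshape(gt.max() + 1, width) / n

    sum_p2 = (p ** 2).sum()              # pairs agreeing in both
    sum_a2 = (p.sum(axis=1) ** 2).sum()  # ground-truth marginal
    sum_b2 = (p.sum(axis=0) ** 2).sum()  # segmentation marginal

    precision = sum_p2 / sum_b2
    recall = sum_p2 / sum_a2
    return 1.0 - 2.0 * precision * recall / (precision + recall)
```

With correct grouping, every squared term is a probability in [0, 1], so no O(n²) term can leak into the sums.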

thouis commented Aug 16, 2016

I wrote up what I think is the intended behavior, here:
https://gist.github.com/thouis/63888c375cbeb2f702e94e2e82eebee8

I've asked a colleague (@Maltimore) to review that code. Perhaps he can just comment here.

This might address #2, as well.

funkey commented Aug 17, 2016

Agreed, that doesn't look intended. What is the paper introducing the adapted Rand error that you mention here?

thouis commented Aug 17, 2016

Actually, I'm not sure it's the paper that introduced it.

I meant this paper, which describes what I think this code is doing:
Crowdsourcing the creation of image segmentation algorithms for connectomics
http://journal.frontiersin.org/article/10.3389/fnana.2015.00142/full

I wonder if the intent was to avoid dividing by n at all, since sumA, sumB, & sumAB are only used in ratios below. In that case, the /n should simply be removed.
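To illustrate the point: since the sums only ever appear in ratios, a common factor of 1/n² cancels, so dividing each sum by n is harmless as long as it is applied consistently (and unnecessary if it is dropped everywhere). A quick numeric check with a made-up contingency table:

```python
import numpy as np

def ratios(c):
    """Precision/recall-style ratios from a contingency table c."""
    sum_p2 = (c ** 2).sum()
    sum_a2 = (c.sum(axis=1) ** 2).sum()
    sum_b2 = (c.sum(axis=0) ** 2).sum()
    return sum_p2 / sum_a2, sum_p2 / sum_b2

counts = np.array([[3.0, 1.0], [0.0, 4.0]])  # made-up pixel counts
n = counts.sum()

# Raw counts and normalized probabilities give identical ratios,
# so the division by n can be dropped (or kept) uniformly.
assert np.allclose(ratios(counts), ratios(counts / n))
```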

funkey commented Aug 22, 2016

I agree. This is actually what I do in TED. I added a comment to make the link to the reference clear: https://github.com/funkey/ted/blob/master/evaluation/RandIndex.cpp#L66-L98.

Interestingly, this produces the same numbers as the implementation here, at least as long as there are no background labels. I am not sure what is intended in the reference implementation (here), but I know what I intended in mine: every pixel labelled 0 in the ground truth is not considered for evaluation, while label 0 in the reconstruction is treated just like any other label.

Unless there is a convincing reason to do otherwise, the TED way is what we will use for the challenge. I suggest using the TED Python wrapper (see example) until we merge it into the scripts in this repository.
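That convention can be sketched as a pre-filtering step before any metric is computed (hypothetical helper name, not TED's actual API):

```python
import numpy as np

def mask_gt_background(seg, gt):
    """Drop pixels whose ground-truth label is 0 before scoring.

    Label 0 in the reconstruction is kept and treated like any other
    label. Hypothetical helper illustrating the TED convention
    described above, not actual TED code.
    """
    seg, gt = np.ravel(seg), np.ravel(gt)
    keep = gt != 0  # discard only ground-truth background
    return seg[keep], gt[keep]
```

For example, with gt = [0, 1, 1, 2] the first pixel is discarded regardless of its reconstruction label, while a reconstruction label of 0 on the remaining pixels simply counts as one more segment.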

I'll leave this open for now, should @srinituraga or @jni want to comment.

jni commented Aug 23, 2016

I'm relieved to hear that this bug occurred upstream — gala's adaptive Rand index (which I believe @funkey took with permission for this repo) is a direct port of @iarganda's Matlab code from SNEMI3D, so it's not surprising that they share the bug.

I've made no secret of wanting Rand to disappear in favour of VI, so I don't think I have any more to offer here. Pull requests, or links to the "fixed" ARE code, are appreciated. ;)

thouis commented Aug 23, 2016

I think these are fixed versions, for both adapted Rand and VI. If they look good, I can open a PR here (and we can do the same at any upstream repos you want to point us at).

https://gist.github.com/thouis/63888c375cbeb2f702e94e2e82eebee8

Thanks to @Maltimore for helping with this.
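For completeness, VI can be computed from the same joint label distribution as the adapted Rand error. A sketch from the standard definition (2·H(gt, seg) − H(gt) − H(seg)), independent of the linked gist:

```python
import numpy as np

def variation_of_information(seg, gt):
    """VI = H(gt | seg) + H(seg | gt) = 2*H(gt, seg) - H(gt) - H(seg).

    A sketch from the textbook definition; not the gist's code.
    """
    seg, gt = np.ravel(seg), np.ravel(gt)
    n = seg.size
    width = seg.max() + 1
    cont = np.bincount(gt * width + seg,
                       minlength=(gt.max() + 1) * width)
    p = cont.reshape(gt.max() + 1, width) / n  # joint distribution

    def entropy(q):
        q = q[q > 0]  # 0 * log(0) terms contribute nothing
        return -np.sum(q * np.log(q))

    return 2 * entropy(p) - entropy(p.sum(axis=1)) - entropy(p.sum(axis=0))
```

Identical label maps give VI = 0, and merging the two ground-truth segments of a toy 4-pixel example into one reconstruction segment costs exactly log 2 nats.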
