After converting the SAM2 model to ONNX, the inference results are significantly worse than the original model. #21
Comments
@lhz123 can you provide any comparison examples?
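One simple way to quantify the degradation being reported would be per-mask IoU between the PyTorch and ONNX outputs on the same prompts. This is a minimal sketch, with toy arrays standing in for the real model outputs (the mask values here are illustrative, not from SAM2):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-Union between two binary masks of equal shape."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Toy masks standing in for the PyTorch and ONNX model outputs
torch_mask = np.zeros((8, 8), dtype=bool)
torch_mask[2:6, 2:6] = True
onnx_mask = np.zeros((8, 8), dtype=bool)
onnx_mask[3:7, 3:7] = True

print(round(mask_iou(torch_mask, onnx_mask), 3))  # 9 / 23 overlap
```

Running this over a batch of images and reporting mean IoU would make "significantly worse" concrete and easier to debug.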
Hi! :D
@vietanhdev I've noticed that all of the converted SAM2 models output a mask at 256x256 resolution. Is this configurable? Ideally I want it to match the input resolution (1024x1024). The reason 256x256 isn't good enough is that after upscaling to 1024x1024, the edges are very rough and don't overlay perfectly with the source image. I've applied some basic post-processing, but the result isn't very accurate, especially for small objects/surfaces. Does the original SAM model also output masks at 256x256? What are the limitations that make the ONNX version different from the PyTorch one?
Pretty sure SAM1 also originally outputs masks at 256x256 resolution and then upscales them.
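One detail that helps with the rough edges: upsample the low-res float logits first and threshold afterwards, rather than resizing an already-binarized mask. A minimal NumPy-only sketch of that order of operations (the toy 4x4 logit map is illustrative; in practice you would feed in the model's 256x256 output and upscale to 1024x1024):

```python
import numpy as np

def upscale_mask(logits, out_size):
    """Bilinearly upsample a (H, W) float logit map to out_size,
    then threshold at 0 to get a binary mask. Thresholding after
    interpolation gives smoother edges than resizing a binary mask."""
    h, w = logits.shape
    oh, ow = out_size
    # Source-grid coordinates for each output pixel (pixel-center convention)
    ys = np.clip((np.arange(oh) + 0.5) * h / oh - 0.5, 0, h - 1)
    xs = np.clip((np.arange(ow) + 0.5) * w / ow - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = logits[np.ix_(y0, x0)] * (1 - wx) + logits[np.ix_(y0, x1)] * wx
    bot = logits[np.ix_(y1, x0)] * (1 - wx) + logits[np.ix_(y1, x1)] * wx
    up = top * (1 - wy) + bot * wy
    return up > 0.0

# Toy 4x4 logit map (positive = foreground) upscaled to 16x16
low = np.full((4, 4), -1.0)
low[1:3, 1:3] = 1.0
mask = upscale_mask(low, (16, 16))
```

With OpenCV available, `cv2.resize(logits, (1024, 1024), interpolation=cv2.INTER_LINEAR)` followed by the same threshold does the equivalent in one call.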
@vietanhdev Here is the updated Colab notebook for export, which I recommend adding: https://colab.research.google.com/drive/1tqdYbjmFq4PK3Di7sLONd0RkKS0hBgId?usp=sharing
Hi @ibaiGorordo
This is great, thank you!