Hi @hwang595, a few weeks ago I asked some questions in another issue thread about a problem I had when trying to train a model with an input image shape greater than or equal to 224x224. Since then, I tried reducing the dimensions of my problem to the default size, i.e. 32x32, and it worked well! But when I run with 224x224, I'm still stuck at the same point in training.
So I'm gonna ask my questions here again:
Is there a relationship between the training input size and the FedMA communication process? If so, what can be done about it?
When adding a different model, which parts of the code do I need to take care of, besides changing, for example, the input dimensions to 1x224x224? (A rough sketch of what I mean is below.)
Note: as I'm working with medical images, it is critical to resize them.
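For context, here is a minimal sketch of what I mean by changing the input dimensions. This is a hypothetical toy CNN, not the actual FedMA model code: it just illustrates that the flattened feature count feeding the classifier (and therefore the size of the layers FedMA would have to match) grows with the input resolution.

```python
import torch
import torch.nn as nn

# Hypothetical toy model (NOT the FedMA code), only to show how the
# classifier width depends on the input size when going 32x32 -> 224x224.
class ToyCNN(nn.Module):
    def __init__(self, in_channels=1, input_size=224, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # halves the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # halves the spatial size again
        )
        # After two 2x poolings the feature map is (input_size // 4) on a side,
        # so the number of weights in the final linear layer scales with
        # the square of the input resolution.
        feat_dim = 32 * (input_size // 4) ** 2
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# 1x224x224 input, e.g. a grayscale medical image
model = ToyCNN(in_channels=1, input_size=224)
out = model(torch.randn(4, 1, 224, 224))
print(out.shape)  # torch.Size([4, 2])
```

If I understand correctly, the much larger layers at 224x224 could also mean many more weights for the matching/communication step, but that is exactly the relationship I'm unsure about.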
Thanks for the great work!