Currently the code is hard-coded to use GPU 0, via device = ("cuda" if torch.cuda.is_available() else "cpu"). It would be a great enhancement to also be able to set an environment variable with the GPU ID we want to use when multiple GPUs are available.
I attempted to set CUDA_VISIBLE_DEVICES=1 to see if it would use the second GPU, but it did not.
So perhaps something like this: in a terminal the user could set
export CUDA_DEVICE=1
And in seggpt.py:
import os
import torch

if os.getenv("CUDA_DEVICE"):
    device = torch.device(f"cuda:{os.getenv('CUDA_DEVICE')}" if torch.cuda.is_available() else "cpu")
else:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
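The proposed logic can be exercised without a GPU by factoring the device-string choice into a helper (the helper name is illustrative, not part of seggpt.py):

```python
import os

def pick_device_string(cuda_available: bool) -> str:
    """Return the device string the proposed if/else would select."""
    # An explicit CUDA_DEVICE index wins when CUDA is available;
    # otherwise fall back to the default "cuda" (i.e. cuda:0) or "cpu".
    if os.getenv("CUDA_DEVICE"):
        return f"cuda:{os.getenv('CUDA_DEVICE')}" if cuda_available else "cpu"
    return "cuda" if cuda_available else "cpu"

os.environ["CUDA_DEVICE"] = "1"
print(pick_device_string(True))   # cuda:1
print(pick_device_string(False))  # cpu
```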
That looks like a reasonable solution to me. I might simplify it to a one-liner, using the default parameter of os.getenv as a fallback:
device = torch.device(os.getenv("CUDA_DEVICE_OVERRIDE", "cuda") if torch.cuda.is_available() else "cpu")
And then export CUDA_DEVICE_OVERRIDE=cuda:1 to set it.
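The fallback behavior of the one-liner can be checked without torch; the helper below is a hypothetical stand-in that isolates the os.getenv pattern:

```python
import os

def resolve_device_string(cuda_available: bool) -> str:
    # Mirrors the suggested one-liner: CUDA_DEVICE_OVERRIDE supplies the full
    # device string (e.g. "cuda:1"); os.getenv's default falls back to "cuda".
    return os.getenv("CUDA_DEVICE_OVERRIDE", "cuda") if cuda_available else "cpu"

os.environ["CUDA_DEVICE_OVERRIDE"] = "cuda:1"
print(resolve_device_string(True))   # cuda:1
```

Note the override holds the whole device string, not just an index, which is why it is exported as cuda:1 rather than 1.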
Could you submit a PR after ensuring that it works?
Not sure why CUDA_VISIBLE_DEVICES isn't working; it is possibly related to this, depending on your torch version, though that looks to have been fixed for quite a while.
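One common cause (an assumption here; the thread doesn't confirm it) is that CUDA_VISIBLE_DEVICES is only read when the CUDA runtime initializes, so it must be in the process environment before torch first touches CUDA. Setting it inline at launch guarantees the ordering:

```shell
# Setting the variable inside the script after CUDA initialization has no
# effect; the inline form puts it in the environment before Python starts.
CUDA_VISIBLE_DEVICES=1 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
```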
Thank you. I will submit a PR after testing the new code. I didn't try setting the environment variable within the script, and it is possible the behavior is related to the torch version. I figure that having this option in the seggpt.py script would make implementation easier across different setups.