Commit 5828607

Not sure if AMD actually support fp16 acc but it doesn't crash. (Comfy-Org#9258)
1 parent 735bb4b commit 5828607

File tree

1 file changed: +1 addition, −1 deletion

comfy/model_management.py

Lines changed: 1 addition & 1 deletion
@@ -340,7 +340,7 @@ def is_amd():

 PRIORITIZE_FP16 = False  # TODO: remove and replace with something that shows exactly which dtype is faster than the other
 try:
-    if is_nvidia() and PerformanceFeature.Fp16Accumulation in args.fast:
+    if (is_nvidia() or is_amd()) and PerformanceFeature.Fp16Accumulation in args.fast:
         torch.backends.cuda.matmul.allow_fp16_accumulation = True
         PRIORITIZE_FP16 = True  # TODO: limit to cards where it actually boosts performance
         logging.info("Enabled fp16 accumulation.")
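The one-line change widens the gating condition so AMD GPUs qualify alongside NVIDIA when the user opts in via the fast flags. A minimal sketch of that predicate, using hypothetical helper names (the real check lives in comfy/model_management.py and reads `is_nvidia()`/`is_amd()` and `args.fast`):

```python
from enum import Enum


class PerformanceFeature(Enum):
    # Mirrors the feature flag tested against args.fast; the value is an assumption.
    Fp16Accumulation = "fp16_accumulation"


def should_enable_fp16_accumulation(vendor: str, fast_features: set) -> bool:
    """Return True when fp16 accumulation should be turned on.

    After this commit, AMD GPUs qualify alongside NVIDIA, provided the
    user opted in via the Fp16Accumulation performance feature.
    """
    return vendor in ("nvidia", "amd") and PerformanceFeature.Fp16Accumulation in fast_features
```

The surrounding `try:` in the real code matters: assigning `torch.backends.cuda.matmul.allow_fp16_accumulation` can raise on PyTorch builds that do not expose the attribute, so the feature fails soft rather than crashing startup.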
