
Conversation

@JWHennessey

Currently, resuming from a checkpoint uses too much GPU memory. This workaround resolves the issue.

@rosinality
Owner

Ah, thank you! I think you can resolve this using torch.load(args.ckpt, map_location=lambda storage, loc: storage).
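For reference, a minimal sketch of how the suggested fix could look when resuming (the `args.ckpt` path, checkpoint key, and model name here are assumptions for illustration, not the repo's exact code):

```python
import torch

# map_location keeps every tensor on CPU instead of reattaching it to the
# GPU it was saved from, so loading a checkpoint doesn't spike GPU memory.
ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage)
# An equivalent form: torch.load(args.ckpt, map_location='cpu')

# Hypothetical usage: restore the weights, then move the model to the GPU.
model.load_state_dict(ckpt["model"])  # key name assumed; match the checkpoint layout
model.to("cuda")
```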

