
Potential overflow in RK4 integrator when using torch.autocast(dtype=torch.float16) #89

@chengxinlun

Description


While torch.autocast(dtype=torch.float16) can almost halve the VRAM usage, during experimentation we found that it sometimes triggers overflow in the RK4 integrator, causing the output to be `nan` in one or more channels.

Fixing this problem is likely beyond the scope of this project: it is well known in the numerical analysis community that loss of precision and accumulation of rounding error can trigger numerical instability in higher-order integrators.

We therefore advise testing on the problem you are working on before enabling the torch.autocast optimization.
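When testing, it is enough to scan the integrator output for non-finite values channel by channel. A minimal sketch (the helper name and the (channels, ...) layout are assumptions for illustration; with torch tensors, torch.isfinite works the same way):

```python
import numpy as np

def nonfinite_channels(output):
    # Return indices of channels containing nan or inf,
    # assuming `output` has shape (channels, ...).
    return [c for c in range(output.shape[0])
            if not np.all(np.isfinite(output[c]))]

out = np.ones((3, 4), dtype=np.float16)
out[1, 2] = np.float16(65504) * np.float16(2)  # overflows to inf

print(nonfinite_channels(out))  # [1]
```

If this check flags any channel on your problem, fall back to full float32 for the integrator.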


Labels

    wontfix (This will not be worked on)
