
Conversation

@JDBetteridge (Member)

Description

This adds an offloading preconditioner, OffloadPC, which moves the preconditioner application from the CPU to a CUDA GPU and copies the result back.
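A minimal usage sketch, assuming the class ends up importable as firedrake.OffloadPC and is selected as a python-type preconditioner; the option values below are illustrative rather than taken from this PR:

from firedrake import *

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
a = inner(grad(u), grad(v)) * dx + inner(u, v) * dx
L = inner(Constant(1.0), v) * dx

uh = Function(V)
# Hypothetical options: wrap the solve in the python-type PC added here so
# that the preconditioner application runs on the GPU.
solve(a == L, uh, solver_parameters={
    "ksp_type": "cg",
    "pc_type": "python",
    "pc_python_type": "firedrake.OffloadPC",  # assumed import path
})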


github-actions bot commented Oct 2, 2024

Firedrake complex: 8053 ran | 6471 passed ✅ | 1582 skipped ⏭️ | 0 failed ❌


github-actions bot commented Oct 2, 2024

Firedrake real: 8059 ran | 7273 passed ✅ | 786 skipped ⏭️ | 0 failed ❌

Contributor:

Delete this file as discussed.

solver_parameters = solving_utils.set_defaults(solver_parameters,
                                               A.arguments(),
                                               ksp_defaults=self.DEFAULT_KSP_PARAMETERS)
# todo: add offload to solver parameters - how? prefix?
Contributor:

This needs addressing somehow. Not quite sure what is meant by this comment.
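For reference, python-type preconditioners in Firedrake usually pick up their own options through a PETSc options prefix, which may be what the TODO is asking about. A rough, purely illustrative sketch of an initialize method using that pattern (the "offload_" sub-prefix and the inner PC setup are assumptions, not code from this PR):

from firedrake.petsc import PETSc

def initialize(self, pc):
    # Read sub-options from the outer prefix plus a PC-specific suffix,
    # e.g. -offload_pc_type jacobi on the command line or
    # {"offload_pc_type": "jacobi"} in solver_parameters.
    prefix = pc.getOptionsPrefix() or ""
    inner = PETSc.PC().create(comm=pc.comm)
    inner.setOptionsPrefix(prefix + "offload_")  # hypothetical sub-prefix
    inner.setOperators(*pc.getOperators())
    inner.setFromOptions()
    self.pc = inner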

# u.getArray()

# else:
# instead: preconditioner
Contributor:

delete

from firedrake.petsc import PETSc
from firedrake.ufl_expr import TestFunction, TrialFunction
import firedrake.dmhooks as dmhooks
from firedrake.dmhooks import get_function_space
Contributor:

Probably an unnecessary import.



class OffloadPC(PCBase):
    """Offload PC from CPU to GPU and back.
Contributor:

This docstring could perhaps contain more detail about what is actually happening, and even why one may wish to do this.

Contributor:

E.g. This is only for CUDA GPUs
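A possible expansion along the lines of these two comments, offered only as a sketch (the wording is mine, not the PR author's):

class OffloadPC(PCBase):
    """Offload the preconditioner application from CPU to GPU and back.

    The input vector is copied into a CUDA vector, the inner preconditioner
    is applied on the GPU, and the result is copied back to the host.  This
    is only worthwhile when the GPU solve is cheap enough to outweigh the
    host-device transfers, and it currently supports CUDA GPUs only.
    """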

x_cu = PETSc.Vec()
x_cu.createCUDAWithArrays(x) # end
with PETSc.Log.Event("Event: solve"):
    self.pc.apply(x_cu, y_cu) #
Contributor:

Suggested change:
-    self.pc.apply(x_cu, y_cu) #
+    self.pc.apply(x_cu, y_cu)

with PETSc.Log.Event("Event: solve"):
    self.pc.apply(x_cu, y_cu) #
with PETSc.Log.Event("Event: vectors copy back"):
    y.copy(y_cu) #
Contributor:

Suggested change:
-    y.copy(y_cu) #
+    y.copy(y_cu)
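Putting the two suggestions together, the apply path might read roughly as below. This is a sketch assembled from the snippets above, not the PR's final code: the construction of y_cu is assumed to mirror x_cu, and the final copy is written as y_cu.copy(y) because petsc4py's vec.copy(result) copies self into result.

def apply(self, pc, x, y):
    # Wrap the host vectors in CUDA vectors (y_cu setup assumed symmetric).
    x_cu = PETSc.Vec()
    x_cu.createCUDAWithArrays(x)
    y_cu = PETSc.Vec()
    y_cu.createCUDAWithArrays(y)
    with PETSc.Log.Event("Event: solve"):
        self.pc.apply(x_cu, y_cu)  # inner PC applied on the GPU
    with PETSc.Log.Event("Event: vectors copy back"):
        y_cu.copy(y)  # copy the GPU result back to the host vector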

    y.copy(y_cu) #

def applyTranspose(self, pc, X, Y):
    raise NotImplementedError
Contributor:

Maybe have a useful error message? Not sure what the usual approach is here.
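One common option (just a suggestion) is to keep the NotImplementedError but say what is missing:

def applyTranspose(self, pc, X, Y):
    raise NotImplementedError("OffloadPC does not implement applyTranspose")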


def view(self, pc, viewer=None):
    super().view(pc, viewer)
    print("viewing PC")
Contributor:

Suggested change (remove this line):
-    print("viewing PC")

if not isinstance(A, firedrake.matrix.AssembledMatrix):
    # linear MG doesn't need RHS, supply zero.
    lvp = vs.LinearVariationalProblem(a=A.a, L=0, u=x, bcs=A.bcs)
    mat_type = A.mat_type
Contributor:

pointless line to have, delete and use A.mat_type below

@connorjward connorjward self-assigned this Oct 22, 2024
@connorjward connorjward changed the title Picalarix/cuda OffloadPC (CUDA GPU) Nov 13, 2024
@connorjward connorjward added the gpu For special runner label Nov 13, 2024
@connorjward (Contributor)

Closing as this has been replaced by #4166
