Example using enzyme-auto-sparsity #69
base: develop
Conversation
Force-pushed from 5201305 to 40f69a9
Force-pushed from 931d7df to 2d1a06f
```cpp
template <typename T>
__attribute__((always_inline)) static void sparse_store(T val, int64_t idx, size_t i, std::vector<Triple<T>>& triplets)
{
    if (val == 0.0)
```
I worry that this is still computing zeros and looping over them at runtime, rather than computing the sparsity at compile time.
IIRC, when @wsmoses showed us the LLVM IR for a simpler example, the structural zeros were optimized out. I've messed with it since (https://fwd.gymni.ch/U2yB6g), though I don't understand LLVM well enough to point to where that happens.

It's also the case that non-structural zeros are stored at runtime, which would not happen if the sparsity were decided by just checking the values at run time. In this example, I purposefully chose the first value of the input to be 0 to check that the 0 derivative gets returned.
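To make the distinction concrete, here is a minimal hand-written illustration of the two kinds of zeros (the function and values are made up for this comment, not taken from the PR's example):

```cpp
#include <cstdio>

// Toy map f(x0, x1) = (x0 * x1, x1 + 1). Its Jacobian is
//   [ x1  x0 ]
//   [  0   1 ]
// J[1][0] is a structural zero: it is zero for every input, so a sparsity
// analysis can discard it at compile time. J[0][1] equals x0, so with
// x0 == 0 it is a non-structural zero: it vanishes only for this particular
// input, and should still be stored if the sparsity is decided structurally.
int main() {
    double x[2] = {0.0, 3.0}; // first input value chosen as 0, as above
    double J[2][2] = {{x[1], x[0]},
                      {0.0,  1.0}};
    for (int r = 0; r < 2; r++)
        for (int c = 0; c < 2; c++)
            std::printf("J[%d][%d] = %g\n", r, c, J[r][c]);
}
```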
I tried removing the debug statements, and I can see that the structural zeroes are removed from consideration, which is nice. Check around line 170 in the output: you can see that it calculates the derivative `input[i] * i * 2.0` and then returns that value with the index `(i, i)`.
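For what it's worth, that derivative is consistent with a per-entry kernel of the following shape (a hypothetical reconstruction for illustration; the actual example code may differ):

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical kernel: out[i] = i * input[i]^2, so the only nonzero Jacobian
// entry in row i is d out[i] / d input[i] = input[i] * i * 2.0, giving the
// purely diagonal (i, i) sparsity pattern described above.
static void kernel(const double* input, double* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i++)
        out[i] = static_cast<double>(i) * input[i] * input[i];
}

int main() {
    double in[3] = {0.0, 1.0, 2.0}, out[3];
    kernel(in, out, 3);
    std::printf("%g %g %g\n", out[0], out[1], out[2]); // prints: 0 1 8
}
```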
It looks like this is due to the `-enzyme-auto-sparsity=1` flag? The flag seems a bit volatile: for example, I can get this example (which is closer to what the models in GridKit actually look like) to work with `-enzyme-auto-sparsity=0`, but then it does runtime checks for the non-structural zeroes (it does still automatically elide the structural zeroes, though). Changing the flag to `-enzyme-auto-sparsity=1` causes Enzyme to fail to compile. The flag seems to depend rather fragilely on the function taking a size parameter and looping up to it, so we can reintroduce the `N` variable, which makes it compile, but then there are many runtime checks on `N`... (see the sketch of the two loop shapes below).

This may be worth submitting an issue to Enzyme about, since it seems like a fairly reasonable example?
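To spell out the two loop shapes I mean, here is a minimal sketch with a made-up body; only `N` and the flag names come from this thread:

```cpp
#include <cstddef>

// Shape the flag seems to need: the bound arrives as a parameter, so the
// loop stays a genuine loop that the analysis can reason about.
void f_dynamic(const double* x, double* out, std::size_t N) {
    for (std::size_t i = 0; i < N; i++)
        out[i] = x[i] * x[i];
}

// Shape with a compile-time bound: the optimizer is likely to fully unroll
// the loop, which is where -enzyme-auto-sparsity=1 reportedly falls over.
constexpr std::size_t kN = 10;
void f_static(const double* x, double* out) {
    for (std::size_t i = 0; i < kN; i++)
        out[i] = x[i] * x[i];
}
```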
It looks like it's because of loop unrolling. If you disable loop unrolling, it works fine. Good thing to know for the future!
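For reference, two standard Clang-side ways to keep unrolling off (assuming Clang is the host compiler here): pass `-fno-unroll-loops` for the whole translation unit, or annotate the specific loop with Clang's loop pragma:

```cpp
#include <cstddef>

void kernel(const double* x, double* out, std::size_t n) {
    // Suppresses unrolling for just this loop.
    #pragma clang loop unroll(disable)
    for (std::size_t i = 0; i < n; i++)
        out[i] = x[i] * x[i];
}
```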
The first thing I'd check is the failing test. Going over the rest.
```cpp
{
    assert(0 && "should never load");
}
```
What does this function do, other than assert that it should never be called?
I think this is a "dummy" function required for calling `__enzyme_todense`; here it is attached to an output variable, so it should never be read from.
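For anyone reading along later, here is my understanding of the load/store callback pattern, sketched with illustrative names and index handling. The signatures mirror the snippets quoted above, but the `Triple` layout and the decoding of `idx` are assumptions, not taken from this PR:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumed triplet layout; the real example defines its own Triple<T>.
template <typename T>
struct Triple {
    std::size_t row, col;
    T val;
};

// Store callback: collects nonzero Jacobian entries as triplets. This matches
// the sparse_store snippet under review; the (row, col) mapping of idx here
// is illustrative only.
template <typename T>
__attribute__((always_inline))
static void sparse_store(T val, int64_t idx, std::size_t i,
                         std::vector<Triple<T>>& triplets) {
    if (val == 0.0)
        return; // skip zero entries so only nonzeros are recorded
    triplets.push_back({i, static_cast<std::size_t>(idx), val});
}

// Load callback: the "dummy" discussed here. __enzyme_todense expects a
// load/store pair, but this buffer is write-only output, so any read
// indicates a bug, hence the assert.
template <typename T>
__attribute__((always_inline))
static T sparse_load(int64_t idx, std::size_t i,
                     std::vector<Triple<T>>& triplets) {
    (void)idx; (void)i; (void)triplets;
    assert(0 && "should never load");
    return T(0);
}
```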
This example uses `enzyme-auto-sparsity` to compute Jacobians directly in a sparse format.