dialects: (vector) Add vector.transfer_read and vector.transfer_write operations #3650
base: main
Conversation
Fixed bug in TensorOrMemrefOf.verify (?)
Codecov Report — Attention: patch coverage is …

Additional details and impacted files

@@ Coverage Diff @@
##             main    #3650   +/- ##
========================================
  Coverage   89.03%   89.04%
========================================
  Files         317      317
  Lines       43338    43479   +141
  Branches     5403     5436    +33
========================================
+ Hits        38588    38714   +126
- Misses       3407     3414     +7
- Partials     1343     1351     +8
LGTM modulo comments
xdsl/dialects/builtin.py
Outdated
- if isinstance(attr, VectorType) or isinstance(attr, TensorType):
-     attr = cast(VectorType[Attribute] | TensorType[Attribute], attr)
+ if isinstance(attr, MemRefType) or isinstance(attr, TensorType):
+     attr = cast(MemRefType[Attribute] | TensorType[Attribute], attr)
Would you mind moving this to a separate PR with a test case?
We seem to be lacking any verification testing on this type.
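To illustrate the kind of case such a verification test would need to cover, here is a self-contained sketch of the suspected bug: a "tensor or memref" check that accepted `VectorType` instead of `MemRefType`. The classes and the `verify_tensor_or_memref` helper below are stand-ins for illustration, not the real xDSL definitions.

```python
# Stand-in types; the real ones live in xdsl/dialects/builtin.py.
class VectorType: pass
class TensorType: pass
class MemRefType: pass

def verify_tensor_or_memref(attr) -> None:
    # Fixed behavior: only tensors and memrefs are accepted.
    # The buggy variant checked for VectorType here, so a vector
    # would have slipped through a tensor-or-memref constraint.
    if not isinstance(attr, (TensorType, MemRefType)):
        raise TypeError(f"expected tensor or memref, got {type(attr).__name__}")
```

A test for the fix would assert exactly the case the bug got wrong: a `VectorType` must be rejected.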
Seeing the changes, I'd still recommend doing this.
I will do that in the next few days
vector.transfer_read and vector.transfer_write operations
….transfer_write" ops This blew up and should now be several PRs
…or.transfer_write" ops
…ArrayAttr[BoolAttr] instead of an IntegerAttr
vector.transfer_*: Disabled mask and inferred_mask comparison for now until VectorType is fixed
…ead_transfer_write_ops
You can always verify what is currently here and document what deviates (as you have done); this is fine IMHO. Thanks for the work so far!
Transferred more filecheck tests for transfer ops from MLIR
Added the (now mandatory) in_bounds property
xdsl/dialects/vector.py
Outdated
# WJOG9GVF: TODO fix: uncomment this when 7S4F0FZA has been fixed
# if mask_type:
#     if mask_type != inferred_mask_type:
#         raise VerifyException(
#             f'"{op.name}" inferred mask type ({inferred_mask_type}) and mask operand type ({mask_type}) don\'t match'
#         )
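The commented-out check compares the mask operand's type against an inferred mask type. As a rough, self-contained sketch of the inference rule (simplified from MLIR's behavior; `infer_mask_shape` and its permutation encoding are illustrative, not a real xDSL or MLIR API): the mask is shaped like the accessed source elements, so the transfer's vector shape is reordered into source-dimension order and broadcast dimensions are dropped.

```python
def infer_mask_shape(vector_shape, permutation):
    """Illustrative only: permutation[i] is the source dimension that result
    dim i of the transfer's vector type reads from, or None for a broadcast
    dimension. Broadcast dims carry no mask bits, so they are dropped; the
    remaining sizes are put back into source-dimension order."""
    pairs = [
        (src, size)
        for src, size in zip(permutation, vector_shape)
        if src is not None
    ]
    pairs.sort(key=lambda pair: pair[0])
    return [size for _, size in pairs]

# e.g. a vector<4xindex> read through affine_map<(d0, d1) -> (d0)>
# (permutation [0]) would expect a vector<4xi1> mask.
```

The real verifier would then compare this inferred `vector<...xi1>` type against the mask operand's type and raise a VerifyException on mismatch, as in the commented-out code above.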
?
Same as below. I will move it to a new pull request
…ead_transfer_write_ops
…nd depend on issue xdslproject#3654 to be fixed
Hi @watermelonwolverine! I'd love to get this in, please let me know if you'd like some help with the PR. From my understanding, these bits of code that are currently in this PR would benefit from their own mini PRs:
Do you have time to have a go at these? I'm hoping it will be a fairly easy cherry-picking process, and I'm happy to review each one very quickly :) After that, I think this PR should be ready to merge with minor changes, but it would be good to reduce it first.
…ead_transfer_write_ops
This PR is actually pretty much done from my side.
I can split this PR into smaller PRs over the next few days if it is simply too big.
Yeah that would be great, thank you
Just opened #3947, which fixes the above issue. Would you have the time to iterate on this in the coming weeks?
Hi, I am very busy with my thesis, but I think I can still fit this in somehow. I'll update this PR once the other PR is merged. Should I still split this PR into smaller PRs? |
That would be fantastic, but also please let me know if there's a better time to check in on this; there's no particular rush, it's just nice to be able to get to 0 PRs eventually.
Remove compose_with_values function as it is in essence the same as eval
…ead_transfer_write_ops
…r_read_transfer_write_ops
Cherry picked from #3650. Prerequisite for implementing `vector.transfer_read` and `vector.transfer_write` ops
…ead_transfer_write_ops
@@ -1,5 +1,5 @@
// RUN: XDSL_ROUNDTRIP

#map = affine_map<(d0, d1) -> (d0)>
seems unused
Used in "permutation_map" of vector.transfer_read and vector.transfer_write, lines 13 and 14.
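As a plain-Python illustration of what that map does (a hypothetical helper, not xDSL API): `affine_map<(d0, d1) -> (d0)>` projects a 2-D index onto its first dimension, so the transfer walks along d0 while d1 is held fixed.

```python
def apply_map_d0(d0: int, d1: int) -> tuple:
    # Sketch of affine_map<(d0, d1) -> (d0)>: keep d0, drop d1.
    return (d0,)
```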
@@ -10,6 +10,9 @@ builtin.module {
  "vector.maskedstore"(%0, %2, %2, %1, %6) : (memref<4x4xindex>, index, index, vector<1xi1>, vector<1xindex>) -> ()
  "vector.print"(%6) : (vector<1xindex>) -> ()
  %7 = "vector.create_mask"(%2) : (index) -> vector<2xi1>
  %8 = "vector.transfer_read"(%0, %2, %2, %2) <{"in_bounds" = [true], "operandSegmentSizes" = array<i32: 1, 2, 1, 0>, "permutation_map" = #map}> : (memref<4x4xindex>, index, index, index) -> vector<4xindex>
would it be worth adding custom syntax here? seems quite verbose
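For comparison, upstream MLIR prints the same read in a much shorter custom form; roughly the following (the exact attribute placement may differ between MLIR versions):

```mlir
%8 = vector.transfer_read %0[%2, %2], %2 {in_bounds = [true], permutation_map = #map} : memref<4x4xindex>, vector<4xindex>
```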
Sure, at some point it would totally be worth adding custom syntax. But I don't have the time at the moment because of my thesis.
I'm happy to add it if you are. I think that a good way to merge this would be to first add the ops, and then add verification. I can open a new PR with your newly added operations with no verification at all, but with custom syntax, then this PR will track adding only the verifiers and verifier helpers. Does that sound OK?
Yes, sounds good
No promises on when, but I'll try to do it in the next week or two! Enjoy thesis writing ;)
Added stubs for "vector.transfer_read" and "vector.transfer_write".
Verification is missing but that also seems to be a lot of work.
Also fixed a small bug in TensorOrMemrefOf.verify (?)