Initial implementation of SUMMA like MatrixMult #136
base: main
Conversation
@astroC86 good start!
As you will see, I left a number of comments; many are stylistic (I still have to review test_matrixmult), but in some cases I am not entirely sure I can follow the logic (especially as I am not sure whether this is meant to match the algorithm in the Appendix of your GSoC proposal)...
In general, I think it would be important for you to add a well-written docstring to the SUMMAMatrixMult method and comments to both the code and the example; after that I will do another full review and we will hopefully be closer to a final version that we can bring into the pylops-mpi library 🤓
Finally, whilst I think that this algorithm is very interesting and worth having, we should make sure to understand whether it really is the SUMMA algorithm from https://www.netlib.org/lapack/lawnspdf/lawn96.pdf (I am not so sure, as there they block each matrix over both rows and columns). We should either refer to a paper that implements your algorithm, or give it a different name (not SUMMA) and explain it in quite some detail in the Notes section of the class docstring.
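For reference, here is a minimal serial sketch of the blocked outer-product formulation that LAWN 96 describes as SUMMA; the sizes, block width, and matrix names are purely illustrative (not taken from this PR), and the broadcasts of the panels along the rows/columns of the 2D process grid are omitted:

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration:
# C (n x m) = A (n x k) @ B (k x m), with panel width bs
n, k, m, bs = 8, 6, 4, 2

rng = np.random.default_rng(0)
A = rng.standard_normal((n, k))
B = rng.standard_normal((k, m))
C = np.zeros((n, m))

# SUMMA accumulates C as a sum of outer products of a block column of A
# and the matching block row of B; in the parallel version each panel is
# broadcast along the process rows/columns of a 2D grid before the update.
for p in range(0, k, bs):
    A_panel = A[:, p:p + bs]   # block column of A (broadcast along process rows)
    B_panel = B[p:p + bs, :]   # block row of B (broadcast along process columns)
    C += A_panel @ B_panel

assert np.allclose(C, A @ B)
```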
Force-pushed from 12d73e4 to f22f5ec
@astroC86 thanks for the updates, the main class and example start to look much better.
I have pushed some code, mostly trying to clean things up a bit and make them more consistent with the rest of the codebase. Whilst unfortunately we have not yet been able to set up proper linting in pylops-mpi (we will eventually do so, following what we have in pylops), we try to keep the line length below 88 characters (some of your lines are very, very long)... Also, when I rebuilt the documentation, I noticed that some of your text, both in the docstring of MPIMatrixMult and in the example, was not rendering properly - I guess you haven't built the documentation yourself yet?
More importantly, as discussed offline, I made a change towards the end of the example, trying once again to be consistent with the rest of the codebase: basically, we want to check consistency with the serial code on rank 0 AFTER gathering the distributed array (see the sketch below)... Whilst doing this, I realized that your definition of mask in x was wrong. I would like to ask you to modify the tests accordingly so that they perform the same kind of checks. Note that this code won't run if you don't stack it on top of this PR (#139) - once that is in, we can rebase it.
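As a rough, self-contained illustration of the kind of check I mean (the operator in this PR is not used here; every rank computes a local piece of a matrix-vector product, the pieces are gathered, and the comparison with the serial result happens only on rank 0 - all names and sizes are made up):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Every rank owns a block of rows of A and computes its local piece of y = A @ x;
# the same seed is used on all ranks so that x is replicated everywhere.
n_local, k = 3, 5
rng = np.random.default_rng(42)
x = rng.standard_normal(k)
A_local = np.arange(rank * n_local * k, (rank + 1) * n_local * k,
                    dtype=float).reshape(n_local, k)
y_local = A_local @ x

# Gather the distributed result first, then check against the serial
# computation on rank 0 only.
y = np.concatenate(comm.allgather(y_local))
if rank == 0:
    A_full = np.arange(size * n_local * k, dtype=float).reshape(size * n_local, k)
    assert np.allclose(y, A_full @ x)
    print("distributed result matches serial result")
```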
Also, our convention in PyLops and PyLops-MPI (see for example https://pylops.readthedocs.io/en/v2.2.0/gallery/plot_matrixmult.html#sphx-glr-gallery-plot-matrixmult-py) is to have operators that map inputs of size m to outputs of size n (so A is n x m), but you have decided to do the opposite... I understand that here neither of the two is strictly possible, but for consistency, and to avoid confusion, I'd like to ask you to modify all of your code so that A is n x k and X is k x m, so that n and m are still in the right positions, even though each is multiplied by k 😄
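Concretely, this is the shape convention I am asking for (a toy numpy sketch with arbitrary sizes, just to pin down which dimension plays which role):

```python
import numpy as np

n, k, m = 4, 3, 2        # arbitrary illustrative sizes

A = np.ones((n, k))      # operator matrix: n x k
X = np.ones((k, m))      # input "matrix":  k x m
Y = A @ X                # output:          n x m

# Flattened, the operator maps an input of size k*m to an output of size n*m,
# so n and m keep their usual roles even though each is multiplied by k.
assert X.ravel().size == k * m
assert Y.ravel().size == n * m
```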
Finally, there is one thing I did not change, as I need your feedback first. I noticed that if I replace `X_local = self._layer_comm.bcast(x_arr if self._group_id == self._layer_id else None, root=self._layer_id)` with `X_local = x_arr`, I still get correct results... Looking back at your notes, this somehow makes sense to me, as what you bcast (say B0|B1 from P0 to L0) is already present in the rank you pass it to (P1 already has B0|B1)... unless I am missing something?
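To make the question concrete, here is a hedged, standalone mpi4py sketch of the pattern (the layer split, block contents, and root choice are made up and do not use the class internals): when every rank in the layer communicator already holds the same block, the bcast changes nothing apart from adding communication cost.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Split ranks into "layers" of two ranks each (illustrative layout only).
layer_id = rank // 2
layer_comm = comm.Split(color=layer_id, key=rank)

# Each rank builds the same block locally from the layer id, so every member
# of the layer already holds identical data before any communication.
x_arr = np.full(4, float(layer_id))

# Variant 1: broadcast the block from the first rank in the layer.
x_bcast = layer_comm.bcast(x_arr if layer_comm.Get_rank() == 0 else None, root=0)

# Variant 2: just use the local copy.
x_local = x_arr

# Both variants coincide, which is why dropping the bcast still gives
# correct results whenever the data is already replicated within the layer.
assert np.allclose(x_bcast, x_local)

layer_comm.Free()
```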
@hongyx11 I think it is time for you to do a proper technical review of the approach and implementation.
depends on #139