More tweaks #375
Conversation
Let's make this a draft too to cut down on CI thrash.
lkdvos left a comment:
Left some comments throughout; there are some things that I am not entirely convinced by, but the rest looks great. Thanks for working through all of this!
For the similarstoragetype(tensor, storagetype) calls that you added, this seems like something we should probably discuss over a separate PR, and it would be great if we could consolidate this one to get the remainder of the fixes in.
Would you be up for splitting these two things, and then getting this merged?
The same more or less holds for some of the comments I made too: if we can postpone the things that are not obvious but already get the other parts in, that would probably be helpful.
(Note that I am very much aware that none of this is your fault; this PR has lived for too long, so the design has shifted a bit, for which I do apologize!)
```julia
    twistB = false
end
# ...
TTC = storagetype(C)
```
I guess this effectively means that we are deciding to promote inputs to the storagetype of the output. I'm not sure if I am fully convinced that we should solve this automatically at all, since I think that is also inconsistent with how regular matrices work (same for adding):

```julia
julia> CUDA.rand(2, 2) * rand(Float32, 2, 2)
ERROR: Scalar indexing is disallowed.
```

I do think that this might be the right approach, and requiring explicit conversions in the cases of mixed inputs seems like the right call to me. (Even though I can see how that is annoying for MPSKit 😉)
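For concreteness, the explicit-conversion route for plain arrays would look like the minimal sketch below (standard CUDA.jl usage; `CuArray(B)` copies the host matrix to the device):

```julia
using CUDA

A = CUDA.rand(Float32, 2, 2)   # device matrix
B = rand(Float32, 2, 2)        # host matrix

# A * B would hit "Scalar indexing is disallowed"; converting explicitly
# makes the mixed-storage intent visible at the call site:
C = A * CuArray(B)
```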
No, it's for

```julia
TTC = storagetype(C)
# Bring A in the correct form for BLAS contraction
if copyA
    Anew = TO.tensoralloc_add(TTC, A, pA, false, Val(true), allocator)
    Anew = TO.tensoradd!(Anew, A, pA, false, One(), Zero(), backend, allocator)
    twistA && twist!(Anew, filter(!isdual ∘ Base.Fix1(space, Anew), domainind(Anew)))
else
    Anew = permute(A, pA)
end
```

Without this change, Anew will always have Vector{scalartype(T)} storage, even if A was a BraidingTensor or some other object that only gets instantiated here. With the changes in #393 this won't be necessary. It's more than "annoying": without this change or #393, you have to define new tensoralloc methods for the mixed case, and it's quite painful 😭
Reopening this since it seems like you changed this again. I'm really confused about why you are claiming that Anew will have the wrong storagetype: it should have the same storage type as A, not as C, no? It sounds like you are trying to make contractions with mismatching storagetypes in the inputs work, while I would have expected that this is not what we want to support? Am I missing something here, or am I wrong?
What seems to be happening is that:

- `TTC = scalartype(C)` gives you a `Float64` (let's say)
- `Anew = TO.tensoralloc_add(TTC, A, pA, false, Val(true), allocator)` then spits out something with storage type `Vector{Float64}` if `A` is a `SparseBlockMatrix{AbstractTensorMap}`, even if it should in fact be giving you `Anew` with storage type `CuVector{Float64}` -- this is because the `AbstractTensorMap` storage type forcibly defaults to `Vector`, I think
So it's not that I'm trying to mix incompatible storage types, it's that TensorKit as currently set up forces me to do so without this change.
Does this make sense? I think the JordanMPOTensor changes could obviate this but I think we should link that issue in a comment and keep this moving
That does make sense, but that mostly seems to indicate that we should remove the storagetype(::Type{<:AbstractTensorMap}) default, as that is just incorrect, and instead make storagetype(x::SparseBlockTensorMap{AbstractTensorMap}) do a runtime computation of the storagetype when this isn't possible to deduce from the type itself.
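Such a runtime computation might look roughly like the sketch below (a sketch only: `nonzero_values` as an iterator over the stored blocks is an assumed BlockTensorKit helper, not a confirmed API):

```julia
# Sketch: derive the storage type from the blocks that actually exist,
# rather than falling back to the incorrect Vector default.
# `nonzero_values(t)` is assumed to iterate the stored blocks.
function TensorKit.storagetype(t::SparseBlockTensorMap{<:AbstractTensorMap})
    return mapreduce(storagetype, promote_type, nonzero_values(t))
end
```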
Maybe so, but does this have to happen in this PR?
It raises additional problems if the SparseBlockTensorMap or BlockTensorMap has empty blocks.
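For instance, a runtime reduction over the stored blocks (as in the sketch above) has nothing to inspect when no blocks are stored:

```julia
julia> mapreduce(storagetype, promote_type, ())   # no stored blocks
ERROR: ArgumentError: reducing over an empty collection is not allowed
```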
It's completely fine!! This has stayed open as I work through adding more tests for MPSKit, so I think we can pare off the simpler stuff we agree on, and then discuss things that are more contentious.
lkdvos left a comment:
It seems like some of the rebasing and the GitHub UI have made it hard to spot the comments I left before, although I think many of them are still unresolved and could be discussed :)
GitHub UI against the world
GitHub UI acting up again. Does anyone have any objections to squashing this later today if tests pass?
```julia
    end
end
# constructors from another TensorMap -- no-op
TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t
```
Suggested change:

```diff
-TensorMap{T, S, N₁, N₂, A}(t::TensorMap{T, S, N₁, N₂, A}) where {T, S <: IndexSpace, N₁, N₂, A <: DenseVector{T}} = t
```
I would actually be in favor of only having the second implementation, since there is a subtle semantic difference that I would like to retain:
In general, (although this is very poorly documented), the difference between a constructor and a converter is that a converter is meant to be as lossless as possible, and is allowed to take ownership of all parts of the original data. On the other hand, a constructor is typically meant to make an independent copy. You can see this for regular Arrays in the example below:
Where it becomes a bit more vague is of course that you want to have a constructor that takes the different fields and builds up an object from that, without having to create an independent copy (e.g. the inner constructors of a type). This is also where the Julia docs are completely underspecified, so this is more something we should try and figure out ourselves.
I would say that in this case though, creating an independent copy is probably the safest solution, and it is probably easier to reason in generic code that if you call TensorMap{...}(t::TensorMap), you get something that does not share data with t, but I am of course open to other opinions :)
```julia
julia> a = rand(3)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b = Array(a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> b[1] = 1
1

julia> a
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c = convert(Array, a)
3-element Vector{Float64}:
 0.3155466339261911
 0.6265705423537413
 0.6035244268238111

julia> c[1] = 1
1

julia> a
3-element Vector{Float64}:
 1.0
 0.6265705423537413
 0.6035244268238111
```
lkdvos left a comment:
I left some comments, and tried to minimize some of the diff. I think overall this PR looks pretty good, and especially the tests are starting to look like they contain less and less @allowscalar and more idiomatic code so this is great work!
How would you feel about punting on the more in-depth design decisions, such as making the TensorOperations functions auto-promote storagetypes, and merging the rest?
```diff
 function TO.tensoradd_type(TC, A::DiagonalTensorMap, ::Index2Tuple{1, 1}, ::Bool)
     M = similarstoragetype(A, TC)
-    return DiagonalTensorMap{TC, spacetype(A), M}
+    return DiagonalTensorMap{scalartype(M), spacetype(A), M}
```
I'm a bit torn by this: does it ever happen that the two don't match? I think we would like similarstoragetype(A, TC) -> Type{<:DenseVector{TC}} to always hold, and it might actually be nicer if this explicitly errored instead of silently giving a different result?
I just realized that this is probably because of trying to pass TC <: Type{<:DenseVector}, which I am not a big fan of. I think that is kind of misusing the TensorOperations interface slightly?
```julia
function similarstoragetype(::Type{<:AbstractTensorMap{T, S, N₁, N₂}}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
    return similarstoragetype(TA, T)
end
function similarstoragetype(t::AbstractTensorMap{T, S, N₁, N₂}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
    return similarstoragetype(typeof(t), TA)
end
```
Suggested change:

```diff
-function similarstoragetype(::Type{<:AbstractTensorMap{T, S, N₁, N₂}}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
-    return similarstoragetype(TA, T)
-end
-function similarstoragetype(t::AbstractTensorMap{T, S, N₁, N₂}, ::Type{TA}) where {T <: Number, TA <: DenseVector, S, N₁, N₂}
-    return similarstoragetype(typeof(t), TA)
-end
```
I think my comment got lost in the GitHub UI, but these methods seem a bit strange to me. Since similarstoragetype(t, T::Type{<:Number}) is supposed to give you a storagetype that is compatible with t but has the scalartype T, I would infer that similarstoragetype(t, A::Type{<:DenseVector}) should give you a storagetype that is compatible with storagetype A, which is simply A itself. Am I missing something here?
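In code, the expectation being described would amount to something like this one-liner (a sketch of those semantics, not current TensorKit code):

```julia
# A DenseVector type is already a storage type, so it would be returned as-is:
similarstoragetype(::AbstractTensorMap, ::Type{A}) where {A <: DenseVector} = A
```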
Yes because the problem here is again the AbstractTensorMap, which has a bad default storagetype of Vector. This is again working around the other design decision about JordanMPOTensor and could be gotten rid of once that is changed.
```diff
 bstyle isa SymmetricBraiding ||
     throw(SectorMismatch("only tensors with symmetric braiding rules can be contracted; try `@planar` instead"))
-TC = scalartype(C)
+TC = storagetype(C) # without this, Anew below has wrong storagetype
```
Suggested change:

```diff
-TC = storagetype(C) # without this, Anew below has wrong storagetype
+TC = scalartype(C)
```
I understand that this is a way to make the issues go away, but I don't think this is really the right strategy. If Anew has the wrong storagetype, that means A had the wrong storagetype to begin with, and I would like to make tensorcontract just error in that case, since I don't think we have any support for mixed storagetypes at the moment?
For example, what TensorOperations does with the cuTENSOR backend is that the implementation first sends everything to the GPU: https://github.com/QuantumKitHub/TensorOperations.jl/blob/880d046330a665488f6592a6e6979fdd8af665ec/ext/TensorOperationscuTENSORExt.jl#L80-L91
I think I would be more in favor to have similar behavior, where if you want to run things on a GPU you should either make sure everything is on the GPU from the start, or explicitly select a backend that will copy everything over.
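A sketch of that explicit, up-front style (whether TensorKit tensors support adapt like this is an assumption; Adapt.jl is the standard mechanism CUDA.jl itself uses for such conversions):

```julia
using Adapt, CUDA

# Move all operands to the GPU explicitly before contracting, instead of
# letting the contraction promote mixed storage types implicitly:
A_gpu = adapt(CuArray, A)
B_gpu = adapt(CuArray, B)
```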
We might also just want to schedule a meeting about this and talk through the implications, since I do agree that there are advantages and disadvantages to making things work more automatic, and I might just be overly dense about this.
For the purpose of getting this PR merged ASAP though, I think it would be nice to just already get the smaller tweaks in, without having to make these decisions?
The problem is that not having this breaks all of the more complex MPSKit support, which is why I added it in the first place.
Is reworking how all of this works right now worth forcing everyone in the group who wants to try GPU + MPSKit to wait (be realistic here) another 6 months?
This will completely negate the point of this PR, which is to finally get things to actually work for MPSKit, and also possibly break quite a few of the tests and force the return of
Needed to get more MPSKit examples working