# added: be more efficient with weights and conversion matrices for NonLinMPC (#202)
Following #198 and the discussion at #193, this is a summary of the revision for structural sparsity in `PredictiveController` types. The big picture here is as follows: since the package is meant for real-time applications, the computational performance of matrix operations has a higher priority than the memory footprint of storing the matrices.
### Conversion of decision vector Z̃ to input increment ΔŨ
Since the $\mathbf{\tilde{P}_{\Delta u}}$ matrix is required at controller construction in the case of a linear plant model, it is still constructed as a normal dense matrix. But I no longer use this matrix in the `getΔŨ!` function (which is called multiple times in the objective function of `NonLinMPC`). I now use indexing with `@views`, as sketched below.
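A minimal sketch of the idea, assuming a layout in which the input increments form the leading block of the decision vector (the function name and signature are illustrative, not the package's exact code):

```julia
# Hypothetical sketch: extract ΔŨ from Z̃ with a view instead of the dense
# product P̃_Δu*Z̃, avoiding an O(n²) multiplication in the hot loop.
function getΔŨ_sketch!(ΔŨ::AbstractVector, Z̃::AbstractVector)
    nΔŨ = length(ΔŨ)
    @views ΔŨ .= Z̃[1:nΔŨ]   # in-place copy of the relevant block, no allocation
    return ΔŨ
end

Z̃  = randn(10)   # decision vector, e.g. [ΔU; ϵ] with single shooting
ΔŨ = zeros(7)
getΔŨ_sketch!(ΔŨ, Z̃)
```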
### Conversion of decision vector Z̃ to input U
I made no change here. The $\mathbf{\tilde{P}_{u}}$ matrix contains around 30 to 40 % of `1.0` values with a `SingleShooting` transcription (the rest is zeros; the exact proportion depends on the $H_p$ and $H_c$ parameters). It is very unlikely that any special sparse type will outperform computations with a dense matrix (a simple matrix product). Also note that I tested in the past storing the matrix as a dense array of `Bool` or as a `BitArray`: the matrix product was always slower than with a dense array of floats (see the sketch after the next paragraph).

Moreover, this matrix will have a stranger and more complex structure with the upcoming move blocking feature. In that case, doing the conversion manually with a for loop and conditionals in `getU0!` would be prone to bugs, and presumably less efficient than a product with a dense matrix with a nice, contiguous structure in memory.
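A hedged illustration of the dense-floats-versus-`Bool` comparison (not the package's actual benchmark; the matrix pattern is made up):

```julia
using LinearAlgebra, BenchmarkTools

n = 200
P_float = [j ≤ i ? 1.0 : 0.0 for i in 1:n, j in 1:n]  # lower-triangular ones, a P̃_u-like pattern
P_bit   = P_float .== 1.0                             # same pattern stored as a BitMatrix
Z̃ = randn(n)

@btime $P_float * $Z̃   # dense Float64 product: dispatches to optimized BLAS gemv
@btime $P_bit   * $Z̃   # BitMatrix product: generic fallback, typically slower
```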
### Objective function weights

VERY frequently, the weights will be diagonal matrices. But in very rare cases, they will be generic positive semi-definite Hermitian matrices (e.g. tuning rules based on optimization, or frequency weighting). Ideally, we need to support both cases, but without losing the performance advantage of matrix operations with pure diagonal matrices.
Before this PR, the weight matrices were simply stored as `Hermitian{NT, Matrix{NT}}` in the `ControllerWeights` object, to be as generic as possible. Doing so, we lose the performance boost of matrix products with `Diagonal` types in the case of diagonal weights.

As suggested by @gdalle in his PR, I introduce 3 new parameters in the parametric struct `ControllerWeights` to store the types of the 3 weights, in order to preserve special types like `Diagonal{NT, Vector{NT}}`. This also allows the user to specify block-diagonal weights as e.g. `SparseMatrixCSC{Float64, Int64}`, and the type will be preserved in the `mpc.weights` structure. A minimal sketch of the parametric-struct idea follows.
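This sketch is illustrative only; the field names follow the PR's notation, but the package's exact definition may differ:

```julia
using LinearAlgebra

# Three type parameters preserve the concrete weight types (Diagonal,
# Hermitian, SparseMatrixCSC, ...) instead of forcing dense Matrix storage.
struct ControllerWeightsSketch{
    NT<:Real,
    MW<:AbstractMatrix{NT},
    NW<:AbstractMatrix{NT},
    LW<:AbstractMatrix{NT},
}
    M_Hp::MW  # output setpoint tracking weights over Hp
    Ñ_Hc::NW  # input increment weights over Hc
    L_Hp::LW  # input setpoint tracking weights over Hp
end

w = ControllerWeightsSketch(Diagonal([1.0, 2.0]), Diagonal([0.1]), Diagonal([0.0]))
typeof(w.M_Hp)  # Diagonal{Float64, Vector{Float64}}: the special type is preserved
```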
The performance advantage will be mainly visible in `NonLinMPC` based on nonlinear plant models, since the objective value is computed with e.g. `dot(Ȳ, mpc.weights.M_Hp, Ȳ)`, which is faster when `mpc.weights.M_Hp isa Diagonal`:
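The original benchmark is not reproduced here; this is a hedged illustration of the effect (sizes and weight values are made up):

```julia
using LinearAlgebra, BenchmarkTools

Hp, ny = 25, 4
Ȳ = randn(Hp * ny)
M_diag  = Diagonal(repeat([1.0, 2.0, 1.0, 0.5], Hp))  # specialized storage (this PR)
M_dense = Hermitian(Matrix(M_diag))                   # generic storage (before this PR)

@btime dot($Ȳ, $M_diag,  $Ȳ)   # O(n): one pass over the diagonal
@btime dot($Ȳ, $M_dense, $Ȳ)   # O(n²): full quadratic form
```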
The default values of the weights are now also specialized, e.g. `Diagonal(repeat(Mwt, Hp))` instead of `diagm(repeat(Mwt, Hp))`.
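For illustration, the type difference between the two constructions (values are arbitrary):

```julia
using LinearAlgebra

Mwt, Hp = [1.0, 0.5], 10
typeof(Diagonal(repeat(Mwt, Hp)))  # Diagonal{Float64, Vector{Float64}}: structure preserved
typeof(diagm(repeat(Mwt, Hp)))     # Matrix{Float64}: dense, structure lost
```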