
Conversation

@aneeshs1729
Contributor

@aneeshs1729 aneeshs1729 commented Feb 16, 2025

Added ability to take higher-order mimetic divergences in 1D for the MATLAB version. This is a partial fix for issue #20. If someone can implement higher-order mimetic gradient and interpolation operators, then we can close #20.

Added ability to take higher order mimetic divergences in 1d.
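A minimal usage sketch of the new capability, assuming the existing MOLE MATLAB calling convention div(k, m, dx) carries over to the higher orders (the values of k, m, and dx below are illustrative):

% Sketch: build a 1D mimetic divergence of higher even order with the usual MOLE call.
k  = 6;              % even order of accuracy; this PR generalizes the construction to any 2k
m  = 50;             % number of cells
dx = 1/m;            % uniform cell width
D  = div(k, m, dx);  % sparse mimetic divergence operator on the staggered grid
spy(D);              % inspect the widened interior and one-sided boundary stencils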
@jbrzensk
Collaborator

@jcastillo001 @mdumett know the response for this.

@jcastillo001

jcastillo001 commented Feb 17, 2025 via email

@aneeshs1729
Contributor Author

Could you write down an explanation of how you compute the coefficients?

Hello Dr. Castillo,
Here is how I calculated the coefficients for the higher-order mimetic divergence:
algorithm_for_computing_higher_order_mimetic_divergences.pdf

@mdumett
Collaborator

mdumett commented Feb 21, 2025

Aneesh, could you provide some documentation of the method so we can understand what it does and verify that it is implemented correctly?

@mdumett mdumett self-requested a review February 21, 2025 19:31
@jbrzensk
Collaborator

@mdumett
Collaborator

mdumett commented Feb 23, 2025

This is a very smart algorithm for computing first-order derivative stencils. The first part of the document derives stencil formulas for interior points (points whose stencil does not use boundary data). The last section derives stencils for points near the boundary.

Nevertheless, to clarify my understanding of these ideas, it would help if you could improve some of the details and the notation used for this algorithm.

  1. Mimetic differences use a staggered mesh. The indices used for the weights a_i^{(k)} in the linear combinations of function values that approximate derivatives have limits for k that depend on where the data lives: at the cell centers or at the nodes, because these two sets have different cardinality. In fact, formula (5) uses points that lie at the cell centers, so the derivative being approximated is the gradient, not the divergence. But this does not matter, because the divergence and the gradient use exactly the same stencils at interior points (a fact confirmed by the fourth-order stencil of section 2.3).

  2. In the derivation of the stencils for points near the boundary (last section), the derivatives are calculated at the cell centers (the way the divergence does). This technique extends the usual nodal grid by adding points to the left of the left vertex and to the right of the right vertex. The function values at the added points are approximated by formula (12).

A similar argument, following the same ideas, could be used to approximate the gradient at non-interior nodes. Please add the formulas for the gradient at non-interior points to the code in your PR. You should also write a CSRC report explaining these ideas.
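For reference, the fourth-order interior stencil referred to above is the standard staggered central difference with weights [1, -27, 27, -1]/(24h), shared by the divergence and the gradient away from the boundary. A minimal MATLAB sketch (an editorial illustration, not code from the PR) that checks its order of accuracy:

% Sketch: fourth-order staggered interior stencil shared by the mimetic
% divergence and gradient at interior points.
f  = @(t) sin(t);                               % smooth test function
x0 = 1.0;                                       % evaluation point
h  = 0.1;                                       % staggered-grid spacing
w  = [1, -27, 27, -1] / (24*h);                 % interior weights
approx = w * f(x0 + h*[-1.5; -0.5; 0.5; 1.5]);  % samples at x0 +/- h/2 and x0 +/- 3h/2
err = abs(approx - cos(x0))                     % error is O(h^4)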

@aneeshs1729
Contributor Author

Hello Professor Dumett, I finally added the ability to take higher-order gradients in my most recent PR.

Extends the gradient to arbitrarily high order
altPascal is now calculated in a single loop. Both the divergence helper and extensionOperatorDivergence now create their outputs without inserting into a sparse matrix (which is slow).
…helper and gradienthelper

The higher-order extensionOperatorGradient had a bug where I accidentally assigned a 1-by-r vector to an r-by-r matrix.
I added tests for gradienthelper and divergencehelper to make sure they actually agree with grad(k,m,1) and div(k,m,1), respectively.
@mdumett
Collaborator

mdumett commented Apr 2, 2025

@jbrzensk: I am in the process of accepting this code. There are several .m files that are specific to this feature. We should probably add them to the future Operators Utils subdirectory. Should we wait until I finish the file reorganization, or approve them now and reorganize them later, together with the reorganization of the interpolators, BCs, and Operators?

@jbrzensk
Collaborator

jbrzensk commented Apr 2, 2025

@mdumett I think Aneesh is still working on the stability of higher-order operators. Currently, the method calculates the existing coefficients, but the stability of order >8 operators is too low for inclusion in the library. Until this is resolved, we should keep the current method, which is faster and more transparent.

The file movement will happen after this is accepted, unless it takes a long time.

@jcastillo001

jcastillo001 commented Apr 2, 2025 via email

@aneeshs1729
Contributor Author

aneeshs1729 commented Apr 8, 2025

Hello Jared, I think that higher-order mimetic divergence and gradient operators are inherently unstable. The coefficients in the boundary rows are supposed to blow up. In fact, you can notice the issue when calculating the weights for the divergence/gradient as early as k = 8.

Here's the output after running weightsQ(8, 21, 1);
notice how some of the weights are pretty far from 1:

ans =

    1.0000
    1.2494
    0.0740
    2.7307
   -1.0505
    2.5545
    0.2780
    1.1778
    0.9885
    0.9974
    1.0000
    1.0000
    1.0000
    0.9974
    0.9885
    1.1778
    0.2780
    2.5545
   -1.0505
    2.7307
    0.0740
    1.2494
    1.0000
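A quick way to quantify the blow-up (weightsQ is the helper used above; this just reproduces the output and measures the worst deviation from 1):

% Sketch: measure how far the order-8 weights stray from 1.
w = weightsQ(8, 21, 1);
maxDeviation = max(abs(w - 1))   % about 2.05 for the output listed above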

@jcastillo001

jcastillo001 commented Apr 8, 2025 via email

…e divergence based on inverting the Vandermonde matrix again. This ruins the accuracy of the gradient

Hello Contributors, I added a file to calculate the Vandermonde inverse quickly, based on this: https://www.mathworks.com/matlabcentral/fileexchange/127624-fastest-analytic-form-inverse-of-square-vandermonde-matrices. I edited the divergence to use Vandermonde inversion, and changed test 7 to match the test I used to generate the figures for the poster presentation. Test 7 fails (it starts off well, but the error explodes for higher values of k, unlike in the poster presentation where it stays mostly stable). This shows that using a fast algorithm for inverting the Vandermonde matrix kills the accuracy of the calculations for higher orders of k.
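One built-in way to see why any approach based on inverting the Vandermonde matrix struggles at higher orders is its condition number on equispaced nodes (a sketch using only standard MATLAB; the node choice is illustrative):

% Sketch: the conditioning of the Vandermonde matrix grows rapidly with the order.
for k = 2:2:12
    x = linspace(0, 1, k+1);     % k+1 equispaced nodes
    V = fliplr(vander(x));       % V(i,j) = x(i)^(j-1)
    fprintf('k = %2d   cond(V) = %.2e\n', k, cond(V));
end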
@aneeshs1729
Contributor Author

aneeshs1729 commented May 1, 2025

Hi Dr. Castillo, here is the code that I tested for inverting the Vandermonde matrix, using the algorithm from the report:

function V_inv = invertVandermonde(x)
% invertVandermonde Inverts a Vandermonde matrix using the Björck-Pereyra algorithm
%
%   V_inv = invertVandermonde(x)
%
%   Input:
%       x     - vector of interpolation nodes (length n)
%   Output:
%       V_inv - inverse of the Vandermonde matrix V with V(i,j) = x(j)^(i-1)

n = length(x);
V_inv = zeros(n,n);

% Solve V*c = e_k for each column k of the inverse
for k = 1:n
    % Create e_k (k-th unit vector)
    c = zeros(n,1);
    c(k) = 1;

    % Step 1: forward stage (Newton-like elimination with the nodes)
    for j = 1:n-1
        for i = n:-1:j+1
            c(i) = c(i) - x(j) * c(i-1);
        end
    end

    % Step 2: back substitution with divided differences
    % (standard Björck-Pereyra primal recursion)
    for j = n-1:-1:1
        for i = j+1:n
            c(i) = c(i) / (x(i) - x(i-j));
        end
        for i = j:n-1
            c(i) = c(i) - c(i+1);
        end
    end

    % Save c as the k-th column
    V_inv(:,k) = c;
end

end

I am looking to see if there are any improvements that can be made to this.
Needless to say, it didn't work either.
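A small sanity check of the routine above (a sketch; it forms the Vandermonde convention the function inverts and prints the residual, which illustrates how the accuracy degrades as n grows on equispaced nodes):

% Sketch: residual of invertVandermonde against the explicit Vandermonde matrix.
for n = 5:5:20
    x = linspace(0, 1, n);
    V = fliplr(vander(x)).';     % V(i,j) = x(j)^(i-1), the convention inverted above
    fprintf('n = %2d   ||Vinv*V - I|| = %.2e\n', n, norm(invertVandermonde(x)*V - eye(n)));
end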

@jcastillo001

jcastillo001 commented May 1, 2025 via email

@jbrzensk
Collaborator

Closing this @aneeshs1729 after 30 days with no updates. If you make some more progress, we can reopen it.

@jbrzensk jbrzensk closed this May 30, 2025

Development

Successfully merging this pull request may close these issues.

Adding support for order 2k mimetic operators on matlab
