question about returning by reference + PackedArray #5

Open
samuelpmish opened this issue Sep 25, 2023 · 4 comments

@samuelpmish
Contributor

Thank you for developing wll-interface; it's been incredibly useful in my personal projects. My question is:

Is there a way to return a wll::tensor by-reference if the dimensions are not known a priori?

Right now, I'm splitting up any C++ routines that have multiple returns into:

  • one function that calculates the outputs and writes them to some global variable
  • "getters" that access that global and return each output individually

Returning multiple things by-reference would be cleaner and thread-safe, but when I tried it, my C++ functions that resized the by-reference output wll::tensors caused the kernel to crash.
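
For reference, here is a minimal sketch of that workaround pattern. Every name below is a placeholder rather than something from wll-interface itself, and it assumes a wll::tensor can be constructed with run-time dimensions and returned by value:

// workaround.cpp -- sketch only, all names hypothetical
#include "wll_interface.h"

#include <cstddef>
#include <vector>

namespace {
// raw results cached in plain C++ containers, since their sizes are
// only known once the computation has run
std::vector<double> cached_data;
std::size_t cached_rows = 0, cached_cols = 0;
}

void compute() {
  cached_rows = 1;   // pretend these fall out of the computation
  cached_cols = 3;
  cached_data.assign(cached_rows * cached_cols, 2.0);
}

// "getter" that copies the cached result into a fresh tensor returned by value
wll::tensor<double, 2> get_result() {
  wll::tensor<double, 2> out({cached_rows, cached_cols});
  for (std::size_t i = 0; i < cached_rows; i++)
    for (std::size_t j = 0; j < cached_cols; j++)
      out(i, j) = cached_data[i * cached_cols + j];
  return out;
}

DEFINE_WLL_FUNCTION(compute);
DEFINE_WLL_FUNCTION(get_result);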

@njpipeorgan
Owner

I think it might be possible if the arguments passed through LibraryLink are not proxies. I need to check that.

By the way, how did you "resize the by-reference output wll::tensors"? It would be best if you could show some actual code.

@samuelpmish
Contributor Author

Passing by-reference works when I don't interfere with the tensor dimensions in C++:

// bindings.cpp
#include "wll_interface.h"

void set_to_two(wll::tensor<double, 2> & foo) {
  for (std::size_t i = 0; i < foo.dimension(0); i++) {
    for (std::size_t j = 0; j < foo.dimension(1); j++) {
      foo(i,j) = 2;
    }
  }
}
DEFINE_WLL_FUNCTION(set_to_two);

setToTwo = LibraryFunctionLoad[mylib, "wll_set_to_two", {{Real, 2, "Shared"}}, "Void"];
------------------------------------------------------------------------------------------
arr = ToPackedArray[Table[i + j + 0.5, {i, 1, 2}, {j, 1, 2}]];
> {{2.5, 3.5}, {3.5, 4.5}}
------------------------------------------------------------------------------------------
PackedArrayQ[arr]
> True
------------------------------------------------------------------------------------------
setToTwo[arr]
arr
> {{2., 2.}, {2., 2.}}
(* works! *)

But when I do something like this (e.g. if I can't figure out the tensor dimensions until running the C++ function):

// bindings.cpp
#include "wll_interface.h"

void set_to_two(wll::tensor<double, 2> & foo) {
  foo = wll::tensor<double, 2>({1, 3}); // <--------
  for (std::size_t i = 0; i < foo.dimension(0); i++) {
    for (std::size_t j = 0; j < foo.dimension(1); j++) {
      foo(i,j) = 2;
    }
  }
}
DEFINE_WLL_FUNCTION(set_to_two);

Then, on the Mathematica side of things, I get this on my Linux machine:

------------------------------------------------------------------------------------------
setToTwo[arr]
arr

(...) LibraryFunction::dimerr: An error caused by inconsistent dimensions or exceeding array bounds was encountered evaluating the function wll_set_to_two.

LibraryFunctionError["LIBRARY_DIMENSION_ERROR", 3]

> {{2.5, 3.5}, {3.5, 4.5}}

and just a kernel crash on my Mac when invoking setToTwo[arr].

@njpipeorgan
Owner

The error occurred on this line:

foo = wll::tensor<double, 2>({1, 3});

foo is a tensor with shape 2x2 whose memory is shared between Mathematica and C++, and you are trying to assign to it a temporary tensor with shape 1x3 whose memory is managed by C++. In this case, wll-interface tries to copy the data from one tensor to the other, but since they have different dimensions, it throws LIBRARY_DIMENSION_ERROR.
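
To make the distinction concrete, here is a sketch under the semantics described above (the function names are hypothetical): a shape-preserving assignment into a "Shared" argument copies element-wise into the existing buffer and succeeds, whereas an output whose shape is only known at run time has to be returned by value.

// sketch only -- function names are hypothetical
#include "wll_interface.h"
#include <cstddef>

// shape-preserving assignment: same dimensions, so operator= can copy
// element-by-element into the shared buffer
void fill_with_twos(wll::tensor<double, 2>& foo) {
  wll::tensor<double, 2> tmp({foo.dimension(0), foo.dimension(1)});
  for (std::size_t i = 0; i < tmp.dimension(0); i++)
    for (std::size_t j = 0; j < tmp.dimension(1); j++)
      tmp(i, j) = 2;
  foo = tmp;  // dimensions match, so no LIBRARY_DIMENSION_ERROR
}

// shape decided at run time: return a new tensor by value instead of
// trying to resize the shared argument
wll::tensor<double, 2> make_result() {
  wll::tensor<double, 2> out({1, 3});
  for (std::size_t i = 0; i < out.dimension(0); i++)
    for (std::size_t j = 0; j < out.dimension(1); j++)
      out(i, j) = 2;
  return out;
}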

You can actually retrieve the error message after this error occurs with

exception = LibraryFunctionLoad[(*libpath*), "wll_exception_msg", {}, "UTF8String"];
exception[]

I guess in terms of designing operator=, copying the data is the safest behavior for this assignment. There could be a function like tensor1.clone_from(tensor2) that copies/moves both the data and the attributes from tensor2.
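
For example, a hypothetical usage of such a clone_from (this function does not exist in wll-interface today; sketch only):

// hypothetical -- clone_from is only a proposed addition
#include "wll_interface.h"
#include <cstddef>

void set_to_two(wll::tensor<double, 2>& foo) {
  wll::tensor<double, 2> result({1, 3});
  for (std::size_t i = 0; i < result.dimension(0); i++)
    for (std::size_t j = 0; j < result.dimension(1); j++)
      result(i, j) = 2;
  // unlike operator=, this would take over result's data *and* its
  // dimensions, so the argument's shape could change
  foo.clone_from(result);
}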

@samuelpmish
Contributor Author

Does this mean the shape of PackedArrays passed into C++ functions is immutable, or that the wll::tensor container just isn't set up to support resizing?
