Right now, encase recursively calls write_into when serializing types into GPU memory. This typically follows the flow of copy field/value -> advance the writer by its padding -> repeat until done.
This unfortunately breaks vectorization when copying the data. Larger structs built from deeply nested types like 4x4 matrices end up taking tens or hundreds of these steps when they could be directly memcpy-ed into the target buffer.
Any type that has a fixed size at runtime and no internal padding is trivially memcpy-able into and out of GPU buffers. Similarly, arrays, slices and Vecs of these types can also be batch memcpy-ed where applicable.
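As a rough sketch of the batch case: a slice of a padding-free, fixed-size element type can be reinterpreted as one contiguous byte slice and copied in a single step. The type and function names below are illustrative, not encase's actual API.

```rust
// A padding-free, fixed-size type: four f32 fields, #[repr(C)],
// so its in-memory layout matches its GPU layout exactly.
#[repr(C)]
#[derive(Clone, Copy)]
struct Vec4 {
    x: f32,
    y: f32,
    z: f32,
    w: f32,
}

// Batch-copies a whole slice in one memcpy instead of recursing
// per element and per field.
fn write_slice(items: &[Vec4], out: &mut Vec<u8>) {
    // SAFETY: Vec4 is #[repr(C)] with four f32 fields and no
    // padding, so the slice is valid to view as raw bytes.
    let bytes = unsafe {
        core::slice::from_raw_parts(
            items.as_ptr() as *const u8,
            items.len() * core::mem::size_of::<Vec4>(),
        )
    };
    out.extend_from_slice(bytes);
}

fn main() {
    let data = [Vec4 { x: 1.0, y: 2.0, z: 3.0, w: 4.0 }; 3];
    let mut buf = Vec::new();
    write_slice(&data, &mut buf);
    // 3 elements * 16 bytes each
    assert_eq!(buf.len(), 48);
}
```

In practice a crate like bytemuck provides the same reinterpretation safely via its Pod trait, which is one option for avoiding the unsafe block here.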
This information is statically available at compile time in a type's METADATA. An if statement on a constant expression will have its unused branch optimized out, so this should be doable even without compiler support for specialization.
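A minimal sketch of that branching pattern, using a hypothetical trait with an associated const standing in for the relevant METADATA flag (the trait, const and method names are assumptions, not encase's real internals):

```rust
// Hypothetical trait: TRIVIAL stands in for a compile-time
// "fixed size, no padding" flag derived from a type's metadata.
trait GpuWrite {
    const TRIVIAL: bool;
    fn write_into(&self, out: &mut Vec<u8>);
}

// A 4x4 f32 matrix: 64 bytes, #[repr(C)], no padding.
#[repr(C)]
#[derive(Clone, Copy)]
struct Mat4 {
    cols: [[f32; 4]; 4],
}

impl GpuWrite for Mat4 {
    const TRIVIAL: bool = true;

    fn write_into(&self, out: &mut Vec<u8>) {
        // The condition is a constant, so the compiler drops the
        // dead branch entirely; no runtime check remains.
        if Self::TRIVIAL {
            // SAFETY: Mat4 is #[repr(C)], 64 bytes of f32, no padding.
            let bytes = unsafe {
                core::slice::from_raw_parts(
                    self as *const Mat4 as *const u8,
                    core::mem::size_of::<Mat4>(),
                )
            };
            out.extend_from_slice(bytes);
        } else {
            // Fallback: the existing field-by-field recursive path
            // (elided in this sketch).
            unimplemented!()
        }
    }
}

fn main() {
    let m = Mat4 { cols: [[1.0; 4]; 4] };
    let mut buf = Vec::new();
    m.write_into(&mut buf);
    assert_eq!(buf.len(), 64);
}
```

Since TRIVIAL is an associated const, each impl picks its fast or slow path at compile time without any specialization feature.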