We need a method for custom test harnesses to enumerate tests. dtolnay (maintainer of linkme and ctor) was pushing for an in-language #[distributed_slice]
```rust
// "&[..]" marks the definition of a partial slice constant.
// It is a compilation error to have more than one definition.
partial const ALL_THINGS: &[Thing] = &[..];

// "[.., x, ..]" marks an extension of a partial slice constant,
// which must be defined elsewhere.
partial const ALL_THINGS = &[.., Thing(1), ..];

// Another extension of the same constant.
partial const ALL_THINGS = &[.., Thing(2), ..];
```
```rust
// crate `foo`
// an extendable static should be public and have type `&[T]`
#[distributed_slice]
pub static ERROR_MSGS: &'static [(u32, &'static str)] = &[
    (ERROR_CODE_1, ERROR_MSG_1),
    (ERROR_CODE_2, ERROR_MSG_2),
];

// crate `bar`, which depends on crate `foo`
use foo::ERROR_MSGS;

// if the extended static has type `&[T]`, then this const should have type `T`
#[distributed_slice(ERROR_MSGS)]
const CUSTOM_ERROR: (u32, &'static str) = (0x123456, "custom err message");
```
linkme provides an implementation that mostly works
I spent my evening doing a tiny bit of research, expanding on the points you'd already mentioned. Mostly as an overview for myself, but I thought making those thoughts public wouldn't hurt.
Use cases
Custom Test Harnesses
This is essentially required if you want to run tests on embedded systems. The current libtest doesn't support this, but with the experimental custom test frameworks feature you can enumerate all tests and, for example, call them manually after boot.
Language binding registration:
PyO3 is currently using inventory for its multiple-pymethods feature. Using a macro you can define Python methods in Rust; however, you can only have a single such #[pymethods]-annotated block. With multiple-pymethods you can define multiple such blocks, which, using inventory, are added to a static array at compile time to then be used in the bindings to Python.
Crates such as inventory could essentially be replaced by this feature; this is exactly why PyO3 uses inventory as a dependency. However, one can also imagine using this to:
register routes to a webserver
typetag uses inventory to find all annotated types
Registering benchmarks with libraries such as criterion (related to custom test harnesses)
Both ctor and linkme use linker magic. It's kind of amazing this works as well as it does.
ctor works by adding functions to a specific link section (.init_array), which can be run before main. This allows (essentially) arbitrary code to run before main, as much as running any code before main is supported. This code could register an item in some kind of global which can then be picked up in main.
Rust's #[test] annotations work completely differently. Instead of (ab)using the linker, the rust compiler itself collects all the tests during compilation.
Why implement this in the compiler?
Since there are already libraries (like ctor and linkme) that can do this, why would we need to add a new feature to rustc?
It would be an alternative way to implement custom test frameworks, making #[test] less magical. This is something the testing devex team wants.
It would provide a clear, supported way to do things for which rust users might otherwise use global constructors (through ctor, for example). Limitations of those are described in the next section (this was noted by David Tolnay)
We would be able to create custom syntax for this, something libraries could not. However, I'll later note that we might not actually want to do that.
Using the linker or the compiler?
Prior art shows that this feature could be implemented in two ways.
Either through the linker or directly in the compiler.
The linker route means possible limitations in platform support.
linkme, at the time of writing, is tested on Linux, macOS, Windows, FreeBSD, and illumos. It will possibly work on even more targets. The repository also seems to include some tests for embedded targets, although they require a modified memory.x file. If this became a standard feature of the language, it would be available everywhere; thus, everyone would need to use such modified memory.x files on embedded. I also worry (maybe wrongly?) that this would limit what kinds of linkers we are able to support.
ctor can run code before main, and its README.md on GitHub starts with a list of warnings: code in these constructors can only use libc, as std is not loaded yet. Making this a standard language feature would either require us to properly define running code before main (which sounds rather impossible) or make it unsafe, something library authors might use as an implementation detail. However, theoretically we could use global constructors under the hood to implement distributed slices. That would certainly not be zero-cost, though: code would need to be run for every part of a distributed slice before main. Additionally, ctor has limited platform support, much more limited than linkme's.
However, with "linker magic", rust could support such distributed slices, possibly through dynamically loaded libraries. Dynamic linking in Rust has always been kind of hard, and usually goes through cdylibs. However, if rust ever wants to support that, adding a language feature that would block it might not be very productive. I say that because the alternative might do exactly that: supporting distributed slices through the compiler, like #[test], essentially prevents dynamically linked libraries from adding to distributed slices, since the slices are already built before link time.
Note that #[test] itself isn't affected by this, as tests are never dynamically linked in.
Ed Page noted during RustNL that this might not be so big of an issue, saying that rust crates span the entire spectrum from C's single-header libraries to gigantic dynamically linked libraries like openssl. He thinks the kinds of applications you would use distributed slices for are not likely to be these gigantic libraries you might want to dynamically link. (Correct me if I'm misquoting you, @epage.)
Personally, I think going through the compiler, not the linker is much less hacky and provides a more natural extension of #[test].
Complications
As noted by newpavlov, the order of elements in distributed slices would ideally be deterministic. They propose to use the crate name and module path to sort elements in a distributed slice, essentially lexicographically.
libtest sorts tests by their (test function) name, also alphabetically.
Incremental compilation might complicate things, since distributed slices need to be rebuilt whenever any code that might add to them is recompiled.
Publicly facing dylibs: this would be similar to when people use dylibs in C/C++ for large subsystems, like gitoxide or bevy. The story here isn't too well developed at this time. It might be enough to do manual registration of the dylibs.
Plugin systems: manual registration is probably the best way to go
Fast rebuild hack: turn everything into dylibs so you don't have slow link times during development. This could be negatively impacted, but is there much of a use case left if we use faster linkers? Might be good to check with Bevy, as I think they do this.
One idea that has me more and more convinced is to not implement this feature as a slice at all. "Distributed slice" is maybe not even the right name. We cannot easily make the order stable, so keeping indices to elements in the slice is nonsensical. Instead, implementing it as a type that implements Iterator (a bit like iterators over hashmaps) with a randomized order (seeded by something in the source code, to keep builds reproducible), we can do a lot better. Actually, the way this iterator works can then also be changed internally: if a shared library is loaded, the iterators could in theory essentially be Iterator::chain-ed to make the "distributed iterator" work.
I also just talked to @m-ou-se who is working on an RFC to implement functions in a different library than their definition, which turns out to be a related issue. Essentially, they're "single item distributed slices". There, dynamic library loading is even more of an issue, since there can be only one implementation of a function. Thus, a dynamic library that has a conflicting definition would never be able to be loaded.
Write down very clearly the problem and list the current possible solutions.
Order them by viability.
Start a T-lang experiment and implement the one that currently feels most viable (I'm guessing that's going to be an implementation in the compiler, not linker and an iterator, not a slice).
Test that experiment, for example on dioxus and maybe bevy?
Either go through with that (write an rfc) or implement one of the alternatives.
#[test] is magical.
Using the attribute, functions spread all over a crate can be "collected" in such a way that they can be iterated over.
However, the ability to create such a collection system, or global registration system, turns out to be useful elsewhere, not just for #[test].
The following are just a few examples of some large crates using alternative collection systems (inventory, which is based on ctor, or linkme) for one reason or another:

- pyo3, to add methods to Python classes defined in Rust (to collect methods created over multiple impl blocks around your codebase).
- cucumber, a custom testing framework (to collect test cases).
- typetag, for serializing and deserializing &dyn Trait using serde (to collect possible types to deserialize a dyn trait into).
- gflags, for defining command line parameters all over your crate (to collect the definitions of command line parameters).
- leptos' server fn, for marking functions that should be executed on the server, not the client (to collect all such functions).
- dioxus, for a very similar purpose as leptos.
- apollo graphql, for registering plugins.
- zookeeper-client for its sasl feature; rsasl then uses it to register custom authentication mechanisms.
- bevy, which has discussed having a need for this.

Additionally, one can imagine webservers registering routes this way, although I found nobody doing that at the time of writing.
In almost all the examples above, global registration is an opt-in feature behind a cargo feature flag.
Existing solutions are a bit of a hack, and have limited platform support.
Especially `inventory`, based on `ctor`,
which most of the crates mentioned above use, is only regularly tested on Windows, macOS, and Linux,
and use on embedded targets is complicated.
On embedded targets you must manually ensure that all global constructors are called, or a runtime like [`cortex-m-rt`](https://crates.io/crates/cortex-m-rt) must do so.
However, `linkme` also had three cases in 2023 where it broke with linker errors or was missing platform support.
It seems that authors of libraries are wary of including registration systems in their libraries.
I conjecture that this is because random breakages due to a bug in a downstream crate, or limited platform support, are painful and limiting.
Bevy has discussed exactly this, citing limited or no wasm support.
Specifically for the testing-devex team, working on libtest-next.
It was proposed by Ed Page (and in in-person conversations) that we should make #[test] less magical so rust can fully support custom test frameworks.
This plan was explicitly endorsed by the libs team.
Custom test frameworks are useful for all kinds of purposes, like test fixtures.
Importantly, it is essential for testing on #![no_std].
The only way to currently do that is the unstable #![feature(custom_test_frameworks)].
It was discussed (in-person) that this is also useful for rust for linux.
In summary:
#[test] is a magical registration system which cannot be used for any other purpose than tests.
Libraries do seem to have a need for registration systems.
Crates offering registration systems via the linker are in use, but it seems platform support and fragility are an issue for downstream crates.
To advance the state of testing in the language, having access to a better supported registration system is desired.
Existing solutions
Linkme
Because this pattern is so useful,
there are libraries available in the ecosystem that try to emulate this behavior.
Primarily, there's linkme's distributed_slice macro, by David Tolnay.
As the crate's name implies, this works by (ab)using the linker.
linkme has had issues because it was broken on various platforms in the past.
Indeed, it has some platform-specific code, though most platforms are now supported.
The crate works by creating a linker section for each distributed slice,
and placing all elements of that slice in this section.
Based on special symbols placed at the start and end of this section,
the program can figure out at runtime how large the slice has become and reconstruct it using some unsafe code.
Inventory
An alternative approach, also written by David Tolnay is inventory, based on ctor.
Using ctor you can define "global constructors":
entries in a special linker section that,
on various platforms, are executed before main is called.
The name and semantics of these sections change per platform,
and using them users can execute code before main.
This is wildly unsafe, as std is not yet initialized. ctor's README.md on github starts with a large warning to be very careful not to call std functions and to use libc functions instead if you must.
In inventory, each of these ctors executes a little bit of code before main starts to register some element in a global linked list.
#[test]
#[test] is unique, in that it does not involve the linker at all.
Instead, the compiler collects all the marked elements and generates a slice containing all elements from throughout the crate.
It's super stable and guaranteed to work on any platform.
If something goes wrong, you don't get a nasty linker error, but a nice compiler error.
Because it works on any platform, it could indeed support custom test frameworks on #![no_std], part of the reason why we'd want a global registration system.
It might be possible to support this during const evaluation, though comments on a recent RFC by Mara show that this can also be undesirable, as it means that all crates need to be considered together during const evaluation.
Disadvantages:
Building a slice at compile time simply does not support registering elements loaded through a dynamic library (though there might be some ways around that: TODO). Rust's story for dynamic libraries isn't great anyway, but this would add another major blocker.
Exactly this might make hot-patching binaries harder; there were some proposals for that floating around, and this feature would make future implementations of them harder. Actually, that goes for this entire feature, whether supported through the linker or the compiler.
Possible alternative solutions
Keep things as-is
Having this feature only supported through downstream crates.
Providing linkme's distributed_slice or inventory as part of the compiler
Either of these methods would have limitations in platform support,
but if they were also tested as part of the compiler we might be able to guarantee some sort of stability.
I'm especially wary of the ctor based approach, but maybe linkme isn't so bad.
It seems to support many platforms,
and even has a test of it running on cortex-m #![no_std].
It does require a modified linker script listing the sections used for the distributed slices.
Theoretically the compiler could automate those additions to the linker script.
However, it's unclear whether linkme supports WASM,
and based on my own testing I don't think it does.
I'm unsure what would be required to start supporting that.
Ignoring dynamic libraries: providing distributed slices like #[test]
This is the approach rust-lang/rfcs#3632 takes.
Their reasoning is that current similar systems don't support dylibs either: global_allocator, for example, doesn't work with dylibs.
Indeed, tests also don't work across dylibs.
However, that's never a concern as tests are usually crate-local and always statically compiled with the binary they're testing.
Ed Page also has an opinion about this,
and thinks we shouldn't worry too much about dynamic linking right now, though we should check with Bevy whether it'd be beneficial for them.
Personally, I do think we should keep in mind that dynamic linking exists, and make sure we don't pick a design whose only possible implementation is to not support dylibs at all.
A hybrid approach: a proposal to move forward
I think there is a hybrid approach we can take.
One that does not completely rule out dylibs, but might initially not support them while still meeting most people's needs.
The name "distributed slice" might not be very accurate.
With global registration, the ordering of elements is not important, and essentially deterministically random.
It's actually more like a distributed set of elements, where the index of an element in the slice is essentially useless.
Instead, I propose to expose a registration system as an opaque type that implements IntoIterator, just like std::env::Args.
Initially, we can choose to not even expose a len method, as the length might depend on the number of dynamic libraries loaded.
The implementation could then be a slice, or a linked list, or a collection of slices linked together (one per dylib?).
Crucially, the key here is that in this way, we expose the minimal useful API for global registration,
leaving our options for implementation details completely open, such that we can change the internals of it at any point in the future.
The only downside of this approach that I could find so far is that iterators are not const-safe (yet).
Whether we want to support iteration over globally registered elements in const context is questionable
(as highlighted above: const evaluation would then depend on all the crates being compiled, any of which might register elements),
but it would restrict that feature.
I believe that's acceptable, especially now for experimentation, and linker-based approaches wouldn't support that use case either.
We should also make sure that we only implement traits for this opaque type that stay compatible with slices,
so we're free to expose a slice in the future if we want to.
I'd like to experiment with that approach, to see if it meets enough people's needs.
If not we can consider one of the other approaches highlighted.
I propose calling the feature global_registration, not distributed_slice to be more generic.
> Specifically for the testing-devex team, working on libtest-next. It was proposed by Ed Page in #3 (and in in-person conversations) that we should make #[test] less magical so rust can fully support custom test frameworks.

To add weight, this was a plan agreed to in conjunction with libs-api, with an explicit endorsement from dtolnay to implement a language feature and not to use ctor or linkme. An important part of prior art in this is https://rust-lang.github.io/rfcs/2318-custom-test-frameworks.html

> #[test] is unique, in that it does not involve the linker at all.

iirc the plan was to stabilize the compiler's test collection for use by users.

> The key is that the name "distributed slice" might not be correct. Instead, I propose to expose a registration system as an opaque type that implements IntoIterator, just like std::env::Args.

imo the biggest argument for an opaque type is that we are able to stabilize the least amount of the API and then go in one of several directions in the future, depending on how things evolve. We should call out that traits will be implemented to stay compatible with [] in case we decide to expose that as a future possibility.

> I propose calling the feature global_registration, not distributed_slice to be more generic.

This doesn't describe how we would be defining and registering items, but the name we'd use. This is where an opaque type might be difficult and require alternative design work instead.
Activity

epage opened this issue on Feb 7, 2024 with a summary of past work (past discussions, use cases, prior art, and design ideas). jdonszelmann posted the research notes above on May 9, 2024, and the discussion of dylibs and the iterator-based design followed on May 10, 2024, with m-ou-se linking the related RFC (rust-lang/rfcs#3632). jdonszelmann posted the plan and the problem description on May 11, 2024, and epage replied the same day.

17 remaining items