Remote entity reservation #18195
base: main

Conversation
Co-authored-by: Chris Russell <[email protected]>
Can we add #18003 to this PR (as it should fix it)?

Also, I started this from #18190 since it is close to merging. Should merge that before this, ideally.
I forgot to reset the free cursor lol.
```rust
len: u32,

/// This functions exactly like [`Self::meta`], only it counts backwards instead of forwards.
remote_entities: Vec<EntityMeta>,
```
This is a reasonably workable solution, but it does have downsides:
1. It increases the cost of entity lookups (the fallback logic, less cache friendly now that we're hopping around two lists, etc.).
2. Parallel entity reserving does not pull from the recycled entity pool. For parallel-heavy workloads (ex: assets that are continually loaded and unloaded), this could meaningfully eat into our id space.
3. It introduces a "meet in the middle" error case, which is not currently handled, and solving it would introduce overhead. Given the lack of parallel recycling in (2), the odds of hitting this go up.

If we're willing to forgo recycling (like we do in the current impl), we might be able to do away with the "two-sided allocation" behavior to solve (1) and (3).

To do recycling, we'd probably need to introduce parallel queues (ex: what I do with our current Asset allocators) or introduce locking of some kind. Neither feels particularly good.
In regards to (2): I think we should just recreate the whole pending + free_cursor structure for the parallel stuff. Whenever we flush, we can transfer the freed "parallel" entities to the `RemoteEntityReservation`. This will require us to lock on flush, and to lock whenever reserving entities using `RemoteEntityReservation`. Personally, I don't think this is a big issue: it's fine if remote entity reservation is slow. What matters most is that non-remote entity reservation stays as fast as we can make it.

Edit: So I guess this is a parallel queue, but maintaining the "start from the back" idea.
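A minimal sketch of that locking scheme, with names (`RemoteShared`, `reserve`, `recycle`) invented purely for illustration; this is not the PR's code:

```rust
use std::sync::{
    atomic::{AtomicU32, Ordering},
    Mutex,
};

// Hypothetical shared state behind the Arc: a mutex-guarded free list that
// the main world refills on flush, plus an atomic counter for fresh indices.
struct RemoteShared {
    freed: Mutex<Vec<u32>>, // "parallel" entities recycled during flush
    next_fresh: AtomicU32,  // brand-new remote indices, only ever increments
}

impl RemoteShared {
    /// Remote reservation: slow (takes a lock), which is acceptable here;
    /// the non-remote path stays lock-free.
    fn reserve(&self) -> u32 {
        if let Some(index) = self.freed.lock().unwrap().pop() {
            index
        } else {
            self.next_fresh.fetch_add(1, Ordering::Relaxed)
        }
    }

    /// Called by the main world during flush to hand freed entities back.
    fn recycle(&self, entities: impl IntoIterator<Item = u32>) {
        self.freed.lock().unwrap().extend(entities);
    }
}
```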
```rust
/// This functions exactly like [`Self::meta`], only it counts backwards instead of forwards.
remote_entities: Vec<EntityMeta>,
/// The counter for [`Self::remote_entities`]. This will equal it's length after flushing.
```
Suggested change:

```suggestion
/// The counter for [`Self::remote_entities`]. This will equal [`Self::remote_entities`]'s length after flushing.
```
```rust
/// By starting at 2, we ensure we don't conflict with that, and, we allow similar Entity constants to be made in the future.
const REMOTE_FIRST_GENERATION: NonZero<u32> = const {
    // SAFETY: We pass 2, which is greater than 0.
    unsafe { NonZero::new_unchecked(2) }
};
```
If we want to get rid of this unsafe, I think just doing `NonZero::new(2).unwrap()` should work? Both `new` and `unwrap` are const.
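As a standalone sketch of that suggestion (this assumes a toolchain where `Option::unwrap` is const-stable, which holds on recent Rust):

```rust
use std::num::NonZero;

// Safe const construction: `NonZero::new` returns an `Option`, and an
// accidental zero would fail const evaluation at compile time instead of
// being undefined behavior.
const REMOTE_FIRST_GENERATION: NonZero<u32> = NonZero::new(2).unwrap();
```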
```rust
const REMOTE_FIRST_META: EntityMeta = const {
    let mut default = EntityMeta::EMPTY;
    default.generation = Self::REMOTE_FIRST_GENERATION;
    default
};
```
Suggested change:

```suggestion
const REMOTE_FIRST_META: EntityMeta = EntityMeta {
    generation: Self::REMOTE_FIRST_GENERATION,
    ..EntityMeta::EMPTY
};
```
```rust
/// Constructs a new [`RemoteEntityReserver`] for this [`Entities`].
```
Suggested change:

```suggestion
/// Get a [`RemoteEntityReserver`] for this [`Entities`].
```
The previous description sounded to me like it's creating a new allocator or something.
```rust
self.len = 0;
self.pending.clear();
self.remote_entities.clear();
self.remote_reservations.0.fetch_and(0, Ordering::Relaxed);
```
Suggested change:

```suggestion
self.remote_reservations.0.store(0, Ordering::Relaxed);
```
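For context, both calls leave the counter at zero; the difference is that `fetch_and` is a read-modify-write while `store` is a plain write. A small sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    let counter = AtomicU64::new(42);
    // `fetch_and(0, _)` ANDs with zero and returns the previous value,
    // paying for a read-modify-write whose result is discarded here.
    assert_eq!(counter.fetch_and(0, Ordering::Relaxed), 42);
    // `store(0, _)` expresses the intent (reset to zero) directly.
    counter.store(0, Ordering::Relaxed);
    assert_eq!(counter.load(Ordering::Relaxed), 0);
}
```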
```rust
#[cfg(target_has_atomic = "64")]
use bevy_platform_support::sync::atomic::AtomicU64 as RemoteIdCursor;
#[cfg(target_has_atomic = "64")]
type RemoteInner = u64;

/// Most modern platforms support 64-bit atomics, but some less-common platforms
/// do not. This fallback allows compilation using a 32-bit cursor instead, with
/// the caveat that some conversions may fail (and panic) at runtime.
#[cfg(not(target_has_atomic = "64"))]
use bevy_platform_support::sync::atomic::AtomicUsize as RemoteIdCursor;
#[cfg(not(target_has_atomic = "64"))]
type RemoteInner = usize;
```
I don't think these need to be u64. From what I understand, the normal cursor needs to be i64 because (1) it must be able to represent up to 2^32 entities in `pending`, and (2) up to 2^32 entities waiting to be allocated; covering both -2^32 and +2^32 takes 33 bits. Remote reservation, however, only ever increments the cursor, never decrements it, so we only need the +2^32 side and can stick to u32.
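To make the range argument concrete, here is a sketch of my reading of this comment (not code from the PR):

```rust
// The normal cursor moves in both directions: negative values walk the
// `pending` list, positive values count fresh entities to append to `meta`.
// Spanning -(2^32) ..= +(2^32) needs 33 bits, hence i64.
const MAIN_CURSOR_MIN: i64 = -(1i64 << 32); // 2^32 recyclable entities in `pending`
const MAIN_CURSOR_MAX: i64 = 1i64 << 32;    // 2^32 fresh entities to allocate

// The remote cursor only ever increments from zero, and each value maps to
// a u32 entity index, so an unsigned 32-bit counter covers it.
const REMOTE_CURSOR_MAX: u32 = u32::MAX;
```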
Objective
fixes #18003
For assets as entities and components as entities, it is convenient to reserve an entity from an `Arc`, fully outside of the ECS. See here on discord for why. It is probably possible to do both assets and components as entities without this, but we would have to eat extra complexity for that, and so would users making their own custom assets.
Solution
This design comes from here on discord.
Before this PR, `EntityMeta` is stored in a single `Vec` called `meta`. A `pending` list stores free entities, and an atomic cursor manages which `pending` entities are reserved and which new entities need to be created on `meta`.

All of that remains the same, but after this PR, an additional `Vec<EntityMeta>` called `remote_entities` exists. Its indices are reversed from `meta`, so the index in `remote_entities` = `u32::MAX - entity.index`, and vice versa. We only extend `remote_entities` based on an `Arc<AtomicInt>`. That counter represents `remote_entities`'s length after the next flush. When we free one of these entities, we just add it to the pending list like normal, and the existing atomic cursor manages it.
Problems

I kinda had to tiptoe around `Entity::PLACEHOLDER`. The docs for that say it may exist, but with this, it will exist the first time the remote allocator is used. For now, I just start the generations at 2 instead of 1 to avoid accidental conflicts. This is easily changed if there are other ideas.

Alternatives
It is possible to create a paging/chunking system for Entities that would additionally allow entity partitioning. This design will simplify that if bevy chooses to add it in the future. In the meantime, this PR is much simpler than that.
Testing
We are not using the feature anywhere, so if current tests pass, we shouldn't have an issue.
However, it makes sense to add tests for this. If anyone has specific ideas for situations to test, I'd be more than happy to write them.