Subscription-based Events (legion-like) #2287

@tower120

Description

What problem does this solve or what need does it fill?

The current Event system does not play well with out-of-sync readers (systems running on different FixedStep intervals, systems that update on demand, etc.).

What solution would you like?

legion-like subscription-based events: an EventReader subscribes to an EventSource. The EventSource and EventReader jointly track EventData usage, and the memory is freed once every subscriber has consumed the EventData.

I see three possible implementations. In all cases the EventReader should be stored as a system-local resource; the EventSource can be a global resource. All versions could also offer unsubscribe().


First - the EventSource does not store an EventData queue; the EventData is copied into each EventReader's queue.

//pseudocode in cpp
template<class Data>
struct EventSource {
     std::vector<EventReader<Data>*> subscribers;

     // copy the event into every subscriber's queue
     void trigger(Data data){
        for(EventReader<Data>* subscriber : subscribers)
           subscriber->event_list.push_back(data);
     }
};

template<class Data>
struct EventReader{
    circular_buffer<Data> event_list;
};

Simple to implement, read-efficient, and easy to make thread-safe. However, memory consumption and per-trigger processing time grow linearly with the number of EventReaders.


Second - move each EventData into an Arc, and store Arc pointers in each EventReader's list.

//pseudocode in cpp
template<class Data>
struct EventSource {
     std::vector<EventReader<Data>*> subscribers;

     // one shared allocation per event; subscribers share the pointer
     void trigger(Data data){
        auto shared_ptr = make_shared_pointer(std::move(data));
        for(EventReader<Data>* subscriber : subscribers)
           subscriber->event_list.push_back(shared_ptr);
     }
};

template<class Data>
struct EventReader{
    circular_buffer<shared_pointer<Data>> event_list;
};

Downsides: memory fragmentation and per-event alloc/dealloc (which could be mitigated with a per-EventSource memory arena [custom Arc?]), plus Arc dereferencing on every read. But trigger cost does not grow with the number of EventReaders.

N.B. Since events operate at the inter-system level, it could turn out that memory fragmentation is not a problem: the CPU cache will probably already be clogged with iterated component data.


Third - a variation of the second version, using an intrusive linked list and an intrusive Arc.

//pseudocode in cpp
// actual synchronization elided
template<class Data>
struct EventSource {

    struct EventData{
         Data data;
         atomic<int32> use_count;
         // intrusive linked list (indices as an alternative)
         atomic<EventData*> next;
         atomic<EventData*> prev;
    };
    atomic<EventData*> event_list_begin;
    std::array<EventData, 128> memory_arena;  // or on the heap. Or maybe a global memory arena?

    int event_readers = 0;

    void subscribe(EventReader<Data>& event_reader){
       event_reader.data_ptr = event_list_begin;
       ++event_readers;
    }

    void trigger(Data data){
       EventData* event = push_list_back(memory_arena, event_list_begin, std::move(data));
       event->use_count = event_readers;
    }
};

template<class Data>
struct EventReader{
    EventSource<Data>* src;
    typename EventSource<Data>::EventData* data_ptr;  // can't be freed while in use

    optional<Data> consume(){
       data_ptr = list_next(src->event_list_begin, data_ptr);
       if (!data_ptr)
          return {};

       Data data = data_ptr->data;
       // fetch_sub returns the previous value, so the decrement and the
       // "am I the last reader?" check form a single atomic step
       if (data_ptr->use_count.fetch_sub(1) == 1){
           list_free(src->event_list_begin, data_ptr);
       }
       return data;
    }
};

A synchronized intrusive list may be somewhat tricky to implement. This version has the lowest memory consumption, and each reader has a fixed memory footprint.

Labels

A-ECS: Entities, components, systems, and events
C-Feature: A new feature, making something new possible
C-Usability: A targeted quality-of-life change that makes Bevy easier to use
