Event Sourcing
Event sourcing is a way of storing data in computer systems in a forward-only, write-once manner, such that every state-affecting event that occurs to a thing is recorded. The current state of that thing can then be recreated by replaying its history of state changes.
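As a rough sketch of the idea (hypothetical names, assuming a simple bank-account aggregate), the current state is just a fold over the event history:

```java
import java.util.List;

// A minimal, hypothetical event for one bank-account aggregate.
// Each event records a state change; nothing is ever updated in place.
record MoneyEvent(String type, long amountPence) {}

public class ReplayExample {
    // The current balance is derived by replaying the full history.
    static long currentBalance(List<MoneyEvent> stream) {
        long balance = 0;
        for (MoneyEvent e : stream) {
            switch (e.type()) {
                case "Deposited" -> balance += e.amountPence();
                case "Withdrawn" -> balance -= e.amountPence();
            }
        }
        return balance;
    }

    public static void main(String[] args) {
        List<MoneyEvent> stream = List.of(
            new MoneyEvent("Deposited", 10_000),
            new MoneyEvent("Withdrawn", 2_500),
            new MoneyEvent("Deposited", 1_000));
        System.out.println(currentBalance(stream)); // 8500
    }
}
```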
Event sourcing can be made a lot easier, and the sanity of the developer(s) preserved, by following a few guidelines:
- Each aggregate has its own event stream. The streams may or may not be physically distinct, but logically an event stream may only contain events pertaining to a single aggregate.
- Each event needs to record its event type (or record type) and also the version of the system that wrote it. This allows the software to discern whether a given attribute can be expected to be present on any given event (see the envelope sketch after this list).
- Classes that represent events should be final (sealed / not inheritable) to minimise the risk of system changes spreading beyond their intended target.
- The payload (or information content) of each event is variable. In practice this lends itself to a NoSQL store, linked JSON documents, or another object-storage methodology.
- The event stream is a forward-only, write-once data store.
- “Adjustment” style events are not allowed. If an incorrect event is written then a cancellation-rebook pair of events must be written to effect the change.
- Events should be made idempotent, so that applying the same event more than once leaves the state unchanged.
- All the data pertaining to any given event should be stored - whether or not there is a current business need for that data. Ideally no data should be discarded.
- All read operations should be served from projections.
- Projections can be cached as at a given event; generating the current state can then be sped up by starting from that cached state and applying only the events that have occurred since (see the snapshot sketch after this list).
- Multiple projections can be generated from the same event stream.
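To illustrate the event-type/version and final-class guidelines above, here is a minimal, hypothetical envelope; none of these field names come from this page, they are assumptions for the sketch:

```java
import java.time.Instant;

// Hypothetical envelope: every stored event records what it is and
// which version of the software wrote it, so readers know which
// attributes they can expect to find in the payload.
record EventEnvelope(
        String aggregateId,   // the one aggregate this stream belongs to
        String eventType,     // e.g. "Deposited"
        int schemaVersion,    // version of the writer that produced it
        Instant writtenAt,
        String payloadJson    // variable payload, stored as JSON
) {}

// Java records are implicitly final, so the event shape cannot be
// altered by subclassing elsewhere in the system.
```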
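And a sketch of projection caching under the same assumptions: the projection restarts from a snapshot taken as at a known sequence number and applies only the events written after it:

```java
import java.util.List;

// Hypothetical snapshot: a projection's state as at a known sequence number.
record Snapshot(long asAtSequence, long balancePence) {}

record StoredEvent(long sequence, String type, long amountPence) {}

public class CachedProjection {
    // Rebuild the current state from a cached snapshot plus the
    // events appended since the snapshot was taken.
    static long balanceFrom(Snapshot snapshot, List<StoredEvent> stream) {
        long balance = snapshot.balancePence();
        for (StoredEvent e : stream) {
            if (e.sequence() <= snapshot.asAtSequence()) continue; // already applied
            switch (e.type()) {
                case "Deposited" -> balance += e.amountPence();
                case "Withdrawn" -> balance -= e.amountPence();
            }
        }
        return balance;
    }

    public static void main(String[] args) {
        // Snapshot as at event 2; only event 3 is replayed on top of it.
        Snapshot snap = new Snapshot(2, 7_500);
        List<StoredEvent> stream = List.of(
            new StoredEvent(1, "Deposited", 10_000),
            new StoredEvent(2, "Withdrawn", 2_500),
            new StoredEvent(3, "Deposited", 1_000));
        System.out.println(balanceFrom(snap, stream)); // 8500, same as a full replay
    }
}
```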
Because write operations only ever append to the event stream, we do not need to worry about data loss caused by concurrent updates overwriting each other. This means that you do not need an always-on consistency engine (most commonly a database), nor any form of write locking.