Now that we've identified the API that enables our users to perceive and change the state of our system, let's turn our attention back to the events that comprise our system's narrative. In the event model for our autonomous vehicle ride-sharing app, Autonomo, you'll notice that our current model has only one lane, or stream, of events along the bottom of the diagram. This makes sense because, up to this point, we've treated the business process we've been modeling as a single coherent narrative. However, as we look at our API of commands, we notice that some of these interactions happen independently of others and don't have an immediate effect on the same read models. For example, here we can see that owners adding vehicles to and removing vehicles from their inventory is independent of riders requesting and completing rides.

In most cases, the set of events in the event model represents several mostly independent causal narratives that correspond to specific business domains. We call these independent event narratives streams, and we often call the business domains described by these narratives bounded contexts. The goal of step four in the event modeling process is to identify these streams and then to design how they integrate with each other.

So how do we identify separate event streams? The clearest indication is that a stream of events represents a business process or capability that should stand out to someone familiar with the domain, and we'll look at an example of such a business process in a moment. This common-sense heuristic is often good enough to determine which events belong to which streams. One important guideline for discovering the boundaries of our streams is that the narrative of this business capability will be mostly independent of others, with perhaps a few touchpoints with other streams, usually at the start or end of the process.
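To make the sorting step concrete, here is a minimal Python sketch (event names and the `STREAM_OF` mapping are illustrative, not from the course code) of splitting a single event timeline into per-capability lanes while preserving order:

```python
# The original model: one ordered narrative of events along the timeline.
timeline = [
    {"type": "VehicleAdded", "subject": "vehicle-1"},
    {"type": "RideRequested", "subject": "ride-7"},
    {"type": "RideScheduled", "subject": "ride-7"},
    {"type": "VehicleRemoved", "subject": "vehicle-1"},
]

# Heuristic assignment of each event type to a business capability.
STREAM_OF = {
    "VehicleAdded": "vehicles",
    "VehicleRemoved": "vehicles",
    "RideRequested": "rides",
    "RideScheduled": "rides",
}

def split_into_streams(events):
    """Sort events into per-capability lanes, preserving timeline order."""
    streams = {}
    for event in events:
        streams.setdefault(STREAM_OF[event["type"]], []).append(event)
    return streams

streams = split_into_streams(timeline)
print([e["type"] for e in streams["vehicles"]])  # ['VehicleAdded', 'VehicleRemoved']
print([e["type"] for e in streams["rides"]])     # ['RideRequested', 'RideScheduled']
```

Because each lane is built by a single pass over the timeline, events within a lane keep their original relative order, which mirrors what we do visually on the diagram.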
Often your organization will have an existing department or team that owns or handles the business process or capability we're modeling as a stream, and too many handoffs between teams slow things down. Similarly, when automating the process, too many crossovers in the middle of the stream usually indicate that we have the boundaries wrong, since too many integrations between streams become complex and expensive.

The other important guideline when discovering the boundary between streams is that the set of events that comprise the stream must be sufficient to record the history of the state changes for that stream's business process or capability. Since the events are our primary source of truth, we'll record them durably and in order so that the affected read models and downstream consumers can consume the stream and build their state, or rebuild it at a later time if necessary. This requirement means that any foreign events that affect the state of the stream must be translated or copied into the local stream rather than merely being observed on the foreign stream.

Looking at Autonomo, there are two domains we can identify: our vehicle population, and the details and lifecycle of each ride. We can add a lane for each of these two streams to our event model diagram and roughly sort the events into the proper lane while preserving their order along the timeline. We might have some questions or overlap, which we'll sort out when we discuss the integration between these streams. These streams seem mostly independent, though obviously we'll need vehicles available before any rides can be scheduled. Owners registering vehicles that they can make available for rides is clearly independent of any particular ride's lifecycle. At the beginning of the ride lifecycle a vehicle leaves the available pool, and when the ride is completed or canceled, the vehicle rejoins that pool.
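The "rebuild at a later time" property is the payoff of recording events durably and in order. Here is a small sketch (event and read-model names are hypothetical) of rebuilding an "available vehicles" read model by replaying its stream from the beginning:

```python
def apply(available, event):
    """Fold one event into the 'available vehicles' read model."""
    vid = event["vehicle_id"]
    if event["type"] == "VehicleMadeAvailable":
        available.add(vid)
    elif event["type"] in ("VehicleOccupied", "VehicleReturned"):
        available.discard(vid)
    return available

# The durable, ordered vehicles stream: the primary source of truth.
stream = [
    {"type": "VehicleMadeAvailable", "vehicle_id": "v-1"},
    {"type": "VehicleMadeAvailable", "vehicle_id": "v-2"},
    {"type": "VehicleOccupied", "vehicle_id": "v-1"},
]

# Because the stream is durable and ordered, the read model can be thrown
# away and rebuilt at any time by replaying from the first event.
read_model = set()
for e in stream:
    read_model = apply(read_model, e)
print(sorted(read_model))  # ['v-2']
```

Note that the fold only consumes local events; any foreign event that matters to this state would first have to be translated onto this stream, exactly as described above.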
These boundaries seem about right to indicate independent causal narratives. The set of events comprising each stream must be enough to tell the whole story of the business capability; we should be able to walk forward and backward through the stream to test the logic of its own independent narrative.

As we just illustrated, we represent streams on the event model by labeling separate horizontal lanes of events along the bottom of the diagram, below the timeline of commands and read models. By extension, the commands and read models connected to the various events also belong to those events' streams. For example, in the left rectangle we see the part of the vehicles event stream with its associated read models and commands, and on the right we see part of the rides event stream with its associated commands and read models, minus the available vehicles read model as indicated.

In my experience, identifying these streams is the best indication of what domain-driven design practitioners call bounded contexts, or separate areas of the business domain. Separate teams can own these separate streams and the software that automates their business processes, which creates team-scale autonomy and aligns the technology solution to the business problem rather than the inverse.

In our technical architecture, each stream will have its own Kafka topic that represents the source of truth for its bounded context. Since the events that affect a particular domain entity or subject must be recorded in order, we must be careful to partition our topics in a way that preserves ordering, or else use a single-partition topic for our event streams. This topic should be configured for indefinite retention, or else for a very long retention period that aligns with legal or policy record-retention requirements, since it represents our original source of truth and we don't want to forget these events anytime soon.
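Partitioning by entity key is what preserves per-entity ordering across a multi-partition topic. The sketch below simulates the idea (it uses CRC32 as a stand-in for Kafka's actual murmur2-based default partitioner, so the partition numbers are illustrative): every event carrying the same key lands on the same partition, and therefore stays in order relative to the entity's other events.

```python
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # A stable hash of the entity key stands in for Kafka's default
    # partitioner; same key -> same partition, every time.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

events = [
    ("vehicle-1", "VehicleAdded"),
    ("vehicle-2", "VehicleAdded"),
    ("vehicle-1", "VehicleOccupied"),
]

# Route each event to its partition, preserving arrival order per partition.
partitions = {}
for key, etype in events:
    partitions.setdefault(partition_for(key), []).append((key, etype))

# Every event for vehicle-1 landed on the same partition, in order.
p = partition_for("vehicle-1")
print([e for e in partitions[p] if e[0] == "vehicle-1"])
```

If events for one entity were keyed inconsistently (or produced with no key at all), they could land on different partitions and consumers could observe them out of order, which is exactly the failure mode this guideline guards against.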
This topic will also contain several event types, so it must be configured in the schema registry to support multiple schema types per topic.

Even though we split the events in our example system into mostly independent streams, there are a few places where integration between these streams will be needed. The first way to get state change from the outside world, of course, is to expose our command interface, which we've already done. However, if we need to more actively perceive or import foreign state, there are two techniques for doing so.

The first technique for importing foreign state is illustrated nicely in Autonomo. By foreign, I mean importing events from a separate stream onto my stream. How can the vehicle inventory know when to take a vehicle out of availability as it becomes occupied during a ride? The vehicle inventory stream can import external state by listening to the rides event stream: when a ride is scheduled, the vehicle stream reacts with a mark vehicle occupied command, which results in a vehicle occupied event on the vehicle stream. This stateless reaction to a foreign event to trigger a local command is sometimes called a saga in domain-driven design.

The other mechanism for importing external state is slightly more involved. We observe foreign event streams or read models, such as an upstream read model Kafka topic, to build a local read model of work that needs to be done, sort of like a to-do list. Then an automated job, represented on the model as a job-type interface with a little gear icon, can work on these to-do items by invoking commands that result in the recording of a local event. The job-type interface is a special-purpose interface that we didn't cover during the storyboard module, since its only role is to automate the integration between streams.
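The saga described above can be sketched in a few lines of Python (handler and function names are mine, not the course's implementation). The key property is statelessness: the reaction maps a foreign event directly to a local command, with no read model in between.

```python
vehicle_stream = []  # local source of truth for the vehicles context

def handle_mark_vehicle_occupied(vehicle_id):
    """Local command handler: records the fact on the local vehicle stream."""
    vehicle_stream.append({"type": "VehicleOccupied", "vehicle_id": vehicle_id})

def ride_saga(foreign_event):
    """Stateless reaction: a foreign rides-stream event in, a local command out."""
    if foreign_event["type"] == "RideScheduled":
        handle_mark_vehicle_occupied(foreign_event["vehicle_id"])

# A RideScheduled event arrives on the rides topic...
ride_saga({"type": "RideScheduled", "ride_id": "r-7", "vehicle_id": "v-1"})

# ...and the occupancy is now part of the vehicle stream's own narrative.
print(vehicle_stream)
```

In the real architecture the saga would be a consumer on the rides topic and the command handler would produce to the vehicles topic, but the shape of the logic is the same.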
To illustrate, in the ride stream we have the ride scheduler job interface, which listens both to its local model of rides to be scheduled and to the foreign model of available vehicles in order to find a match between requested rides and nearby vehicles. This job invokes the local schedule ride command, which results in the ride scheduled event being recorded to the local ride stream.

What if, rather than perceiving foreign state, our stream needs to cause state change in another stream? As before, we've already modeled one mechanism for this: exposing our own local read models and event stream, which other contexts are free to read as needed. However, what if we need to directly cause change in an external stream? We can invoke that stream's command interface. In order for the result of this command invocation to become part of our local narrative, we'll need to record an event of what happened as a result of this interaction. This local event can happen in one of two ways. First, we can invoke the command and then listen on the external event stream, reacting to a specific outcome event by invoking a local command to record a local event.

For example, in Autonomo we have a tricky problem: when owners want their car back, they can request their vehicle's return, but that vehicle might currently be occupied by a rider. So, as before, we'll track the vehicles to return as a local to-do-list read model and rely on a job-type interface to work on these to-dos. If the vehicle is not scheduled or occupied, it will be immediately confirmed as returned by the recording of the vehicle returned event on the vehicle stream. If the vehicle is scheduled for a ride but not yet occupied, the vehicle returner job can invoke the ride stream's cancel ride command and then await the outcome as the external ride canceled event.
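A job-type interface like the ride scheduler can be sketched as a function over two read models: the local to-do list and the foreign available-vehicles model. The matching below is deliberately naive (first pending ride gets the first available vehicle; names are illustrative, and real matching would consider proximity):

```python
ride_stream = []  # local source of truth for the rides context

def schedule_ride(ride_id, vehicle_id):
    """Local command handler for the rides context."""
    ride_stream.append({"type": "RideScheduled", "ride_id": ride_id,
                        "vehicle_id": vehicle_id})

def ride_scheduler_job(rides_to_schedule, available_vehicles):
    """Pair each pending ride with an available vehicle and invoke the
    local schedule ride command for each match."""
    pool = list(available_vehicles)  # foreign read model, observed not owned
    for ride_id in rides_to_schedule:  # local to-do list read model
        if pool:
            schedule_ride(ride_id, pool.pop(0))

ride_scheduler_job(["r-1", "r-2"], ["v-9", "v-3"])
print([(e["ride_id"], e["vehicle_id"]) for e in ride_stream])
```

The important structural point survives the simplification: the job only ever records events on its own stream, even though one of its inputs is foreign.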
Once the vehicle becomes available, either through the ride cancellation or drop-off, the vehicle returner job confirms the return of the vehicle to its owner. This mechanism of invoking external commands and then awaiting the resulting external event only works if we have access to the external event stream. If we can't read or subscribe to the external event stream, the job interface must instead record the result of invoking the external command as a local event. This allows the job to keep track of failures and retries in the to-do-list read model, and it is a very common pattern when interacting with third-party services like payment processors.

Identifying and integrating event streams was the last step in the modeling process, so our event model is now complete. In the next module, we'll begin implementing the system described by our model in code. If you aren't already on Confluent Developer, head there now using the link in the video description to access other courses, hands-on exercises, and many other resources for continuing your learning journey.
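The third-party variant can be sketched as follows, with the external payment processor stubbed out (all names here are hypothetical, not a real processor API). Because we cannot subscribe to the processor's event stream, the job records the outcome of every call as a local event, which keeps the to-do item open for retry when a call fails:

```python
payment_stream = []  # local events recording our interactions with the processor

def charge_rider(ride_id, should_fail=False):
    """Stand-in for an external payment-processor call we cannot observe
    through an event stream."""
    if should_fail:
        raise RuntimeError("processor unavailable")
    return {"confirmation": f"conf-{ride_id}"}

def payment_job(ride_id, should_fail=False):
    try:
        result = charge_rider(ride_id, should_fail)
        payment_stream.append({"type": "PaymentSucceeded", "ride_id": ride_id,
                               "confirmation": result["confirmation"]})
    except RuntimeError as err:
        # Record the failure locally so the to-do item stays open for retry.
        payment_stream.append({"type": "PaymentFailed", "ride_id": ride_id,
                               "reason": str(err)})

payment_job("r-1")
payment_job("r-2", should_fail=True)
print([e["type"] for e in payment_stream])  # ['PaymentSucceeded', 'PaymentFailed']
```

Either way, success or failure, the interaction becomes part of the local narrative, so the stream remains sufficient to rebuild the state of the payment process on its own.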