course: Practical Event Modeling

Event Modeling Step 3: Identifying the API of Commands and Read Models

10 min
Bobby Calderwood


Senior Principal Architect, Executive Director


Overview

So far, we’ve considered the business narrative of Events that we’ll need to record on the backend, and we’ve envisioned how our users experience the system via the Interfaces it exposes on the frontend. In this module, we’ll turn to the task of connecting these two worlds via an API of Commands and Read Models.




So far we've considered the business narrative of events that we'll need to record on the backend, and we've envisioned how our users experience the system via the interfaces we expose on the frontend. Now we'll turn to the task of connecting these two worlds via an API of commands and read models.

So what are commands and read models? Simply put, a read model is a view into the state of the system at a specific point in time, and a command is a user's expression of intent to change the current state of the system. There is a system design principle called Command Query Responsibility Segregation, commonly abbreviated as CQRS, which states that state-changing operations (commands) should take a separate path through the system from operations that merely retrieve or perceive the state of the system (queries, or read models). Many event-based systems using Kafka naturally adopt this separation: in many Kafka Streams and ksqlDB applications, for example, streams record state changes while tables service reads. Event modeling makes this distinction explicit and visual by specifying separate model components for commands and read models.

A real-life example of CQRS might be found in a sporting event. Many people (fans, officials, photographers, commentators, coaches, and players) can perceive what's happening in the game by watching it unfold play by play, which is like reading the event log of our system, or by looking at the scoreboard, which is like a read model of our system state. However, only a few of those people, mainly the players and perhaps the coaches and officials, can actually take actions to try to affect the course of the game. We call these actions that might affect the system commands.

By adopting this separation of concerns between commands and read models, our goal is to make the system simple and intuitive for our users so they can get their jobs done.
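The scoreboard analogy can be sketched in a few lines of Python. This is a minimal illustration of the CQRS split, not code from the course: the class and method names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Scoreboard:
    """Read model: anyone may look at it, but nobody writes to it directly."""
    home: int = 0
    away: int = 0

@dataclass
class Game:
    events: list = field(default_factory=list)            # the play-by-play log
    scoreboard: Scoreboard = field(default_factory=Scoreboard)

    # Write path: only a command changes state, and only by recording an event.
    def handle_score_goal(self, team: str) -> None:
        event = {"type": "GoalScored", "team": team}
        self.events.append(event)
        self._apply(event)

    # Read path: queries never mutate anything.
    def read_scoreboard(self) -> Scoreboard:
        return self.scoreboard

    def _apply(self, event: dict) -> None:
        if event["team"] == "home":
            self.scoreboard.home += 1
        else:
            self.scoreboard.away += 1

game = Game()
game.handle_score_goal("home")
game.handle_score_goal("away")
game.handle_score_goal("home")
print(game.read_scoreboard())  # Scoreboard(home=2, away=1)
```

Note how the state-changing path (`handle_score_goal`) and the state-perceiving path (`read_scoreboard`) never overlap, and how every change flows through the event log.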
To that end, we aim to inform our users about system state in the most convenient, helpful, and timely way possible. Rather than exposing a whole database schema or resource hierarchy to our users, we reduce cognitive load by showing them only the summarized information relevant to their next action. Many systems that are designed database-first expose complex, fine-grained CRUD (create, read, update, and delete) operations to their users and place the burden of avoiding illegal or nonsensical system states on those users' expertise. In contrast, systems designed through event modeling expose commands, which are composite actions natural to the business and designed to achieve a specific set of business outcomes, as specified by events.

Finally, we aim to make it impossible for the system to represent illegal or invalid states, which keeps our users out of trouble, and also out of our technical support channels. Our command-handling logic, which we'll discuss in a future module, is responsible for synchronously validating the state change requested by the user to ensure that it makes sense and maintains system consistency. The read models are certain to be consistent as of a particular event. These characteristics take the burden of avoiding illegal states away from the user.

We represent commands in our event model diagram using blue sticky notes containing a verb phrase in the imperative. For example, we see here the end ride command from our autonomous vehicle ride-sharing app, Autonomo. Just as we saw with events, commands have an associated data payload containing the details of the change the user is requesting. For example, the end ride command must include the ride identifier, and might also include details like the drop-off time and location. We place the command in the central timeline lane between the triggering interface and the resulting event.
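A command payload like the one described for end ride might look like the following sketch. The field names are assumptions based on the description above (ride identifier, drop-off time, drop-off location), not the course's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EndRide:
    """A command: a verb phrase in the imperative, plus the data
    describing the change the user is requesting."""
    ride_id: str            # identifies which ride to end
    dropoff_time: datetime  # detail of the requested change
    dropoff_location: str   # detail of the requested change

cmd = EndRide(
    ride_id="ride-42",
    dropoff_time=datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc),
    dropoff_location="123 Main St",
)
```

Making the command immutable (`frozen=True`) reflects its role: it is a record of what the user asked for, not a mutable piece of system state.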
Once we have added the command, we can connect the triggering interface to the command, and the command to its resulting event, using data flow arrows. Each flow from interface to command to event is called a state change slice. Every event model can be decomposed into four types of slices, where each slice represents a unit of implementation effort. Decomposing a model into slices helps us measure the work to be done versus the work accomplished, sort of like tickets on a Kanban board. Slices also allow us to focus on and discuss specific state changes in isolation from the rest of the model. We've just covered the state change slice, which is a path from an interface to a command to an event. We'll talk about another type of slice, the state view slice, in a few moments as we discuss read models. We'll introduce the other two types of slices, the external state import slice and the internal state export slice, in our next module.

Commands are speculative and untrusted: they merely express the user's request to change the state of the system. In contrast, an event is a factual record of a state change. In practice, a command is usually triggered by an interface element like a button click or a form submission, and it is almost always handled synchronously, often by a web service tier of the application. Because it's handled synchronously, the command provides us with a transactional moment, where we can evaluate the change being requested in the context of the current state. We can validate that certain invariant conditions are met in order to ensure consistency before we commit ourselves to a state change by recording an event. We can say no to a command by responding to the request with an error. If everything looks good, we can publish some events to our Kafka event stream.
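That transactional moment can be pictured as a synchronous handler that either rejects the command with an error or commits to one or more events. This is only a sketch; the real validation logic (the decide function) is covered in a later module, and the names and statuses here are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EndRide:
    ride_id: str

def handle_end_ride(cmd: EndRide, current_status: str) -> dict:
    """Synchronous command handler: validate the requested change
    against current state before committing to any event."""
    # Invariant: only an in-progress ride can be ended.
    if current_status != "IN_PROGRESS":
        # Say "no" to the command by responding with an error.
        return {"error": f"cannot end ride in status {current_status}"}
    # Everything checks out: record the fact as an event
    # (in a real system, this is where we'd publish to a Kafka topic).
    return {"events": [{"type": "RiderDroppedOff", "ride_id": cmd.ride_id}]}

print(handle_end_ride(EndRide("ride-42"), "IN_PROGRESS"))
print(handle_end_ride(EndRide("ride-42"), "COMPLETED"))
```

The key point is the ordering: validation happens before anything is recorded, so no illegal state ever enters the event stream.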
The logic used for validating an incoming request is called the decide function, which we'll describe in much more detail in a future module of this course.

Our users need to understand the current state of the system so that they can issue the proper command at the proper time. We inform them about the state of the system via read models. Read models are created and updated by the arrival of events. A function called evolve encapsulates the business logic for how to modify a read model with the new information conveyed by a particular event. We'll discuss the evolve function more in module nine, so for now, it's enough to understand that read models are populated by events according to some business rules.

For example, the state of a particular ride will change over time as new events occur. A ride is initially requested by the user, then matched with a vehicle and scheduled by the system. Next, the rider is picked up and finally dropped off at their desired destination. Each of these events changes the state of the ride by adding new information, much like a finite state machine.

Since the arrivals of the events that populate a read model happen at discrete times, the event stream forms a kind of logical clock for our system. Read models represent the state of the system as of a particular point in time: the arrival of the most recent event. This property of event-based systems makes it easy to maintain consistency and to reason about how our system changes over time, and it can even make it possible to recreate the system's state as of a particular moment in time. For example, a ride will only be in progress for a short period of time, namely the time between when the vehicle picks up the rider and when that rider is dropped off at their destination. If we don't happen to query the ride state during that window, we might never see the ride in its in-progress state.
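An evolve-style fold over the ride's events might look like this sketch. The event type names and the version counter are illustrative assumptions; the course's actual evolve function is presented in a later module.

```python
def evolve(ride: dict, event: dict) -> dict:
    """Fold one event into the ride read model; each event bumps the
    version, so the event sequence acts as a logical clock."""
    transitions = {
        "RideRequested": "REQUESTED",
        "VehicleMatched": "SCHEDULED",
        "RiderPickedUp": "IN_PROGRESS",
        "RiderDroppedOff": "COMPLETED",
    }
    return {
        **ride,
        "status": transitions[event["type"]],
        "version": ride.get("version", 0) + 1,
    }

events = [
    {"type": "RideRequested"},
    {"type": "VehicleMatched"},
    {"type": "RiderPickedUp"},
    {"type": "RiderDroppedOff"},
]
ride = {}
for e in events:
    ride = evolve(ride, e)
print(ride)  # {'status': 'COMPLETED', 'version': 4}
```

Each arriving event advances the read model to a new, well-defined version, which is exactly the finite-state-machine behavior described above.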
However, since we have the events, we can rebuild the state of the system as it was after the rider was picked up but before that rider was dropped off, which we've indicated here as version three of this ride's read model state.

We can build read models to serve a variety of data access patterns. For example, we might need to query the population of available vehicles by a wide variety of criteria when matching a vehicle with a ride request, so we might be well served by storing them in a relational database. On the other hand, a rider's view of their currently scheduled ride is a standalone data structure that goes through a sequence of statuses during its lifecycle, and it might be better stored in a key-value store or cache for fast retrieval by its ID. Other use cases might need full-text indexing or graph algorithms. Additionally, read models could be stored or conveyed in push-oriented media, like a compacted Kafka topic, an email, or a WebSocket connection, or they could be poll-oriented, like a queryable relational database or key-value store. The main point is that we have lots of flexibility to build the correct read model for the job at hand.

Now that we know a little bit about what read models are, let's take a look at how we represent them on an event model. We represent read models on our event model diagram using green sticky notes containing a noun phrase. For example, we see here the available vehicles read model from Autonomo. As with events and commands, we can also make a note of the data comprising the read model. All components of the event model can be duplicated and placed several times along the model flow as needed, and this is especially true of read models, which are often populated by several events at various points during a business process. For example, a ride is affected by a rider initially requesting it, the system scheduling it, possibly the rider canceling it, the rider being picked up, and finally, the rider being dropped off.
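Rebuilding the read model as of an earlier version amounts to replaying a prefix of the event log. The sketch below assumes a simple evolve-style fold with illustrative event names; it is not the course's actual code.

```python
def evolve(ride: dict, event: dict) -> dict:
    """Fold one event into the ride read model, bumping the version."""
    transitions = {
        "RideRequested": "REQUESTED",
        "VehicleMatched": "SCHEDULED",
        "RiderPickedUp": "IN_PROGRESS",
        "RiderDroppedOff": "COMPLETED",
    }
    return {"status": transitions[event["type"]],
            "version": ride.get("version", 0) + 1}

log = [
    {"type": "RideRequested"},
    {"type": "VehicleMatched"},
    {"type": "RiderPickedUp"},
    {"type": "RiderDroppedOff"},
]

def state_as_of(version: int) -> dict:
    """Replay only the first `version` events to reconstruct the read
    model as it was at that point on the logical clock."""
    ride = {}
    for event in log[:version]:
        ride = evolve(ride, event)
    return ride

print(state_as_of(3))  # {'status': 'IN_PROGRESS', 'version': 3}
print(state_as_of(4))  # {'status': 'COMPLETED', 'version': 4}
```

Version three here corresponds to the brief in-progress window described above: even if no one queried the ride at the time, the log lets us reconstruct it.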
We duplicate this read model card and place it several times along the central timeline lane alongside the commands. Once we have added the read model, we can connect events to the read models they populate, and read models to the interfaces that need their data, using data flow arrows. Each flow from event to read model to interface is called a state view slice and represents a single unit of development effort.

This is what the Autonomo event model should look like after adding the commands and read models and connecting them to the interfaces and events with data flow arrows. Notice how the flow of information from interface to command to event, and then from event to read model back to interface, looks a little bit like a sine wave. The commands and read models occupy the central lane of the event model, called the timeline, and comprise the system's API. This API is often implemented as web services: via REST, with read models serving GET requests and commands served by PUT, POST, or DELETE requests; via GraphQL, with queries serving read models and mutations serving as commands; or via gRPC, with command and query RPCs distinguished by clear naming conventions. Regardless of implementation details, reads and writes are cleanly separated, map directly to business outcomes, and serve to empower and inform our users.

We've now completed step three of our event modeling workshop by mapping out our API. However, all of our events and their associated commands and read models are currently mixed together in a single event stream. In our next module, we'll separate our events into different streams based on which narrative each event belongs to, and then learn how to model the interactions among these distinct narratives.

If you aren't already on Confluent Developer, head there now using the link in the video description to access other courses, hands-on exercises, and many other resources for continuing your learning journey.