Anna McDonald

Principal Customer Success Technical Architect (Presenter)

Ben Stopford

Lead Technologist, Office of the CTO (Author)

Command Query Responsibility Segregation (CQRS)

You now know that event sourcing is an alternative way of storing data, using a log of events rather than a table that is updated, read, and deleted. You also know that reading data with event sourcing requires an additional step:

(Diagram: reading with event sourcing requires replaying the events and reducing them into the current state)

Basically, you read in all of your events and then perform the extra step of a chronological reduce to get to the current state. You still get to keep your detailed events, while the transformation step ensures that your application logic gets the data in the shape it needs, i.e., a compact, table-shaped description of what's in the shopping cart.
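To make the read path concrete, here is a minimal Java sketch of a chronological reduce over a shopping cart's events. The event shape (an ADD or REMOVE type plus an item and a quantity) is hypothetical and chosen purely for illustration; the important part is that every read has to replay the whole log before it can answer:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CartReduceExample {

    // Hypothetical event shape: type is "ADD" or "REMOVE", plus an item name and quantity.
    record CartEvent(String type, String item, int quantity) {}

    // Chronological reduce: fold the events, oldest first, into the current cart contents.
    static Map<String, Integer> reduce(List<CartEvent> events) {
        Map<String, Integer> cart = new LinkedHashMap<>();
        for (CartEvent event : events) {
            int delta = event.type().equals("ADD") ? event.quantity() : -event.quantity();
            cart.merge(event.item(), delta, Integer::sum);
            cart.values().removeIf(qty -> qty <= 0);   // drop items that were fully removed
        }
        return cart;
    }

    public static void main(String[] args) {
        List<CartEvent> log = List.of(
                new CartEvent("ADD", "t-shirt", 2),
                new CartEvent("ADD", "mug", 1),
                new CartEvent("REMOVE", "t-shirt", 1));

        // With plain event sourcing, every read replays the whole log before answering.
        System.out.println(reduce(log));   // {t-shirt=1, mug=1}
    }
}
```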

Large Chronological Reductions

A problem with this setup arises, however, when the number of events becomes very large. A long event log makes the reduce operation slow to execute and may consume more compute than you're comfortable with. This may not be an issue for a small shopping cart, but if you were to perform a chronological reduce across the entire history of a checking account, for example, it could be lengthy.

The solution to this problem is Command Query Responsibility Segregation (CQRS), which performs computations when the data is written, not when it's read. This way, each computation is performed only once, no matter how many times the data is read in the future.
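As a rough sketch of the contrast, the same hypothetical cart events can instead be applied to a projection at write time, so each event is processed exactly once and reads simply return the precomputed state:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the write-time approach: each event is applied to the
// materialized state exactly once, when it is written; reads return the
// precomputed state with no replay. The CartEvent shape is the same
// hypothetical one used in the sketch above.
public class CartProjection {

    record CartEvent(String type, String item, int quantity) {}

    private final Map<String, Integer> state = new LinkedHashMap<>();

    // Called once per event, at write time.
    public void apply(CartEvent event) {
        int delta = event.type().equals("ADD") ? event.quantity() : -event.quantity();
        state.merge(event.item(), delta, Integer::sum);
        state.values().removeIf(qty -> qty <= 0);
    }

    // Called on every read; no replay of the event log is needed.
    public Map<String, Integer> currentContents() {
        return Map.copyOf(state);
    }
}
```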

How CQRS Works

CQRS is by far the most common way that event sourcing is implemented in real-world applications. A CQRS system always has two sides, a write side and a read side:

(Diagram: a CQRS system, with the write side on the left and the read side on the right)

On the write side (shown on the left of the diagram), you send commands or events, which are stored reliably. The read side (shown on the right of the diagram) is where you run queries to retrieve data. (If you are using Apache Kafka, it provides the segregation between the two sides.) So unlike in vanilla event sourcing, the translation between event format and table format in CQRS happens at write time, albeit asynchronously.

Separating reads from writes has some notable advantages: You get the benefits of event-level storage, but also much better read performance, since queries hit a precomputed view and the write and read layers are decoupled. Of course, there is also a tradeoff: the system becomes eventually consistent, so a read issued immediately after an event is written may not yet reflect that event. This must be taken into account when designing a system.

Implementing CQRS with Kafka

To implement CQRS with Kafka, you write directly to a topic, creating a log of events. (This is faster than writing to a database, which has indexes that must be updated.) The events are then pushed from Kafka into a view on the read side, from where they can be queried. Chronological reductions are performed before the data is inserted into the view. If you have multiple use cases with different read requirements, it's common to create multiple views, all derived from the same event log. To make this possible, events should be stored in a Kafka topic with infinite retention, so they remain available if the read side needs to be rebuilt or a new view needs to be created from scratch.
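Here is a minimal, self-contained sketch of that flow using the plain Kafka Java clients against a local broker. The topic name (shopping-cart-events), the comma-separated event encoding, and the in-memory map standing in for the read-side view are assumptions made for illustration; a real read side would poll continuously and persist the view to a proper store:

```java
import java.time.Duration;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class CartCqrsSketch {

    private static final String TOPIC = "shopping-cart-events";   // hypothetical topic name

    // Write side: append events to the log. Keying by cart id keeps each
    // cart's events in order on a single partition.
    static void writeSide() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(TOPIC, "cart-42", "ADD,t-shirt,2"));
            producer.send(new ProducerRecord<>(TOPIC, "cart-42", "REMOVE,t-shirt,1"));
            producer.flush();
        }
    }

    // Read side: consume the events and fold them into a view before queries hit it.
    // An in-memory map for cart-42 stands in for whatever read store you choose.
    static void readSide() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cart-view-builder");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        Map<String, Integer> view = new LinkedHashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(TOPIC));
            // A real view builder would poll in a loop; one poll keeps the sketch short.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                String[] parts = record.value().split(",");
                int delta = parts[0].equals("ADD") ? Integer.parseInt(parts[2])
                                                   : -Integer.parseInt(parts[2]);
                view.merge(parts[1], delta, Integer::sum);
            }
        }
        System.out.println(view);   // queries now read this precomputed view
    }

    public static void main(String[] args) {
        writeSide();
        readSide();
    }
}
```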

Adding ksqlDB

To actually implement CQRS, you can use Kafka with Kafka Connect and a database of your choice, or you can simply use Kafka with ksqlDB.

ksqlDB provides both the required streaming computation (the chronological reduce) and the materialization of the view to read from, all within one tidy package. As mentioned, the reduce is done at write time, not at read time, so as you write a new event, its effect is immediately incorporated into the view. This means that the contents of the shopping cart are calculated once and updated every time you add or remove an item, much as they would be in a regular database. However, all of the events are retained in the event log in Kafka, preserving their full detail.
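As a rough illustration, here is a sketch using the ksqlDB Java client (io.confluent.ksql.api.client); the stream, topic, and column names are assumptions, and the two CREATE statements could just as well be run from the ksqlDB CLI. The CREATE TABLE statement is where ksqlDB performs the chronological reduce continuously at write time, and the pull query at the end is the read side:

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import io.confluent.ksql.api.client.Row;
import java.util.List;

public class CartKsqlSketch {

    public static void main(String[] args) throws Exception {
        // Connect to a ksqlDB server (host and port assume a local setup).
        ClientOptions options = ClientOptions.create()
                .setHost("localhost")
                .setPort(8088);
        Client client = Client.create(options);

        // Declare the event stream over a hypothetical cart-events topic.
        // Each event carries a signed quantity delta: +n for adds, -n for removals.
        client.executeStatement(
                "CREATE STREAM cart_events (cart_id VARCHAR KEY, item VARCHAR, qty INT) "
              + "WITH (KAFKA_TOPIC='cart-events', VALUE_FORMAT='JSON', PARTITIONS=1);").get();

        // The chronological reduce, performed at write time: ksqlDB keeps this
        // table up to date as new events arrive on the stream.
        client.executeStatement(
                "CREATE TABLE cart_totals AS "
              + "SELECT cart_id, SUM(qty) AS total_items "
              + "FROM cart_events GROUP BY cart_id;").get();

        // The read side: a pull query against the materialized view.
        // (Returns rows once events have been written to cart_events.)
        List<Row> rows = client.executeQuery(
                "SELECT * FROM cart_totals WHERE cart_id = 'cart-42';").get();
        rows.forEach(System.out::println);

        client.close();
    }
}
```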

Now that you understand the basics of event sourcing and CQRS, get some practical experience with events on Confluent Cloud by completing the exercise in the next section.
