The foundation of an event streaming system is the events themselves. They take the form of messages that are passed from one service to another through a Kafka topic. Each message consists of a key, usually a unique identifier, and a value. In this video, we'll see how to use the Kafka .NET Client to construct a message. We'll also discuss some of the key principles to follow when defining the message.
Hi, I'm Wade from Confluent. Let's take a moment and talk about how to represent events in Kafka when working with .NET. An event-driven system is made up of many moving parts. The individual pieces often take the form of microservices, but that's not a strict requirement. You can build event-driven systems using a variety of different architectures. These systems are often built on the backbone of Kafka. Essentially, Kafka acts as the central nervous system of the application. It facilitates communication between the various pieces. A key element of this is that the services communicate primarily using asynchronous events. This asynchronous communication allows each piece of the system to operate with increased autonomy, which in turn leads to other benefits, such as looser coupling, increased reliability, and better scalability.

But what do we mean when we call something an event? Martin Fowler describes an event as "something that has happened in the outside world that is of interest to the application." This can be obvious, such as a customer buying an item in a store, but it can also be something more technical in nature. For example, if we have a fitness tracker, we might record an event each time the tracker connects to a mobile device in order to import data into our backend services.

Regardless of what it represents, an event is always something that happened in the past. This has important consequences, because back in the present, we can't change the event. It's considered to be immutable. Changing a past event would be like inventing time travel, and if Hollywood has taught me anything, that's not a good idea. Another consequence is that events are typically named in the past tense. For example, we would name an event User Created rather than Create User. We also try to be descriptive with the names so that the events reveal their intended effect.
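As a concrete sketch of this naming guidance, here are some event types in C#. The names and fields are hypothetical, invented for illustration; the point is the past-tense naming and the immutability of the event once it's constructed.

```csharp
using System;
using System.Text.Json;

// Hypothetical event types. Past tense records a fact ("this happened");
// names like "CreateUser" or "ChangeAddress" would read as commands instead.
public record UserCreated(string UserId, string Email);
public record UserAddressChanged(string UserId, string NewAddress);

public static class Demo
{
    public static void Main()
    {
        // Records expose init-only properties, so once the event is
        // constructed it can't be modified -- a past fact stays a fact.
        var evt = new UserAddressChanged("user-17", "221B Baker Street");
        Console.WriteLine(JsonSerializer.Serialize(evt));
    }
}
```

C# records are a natural fit here: they give value semantics and immutability for free, which lines up with the idea that an event is an unchangeable record of the past.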
An event such as User Updated doesn't reveal much, while an event like User Address Changed reveals a lot more.

Once we've established what our event is and what kind of data it will contain, the next step is to package it into a message that we can push into Kafka. A Kafka message consists of two main parts. You can see them represented here as the generic types we have labeled Key and Value.

Message keys are usually simple types, such as strings or integers; they can be more complex types, but that's not common. The actual value of the key can be defined inline when you construct the message. Its primary purpose is to determine how the messages will be distributed across the cluster, which in turn impacts the ordering guarantees of those messages. If you care about the ordering of your messages, you'll want to pay close attention to the key. However, if the order doesn't matter, then the key can be considered optional. Often, the message key will be the identifier of a specific domain entity, such as a UserId, or perhaps a DeviceId in our fitness tracker example. Using the device ID will ensure that all messages for that specific device are handled in order.

The second part of the message is the value. This is where we would typically store the details of our event. As with the key, it can be a simple type such as a string, but often we use a more complex object that can be serialized into a format such as JSON, Avro, or Protobuf. This might be a representation of our event, or it might be a domain entity that is relevant to the event. This is the part of the message that most downstream consumers are going to be interested in. They will consume the message, extract the value, and use the data it contains.

In addition to the Key and Value fields, a message also contains optional metadata, such as a Timestamp and a collection of Headers. The Timestamp is populated by default but available for us to override.
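Putting these pieces together, here is a minimal sketch of constructing and producing such a message with the Kafka .NET client. It assumes the Confluent.Kafka NuGet package and a local broker; the topic name, device ID, and event type are hypothetical, chosen to match the fitness tracker example.

```csharp
using System;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

// Hypothetical event type for the fitness-tracker example.
public record DeviceConnected(string DeviceId, DateTime ConnectedAtUtc);

public static class Producer
{
    public static async Task Main()
    {
        var evt = new DeviceConnected("device-42", DateTime.UtcNow);

        var message = new Message<string, string>
        {
            // Key: the device ID, so every message for this device lands on
            // the same partition and keeps its relative order.
            Key = evt.DeviceId,
            // Value: the event serialized as JSON -- the part downstream
            // consumers will actually read.
            Value = JsonSerializer.Serialize(evt),
            // The Timestamp is populated by default; set here only to show
            // that it can be overridden.
            Timestamp = new Timestamp(evt.ConnectedAtUtc),
            // Optional metadata; header values are byte arrays.
            Headers = new Headers
            {
                { "content-type", Encoding.UTF8.GetBytes("application/json") }
            }
        };

        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        using var producer = new ProducerBuilder<string, string>(config).Build();
        await producer.ProduceAsync("device-connections", message);
    }
}
```

Using plain strings with `JsonSerializer` keeps the sketch self-contained; in practice you might instead plug a JSON, Avro, or Protobuf serializer into the `ProducerBuilder` so the conversion happens automatically.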
The Headers are useful for situations such as indicating which serializer should be used to deserialize the value. However, be careful when populating the metadata: if a piece of data is critical to downstream consumers, it might make sense to put it into the value rather than hiding it in the metadata. For example, if the timestamp of our event is important, then perhaps it should be added to the value rather than left only in the Timestamp field.

Now that we know what an event is and how it can be represented in Kafka, the next step is to start producing and consuming those events inside our applications. If you aren't already on Confluent Developer, head there now using the link in the video description to access the rest of this course and its hands-on exercises.