Event

Events represent facts and are used by decoupled applications, services, and systems to exchange data across an Event Streaming Platform.

Problem

How do I represent a fact about something that has happened?

Solution

An event represents an immutable fact about something that happened. Examples of Events might be orders, payments, activities, or measurements. Events are produced to, stored in, and consumed from an Event Stream. An Event typically contains one or more data fields that describe the fact, as well as a timestamp that denotes when the Event was created by its Event Source. The Event may also contain various metadata, such as its source of origin (for example, the application or cloud service that created the event) and storage-level information (for example, its position in the event stream).

Implementation

In Apache Kafka®, Events are referred to as records. Records are modeled as a key / value pair with a timestamp and optional metadata (called headers). The value of the record usually contains the representation of an application domain object or some form of raw message value, such as the output of a sensor or other metric reading. The record key serves several purposes, but most importantly it determines how Kafka partitions the data within a stream, also called a topic (for more details on partitioning, see Partitioned Parallelism). The key is often best thought of as a categorization of the Event, like the identity of a particular user or connected device. Headers are a place for record metadata that can help to describe the Event data itself, and are themselves modeled as a map of keys and values.
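
As a rough sketch of how the key drives partitioning: for keyed records, Kafka's default partitioner hashes the serialized key with murmur2 and maps the result onto one of the topic's partitions, so all Events with the same key land in the same partition and retain their relative order. The snippet below only illustrates the idea, using Kafka's internal Utils helper and an assumed key and partition count:

byte[] keyBytes = "customer-42".getBytes(java.nio.charset.StandardCharsets.UTF_8); // illustrative key
int numPartitions = 6; // assumed partition count of the topic

// Same computation performed by the default partitioner for keyed records:
// positive murmur2 hash of the serialized key, modulo the partition count.
int partition = org.apache.kafka.common.utils.Utils.toPositive(
    org.apache.kafka.common.utils.Utils.murmur2(keyBytes)) % numPartitions;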

Record keys, values, and headers are opaque data types, meaning that Kafka, by deliberate design to achieve its high scalability and performance, does not define a type interface for them: they are read, stored, and written by Kafka's server-side brokers as raw arrays of bytes. It is the responsibility of Kafka client applications, such as the streaming database ksqlDB or microservices built with client libraries such as Kafka Streams or the Kafka Go client, to serialize and deserialize the data within record keys, values, and headers.
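
Concretely, serializers are configured on the producing client. The following is a minimal sketch, with an illustrative broker address, of creating the producer instance used in the example below, using string serializers for both keys and values:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

// The producer serializes record keys and values to raw bytes before sending them to the brokers.
KafkaProducer<String, String> producer = new KafkaProducer<>(props);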

When using the Java client library, Events are created with the ProducerRecord type and sent to Kafka with a KafkaProducer. In this example, we produce to an illustrative payments topic, set the key and value types to strings, and add a header:

ProducerRecord<String, String> producerRecord = new ProducerRecord<>(
  "payments" /* topic */,
  paymentEvent.getCustomerId().toString() /* key */,
  paymentEvent.toString() /* value */);

producerRecord.headers()
  .add("origin-cloud", "aws".getBytes(StandardCharsets.UTF_8)); 

producer.send(producerRecord);
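
On the consuming side, the record can be read back along with its timestamp and headers. The following is a minimal sketch, assuming the illustrative payments topic above and a hypothetical consumer group name:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "payments-reader");         // hypothetical group name
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
  consumer.subscribe(List.of("payments"));

  // Deserialize the bytes back into strings and read the header metadata added by the producer.
  ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
  for (ConsumerRecord<String, String> record : records) {
    Header origin = record.headers().lastHeader("origin-cloud");
    System.out.printf("key=%s value=%s origin=%s timestamp=%d%n",
        record.key(), record.value(),
        origin == null ? "n/a" : new String(origin.value(), StandardCharsets.UTF_8),
        record.timestamp());
  }
}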

Considerations

  • To ensure that Events from an Event Source can be read correctly by an Event Processor, they are often created in reference to an Event schema. Event schemas are commonly defined in Apache Avro, Protocol Buffers (Protobuf), or JSON Schema; a minimal Avro example is sketched after this list.

  • For cloud-based architectures, evaluate the use of CloudEvents. CloudEvents provide a standardized Event Envelope that wraps an event, making common event properties such as source, type, time, and ID universally accessible, regardless of how the event itself was serialized. A CloudEvents sketch follows after this list.

  • In certain scenarios, Events may represent commands (instructions, actions, and so on) that should be carried out by an Event Processor reading the events. See the Command pattern for details.
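
To make the schema consideration concrete, here is a minimal sketch of defining a schema and building a conforming Event with Apache Avro's Java library; the Payment record and its fields are hypothetical:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Hypothetical schema for a payment Event; the field names are illustrative.
Schema paymentSchema = new Schema.Parser().parse(
    "{"
  + "  \"type\": \"record\", \"name\": \"Payment\", \"namespace\": \"example.events\","
  + "  \"fields\": ["
  + "    { \"name\": \"customerId\", \"type\": \"string\" },"
  + "    { \"name\": \"amount\",     \"type\": \"double\" }"
  + "  ]"
  + "}");

// Build an Event that conforms to the schema before serializing and producing it.
GenericRecord payment = new GenericData.Record(paymentSchema);
payment.put("customerId", "customer-42");
payment.put("amount", 19.99);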
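
The CloudEvents consideration can be sketched with the CloudEvents Java SDK (io.cloudevents); this is an assumption about your tooling, and the attribute values and payload below are illustrative:

import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.time.OffsetDateTime;
import java.util.UUID;
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

// Illustrative JSON payload for the payment Event.
String paymentJson = "{\"customerId\":\"customer-42\",\"amount\":19.99}";

// Wrap the payload in a CloudEvents envelope so that source, type, time, and id
// are accessible regardless of how the payload itself was serialized.
CloudEvent cloudEvent = CloudEventBuilder.v1()
    .withId(UUID.randomUUID().toString())
    .withSource(URI.create("https://example.com/payments")) // illustrative source
    .withType("com.example.payment.created")                // illustrative type
    .withTime(OffsetDateTime.now())
    .withData("application/json", paymentJson.getBytes(StandardCharsets.UTF_8))
    .build();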
