Data Contract

An Event Processing Application can send an Event to another Event Processing Application. For this communication to work, both applications must understand how to process these shared Events.

Problem

How can an application know how to process an Event sent by another application?

Solution

Using a Data Contract or Schema, different Event Processing Applications can share Events and understand how to process them, without the sending and receiving applications knowing any details about each other. The Data Contract pattern allows these applications to cooperate while remaining loosely coupled, insulating each from the other's internal changes. Implementing a data contract or schema also gives an event stream record consistency guarantees similar to those of a relational database management system (RDBMS), which enforces a schema by design.

Implementation

By using a schema to model event objects, Apache Kafka® clients (such as a Kafka producer, a Kafka Streams application, or the streaming database ksqlDB) can understand how to handle events produced by any application that uses the same schema. For example, we can use Apache Avro to describe the schema for a Purchase event:

{
  "type": "record",
  "namespace": "io.confluent.developer.avro",
  "name": "Purchase",
  "fields": [
    {"name": "item", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "customer_id", "type": "string"}
  ]
}
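
To illustrate, here is a minimal Java sketch that builds a Purchase event conforming to the schema above, using Avro's generic API. The field values and class name are made up for the example; in practice the schema definition would typically be loaded from a shared file or fetched from a registry rather than embedded as a string:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;

public class PurchaseExample {

    // The same schema definition shown above, shared by producer and consumer.
    static final String PURCHASE_SCHEMA = """
        {
          "type": "record",
          "namespace": "io.confluent.developer.avro",
          "name": "Purchase",
          "fields": [
            {"name": "item", "type": "string"},
            {"name": "amount", "type": "double"},
            {"name": "customer_id", "type": "string"}
          ]
        }""";

    public static void main(String[] args) {
        Schema schema = new Schema.Parser().parse(PURCHASE_SCHEMA);

        // The builder enforces the contract: a missing field or a value of
        // the wrong type fails here, before the event ever reaches another
        // application.
        GenericRecord purchase = new GenericRecordBuilder(schema)
                .set("item", "apple")
                .set("amount", 0.99)
                .set("customer_id", "u-1234")
                .build();

        System.out.println(purchase);
    }
}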

Additionally, storing schemas in a central repository, such as Confluent Schema Registry, makes it easy for Kafka clients to retrieve and agree on the same schemas at runtime, without distributing schema files by hand.
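
As a sketch of one possible setup, the producer below is configured with Confluent's KafkaAvroSerializer, which serializes GenericRecord values against the schema held in Schema Registry. The broker address, Schema Registry URL, and the purchases topic name are placeholder assumptions for this example:

import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class PurchaseProducer {

    // Publishes a Purchase record, e.g., one built as in the previous sketch.
    static void publish(GenericRecord purchase) {
        Properties props = new Properties();
        // Broker and Schema Registry endpoints are placeholders.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://localhost:8081");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // On first use, the serializer registers the Purchase schema with
            // Schema Registry and embeds the schema ID in each message, so
            // consumers can fetch the exact schema the event was written with.
            producer.send(new ProducerRecord<>("purchases",
                    String.valueOf(purchase.get("customer_id")), purchase));
        }
    }
}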

Considerations

Rather than implementing custom support for a data contract or schema, consider using an industry-accepted framework for schema support, such as Apache Avro, Protocol Buffers (Protobuf), or JSON Schema.
