In a software architecture, loosely coupled components allow services and applications to change with minimal impact on the systems and applications that depend on them. At the organizational level, this loose coupling also allows different development teams to work on their components efficiently and independently of one another.
How can we decouple Event Sources from Event Sinks, given that both may include cloud services, systems such as relational databases, and applications and microservices?
We can use the Event Broker of an Event Streaming Platform to provide this decoupling. Typically, multiple event brokers are deployed as a distributed cluster to ensure elasticity, scalability, and fault tolerance during operations. The brokers collaborate to receive and durably store Events (write operations) into Event Streams and to serve those events (read operations), handling one or many clients in parallel. Clients that produce events are called Event Sources; through the brokers, they are decoupled and isolated from the clients that consume the events, which are called Event Sinks.
Typically, the technical architecture follows a design of "dumb brokers, smart clients." Here, the broker intentionally limits its client-facing functionality to achieve the best performance and scalability, which shifts additional work onto the broker's clients. For example, unlike with traditional messaging brokers, it is the responsibility of an event sink (a consumer) to track its own progress in reading and processing an event stream (see the consumer sketch at the end of this section).
Apache Kafka® is an open-source, distributed Event Streaming Platform, which implements the Event Broker pattern. Kafka runs as a highly scalable and fault-tolerant cluster of brokers. Many Event Processing Applications can produce, consume, and process Events from the cluster in parallel, with strong guarantees such as transactions, using a fully decoupled and yet coordinated architecture.
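To make this decoupling concrete, the following is a minimal sketch of an Event Source implemented with Kafka's Java producer client. The broker address, the topic name "orders", and the record payload are assumptions made for illustration; any consumers of that topic read the event independently, without the producer knowing who they are.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class OrderSource {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Write an event to the "orders" stream; the brokers store it durably,
            // and any number of consumers can read it later, in parallel.
            producer.send(new ProducerRecord<>("orders", "order-123", "{\"amount\": 42}"));
            producer.flush();
        }
    }
}
```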
Additionally, Kafka's protocol provides strong backward and forward compatibility guarantees between the server-side brokers and the client applications that produce, consume, and process events. For example, client applications using a new version of Kafka can work with a cluster of brokers running an older version of Kafka. Similarly, older client applications continue to work even when the cluster of brokers is upgraded to a newer version of Kafka (and Kafka also supports in-place version upgrades of clusters). This is another example of decoupling the various components in a Kafka-based architecture, resulting in even greater flexibility during design and operations.
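Tying back to the "dumb brokers, smart clients" design described above, the sketch below shows a matching Event Sink that tracks its own read progress by disabling auto-commit and committing offsets explicitly after processing. The broker address, consumer group name, and topic are again assumptions chosen for illustration, not values prescribed by this pattern.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OrderSink {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // assumed broker address
        props.put("group.id", "order-sink");              // assumed consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // The consumer, not the broker, decides when its progress is recorded.
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Application-specific processing of each event.
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
                // Explicitly commit the position reached in the stream.
                consumer.commitSync();
            }
        }
    }
}
```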