Course: Designing Event-Driven Microservices

The Listen to Yourself Pattern

6 min
Wade Waldron

Staff Software Practice Lead

Overview

The Listen to Yourself pattern is implemented by having a microservice emit an event to a platform such as Apache Kafka, and then consuming its own events to perform internal updates. It can be used as a solution to the dual-write problem since it separates Kafka and database writes into different processes. However, it also provides added benefits because it allows microservices to respond quickly to requests by deferring processing to a later time.

Topics:

  • What is the dual-write problem?
  • What is the listen-to-yourself pattern?
  • How does the listen-to-yourself pattern eliminate dual writes?
  • When are the events processed in the listen-to-yourself pattern?
  • Is the listen-to-yourself pattern eventually consistent?
  • How can we deal with eventual consistency?
  • How do we validate events?

Resources

Use the promo code MICRO101 to get $25 of free Confluent Cloud usage


The Listen to Yourself Pattern

Hi, I'm Wade from Confluent.

Sometimes, when we build a microservice, it may need to perform a series of complex steps in response to a command.

It's possible that these steps may require transactions that span multiple systems and may take longer than we are willing to wait.

In these situations, we can reach for the Listen to Yourself pattern as a potential solution.

Let's consider a simple example.

Imagine a user has a fitness tracker.

This tracker collects data such as steps and heart rate.

The data is periodically synced to the user's phone,

and then pushed to a microservice somewhere in the cloud.

It can then be processed to build a complex model of the user's health.

However, this raises potential issues.

Throughout the process we might need to access the database multiple times, both for reads and writes.

We also need to process all of this data to build up the necessary models of the user's health.

Meanwhile, we don't want the user to be stuck waiting while we execute all of this logic.

We want to respond to them as fast as possible so they can disconnect and move on.

At the same time, we might want to create some events and emit them to Apache Kafka.

This might mean that we need to update both the database and Kafka in a transactional fashion.

And that puts us face-to-face with the dual-write problem.

We have to update two systems that aren't transactionally linked.

If you aren't familiar with the dual-write problem, check out the video linked in the description.

This is where the Listen to Yourself pattern can help.

Rather than performing all of the update logic while the user waits, we can defer it.

Instead, when we receive the command from the user,

we convert it to an event,

and then emit the event to Kafka.

Once the event is in Kafka, it's safe and we can respond to the user.

So in our fitness tracker example, the user's device might send a SyncDevice command.

We would convert that to a DeviceSynced event and send it to Kafka.

Then we immediately reply to the device without any further processing.

This allows a fast response without excessive waiting.
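As a minimal sketch of this command-handling half, the snippet below converts a hypothetical SyncDevice command into a DeviceSynced event, emits it, and replies immediately. An in-memory list stands in for the Kafka topic; a real service would use a Kafka producer (for example, `confluent_kafka.Producer`), and all field names here are illustrative assumptions.

```python
import time
import uuid

# In-memory stand-in for a Kafka topic. A real service would publish
# with a Kafka producer instead of appending to a list.
device_events_topic = []

def handle_sync_device(command):
    """Convert the SyncDevice command to a DeviceSynced event,
    emit it, and reply right away -- no database work happens here."""
    event = {
        "type": "DeviceSynced",
        "event_id": str(uuid.uuid4()),
        "device_id": command["device_id"],
        "readings": command["readings"],
        "synced_at": time.time(),
    }
    device_events_topic.append(event)  # the only write in this path
    return {"status": "accepted", "event_id": event["event_id"]}

reply = handle_sync_device(
    {"device_id": "tracker-42", "readings": [{"steps": 8200}]}
)
```

Because the handler performs exactly one write (the event emit), there is no dual write, and the caller gets a response as soon as the event is durably stored.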

At the same time, the original command only writes events to Apache Kafka.

It doesn't write anything to the database, which means we don't encounter the dual-write problem.

After all, there's only one write happening here.

However, our processing and database writes do have to be completed at some point.

Therefore, the next step is to have a separate process in our microservice that listens for the event.

This is where the name "Listen to Yourself" comes from.

When it's received, it can initiate all of the complex processing

and update the database as appropriate.

Once again, here we are only updating the database, not Apache Kafka, so we continue to avoid the dual-write problem.
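The consuming half might look like the sketch below: the service reads its own DeviceSynced events and performs the slow model-building and database writes. Again, an in-memory list plays the Kafka topic and a dictionary plays the database; the step-summing "model" is a deliberately simplified assumption.

```python
# In-memory stand-ins: the topic the command handler emitted to,
# and a dictionary playing the role of the service's database.
device_events_topic = [
    {"type": "DeviceSynced", "device_id": "tracker-42",
     "readings": [{"steps": 8200}, {"steps": 1300}]},
]
database = {}

def process_device_synced(event):
    """The 'listen to yourself' half: consume our own event and do
    the deferred processing and database updates."""
    total_steps = sum(r.get("steps", 0) for r in event["readings"])
    # Only the database is written here -- no further Kafka emit --
    # so this side also avoids the dual-write problem.
    database[event["device_id"]] = {"total_steps": total_steps}

for event in device_events_topic:
    if event["type"] == "DeviceSynced":
        process_device_synced(event)
```

In a real deployment this loop would be a Kafka consumer subscribed to the same topic the service produces to, committing offsets only after the database update succeeds.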

However, this approach does come with some challenges.

Because we write the event and asynchronously update the database, we introduce potential race conditions.

Once the original caller receives its reply, it might assume the database has been updated.

If it goes to look for the data before the event is processed,

then it won't find what it is looking for.

Meanwhile, if a downstream system consumes the events and calls the microservice looking for further details,

again, they might not be there yet.

This happens because the microservice is eventually consistent.

At any given moment, there may be inconsistencies in the data that result from unprocessed events.

Eventually, once all of the events have been processed, the microservice enters a consistent state, but that can take time.

To some extent, we can mitigate this with careful naming of our events.

A "DeviceSynced" event only suggests that the data has been synced, not that it has been processed.

We could emit a "DataProcessed" event once we are done,

but emitting it alongside our database update would re-introduce the dual-write problem.

In general, careful naming only gets you so far.

If you find yourself encountering this problem, you might be better off considering something other than the Listen to Yourself pattern.

And what if we encounter an error after the caller has been disconnected?

For example, we may need to perform validation on the data contained in the event.

If that validation fails, the caller has moved on and won't know about the failure.

This can create additional inconsistencies in our system.

We can try to address this by moving the validation logic.

If we perform validation on the command, before emitting the event, then we can catch errors while the caller is still connected.

We can then ensure that the event being emitted is valid.

This will slow things down, but that may be an acceptable tradeoff.

We do need to be careful, however, that we do all of this validation without requiring database writes, otherwise we re-introduce the dual-write problem.
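A sketch of that idea: validate the command entirely in memory, before emitting anything, so failures are reported while the caller is still connected. The validation rules and field names below are illustrative assumptions, not part of any real API.

```python
def validate_sync_command(command):
    """Check the command in memory -- no database writes, so no
    dual write is introduced by validating up front."""
    errors = []
    if not command.get("device_id"):
        errors.append("missing device_id")
    for reading in command.get("readings", []):
        if reading.get("steps", 0) < 0:
            errors.append("negative step count")
    return errors

def handle_sync_device(command, topic):
    errors = validate_sync_command(command)
    if errors:
        # The caller is still connected: report the failure now,
        # instead of discovering it after they have moved on.
        return {"status": "rejected", "errors": errors}
    topic.append({"type": "DeviceSynced", **command})  # only valid events are emitted
    return {"status": "accepted"}

topic = []
bad = handle_sync_device({"device_id": "", "readings": []}, topic)
good = handle_sync_device(
    {"device_id": "tracker-42", "readings": [{"steps": 10}]}, topic
)
```

The validation adds latency to the request path, which is the tradeoff the pattern otherwise tries to avoid, but it guarantees that every emitted event is valid.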

The Listen to Yourself pattern can be a useful tool for solving the dual-write problem. However, it isn't perfect.

It's a great option for situations where we are looking to minimize upfront processing time.

It can also be useful in situations where we don't have to worry about other consumers trying to read the data immediately after we emit the event.

However, in other situations, we might consider looking at alternative solutions such as a Transactional Outbox or Event Sourcing.

Have you had a use case for the Listen to Yourself pattern that you want to share? Let me know in the comments below.

Don't forget to follow our courses on Confluent Developer and our YouTube channel.

And please, like, share, and subscribe so we can keep bringing you more content like this.

Thank you for joining me, and I'll see you next time.