In most businesses, a variety of traditional and Event Processing Applications need to exchange Events across Event Streams. Downstream Event Processing Applications require standardized data formats to process these events correctly. In practice, however, having many sources for these events often leads to a lack of such standards, or to different interpretations of the same standard.
How can I process events that are semantically equivalent but arrive in different formats?
Source all of the input Event Streams into an Event Standardizer that passes events to a specialized Event Translator, which in turn converts the events into a common format understood by the downstream Event Processors.
As an example, we can use the Kafka Streams client library of Apache Kafka® to build an Event Processing Application that reads from multiple input Event Streams and then maps the values to a new type. Specifically, we can use the mapValues function to translate each event type into the standard type expected on the output Event Stream.
SpecificAvroSerde<SpecificRecord> inputValueSerde = constructSerde();

builder
    .stream(List.of("inputStreamA", "inputStreamB", "inputStreamC"),
            Consumed.with(Serdes.String(), inputValueSerde))
    .mapValues((eventKey, eventValue) -> {
      if (eventValue.getClass() == TypeA.class) {
        return typeATranslator.normalize(eventValue);
      } else if (eventValue.getClass() == TypeB.class) {
        return typeBTranslator.normalize(eventValue);
      } else if (eventValue.getClass() == TypeC.class) {
        return typeCTranslator.normalize(eventValue);
      } else {
        // Unknown type: raise an exception, or route the event to a dead letter stream
        throw new IllegalStateException("Unrecognized event type: " + eventValue.getClass());
      }
    })
    .to("my-standardized-output-stream", Produced.with(Serdes.String(), outputSerdeType));
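The translator objects above (`typeATranslator` and friends) are left undefined in the example. As a minimal sketch of what an Event Translator can look like, the following plain-Java registry maps each incoming event class to a translation function that produces the standard output type. All class and field names here (`TypeA`, `TypeB`, `StandardEvent`, `customerId`) are hypothetical stand-ins for generated Avro classes, not part of any Kafka API:

```java
import java.util.Map;
import java.util.function.Function;

public class TranslatorRegistry {
    // Hypothetical source-specific event types, standing in for generated Avro classes
    record TypeA(String customerId) {}
    record TypeB(String custId) {}
    // Hypothetical common format expected on the output Event Stream
    record StandardEvent(String customerId) {}

    // One translator per source format, keyed by the incoming event's class
    private static final Map<Class<?>, Function<Object, StandardEvent>> TRANSLATORS = Map.of(
        TypeA.class, e -> new StandardEvent(((TypeA) e).customerId()),
        TypeB.class, e -> new StandardEvent(((TypeB) e).custId())
    );

    public static StandardEvent normalize(Object event) {
        Function<Object, StandardEvent> translator = TRANSLATORS.get(event.getClass());
        if (translator == null) {
            // Unknown type: surface the error rather than emit a malformed event
            throw new IllegalArgumentException("No translator for " + event.getClass());
        }
        return translator.apply(event);
    }

    public static void main(String[] args) {
        System.out.println(normalize(new TypeA("c-1")).customerId());
        System.out.println(normalize(new TypeB("c-2")).customerId());
    }
}
```

A registry like this replaces the if/else chain with a lookup, so adding support for a new source format means registering one more entry rather than editing the topology code.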