course: Apache Kafka® for .NET Developers

Schemas and the Schema Registry

3 min
Wade Waldron


Staff Software Practice Lead



Schemas are a way for us to ensure that our data streams match a specified contract. We register them with a Schema Registry so that other users will have access to them. The built-in serializers for Avro, Protobuf, and JSON all include integration with the Confluent Schema Registry. Although JSON is normally a schemaless format, there is a JSON Schema add-on that we can leverage. This video will introduce the Schema Registry client and show how it can be attached to a serializer in order to ensure message quality.


  • Schemas
  • Configuring a Schema Registry
  • The Schema Registry Client
  • Using the Schema Registry Client with a Serializer




Schemas and the Schema Registry

Hi, I'm Wade from Confluent. Today, we're going to talk about how to integrate our .NET application with a Schema Registry.

The Confluent Kafka client includes support for three major message formats: Protobuf, Avro, and JSON. Each of these formats supports a message schema, which is a set of rules that outlines the exact structure of a message. These schemas can be stored in an external service known as a Schema Registry. The built-in Schema Registry in Confluent Cloud is a good choice here.

To make use of the schemas, we connect our serializers to the Schema Registry. The serializer has its own version of the schema that it uses to serialize each message. The first time we try to serialize an object, the serializer looks for a matching schema in the registry. If a matching schema is found, the current message, and any future messages that use the same schema, will be sent to Kafka. However, if no matching schema is found, then any messages that use that schema will be rejected; essentially, an exception is thrown. This ensures that each message going to Kafka matches the required format.

To connect to the Schema Registry, we need to provide the appropriate configuration. A basic configuration includes a URL for the Schema Registry and a method of authenticating. Here we see a configuration that uses HTTP BasicAuth; however, we can also use other methods, such as an SSL key store.

Much like with our producer, it's unlikely we would want to hard-code these configuration values. Instead, we can leverage the ASP.NET Configure method to create the SchemaRegistryConfig for us. We just need to provide it with an appropriate JSON configuration, and it will automatically convert it to a SchemaRegistryConfig object. In this case, we are fetching the section that we labeled SchemaRegistry.

Once we have a config, we can create a Schema Registry client. The CachedSchemaRegistryClient will store copies of the schemas in a cache.
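As a sketch of the configuration step described above (the URL, the placeholder credentials, and the exact appsettings.json layout are assumptions for illustration, not values from the course):

```csharp
// appsettings.json (hypothetical values):
// {
//   "SchemaRegistry": {
//     "Url": "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
//     "BasicAuthUserInfo": "<api-key>:<api-secret>"
//   }
// }

using Confluent.SchemaRegistry;
using Microsoft.Extensions.Configuration;

// Hard-coded equivalent, authenticating with HTTP BasicAuth:
var config = new SchemaRegistryConfig
{
    Url = "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
    BasicAuthUserInfo = "<api-key>:<api-secret>"
};

// Or, in an ASP.NET application, bind the "SchemaRegistry" section
// from configuration instead of hard-coding the values:
SchemaRegistryConfig bound = configuration
    .GetSection("SchemaRegistry")
    .Get<SchemaRegistryConfig>();
```

Binding the section keeps credentials out of source code and lets the same build run against different registries per environment.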
This reduces the number of round trips we need to make, which can improve performance and reliability. We can configure the size of the cache by adjusting the MaxCachedSchemas configuration value.

The Schema Registry client is provided to the serializers through the constructor. Once the serializer has been given access to the Schema Registry, everything is ready. The serializer will handle downloading the schema and comparing each message against it. This means we can be confident that any message published to Kafka will match the advertised schema.

If you aren't already on Confluent Developer, head there now using the link in the video description to access the rest of this course and its hands-on exercises.
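Putting the pieces together, here is a minimal sketch of creating the cached client and handing it to a serializer through its constructor. The `Biometrics` message type, topic details, and broker address are assumptions for illustration; the Confluent.SchemaRegistry and Confluent.SchemaRegistry.Serdes NuGet packages are required:

```csharp
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

var registryConfig = new SchemaRegistryConfig
{
    Url = "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
    BasicAuthUserInfo = "<api-key>:<api-secret>",
    MaxCachedSchemas = 100  // upper bound on schemas held in the local cache
};

// The cached client avoids a registry round trip for schemas it has already seen.
using var schemaRegistry = new CachedSchemaRegistryClient(registryConfig);

var producerConfig = new ProducerConfig { BootstrapServers = "<broker>:9092" };

// The serializer receives the Schema Registry client through its constructor.
// On first use it looks up the schema in the registry, then validates every
// message against it before the message is sent to Kafka.
using var producer = new ProducerBuilder<string, Biometrics>(producerConfig)
    .SetValueSerializer(new AvroSerializer<Biometrics>(schemaRegistry))
    .Build();
```

If a message does not match the registered schema, the serializer throws rather than producing it, which is the rejection behavior described in the video.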