Course: Data Mesh 101

Hands On: Creating a Data Product

4 min
Danica Fine
Senior Developer Advocate (Presenter)

Rick Spurgeon
Integration Architect (Author)

Data mesh enables us to create new applications from existing data products. While the "Explore Data Products" page of the prototype allows you to view the data products you have access to, the second tab, "Create Apps Using Data Products," shows you one way to use them. This hands-on exercise walks you through the first principle of data mesh and how it works in practice.

Create a Data Product Within a Data Mesh

Open the "Create Apps Using Data Products" tab.

data-mesh-create

The data mesh creation script has bootstrapped a ksqlDB application for you. Using ksqlDB, you can build streaming applications with persistent queries that apply further business logic to the input data products you have access to. In our prototype, we’ve made this easy by providing a set of predefined business use cases, represented by ksqlDB statements, which you can execute with the click of a button.

Select one of the provided sample business use cases:

data-mesh-creation-selected

When you select a use case, the prototype provides additional information about the ksqlDB query and the output topic that will be created. This Kafka topic, along with its schema, will be eligible for promotion to a data product (we’ll come back to this a bit later). Executing the SQL statement creates a new persistent query inside the ksqlDB application, which publishes the resulting events to the output topic.

For each use case, you can run the query by clicking the EXECUTE THIS KSQLDB STATEMENT button. The prototype application submits the SQL command to the running ksqlDB application.

After successfully executing the ksqlDB statement, there will be a new persistent query running in your ksqlDB application. Note that you have not yet created a data product; you have only created a new topic containing events transformed from the input data products.
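
To make this concrete, here is a minimal sketch of what one of the predefined statements might look like. The stream name, column names, and threshold below are illustrative assumptions, not the prototype’s actual definitions:

-- Hypothetical derived stream: filter an existing 'stock_trades' data product
-- down to high-value trades and write the results to a new output topic.
CREATE STREAM high_value_stock_trades
  WITH (KAFKA_TOPIC = 'high_value_stock_trades', VALUE_FORMAT = 'AVRO') AS
  SELECT symbol,
         price,
         quantity,
         price * quantity AS trade_value
  FROM stock_trades
  WHERE price * quantity > 100000
  EMIT CHANGES;

A statement like this registers a persistent query in ksqlDB: the query continuously processes new input events and publishes its results to the named output topic, with the value schema registered automatically when the Avro format is used.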

Publish a Data Product

Creating a new data product requires the additional step of publishing the event stream as a data product.

Open the final tab, "Publish Data Products."

data-mesh-publish

The "Publish Data Products" tab exemplifies how you might manage your own data products within your domain. You're a member of the Analytics domain, and you can see that the output topic from your ksqlDB application is available to publish.

To publish a topic as a data product, you must provide the necessary metadata as outlined by our prototype’s hypothetical governing body. This includes quality level, SLA, owner, and domain, which will help prospective customers discover and select the data products they need. This metadata is stored in the Confluent Cloud Data Catalog via HTTP API.

In this case, you are creating a new data product for high-value stock trades, so you’ll add metadata for your team, a description, and appropriate data quality values for your new data product.

data-mesh-publish-dialog

Once you click Publish, the metadata for the data product is written to the Confluent Cloud Data Catalog, and the view changes to show it added to the mesh.

data-mesh-product-added

Now that your new data product is published, go back to the "Explore Data Products" tab, where you can observe your new data product. Select the high_value_stock_trades product to see the metadata values you added in the previous step.

data-mesh-explore-new-product
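
If you also want to inspect the events themselves, rather than just the catalog metadata, one option is a quick push query in the ksqlDB editor; this assumes the illustrative stream name from the earlier sketch:

-- Peek at a few of the derived events (hypothetical stream name).
SELECT * FROM high_value_stock_trades EMIT CHANGES LIMIT 5;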

Now that you’ve completed building and exploring the data mesh prototype, feel free to check out the project repository for details on how it's implemented using Confluent Cloud APIs and services!

Destroy Cloud Resources

Note

When you have completed your evaluation of the data mesh prototype, you should destroy all of the Confluent Cloud resources to prevent unexpected charges.

The prototype repository contains a command you can use to destroy everything created in the Hands On: Data Mesh Prototype Setup exercise, but be sure to check the Confluent Cloud web console to confirm that everything has been removed as expected.

To destroy the Confluent Cloud resources created by the prototype script, run:

make destroy

Summary

This course was initially created in the summer of 2021. Five years from now, we are likely to look back on this time as the early days of data mesh. We will probably consider its early adopters to be risk-taking visionaries who paved the way for the data mesh frameworks and products of 2026. If you're blazing this trail, you really are making a significant contribution to the community. We appreciate your efforts and look forward to hearing about the things you build.

For more content like this, be sure to explore the other resources on Confluent Developer, and reach out in the Confluent Community with any questions or feedback.

Use the promo codes DATAMESH101 & CONFLUENTDEV1 to get $25 of free Confluent Cloud usage and skip credit card entry.

Hands On: Creating a Data Product

Let's delve deeper into this data mesh prototype. In this exercise, we'll learn to create a new event streaming application from an existing data product. Then we'll cover how to publish our derived event stream as a new product for others to consume.

Data mesh enables us to create new applications from existing data products. While the Explore Data Products page of the prototype allows you to explore the data products you have access to, the Create Apps Using Data Products tab will show you one way to use them. The data mesh prototype has bootstrapped a ksqlDB application for you. Using ksqlDB, we can build stream-based applications, known as persistent queries, derived from the existing data products we have access to. Here, we've made this easy by creating a set of predefined business use cases represented by ksqlDB statements that you can execute with a click of a button. Each use case gives some additional information, including the data product inputs as well as the resulting output topic. The ksqlDB statement that will be executed is shown for reference.

After successfully executing the ksqlDB statement, there will be a new persistent query running in your ksqlDB application. Note that we have not yet created a data product; we have simply created a new event stream of transformed events from the input data products. In order to create a new data product, there is the additional step of publishing the event stream.

The Publish Data Products screen is designed to be a prototype of how you might manage your data products within your domain. As a member of the Analytics domain, we can see that the output topic from our ksqlDB query is available to publish. When you publish a Kafka topic as a data product, you provide the necessary metadata. This metadata is stored in the Confluent Cloud Data Catalog, and this is what allows it to be discovered by others later on. In this case, we are creating a new data product for high-value stock trades, so we'll add appropriate metadata for our team, a description, and appropriate values for the data quality of our new data product.

Now that we've published, go back to the Explore Data Products tab, where you can observe our new data product. To remove a data product from the data mesh, return to the Publish Data Products screen and click Remove From Mesh. Now that you've built and explored the data mesh prototype, feel free to check out the project repository for the details on how it's implemented using Confluent Cloud APIs and services.

An important thing to note as you finish up: when you have completed your evaluation of the data mesh prototype, you should destroy all of the Confluent Cloud resources to prevent unexpected charges. For your convenience, the prototype repository contains a command for you to use to destroy everything created in the Module 2 hands-on exercise. To destroy the Confluent Cloud resources, stop the webserver with Ctrl-C and then run make destroy in the terminal. The script shows you what cloud resources it has destroyed, but it's always a good idea to check the Confluent Cloud web console to ensure everything has been removed as expected.

With that, we can wrap up this module. To summarize, over the course of these exercises, we successfully brought up a data mesh prototype, and within it, we explored data products, created a new data product, and published it for others to use. With the experience and understanding you've gained here today, I can't wait to see what you build next.