Adam Bellemare is the author of the O'Reilly book, Building Event-Driven Microservices. And he's joining us today to take us on his journey from working on Shopify's very first event streaming service, to the things he's been doing more recently on data meshes. It was great to talk to him because it's always nice to hear the big ideas coming from people who've actually spent time at the coalface, making them happen. We start though with the reminder that Streaming Audio is brought to you by Confluent Developer, which is our site for learning more about event-driven services in general and Apache Kafka specifically. It can teach you something new, whether you're an event systems guru or a complete beginner. So, take a look at developer.confluent.io.
And while you're learning, if you need a Kafka cluster, you can easily spin one up with Confluent Cloud. Sign up with the code PODCAST100, and we'll give you $100 of extra free credit. And with that, I'm your host, Kris Jenkins. This is Streaming Audio. Let's get into it.
My guest today is Adam Bellemare, who is a colleague of mine at Confluent. He is the author of Building Event-Driven Microservices, the O'Reilly book, and was formerly a member of the data teams at BlackBerry, Flipp, and Shopify, whom I'm sure we all know. Adam, thanks for joining us on Streaming Audio.
Thanks for having me, Kris.
It's good to have you. So, you've got one of those interesting careers that's very much in the retail space, with that problem where when you're dealing with retail, you're dealing with lots and lots of different customers, with lots and lots of different needs, and lots of departments with different interests in slicing up that data. Tell me a bit about life at that wide scale.
Yeah. Well, working backward, before coming to Confluent, my most recent role at Shopify was working as a staff data platform engineer. And so, during my time there, our primary focus was on getting Shopify's main e-commerce data, from the main application itself, into Kafka events, Kafka topics. And to do that, we were using Kafka Connect and Debezium to source all of that data out of the MySQL databases underpinning each of the 250-odd shards of the main application.
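To make that CDC setup concrete, here's a sketch of what a per-shard Debezium MySQL source connector configuration might look like. All of the hostnames, table names, credentials, and shard counts here are invented for illustration; this is not Shopify's actual configuration, just the general shape of a Debezium config you'd register with Kafka Connect.

```python
import json

def debezium_mysql_config(shard_id: int) -> dict:
    """Build a hypothetical Debezium source connector config for one MySQL shard."""
    return {
        "name": f"shop-shard-{shard_id}-source",
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "database.hostname": f"mysql-shard-{shard_id}.internal",  # invented host
            "database.port": "3306",
            "database.user": "cdc_user",
            "database.password": "${file:/secrets/db.properties:password}",
            # Each shard needs a unique server id to act as a replication client
            "database.server.id": str(1000 + shard_id),
            # Logical name that prefixes every topic emitted for this shard
            "topic.prefix": f"shop.shard{shard_id}",
            "table.include.list": "shop.orders,shop.customers",
            "schema.history.internal.kafka.topic": f"schema-history.shard{shard_id}",
            "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        },
    }

# One connector per shard; in practice you'd POST each of these to the
# Kafka Connect REST API rather than just printing them.
configs = [debezium_mysql_config(n) for n in range(3)]
print(json.dumps(configs[0], indent=2))
```

The point of generating these programmatically is exactly the shard problem Adam describes: with hundreds of shards, you want one template and a loop, not hundreds of hand-edited configs.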
Right. So, it was one MySQL database logically split into 250 [inaudible 00:02:51]. Okay.
That in itself is a challenge.
Yes. Yeah. Honestly, the engineering team did a fantastic job with it. I was truly impressed at the rigor that they have applied around it.
Maybe we should start there then. Particularly, at some point, they go from being perfectly happy to having 250 relational databases. What's the pressing need to get those into event streams?
Right. So, there are a few cases actually. For one, the operational side of the business wanted to be able to react to events, things that have happened. So for example, when a user registers, we want to be able to send them a welcome package and say, "Welcome to Shopify. Here's the information you need to get started. Here's where you can find some help," and so forth, and so on. Now to do that, you could, of course, have the main application perform that operation. But eventually, if that main application is doing everything that your whole business could possibly ever do with any of your data, it can get quite crowded. So, having that decoupling was one of the main operational reasons.
Now the analytics team, whose operations... sorry, whose work I was more familiar with, because being in the data team, all of the data analytics would also occur there. Basically, we wanted a better experience for our customers to know what's going on in their Shopify stores. What's going on there right now? What's selling? What's not selling? And of course, the older way to do it is we'll do a report, we'll run a job, maybe at 3:00 in the morning, and when you wake up, you'll see yesterday's data. But we wanted that faster. We wanted that in real-time, what's coming through, what's selling, what's hot, what's not?
Yeah. Yeah. That makes sense. So, you are coming to this as the author and expert in microservices, right?
Yes. Yeah. Yeah.
You published the book, you've got to take that badge.
Yeah. Okay. Thank you.
So, how does that experience play out then? Gradually moving to an event stream-based thing, you're thinking microservices... Tell me how that factors in.
Right. So, for Shopify specifically, from the time that I entered to when I left, Shopify was focused primarily on getting data into event streams, the Kafka topics. They had a big in-house team running their own Kafka clusters and running Kafka Connect. And part of the work that I did is I worked with the team that was getting all of the connectors working to pull data in from all of these MySQL servers. And there were some corner cases and nuances there that we don't really have to go into, mostly just around how data is managed, and there are data locality laws as well that we had to ensure we were following.
But all of that aside, the goal was we take data out of MySQL servers and we get it into Kafka topics, well defined, strictly schema'd Kafka topics. At which point then, our first customer, the analytics teams, would be able to start building up these streaming models of existing sales, what orders are coming through, what are people looking for, and then to start building additional products like machine learning recommendations on top of that, for example.
Oh, yeah. Yeah. That one must be interesting. I've often wondered how do you train models in real-time once you've got an event system-based thing?
I don't know. I'm not qualified enough to answer that.
We will pull someone in for a future podcast. I'm getting distracted.
But I do mention where Shopify was in their journey there, because I would say that one of the big first steps to building event-driven microservices is you do need that data, you need to get some data into an event stream. Otherwise, you have nothing to drive your microservice. And so, that's where Shopify was in that journey. So, early days, and it does remain to be seen, because now that I no longer work there, it does remain to be seen how deep an adoption of microservices they go into, if they do at all. Because I know that they had very well-built services, nice modular services, where it was quite easy to deploy just parts of a new module. So, I wouldn't say there was a hot pressing need for the main engineering teams to move to microservices. But for the analytics team, if you think of stream processors that build up models to provide business insight as microservices, which you can and should do, then the analytics team was definitely the number one customer where that was going to occur.
Yeah. It blurs the line between reporting and event streaming and microservices, right?
It does. Yes. Very much so. Because a long-running streaming job versus a long-running event-driven microservice, well, they both react to events and they both may emit events. And so-
And they're both building up a state where-
Yeah. Let's not dwell on Shopify too much. Obviously, we can reference them as much as you like, but I want to slightly get to the larger picture, which is the evolution of that from the initial centralized database, be it sharded or not, into this event streaming world, and then what problems you face as that grows?
Right. Yeah. So, getting that data into event streams is obviously one of the more important first steps. Like I said, you need to be able to react to that data. And so, change data capture is a great way to get started doing that. One of the things that I've learned when going to build microservices is that it matters quite a bit what data you have available, but also what tooling you have available to do stuff with it. So, that's where what's often called the microservice tax comes in. This is that baseline of investment that you need to make it easy to run services. Because if you think-
What does that consist of?
Yeah. So, if you think back to, I don't know, let's say a decade ago, arbitrarily. If you have several services that you're running in your company, maybe you're an early adopter and you've just moved into the cloud, you are still requisitioning a server, and you're still logging into this server, and you're still executing scripts on it, and you're still installing the things you want to install, and it's very purpose-built for that application. You install whatever packages, libraries, and dependencies you need, but it's all wedded to that infrastructure. And so, if you're doing other services, then you always have that overhead of building that up. And so, that burden, let's call it, that toil, prevented people from just spinning up services whenever they wanted. Because there's that overhead, you've got to do that.
So, the microservice tax is, okay, we're going to spend the work to make that easy for people to avoid. You want to run a service? In the ideal world, it should be as easy as you push a button, it creates your GitHub, it creates your continuous integration pipeline, it creates your, let's say you're using Kubernetes, creates a base manifest file. It includes all of that, pushes that into a repo, starts it up, and then maybe on another dashboard, for example, you have your blue-green deployment options, and you can just use those tools and you can deploy your Hello, World! application that you just generated in minutes. So, that's the ideal. That's what you really want if you're going to be investing into microservices.
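As a toy illustration of that push-button idea, here's a sketch of what a scaffolding tool might generate for a new service: a base Kubernetes manifest, a CI pipeline stub, and a README. The file names, registry URL, and manifest fields are all hypothetical; a real platform tool would also create the repo and wire up deployment.

```python
# Template for a minimal Kubernetes Deployment; every {name}/{team} slot is
# filled in per service so teams never hand-write this boilerplate.
MANIFEST_TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
  labels:
    app: {name}
    team: {team}
spec:
  replicas: {replicas}
  selector:
    matchLabels:
      app: {name}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
        - name: {name}
          image: registry.internal/{team}/{name}:latest
          ports:
            - containerPort: 8080
"""

def scaffold_service(name: str, team: str, replicas: int = 2) -> dict:
    """Return the files a hypothetical platform tool would commit to a new repo."""
    return {
        "k8s/deployment.yaml": MANIFEST_TEMPLATE.format(
            name=name, team=team, replicas=replicas
        ),
        ".ci/pipeline.yaml": f"# build-and-deploy pipeline stub for {name}\n",
        "README.md": f"# {name}\nHello, World! service generated for team {team}.\n",
    }

files = scaffold_service("hello-world", "analytics")
print(sorted(files))
```

The design point is the one Adam makes: once this exists, spinning up a new service costs minutes instead of days, so people actually do it.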
So, this is the idea where you've got your own platform team perhaps?
Right. Yes. Or, and as we're moving more into software as a service and cloud computing, maybe you just purchase it from someone, from a vendor. You say, "Hey, can you provide me this functionality? Can you make this easy for me? Can you lower my toil and my burden so that I can just focus on building the applications?"
Yeah. Yeah. That's a nice place to get to.
Right. Yes. That's the ideal.
Yeah. Probably, everyone's got some version of that, be it perfect or cobbled together, right?
Yeah. Okay. So, that's the infrastructure layer of organizing these things. But let's assume that's taken care of and you've got an event streaming system. I want you to tell me about the business requirements side of it as well. Do you know what I mean? You've got these things where different parts of the business have different needs on what should be, to them, the same substrate of data.
Yeah. I would say that when you're looking at it from a domain-driven design perspective, which is my favorite way to look at it, you start from the business functionality that you're looking for, whatever that problem is that you're solving as a business. And if I can allude back to Shopify, one of the business purposes was we wanted to make a nice, real-time dashboard that showed recent sales, what items were selling well, and the most popular products, for example.
That's a business requirement. And to encapsulate that all within one domain, would itself make a really good service. So, having that mapping between your business logic or your business requirements, I should say, to the service, most of the time, is fairly direct. Now, it can obviously become more complicated when you have a very large business requirement like we want to sell things to make money. Well, of course. Okay. Yes, that's true. But you're probably not going to have a single service that can do that.
Yeah. If we could, if only it was like a chicken that laid golden eggs. Yeah. We could break that down.
So, there's that complexity involved there. But assuming you have a fairly focused, well-defined problem there, a lot of what it comes down to next is, where do you get the data you need to build that solution? And this is a problem that exists intrinsically when you have lots of data and you're in a distributed system, a distributed environment. And there are different schools of thought as to how you can approach this. The approach I prefer, as evidenced by my microservices book, Event-Driven Microservices-
Get the plug in.
... is to make the data available via event streams. Because there are many good reasons why. They're immutable, self-updating, as a consumer, you can pull information in from many different streams and mix it all together and compose your own products. But like I said, there are different schools of thought around this, because there's also the request-response or synchronous microservices where you typically ask services to do work on your behalf. So you say, "Can you do this work for me?" And then the service says, "I've done that work for you." And you can divide up your business boundaries differently depending on whether you're doing event-driven ones or whether you're doing synchronous ones.
Do you have any theories about... that's almost like Conway's Law, right? Depending on which one of those roads you choose, you're going to end up with a different kind of business infrastructure. Right?
Yes, absolutely. It's always very much about trade-offs. I guess the choices that you make there do depend on what it is you're trying to achieve. But one of the nice things about microservices, but even about making data available, both of those, is that one doesn't preclude the other. So, if you choose, for example, like Shopify is doing, to make a bunch of your important business data available via event streams through change data capture, it doesn't preclude you from also offering APIs for other services to call and use. You have some options now and you have some flexibility.
And one of the things that I am very fond of about event streams and Kafka topics is that they provide you with a lot of flexibility. A lot of flexibility, first of all, to build production-grade applications, but also just to experiment with stuff, to be able to access data from operational systems, or maybe from analytical systems that are doing streaming analytics, and to start mixing it together to try to come up with new products, novel solutions, different ways of looking at problems.
For example, cross-referencing data from accounting, like accounts receivable streams, warehouse input streams, and warehouse egress streams, to track: are we losing product anywhere? Are things getting misplaced? What are our efficiencies? And when this data is encapsulated in these domains and is inaccessible otherwise, you can't do those things. You lose a lot of potential, and you lose a lot of what I like to call operational mobility in deciding what it is you're going to focus on and what it is you're going to build.
Yeah. Yeah. You can easily end up with a situation where there are two services you need in order to answer the new question you're asking. And if you're in that situation where you have to ask other services to do things for you, where you can't ask A because it doesn't know about B, and you can't ask B because it doesn't know about A, you have to interfere behind the curtain of both. Right?
Yeah. And a lot of the time it's really about, if I just had access to that data, I could probably get my own answer. And that's really the crux, in my opinion, of a lot of the shortcomings or problems that I've seen in my career so far in data. That difficulty in accessing the actual fundamental truth, the actual fundamental data, in a way that's consistent, in a way that's reliable and trustworthy.
Yeah. Which, again, I think plays back into the whole immutable data thing. Because, firstly, you've got to guarantee that no matter who you let read your data, they can't change it.
But also, you've got to guarantee that if you read from A, and then read from B, A hasn't changed its answer. So, you've got a chance of synchronizing the two essentially.
Yes. Yeah. Exactly. Yeah.
But this plays into one of the big ideas of data mesh, which is data as a product.
Yes, absolutely. One of the things I really like about data mesh is that it's provided us a great language, a set of tools, a set of concepts that we can use to talk about these things. And it gives us a really great set of components to talk about. The principles, the four principles, are excellent because they acknowledge, first of all, that it is partially a technical problem. How do we make data available? How do we make it useful to people and to systems that want to use it?
And it acknowledges that we haven't done a very good job of that collectively. We've tried hard, don't get me wrong. No one's been asleep at the switch here. We've all tried lots of different things, and they can work. The centralized data lake, where you pull data in and then you clean it up and you maybe promote it through bronze, silver, gold layers, and there's oversight, it can work, but it's often quite brittle. Things break. The people who are typically responsible for creating the data aren't the ones who are on the hook for making sure it's readily available to everybody else outside of their domain boundary.
I mentioned there's the technical reason, but there's also the social acknowledgment in data mesh. That we need to come together, we need to renegotiate responsibilities, we need to find common ground to make things efficient and effective, not just for the users of the data, but also for the producers and for everyone who's in between as well, like the data product manager, which is a title that has been introduced by data mesh, who's responsible for bridging that gap between the data in a domain and what our prospective customers need, what data sets they would want.
Yeah. Yeah. That seems to me to be the most fundamental shift of data mesh, that the people producing the data are taking responsibility for making it available. Right?
Yes. Yeah. Yeah.
And the other stuff falls out of that almost.
Yeah. It's really based on having good, clear communication between those who would use it and those who own it, and finding how do we make this useful? And this question is answered by discussion. And the answer that you'll get is going to vary from company to company, from team to team. And so, it's really important to engage in those conversations and to find out what it is we're doing, where is it we're going, and what are our needs around that. And how do we make sure those needs are met.
Do you have any experience of different companies answering that question on the ground?
I do, yeah. So, there are a couple of commonalities that I've seen again and again: in a number of Slack channels, in conversations with others, and in conversations with our own teammates. And again, to touch back on Shopify, it's the same thing, data is coming in faster. People are using their phones, they're doing stuff on their devices. So, data is happening with greater volume, but also with the expectation that we'll be doing something with it sooner, that we won't wait until tomorrow, or the next day, or the day after, that we want to be able to react to it quickly and in real-time, primarily for our customer experiences, but also just to be competitive, to ensure that we're doing the absolute best we can.
And so event streams, you could probably tell that's where I was going with this, but event streams fill that need and they fill it very, very well. So, I'm obviously biased in favor of event streams. But I have to say, by publishing that important data, important business facts, to event streams, you make it accessible to all these others, you make it available for them to use as they need, and you can also start bridging the gap between operational and analytical workloads. Because event-driven microservices for operational stuff, no problem. Event-driven microservices for analytical stuff, no problem. You can fulfill both of your use cases with an event streaming data mesh.
Right. Yeah. I remember you were saying a while back, something about how this relates to... was it Saxo Bank?
Oh, okay. Oh, yeah, yeah. Sorry. I talked around [inaudible 00:25:24].
I want some specifics.
Yeah. Okay. So yeah, actually, we have a great customer, Saxo Bank. They actually gave... I believe his name is Paul. Yeah. Paul gave a presentation on their use cases and how they have built a data mesh and are very focused on event streams. And so, there are a couple of interesting things that they ran into, and this is why I say it is quite specific how you're going to build this for your organization. So, for one example, they are largely a .NET shop. And so, they found it difficult to use Apache Avro and integrate it properly as they needed with .NET. I think it had to do with some of the lack of tooling around, I believe, code generation and validating schemas using .NET tooling.
Oh, I know .NET has some great stuff for type providers. I imagine if it's missing a particular plugin, that could be painful.
I'm not 100% sure, but what I can say is they did decide to go with Protobuf, and they made that decision centrally. And so, one of the pillars of data mesh is federated governance. And as a federated governance body, they decided that we're going to use Protobuf and we're only going to use Protobuf for our event streams. So, that means that if you want to write to it, if you want to create a data product and publish that data to an event stream, it has to be in Protobuf.
But the benefit there is, while it may be a restriction in one sense, any of the tooling that's built by one producer to write Protobuf-centric stuff and validate their schemas, or on the consumer side to take the schemas, generate code, make sure that it matches, and generate test data, let's say, out of the Protobuf schema, all of that's reusable by everybody in the organization. And like that microservice tax I mentioned earlier, you do have to invest in that. But once you've invested in it, you can replicate that across all your producers and consumers so they can all commonly use that tooling.
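Here's a small sketch of that shared-tooling idea: given a schema, generate conforming test data that any consumer can reuse, plus a validator. For simplicity, a plain dict of field names to types stands in for a real compiled Protobuf descriptor; actual tooling would walk the descriptor produced by protoc-generated code, and the order schema here is invented.

```python
import random
import string

# Stand-in for a parsed schema; a real version would come from a Protobuf descriptor.
ORDER_SCHEMA = {"order_id": "string", "amount_cents": "int64", "currency": "string"}

def generate_test_record(schema: dict, rng: random.Random) -> dict:
    """Produce one record of random test data that conforms to the schema."""
    generators = {
        "string": lambda: "".join(rng.choices(string.ascii_lowercase, k=8)),
        "int64": lambda: rng.randrange(0, 1_000_000),
    }
    return {field: generators[ftype]() for field, ftype in schema.items()}

def validate(record: dict, schema: dict) -> bool:
    """Check a record has exactly the schema's fields with the right Python types."""
    py_types = {"string": str, "int64": int}
    return record.keys() == schema.keys() and all(
        isinstance(record[f], py_types[t]) for f, t in schema.items()
    )

rng = random.Random(42)  # seeded, so generated test data is reproducible
sample = generate_test_record(ORDER_SCHEMA, rng)
print(sample)
```

Write this once, centrally, and every producer and consumer team gets schema-conformant test fixtures for free, which is exactly the payoff of paying the tax up front.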
And that's also an example of self-service, right? So these tools are built, and hey, as a consumer, I can consume that data, I can create test data, I can run through my tests with it, make sure everything's looking fine, make sure that my application's processing the data correctly, and I can do that all on my own with the self-service tooling.
Yeah. It reminds me of that age-old debate in companies where you have like, do we use one common programming language?
Right. Yes. Yeah.
It's great until you're the person that wants to use something different for this and then it's painful. But we have the choice to speak the same programming language or speak the same data language.
Yes. I would say that, I think we're fortunate in a way that there are seemingly far fewer of these, let's call them event schema definition languages, than there are programming languages. But you do touch on a very important point there, which is... I think the term is polyglot support. If we want to support multiple languages for our consumers, if we want to make it easy for them to generate a new client, let's say, if we want to make it easier for them to generate a microservice client that would read from a data mesh, this is again the responsibility of federated governance. And with federated governance, you want people from across your company to get together and have a discussion, an actual discussion, a debate, about what it is we're going to support and what it is that we won't support. And these can get somewhat heated, or emotional, or passionate because we all have our favored things.
Yeah. And you either get to a final decision or somebody walks away unhappy.
Right. And the thing to make it work well though is you generally need to establish... I call it a reason for the change. And it isn't to say you can't change, but there needs to be a bit of a barrier to entry here. So, for example, if your company uses Scala and Java and someone's like, "We really need to use Kotlin instead." Then the question goes back to the person who said that and says, "Well, why? Not to say we won't do it, but why do we need to do it? And what is it that we're doing right now that maybe we should stop doing or replace it?"
Because if you keep expanding to say like, "You can use almost any language under the sun, you can do whatever it is you want." Sure, you have a lot of freedom, but now you also have a lot of potential issues on the consumer side, because they're going to have to figure out how do you tie all this stuff back together, right?
Yeah. Yeah. You're absolutely right. We're lucky to have fewer choices for data representation formats. Because when you can agree on data, and especially immutable data, which you say at once, you just say the same thing every time, then we have a chance of building that Tower of Babel where we speak the same language.
You've been building a prototype, you and other people on the team for... Oh, there goes a dog. People on YouTube, we've just seen a dog run by. People on audio might have heard the tail wagging, which is fun.
She's going out for her walk.
Oh, good for her. So, you've been building a prototype to try and illuminate, illustrate, these ideas about data mesh.
Yeah. So, we were working on this in the fall of last year. The idea with the prototype is we wanted to showcase some possibilities of what a data mesh using event streams would look like, or could look like, but in actual creation, as a clickable, navigable, usable piece of software. We actually had a bit of a challenge because, first of all, and I have to be quite clear about this, data mesh isn't just something you can download and install. There are a lot of social conventions, there are the responsibilities, there's renegotiation, there's determining what technologies you're going to support, what data formats, what types of data products, whether you want queryable ones, whether you want some that are more event stream based.
But that being said, this is an opinionated prototype. Because to actually create something, we have to start saying, "Okay, listen, we're going to use event streams. We want to make it so that people can see what streams are available as formally published data products." When I was talking about creating microservices earlier, I mentioned how being able to see what data you have is number one. You need the data available and you need to be able to find it, to see it. Otherwise, you're not going to be able to build anything.
Yeah. This is a discovery piece, what have you actually got in your organization.
Exactly. So, we wanted to model what discovery would look like. For one example, I am a consumer. I come to the data mesh prototype and I can see, okay, we have several schemas in here... Oh sorry, several event streams. And with those event streams, we can see who owns them. We can see their schema. We can see information about service levels. So, for example, if the system responsible for publishing data to this stream has a failure or an outage, is it going to be restored within an hour, or do we have to wait until Monday when the team comes back from the weekend? And then additional metadata perhaps, like tags: is there personally identifiable information in here? Do you need special security clearance? So, those sorts of metadata are attached to it, and they're there to provide information to you as a prospective consumer for self-service. What do we have available? What can I use? What can I do?
So, let's make that more concrete. Imagine I am some business analyst at Shopify. And I come into my data mesh and I want to see a literal list that says: here's analytics data for users, and that has an SLA of 99.9%; and here's a catalog of parcel tracking events as they stream around the world; and here's latitude and longitude lookups for that, which has a lower SLA, which might be something that's not fixed until Monday morning. So, I'm seeing all the different streams of things I can choose for my business, almost like its own little shop.
Exactly. Exactly. Yeah. And providing that information to the user to make those decisions is our main goal here. We want to have them make a well-informed decision. And to touch on those SLAs, for example, let's say you want to build a new use case for this data in this data mesh, and you're using several event streams that are fairly high priority, but then you have one that's of a lower priority in terms of SLAs. When you union or merge all of these together, the SLA of your new data set can only be as good as the weakest guarantee, because if you have that outage, that's your limiting factor. But the consumer should be able to say, "Okay, perhaps I want to use Kafka Connect and I want to sink those into an S3 bucket. And once they're in the S3 bucket, I'll do what I want with them."
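That "weakest guarantee" point fits in a one-liner: the availability you can promise for a derived data set is bounded above by the worst input SLA. The stream names and numbers below are made up to echo the dashboard example.

```python
def composite_sla(slas: dict) -> float:
    """Upper bound on the SLA of a data set derived by joining all the inputs.

    If any one input stream is down, the derived set is stale, so the best
    you can promise is the minimum of the input guarantees.
    """
    return min(slas.values())

inputs = {
    "orders": 99.9,
    "customers": 99.9,
    "geo-lookup": 95.0,  # the "fixed on Monday" stream drags everything down
}
print(composite_sla(inputs))  # → 95.0
```

In practice the joint availability can be even lower than the minimum (independent outages compound), so the minimum is an optimistic bound, which only strengthens the argument for surfacing SLAs in the catalog.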
Now, a non-streaming extension, because with the data mesh, it's not as if a data product is only served one way, you can serve it many ways. So, that data product owner may also already write that data to an S3 bucket as well. So, you could have an event stream, and you could also have a batch data set. Now, who owns that is another matter, and that would be something... again, federated governance.
Yeah. Yeah. But you've got these different ways you can access the data, and the quality of it. So then you're starting to think about the how of merging it together for your own use case.
Right. And so, that depends on the technologies you want to use. We talked about microservices with that potential sprawl of languages and frameworks. In our prototype, again, our opinionated prototype, we already have a really good stream processor available in Confluent Cloud. We use ksqlDB, right? And because it's fully integrated, what I do like about it, I should say, is it showcases how it should be easy to use your data. You shouldn't have to do a lot of toil to start applying business rules to it, to start doing aggregations, or filtering, or transformations, joins, what have you.
And so, there are several tabs in our prototype. And the middle tab is to create a ksqlDB application with some sample use cases that we have, illustrating how you would take these data products, these streams, and what's the business objective, because we tie it back to something important we're trying to do for the business, and produce something of value, whether that's a queryable data set, where you'd issue request-response queries, or whether you would emit perhaps a new event stream. For example, if you are enriching information about sales and you want to provide that as its own data product for another team or another consumer to use.
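As a sketch of the kind of statement that middle tab might generate, here's a hypothetical ksqlDB query that enriches a sales stream, wrapped in the JSON payload shape ksqlDB's REST API expects (a POST to the `/ksql` endpoint). The stream, table, and column names are invented for illustration.

```python
import json

# Hypothetical enrichment query: assumes `sales` is a ksqlDB STREAM and
# `products` is a ksqlDB TABLE, so this is a stream-table join.
ENRICHED_SALES_SQL = """
CREATE STREAM enriched_sales AS
  SELECT s.sale_id,
         s.amount,
         p.product_name,
         p.category
  FROM sales s
  JOIN products p ON s.product_id = p.product_id
  EMIT CHANGES;
"""

def ksql_payload(statement: str) -> dict:
    """Wrap a statement in the body shape ksqlDB's /ksql endpoint accepts."""
    return {
        "ksql": statement.strip(),
        "streamsProperties": {"ksql.streams.auto.offset.reset": "earliest"},
    }

payload = ksql_payload(ENRICHED_SALES_SQL)
# In a live setup you'd POST this to the server, for example:
#   requests.post("http://ksqldb:8088/ksql", json=payload)
print(json.dumps(payload)[:80])
```

The output of that statement is itself a new stream, which is exactly the "produce something of value" step: the enriched stream can be registered as a new data product for other consumers.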
So, you might combine several sources to create something you need, and/or you might then be onwardly publishing that as a new, richer source that anyone can then discover, that will get added to the catalog.
That's where the third step comes in, because not everything you create is a data product. The important thing with data products is that a data product has to have commitment to it. There needs to be a commitment by an owner, a team, to provide that data, to create it, to make sure it's accurate, to make sure it's available, to make sure it's of an acceptable form, and that they can handle requests from customers to say, "Hey, can you expand this domain to maybe include some additional information? Or could you actually increase your SLAs? And I know we might need to dedicate more on-call resourcing to it. Let's go talk to management about that." But that rigor is important for having a data product. So, just because you have an event stream... in fact, you're more likely to have a lot of event streams that are not data products.
They may be event streams that are, if you're using Kafka Streams, for example, change logs or repartition streams. So, those certainly aren't going to be used as data products; private streams won't be used. You might have streams that are dedicated to interservice communication, where services are directly sending messages to each other. But that's not a data product. That's just asynchronous communication. So, when you go to publish a data product, you have to... it's basically a contract. You're agreeing, "Hey, I want to publish this data product. I'm the owner of this domain. Here's the service level I'm willing to guarantee. If you need to call someone for on-call support, this is who you're going to call," or like, "Here's a pointer to the calendar that holds the rotation." So, you can have this information there, this metadata about your product, and you use it as a barrier to entry. And if you can't pass that barrier, you can't publish it as a product. It's basically, do this only if you are willing to accept these responsibilities.
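To make that contract idea concrete, the metadata a publisher commits to might look something like the descriptor below. Every field name here is illustrative, not part of any actual Confluent schema; the point is that owner, SLA, and on-call details travel with the product and act as the barrier to entry:

```yaml
# Hypothetical data-product descriptor; all field names are illustrative only.
name: enriched-sales
domain: sales
owner: sales-data-team
sla:
  availability: "99.9%"
  freshness: "end-to-end latency under 5 minutes"
on_call: "https://example.com/rotations/sales-data-team"  # pointer to the rotation calendar
schema:
  subject: enriched-sales-value
  format: Avro
```

A registration step could then refuse to publish any stream whose descriptor is missing one of these commitments, which is exactly the "do this only if you are willing to accept these responsibilities" gate described above.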
Yeah. And the flip side, I assume, is that if you accept those responsibilities and say, "This isn't just a stream of events, it's a proper product you can read from," then you have the chance that people in the organization will build stuff without bugging you.
That's a slightly cynical way to put it, possibly. It's possible that other people in the organization will delight you with the new ways they use your data.
Yeah. But I think it's fine as you put it. Because some people like to be empowered to build the things they need to build on their own. That self-service component is a big one, I know I've mentioned it a few times, but it really is a good guiding principle. If you, as a data product owner, can publish and manage it on your own, great. It reduces the amount of work that other people need to do and makes things more efficient. And for all the participants in the data mesh, whether you're producing it, whether you're managing it, whether you're consuming from it, or whether you're doing all three, optimizing for those roles and responsibilities is definitely one of the things that we were focusing on when we built this.
Yeah. So, just to underline the shape of this demo, you are not saying that this is the way to do data mesh. But you are saying there are some choices to make, here's a set of choices so you can see concretely how this might play out.
Yes, exactly. That's a great way to put it. Here are some choices we made, and you can add onto this. There's nothing stopping you, for example, from registering... If you really wanted, you could register some BigQuery data sets in there as data products as well. But that's not the business that we're in, obviously. But you can extend it. It depends on what data products you want to support, what those formats are, what tooling you're going to offer for your producers and your consumers. And it's a very specific decision for every individual company to make on their own.
Yeah. Yeah. Makes sense. So, last question before I let you go. Because you've been here from the monolith through event streaming, formalizing it as a data product thing. Do you have any sense of where this idea is going to travel next, or where the demo's going to travel next?
I think wherever the demo goes is going to be largely informed by, let's say, where the market goes. Because this really isn't a product. But this is an idea. Right here, here's what it could look like, here's an envisioning. For data mesh specifically, I think it's going to go in a lot of interesting directions. From the different companies that I've seen, various presentations, to cloud service providers, for example, showcasing what a data mesh could look like in their domain or their environment, there's quite a rich diversity of options. So, there's a lot of different things you can do.
Because at the end of the day, a lot of what we do with computers is to optimize stuff, to get really good at doing something, and to do it efficiently. And so, I think as time goes on, we're going to find commonalities. I think we're seeing some already. Event streams do have a good hold, but there's also a lot of data products. There's a lot of just different possibilities there. But I think we're going to emerge into certain patterns, becoming more common or more dominant, or people will say, "Hey, we've built a data mesh like this. These are the learnings we've had. We think these things work great. We think these other things maybe don't work so well."
And it's just going to be a process of iteration. Every good idea is great, and then you start doing it and you're like, "Oh, there's things that maybe don't work quite so well, or there's some rough edges. So, let's try this, let's try that." And so, we're going to be trying things, taking what works and moving forward with it, and learning from the things that just don't work that well.
Yeah. Yeah. And some of those will be technical and they'll be easy to replicate, and some of them will be social and maybe a bit more amorphous. Right?
Exactly. Yeah, exactly.
Yeah. Yeah. I can see that. Well, whatever happens, I think we're going to keep talking about this story in different forms. I've got the feeling like the terminology is going to evolve and change. But we're going to be talking about data and shipping data around the organization and taking responsibility for certain parts of data. We're going to be talking about that for years to come.
Absolutely. I agree.
Yeah. Well, thank you very much for this window into it. I hope in those years to come, you come back and talk to us more about the evolution.
Well, thanks for having me, Kris. I appreciate it.
Thanks for joining us, Adam Bellemare. And with that, we leave Adam to either write a new book called Building Event-Driven Microservices or just to take his dog for a walk. We'll let him decide. Anyway, hopefully, that's left you with a more concrete idea of what a data mesh is. But if you want to make it more tangible, take a look in the show notes, because we'll leave a link to the demo that we were chatting about, and that's the way to try it hands-on. Speaking of show notes, that's also the place to look if you want to get in touch with us. Drop us a line if you've got any comments or questions, or you think you should be in a future episode. We've all got a story to tell. And if you just wanted to let us know that you've enjoyed this episode, then look for that thumbs up icon, or the subscribe button, or the ratings box, or the review form, or whatever your app has. For more on Kafka, head to developer.confluent.io, where you'll find all our educational content, including some written by Adam and some by me, actually.
And to make the most of it all, you can easily get Kafka clusters running by registering at Confluent Cloud. If you sign up with the code PODCAST100, we'll give you $100 of extra free credit to get started. And with that, it remains for me to thank Adam Bellemare for joining us and you for listening. I've been your host, Kris Jenkins, and I will catch you next time.
Data mesh isn’t software you can download and install, so how do you build a data mesh? In this episode, Adam Bellemare (Staff Technologist, Office of the CTO, Confluent) discusses his data mesh proof of concept and how it can help you conceptualize the ways in which implementing a data mesh could benefit your organization.
Adam begins by noting that while data mesh is a type of modern data architecture, it is only partially a technical issue. For instance, it encompasses the best way to enable various data sets to be stored and made accessible to other teams in a distributed organization. Equally, it’s also a social issue—getting the various teams in an organization to commit to publishing high-quality versions of their data and making them widely available to everyone else. Adam explains that the four data mesh concepts themselves provide the language needed to start discussing the necessary social transitions that must take place within a company to bring about a better, more effective, and efficient data strategy.
The data mesh proof of concept created by Adam's team showcases the possibilities of an event-stream-based data mesh in a fully functional model. He explains that there is no widely accepted way to do data mesh, so it's necessarily opinionated. The proof of concept demonstrates what self-service data discovery looks like—you can see schemas, data owners, SLAs, and data quality for each data product. You can also model an app consuming data products, as well as publish your own data products.
In addition to discussing data mesh concepts and the proof of concept, Adam also shares some experiences with organizational data he had as a staff data platform engineer at Shopify. His primary focus was getting their main e-commerce data into Apache Kafka® topics from sharded MySQL—using Kafka Connect and Debezium. He describes how he really came to appreciate the flexibility of having access to important business data within Kafka topics. This allowed people to experiment with new data combinations, letting them come up with new products, novel solutions, and different ways of looking at problems. Such data sharing and experimentation certainly lie at the heart of data mesh.
Adam has been working in the data space for over a decade, with experience in big-data architecture, event-driven microservices, and streaming data platforms. He’s also the author of the book “Building Event-Driven Microservices.”
If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we'll hope to answer it on the next episode of Ask Confluent.