May 27, 2020 | Episode 102

Scaling Apache Kafka in Retail with Microservices ft. Matt Simpson from Boden


Tim Berglund (00:00)

When the Kensington Palace Instagram account posts a picture of the Duchess of Cambridge in a new polka dot dress, what does that have to do with event-driven architectures? Well, Matt Simpson, solution architect with UK-based retailer Boden, is going to tell us on today's episode of Streaming Audio, a podcast about Kafka, Confluent, and the cloud.

Tim Berglund (00:26)

Hello, and welcome to another episode of Streaming Audio. I am, as always, your host, Tim Berglund, and I'm joined in the virtual studio, the very virtual studio from across the Atlantic Ocean, today by Matt Simpson. Matt, welcome to Streaming Audio. Now Matt has been involved in the all-important journey of refactoring a monolith to microservices with Kafka and event streaming. And that's what we're going to talk about today. This is a story I always love talking about. I think one of the reasons for that is that I personally spend a lot of time talking about these ideas. And so it's always a treat for me to talk to somebody who has implemented these ideas. So Matt, tell us a little bit about yourself, like how you came into the work that you do. And tell us a little bit about Boden, your employer.

Matt Simpson (01:24)

Yup. No problem. So, my background has sort of been development and engineering, but primarily data and business intelligence. I was lucky enough to work with Boden, who are a really big UK retailer, sort of high-end clothing, mostly targeted towards women, and very nice dresses that some of our Royal family like to wear from time to time, which helps us with marketing. So that's great.

Tim Berglund (01:54)

Not two minutes in yet, and I have already learned something. I obviously knew Boden was a retailer. I didn't know that the Royal family was, okay, good.

Matt Simpson (02:03)

Yes. I've actually had a project to rename dresses to the Kate Middleton dress. So yeah, there's some tight link into the Royal family. So it's a really nice retailer. They're based in North London. They've got some very nice offices. They're very creative, very design focused, and very brand aware. And I was excited to work with them on a business intelligence project around six years ago and really helped them go from sort of zero traditional reporting to what was at the time a relatively cutting-edge business intelligence, data warehousing solution for them. I came back to visit them a few years later to find that their growth had been huge, and their legacy systems, which could be called monoliths in the modern terminology, were really struggling, and the BI solution was struggling. And with the public cloud being more accessible to businesses of Boden's size, there was a new desire to say, "Look, can we build a new architecture and not just go for sort of, you know, slow, incremental benefit, but actually can we transform the way we do it within retail?"

Matt Simpson (03:26)

So I came in as a consultant and we had a couple of projects where we did some proof of concepts and then some pilots around building out microservices. Alongside that, we were looking at modern data architectures. And my boss, Alex Ives, I think came to one of your Confluent seminars and was just blown away by how the idea of event-driven integration and event-driven architectures could really be the answer to some of our big problems with integration, one of our big blockers on how we were going to be able to transform the architecture at Boden. As an example, our core order management system had something like 200 to 300 integrations in and out of it. So if we wanted to change one of our systems, like a product master system that we're bringing in, we really had no idea what the impact would be of carving out that system and putting in a new one, because of all these point-to-point integrations. At the same time, there was a huge appetite in Boden to say, you know, we've gone from this batch availability of data, from a traditional data warehouse with overnight runs to get sales reports first thing in the morning, to saying, well, we'd like to see how well we're doing a little bit faster.

Matt Simpson (04:52)

With things like a lot of the business moving from catalogs onto the web, we want to sort of see what's happening on the web in a closer to real-time view if possible. So we were transforming that side of things at the same time. So we did a proof of concept around a stock service that would find out what stock levels were being changed by orders on the website and through our retail stores. And we used Kafka to be the sort of platform that we pushed those events, effectively the stock balance events, into. And then we used that in order to sort of feed our service on stock, which could then in turn be the view of what's happening with stock for us, for our products.
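To make the shape of such an event concrete, here is a minimal sketch (the topic, field names, and SKU format are illustrative, not Boden's actual schema) of building a stock-balance event keyed by product, which is what keeps all changes for one SKU in order on a single partition:

```python
import json
import time

def make_stock_event(sku, delta, channel):
    """Build a Kafka-ready stock-balance event (illustrative fields).

    Keying by SKU means every balance change for one product lands
    on the same partition, so consumers see them in order.
    """
    key = sku.encode("utf-8")
    value = json.dumps({
        "sku": sku,
        "delta": delta,      # negative for a sale, positive for a return/receipt
        "channel": channel,  # e.g. "web" or "store"
        "ts": int(time.time() * 1000),
    }).encode("utf-8")
    return key, value

key, value = make_stock_event("DRESS-123", -1, "web")
```

In a real service the returned pair would then be handed to a Kafka client, e.g. confluent-kafka's `producer.produce("stock-balance-events", key=key, value=value)`.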

Tim Berglund (05:57)

Sorry, I'm just kind of processing through what you're saying here. I just want to pause on a couple of things. You said, you know, catalog sales are shifting to online, and those are funny words to hear, because, you know, as I think about retail, I don't know how long it's been since I bought something out of a catalog. You know, 20 years ago, that was normal behavior. But I think for clothing retail, catalogs are still a thing, kind of, aren't they? I mean, they're there, they're going away, maybe, but that's a legitimate part of the business. So, you know, this transition is happening on the one hand. And on the other hand, what you said was, as that is happening, more of this business is coming online, and the business leaders are looking for real-time analytics. And that's kind of funny because, you know, 20, 25 years ago, when a clothing retailer's business was almost entirely catalog based, a data warehouse that ran nightly didn't feel like a bad thing at all. It's not like you didn't want analytics, but the data is going to be so slow coming to you from, you know, catalog sales that tomorrow's fine. That's not the long pole in the tent. But when it's a website and it's all there, crunching that overnight and getting reports tomorrow now feels like cold molasses. You know? So that's funny, that transition. I was just thinking through the causality there; of course you're going to want faster reporting. So anyway…

Matt Simpson (07:31)

Yeah, it's interesting, because the business was changing with what the customer wanted. The systems that we had in place to support a catalog-driven business were now struggling with a more omni-channel approach to our retail sales. So yeah, it was like a sweet spot really for me to come and be involved with Boden. You don't always get involved at the right time in a certain product or project, and this was one of those right times. So we did this proof of concept and we learned an awful lot. The first thing we learned was Kafka is really complicated. So, you know, we're a Microsoft house mostly. So Windows servers, .NET. So having to spin up Linux boxes, support those, and finding some APIs weren't available to be written in .NET was tough for us.

Matt Simpson (08:27)

And one of the learnings we had from now, one of the mitigations to that risk, I guess, was that we, we looked around for the managed service offering that Confluent offered. And that really mitigated that one risk that we'd identified and meant that we could focus on the benefits of moving to event driven, you know, having this way to integrate between microservices, without coupling them. But at the time, not have to worry about the underlying complexities of the infrastructure and you know, some of the complexities around how you scale the Kafka engine, and that was really useful for us.

Tim Berglund (09:08)

You know, if we made trailers for podcast episodes, that right there, we just recorded the trailer. That was great. Thank you. But it sounds like you, and Boden engineering, have taken a very data-centered approach. You decided to refactor stock out of the monolith as your first service. And it sounds like your reasoning was fairly data centric, and of course that's your perspective because you come from a data background, so your reasoning is going to be data centric. But could you walk us through the reasoning for why stock? Maybe it sounds obvious because you're a retailer, and I mean, that's what I'd put on the slide if I were trying to explain things to people. But the process by which one refactors a monolith is a question that comes up a lot. So what was your thinking that led you to pick that thing to be your first service?

Matt Simpson (10:07)

Yeah, so interestingly, there's a couple of nuances within that question. So the first thing was that actually it was a pre-POC. So we decided stock was a bit too complicated to do as our first actual service. So we learned from that what not to start with, maybe. But the key with stock for retailers is that it is the lifeblood of your business. So as I've learned, really, from Boden, if you don't manage your stock correctly, you know, you can't get the product to the customer. So you can have the best website, the best marketing, but if they press go and that product never arrives, or arrives late, or is wrong, then they're never going to come back to you. And good retailers and strong brands understand the importance of retention.

Matt Simpson (11:04)

So stock is, you know, often talked about by my boss as the lifeblood of a retail business. So if we can get stock right, and then if you think about it as well, stock's key everywhere, from backend systems to frontend systems. So the way we sell, you know, you don't want to advertise something that isn't in stock, or maybe you do because you want to drive a bit of extra demand, but you want the customer to understand they can't actually get it straight away. So stock's key. It's sort of a centrifugal force, if you like, of a retail business. But we did learn that it was really complicated and it might not be the easiest place to start for us. So the actual pilot service we went with was for product information. So alongside delivering a new product master system into Boden, what we realized was that that was a software-as-a-service, MDM-type tool.

Matt Simpson (12:02)

We realized that while it's fantastic for mastering those attributes, it wasn't so great at making that data available to all of our other systems, and what we didn't want to do was more point-to-point integrations, with our PIM becoming yet another bottleneck for change because we'd built all of these point-to-point integrations. So we built a product service, a very simple product service. We used the AWS cloud platform, so we leveraged the Serverless Framework and just simply Lambda and DynamoDB. We have data coming out of our PIM platform, and that's based in Azure, which was spitting out sort of changes of products as they happen into Event Hub. And we were picking those up from a Lambda, moving them into our service. And then the important piece was we were raising an event into Kafka to say, this is a product change.
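As a rough sketch of that hop (the message shape and field names are hypothetical; the actual payloads aren't described in the episode), a Lambda handler that turns a batch of relayed Event Hub product-change messages into Kafka-ready records might look like this:

```python
import json

def handler(event, context=None):
    """AWS Lambda sketch: take a batch of product-change messages
    (as relayed from Azure Event Hub) and turn each one into a
    record destined for a 'product-changed' Kafka topic."""
    records = []
    for msg in event.get("messages", []):
        # messages may arrive as JSON strings or already-parsed dicts
        change = json.loads(msg) if isinstance(msg, str) else msg
        records.append({
            "topic": "product-changed",
            "key": change["productId"],
            "value": json.dumps({
                "productId": change["productId"],
                "changedFields": change.get("changedFields", {}),
            }),
        })
    # a real handler would hand these to a Kafka producer here
    return records
```

The handler is written as a pure transformation so the produce step (and its retries, batching, etc.) can be kept separate and tested independently.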

Matt Simpson (13:02)

What that allowed us to do was decouple our backend and our frontend. So our frontend systems didn't really have to care that we had a new PIM. They don't have to care, you know, what the interface is; all they need to do is effectively listen for the event that says a product changed, pick that up, and then make their changes their way. And one of the big features we were really sold on with the Confluent Platform was the Schema Registry. So, you know, point-to-point is great until you change the interface, and then it breaks. It's not great really, but, you know, changing of the schema and the interface is often the problem with any integration, ESBs or anything. So the Schema Registry allowed us to sort of enforce what I would term referential integrity of that event for the downstream systems.
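To illustrate the idea (this is a toy check, not the Schema Registry API, which enforces much richer Avro compatibility rules), backward compatibility boils down to rules like: a new schema version may only add a field if it carries a default, so records written under the old schema can still be read:

```python
def is_backward_compatible(old_schema, new_schema):
    """Toy compatibility check: a consumer on new_schema can still read
    data written with old_schema only if every field that new_schema
    adds has a default value. Schemas here are simple dicts:
    {field_name: {"type": ..., "default": ...?}}"""
    old_fields = set(old_schema)
    for name, spec in new_schema.items():
        if name not in old_fields and "default" not in spec:
            return False  # new required field: old records can't supply it
    return True

v1 = {"productId": {"type": "string"}, "title": {"type": "string"}}
v2 = dict(v1, colour={"type": "string", "default": "unknown"})  # safe evolution
v3 = dict(v1, price={"type": "double"})                         # breaks old data
```

A registry rejecting `v3` at publish time is what saves every downstream consumer from discovering the break at read time.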

Matt Simpson (13:56)

So product was the actual service that we went with, and the first events that we started to capture. I guess alongside that, it's worth noting that my data background, as you say, came into play, and we also realized that we could quite simply push our web stream information, our clickstream information, from the Adobe platform into Kafka. And at the same time, we implemented a new data platform called Snowflake, that wonderful cloud analytics database.

Tim Berglund (14:37)

And that's Snowflake the company, not an internal name of yours?

Matt Simpson (14:42)

No, sorry. Yes. It's the very fast-growing new sort of cloud data platform. So we went with Snowflake and we went with Kafka, and really quickly, within a matter of weeks rather than months, we were able to build out a new way of reporting on web session data, basically.

Matt Simpson (15:04)

So we had all markets pushing their clickstream information through into Kafka, and then ultimately into Snowflake, where we had some BI dashboards on top of that that were changing certainly within the hour, that were allowing people to look at customer journeys and buying preferences and drop-offs and things like that. So it was quite good that we had made this investment into Kafka for integration, but very quickly, alongside this, we were able to have our BI team learn very quickly how to push events into Kafka, how to use KSQL to aggregate that data, and get some real business benefit in some reports that the business had wanted for ages, and now they could get it. So we raised, in other words, you know…
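The sessionization Matt describes was done with KSQL; as a plain-Python illustration of the same idea behind session windows, clicks belong to one session until a gap of inactivity (commonly 30 minutes) splits them:

```python
def sessionize(click_times, gap_seconds=1800):
    """Group click timestamps (seconds) into sessions: a new session
    starts whenever the gap since the previous click exceeds
    gap_seconds. This mirrors how session windows behave in KSQL."""
    sessions = []
    for t in sorted(click_times):
        if sessions and t - sessions[-1][-1] <= gap_seconds:
            sessions[-1].append(t)   # still within the inactivity gap
        else:
            sessions.append([t])     # gap exceeded: start a new session
    return sessions
```

From each session you can then derive the journey metrics mentioned here, like duration, click count, and drop-off point.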

Tim Berglund (15:52)

Yeah, you started using Kafka for integration purposes, and slightly parallel parts of the engineering team and the business started using Kafka in ways that you didn't anticipate at first. And this is a thing that Kafka does, right? There's this gravitational pull. You put some data in it and other people say, "Oh, I also want my data there, because now KSQL's a possibility, and I can do these better things than I would have been able to do if the data stayed locked up in Adobe" or wherever it was.

Matt Simpson (16:24)

Exactly. And the big win is if you have every single business event, or your key business events. And we talked a bit about what defines a microservice; it's quite tricky and we've had lots of debates – and I often revert back to Martin Fowler for definitions, but ultimately [inaudible] microservices tend to be unique to the company that's building them and why they're building them. But if you have all your business events generated and stored in Kafka, then you've just got your first perfect data source for your data warehouse. Whereas when I traditionally built BI platforms, I constantly had to understand the schemas of different source systems and export that data; then it would change, or I would think that this column meant it was a PO number, but actually it was only entered from time to time.

Matt Simpson (17:15)

And it didn't really mean it was a PO number, but you often didn't know that until you'd built this BI platform and produced a report for the business, and they'd sort of then fall back. So having all these business events said, well, this is great, because our data architecture version 2 means that once we've built this new integration layer, I then have every single event pushing into Kafka, pushing on into our data platform, which will allow us to have a really good source of information to build BI on. And the other thing, as data architectures are evolving, was that with the Schema Registry, I can now deal with changing schemas. I don't have to write any ETL process that understands that column 123 is actually called column 123. I don't care. I just pull in and [inaudible] for my event, and then I just look up the schema and say, what schema do I apply when I'm reading it. So it's gone really nicely, in that we found a way to accelerate the building of these microservices and the integration between them, and we are not there by any stretch, but we've also had this fantastic positive knock-on effect on our analytics capability as well.

Tim Berglund (18:31)

That is fantastic. And everybody listening, I just would like to say, in the normal preproduction process for an episode of Streaming Audio, we always have an idea of what we're going to talk about, there's at least a few questions that we're going to cover, and we kind of make some notes about who's doing what and everything. I absolutely did not ask Matt to speak all of the talking points that he's speaking. So if you're a longtime listener and you know what I'm interested in and sort of know what Confluent cares about, you might think, wow, these guys really just got Matt to say all the words. No, Matt is saying the words on his own. He's a free man, and I'm asking questions. Now, maybe you wouldn't think that, but I needed to call it out.

Matt Simpson (19:20)

Yeah. No. I'm an architect. And as an architect, there's no silver bullet technology, whether it's public cloud, whether it's eventing, or it's Kafka. For me, as an architect, it's finding the right technological fit for the business problem I'm trying to solve. And I've been in IT for 20 years, so I've seen technologies come and I've seen them go. And don't get me wrong, we've had our issues with Kafka. I mean, the KSQL work we did, we really tried to do too much with KSQL when we first started doing the transformations from individual clicks into sessions and things like that. But no, overall, it's been really positive, and it's a key part of the strategy for us, to the point where it's now on the CTO's sort of top list of areas that he's focused on, and we're having more and more sessions with him around the benefit of this and how this helps him realize his strategy and roadmap around things like replacing our Order Management System, replacing the warehouse system, doing those types of things. So it's really...

Tim Berglund (20:33)

And there are big things going on. You snuck another one in there, by the way. When you were talking about the product service, you talked a little bit about the stuff in Dynamo and then the stuff in Event Hubs. You've got two different public clouds there. But you also offhandedly said, oh yeah, by the way, we were building a new product master. Now, that's tremendously complex in itself and maybe a little bit off topic, but I wonder if you might just expand on that a little bit, because I know that's, number one, technically a lot of work. There's just a lot of lifting to do in terms of engineering and in terms of impact on the business. Usually, that's a strategic sort of initiative. So talk to us about that, and if there are streaming touch points there, I'd love to hear them, but I would just like you to share the pain of the new product master.

Matt Simpson (21:22)

Yeah, no problem. So the first thing is we didn't build it. So Boden is sort of moving to this very mixed model, which is very common nowadays, of a mixture of off-the-shelf and custom-built solutions. So we went for a platform that was a leader, and the way we've been implementing that is by doing it incrementally. So rather than just a big bang, saying, right, we're going to move all of the product information from all of these various legacy systems and centralize them, we've done that in an iterative way, to say, okay, what business benefit do we want or do we need by having more flexibility over how often we could change product information and maybe have that available on the web. So I think I mentioned the Royal family earlier, and one of the things that drove us to do our first use case for the PIM was actually something that sounds really simple: changing product titles. So if I have a dress, like a polka dot dress, for example, that's the title. And if somebody is lucky enough to Google polka dot dress, we might appear in a Google search. Now, if Kate Middleton wears that dress, then it'd be really useful if, you know, the picture comes out in the press, or social media these days, and we can very quickly say, right, let's rename that to the Kate Middleton polka dot dress, or as worn by. And if I can make that available on the web so that the bots can crawl and re-catalog it, then, hopefully, I can capitalize on effectively free advertising, and I can showcase basically how well thought of our brand and our products are, and lead to more sales. So the first thing we did was say, let's just pull the product title into PIM, allow the product team to override the product title, and through our product service, make that available on the website. Now, I don't need to tell you or the listeners, there are two ways to integrate between microservices: back end to front end, you generally will do like an API call.
Between services, you tend not to couple them. So we had an API call from our front end...

Tim Berglund (23:39)

You'd rather not, right?

Matt Simpson (23:40)

Yeah, we'd avoid it where possible, but we had an API call to the product service that would then grab the latest product title. The problem then that can happen, and did happen, was that the front end is cached. So it's not always going to go and get the latest information. So we have a design where an event will be raised from the product service to say the product has changed, and then the thing that controls the cache in the front end, I say the thing because I don't really get involved in the front end too much, but the piece of technology that deals with the cache, could then be listening for that event and say, yeah, the product's changed, why don't I refresh the cache. So we thought, just by having the API call, great, now the dress is going to be updated really quickly. But it wasn't, because it was cached. So the event allowed us to then refresh that cache, basically.
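A minimal sketch of that pattern (class and event names are illustrative): the front-end cache serves stale data until a product-changed event evicts the entry, so the next read fetches the fresh title:

```python
class ProductCache:
    """In-memory read-through cache such as a front end might keep.
    An event consumer calls on_product_changed for each event from
    the product-changed topic, evicting the stale entry."""

    def __init__(self, fetch):
        self._fetch = fetch   # function: product_id -> product record
        self._store = {}

    def get(self, product_id):
        # read-through: fetch on miss, then serve from cache
        if product_id not in self._store:
            self._store[product_id] = self._fetch(product_id)
        return self._store[product_id]

    def on_product_changed(self, event):
        # evict so the next get() fetches the updated product
        self._store.pop(event["productId"], None)
```

Without the event, the cache happily serves the old title until its TTL expires, which is exactly the "dress didn't update" problem described here.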

Tim Berglund (24:34)

Wonderful, cache invalidation through events.

Matt Simpson (24:36)

Yeah. And we're now probably about half way to having a proper enterprise PIM where currently we've got about 50% of our product information in the PIM. And every time we do that, we just enrich the event out of our product service and enrich the data that's in our product service, so then it's available for all of our downstream systems.

Tim Berglund (25:00)

And PIM, again, stands for?

Matt Simpson (25:02)

A product, it's a product master. I never knew what the I stands for, but it's an MDM solution built specifically for products as a domain.

Tim Berglund (25:11)

Got you. And just for anybody not doing MDM that prefers to [inaudible], you know what, this is, yeah, master data management. And Matt, I'm going to give the non-data person's account of an MDM, and you can clarify me if I'm not quite right. The idea being, if you've got, say, lots of different data sources that are describing the same entity, or different components of the same entity, the classical examples are people or customers, there could be eight systems in the company that have a picture of the customer. MDM is the tooling and associated business processes that try to create a comprehensive view of that entity. In this case, the entity is a polka dot dress.

Matt Simpson (25:52)

That's it, yeah, that's it. We often talk about systems of record or systems of origination and what you want is a single source of the truth for your key business domain. So yeah, customer product.

Tim Berglund (26:05)

And those entities, you said, now reside in a DynamoDB database; the product, the canonical view of the product, is in Dynamo.

Matt Simpson (26:16)

Well, the main view is in our PIM system, but that PIM system is fantastic for storing and managing the results and workflows and all this good stuff that it does. It's not so great at making that information available to the downstream systems and the front end. So we built a service over the top of that for product information that's linked to it using Event Hub, and that product service then stores that data in Dynamo and serves it through that. That then pushes out, or will do when we manage to get the next release done, an event [inaudible] of every single change of every single product that changes from the PIM. So it's that integration layer again; it's like a service that's effectively picking up the changes from the PIM and pushing that into Kafka. And you might ask, why do we do that, and why don't we just have the Event Hub event pushing into Kafka? And the reason is that the PIM event is not very eventy. So it's a huge JSON. If you change one field for a product, it basically gives you a massive JSON file of every single attribute, even if it's not changed. So we have to do some stuff with that to sort of shrink it down and identify the pieces we want rather than everything it gives us.
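A sketch of that shrinking step (field names are hypothetical): diff the full snapshot against the previous one and emit an event carrying only what actually changed:

```python
def to_delta_event(old, new):
    """Turn a full product snapshot (every attribute, changed or not)
    into a compact event of only the changed fields, or None if
    nothing changed. Both arguments are dicts of product attributes."""
    changed = {k: v for k, v in new.items() if old.get(k) != v}
    if not changed:
        return None  # no real change: don't raise a noise event
    return {"productId": new["productId"], "changed": changed}
```

This also filters out the "nothing changed but the system re-emitted everything" case, which otherwise floods every downstream consumer.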

Tim Berglund (27:40)

Got it. That makes a lot of sense. That makes a lot of sense. It's a very good PIM. It's not necessarily an event driven, fundamentally event driven, event native sort of thing.

Matt Simpson (27:51)

Yeah, which is a common problem.

Tim Berglund (27:53)

And then again...common problem with?

Matt Simpson (27:56)

It's a common problem of buying off-the-shelf and software-as-a-service mixed with building your own components and your own solutions: a lot of good, solid business systems are not event driven; they weren't built that way at the time. So we found that for the [inaudible] and for other systems, we're going to have to build this integration layer, the service layer that's going to give us the events that we need in the format we need them.

Tim Berglund (28:27)

Right, and that's because your architecture is still considered to be a forward-looking or sort of leading-edge approach. Now, we have a consensus that this is the way we ought to build systems, but not everybody is doing this. And most people who are doing this, and I say this all the time, are doing their first one. Like, you're here talking about the first one that you've built. And if you wanted to talk about traditional analytics systems, you wouldn't have a count of how many of those you've built, because it's what you've been doing forever. So we're all on our first one, and any component like this PIM is, you know, a hosted service that sells to retailers, I guess. And it would have had to have been built and come to maturity years ago, so it doesn't get to be this event-native thing, which is fine. I mean, there's all kinds of great products. And like you said, that's kind of your job now as an architect, to figure out what that layer is to bring it into what's becoming this event-driven architecture that you're turning the rest of Boden into. So what came next? We've been focused on product information and there's really a lot of interesting stuff to talk about there, but how did you take that then to make decisions about what you were going to chisel off of the monolith next, and what was that process like?

Matt Simpson (29:54)

Yeah, so we are still in that process right now. So our new CTO is really visionary. He really wants us to question the traditional retail architectures and come up with the right blend of off-the-shelf systems and building in our own way with a high level of engineering expertise, so that Boden IT can be thought of as really cutting edge and we can attract really good talent. So really, it's been about proving out a vision of microservices and event-driven integration. So that's now proven. The part of the process we're in right now is a combination of education and setting up for the next big wave of change. Now, obviously, in the current climate, like most businesses, that wave has slowed down somewhat. But our vision still remains, and what we're doing is a combination of education, so education ground up and top down as well. So we have some great Kafka 101 talks that some of your engineers have been helping us run with our tech leads and our BI teams, on our different product teams that we have, our engineers. And then we've also had some sessions with some of our other solution architects, with the CTO, with our new director of engineering, to really sort of make sure everybody's got a base-level understanding of, not so much what Kafka is, but actually what event driven means, and why, and what the use case is, and then where Kafka fits into that. And we've been able to run through the whole IT team and the management team within the last probably six to eight weeks, and it's gone really, really well. So now we are making sure that all of the services we need to build out, whether it's an image service or the stock service – the stock service is actually being built right now – are being built with Kafka in mind, with that Schema Registry piece in place, and also with the fact that those events will automatically then be pushed into our new data platform for our new data architecture as well.
So that's sort of where we are right now. So a combination of education and then making sure we've reset our roadmaps in the right way.

Tim Berglund (32:33)

Right. And the focus of your education, you said something that I think is key, which is that getting people to understand event-driven architectures is the hard part; Kafka is not the hard part. Now, people who operate Kafka clusters usually twitch a little bit when I say that, cause it's hard to – it can be hard to run it. You guys are Confluent Cloud users, so that's not a pain that you feel. You pay somebody else to feel that pain. But I think it's just such an important thing that you said: it's not Kafka. Like, you can kind of teach somebody how topics work and how to produce and consume, and you teach them some KSQL. Like, I could do that, give me half a day with a group of engineers and we'll get there. But the thing that's hard, when I sit with a group of architects and try to reason through real problems in the business, like, Kate Middleton Instagrams something and 15 minutes later the search indexes need to be updated. Okay, now, how do we do that? And everybody comes to the table with their traditional synchronous, database-centric application architecture and data architecture chops. Like, we've all done that, we know how to do that, we come to the table. But now we know it needs to be event driven because of these other constraints that are in the world, like, we have to respond to what Kate does on Instagram right away, which, by the way, I love that example because it's obviously the case in the UK that you have to do that. But when I sit with architects and try to reason through these things, I see what you said, which is that Kafka's not hard; event-driven architecture is the hard thing to think about. So what has that been like in your exposure to the educational program? What's hard for people about that?

Matt Simpson (34:34)

I think in our particular case, the hard thing is just the volume of change for engineers all at once. So we've got a big move to AWS, a big move to serverless, so we've got a group of engineers who are awesome, but they're all going, hang on a minute, I'm used to Visual Studio, I'm used to .NET, I'm comfortable here, I've got to learn all this new stuff. And now there's this solution architect banging on about this Kafka thing, which I know is really cool, but crikey, when am I going to get the time to learn that as well? So that's the first barrier: the volume of change for engineering is big right now. So what we've done is we've sort of tackled it two ways. The first way is really trying to come up with the reason why you need to be event driven. So there's a great Gartner quote about responding to business moments, and they class that as being digital. So you know, digital transformation is on everybody's lips, but nobody really knows what it means. Sometimes it's maybe just a way to get some shadow IT out there, but I thought the Gartner quote about being able to respond to business moments is key. And when I talk to the engineers and I say to them, what's your big problem, well, one of the problems we're working on right now is how quickly we can get a price change that we make in the business to the web. So we want to be really reactive, we want to be reactive to the market, especially in these tough times, and be able to tweak prices for all our different global markets at the right time.

Matt Simpson (36:07)

Well, right now, that can take something like 24 hours to four days, depending on when it was done, to push through a price change. And as you say, you go into these technical design sessions and the guys are coming up with fantastic ideas, and then I'm going, well, actually, okay, if I do that and then I want to change it later, or if I do that and I want to use that price change in, say, my new ERP, so my new ERP needs to know the price change so I can push it out to my new retail POS system in the shops, what you've just done doesn't allow me to do that, right? You're going to have to build something else. So if we think of a price change as an event already, instead of thinking about technologies, I'm just thinking about the key business things, I call them entities from the data world, but key business domains I think is the description for them in the microservice world. If we capture that event, and it's captured somewhere that we can store it and not lose it, and multiple people can then use it, well, we've just won. We've just set ourselves up in a way that means the next time we need to use a stock balance event, a product change event, or a price change event, I've got it available. And with the engineers, it's been about showing them that, as you said, Kafka is not that hard. One of the ways we've done that is we've extracted the API for publishing events into Kafka, and we've now just got a module that everybody writing a raise-event service basically just uses. That standard module is easy, [inaudible] does the Schema Registry piece that we need as well.
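The shared "raise event" module Matt describes could be sketched roughly like this. Everything here is an assumption for illustration, not Boden's actual code: the `EventPublisher` name, the envelope fields, and the topic naming scheme are all hypothetical, and a real producer (e.g. `confluent_kafka.Producer`, with Schema Registry serialization) would be injected where the in-memory stub is used below.

```python
import json
import time
import uuid


class EventPublisher:
    """Hypothetical sketch of a shared event-publishing module: every
    service raises events through one wrapper so they all share the
    same envelope and topic convention."""

    def __init__(self, producer):
        # In production this would be a real Kafka producer (e.g.
        # confluent_kafka.Producer); here any object with a matching
        # produce(topic, key=..., value=...) method works.
        self._producer = producer

    def raise_event(self, domain: str, entity_id: str, payload: dict) -> dict:
        # Wrap the payload in a standard envelope so every team's
        # events look the same on the wire (field names assumed).
        event = {
            "event_id": str(uuid.uuid4()),
            "domain": domain,            # e.g. "price-change"
            "entity_id": entity_id,      # e.g. a SKU
            "occurred_at": time.time(),
            "payload": payload,
        }
        self._producer.produce(
            topic=f"events.{domain}",    # assumed topic naming scheme
            key=entity_id.encode(),
            value=json.dumps(event).encode(),
        )
        return event


class StubProducer:
    """In-memory stand-in so the sketch runs without a broker."""

    def __init__(self):
        self.messages = []

    def produce(self, topic, key, value):
        self.messages.append((topic, key, value))


stub = StubProducer()
publisher = EventPublisher(stub)
evt = publisher.raise_event("price-change", "SKU-123", {"gbp": 79.0})
print(stub.messages[0][0])  # events.price-change
```

The point of the design, as Matt describes it, is that individual teams never touch the producer API or Schema Registry directly; they just call the module.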
But then it's also been about the management, not falling back on their traditional ways of doing things, and them having the same message about why we need to be responsive and how events enable us to be responsive, so that we're having the conversation. So when the director of engineering meets his tech leads and they go, yeah, we're going to raise an event and we're going to do it using Kafka, and that way it's maybe a little bit of extra effort to convert these every-five-minutes batches of 200,000 stock balance records into single events, but the benefit of that is X, the director of engineering goes, good idea, because that's part of the strategy we've been thinking about, and that will make that easy. And then the BI team get involved and go, awesome, I'm going to have a stock balance event, that's going to save me working it out. Whereas if I do it every night at midnight, or one minute past midnight, an event sneaks up, a change [inaudible] in the batch. So it's been about trying to educate, take on people's fears, and work through them. And actually, even me, and I've not done development for a long time other than data stuff, I can roll my sleeves up and have a go and just build something out. The nice thing is that with serverless and Lambdas and some of the other technologies we have now, you can build out solutions really, really easily, and that's really helped us.
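Converting a periodic batch into single events, as Matt describes for stock balances, usually means diffing the latest snapshot against the previous one and emitting one event per changed item. This is a minimal sketch under assumed names and shapes (a `{sku: quantity}` snapshot), not Boden's implementation; it also ignores SKUs that disappear between snapshots, which a real version would have to handle.

```python
def batch_to_events(previous: dict, current: dict) -> list:
    """Diff two stock snapshots ({sku: quantity}) and emit one change
    event per SKU whose quantity changed or that is newly seen.
    Field names are illustrative assumptions."""
    events = []
    for sku, qty in current.items():
        old = previous.get(sku)  # None for a newly seen SKU
        if old != qty:
            events.append({"sku": sku, "previous": old, "quantity": qty})
    return events


prev = {"SKU-1": 10, "SKU-2": 5}
curr = {"SKU-1": 8, "SKU-2": 5, "SKU-3": 20}
for e in batch_to_events(prev, curr):
    print(e)
# {'sku': 'SKU-1', 'previous': 10, 'quantity': 8}
# {'sku': 'SKU-3', 'previous': None, 'quantity': 20}
```

Each emitted dict would then be raised as an individual event, which is what lets downstream consumers like the BI team react to changes as they happen instead of reprocessing the whole batch at midnight.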

Tim Berglund (39:28)

What are you looking forward to doing next from where you're positioned right now? What's the next win?

Matt Simpson (39:35)

So the big win for me is the acceleration. It's taken us maybe a year to get to where we are, for a number of different reasons, and I now want to see a big acceleration. We have a couple of big projects coming into play where we're going to be replacing our core business systems, and that's the real proof for me of our vision. If we have all these services and we integrate using events, that's going to be a really seamless piece of work, and then getting real-time analytics out of that, again, is going to be a doddle. So I want to see more and faster business wins as we start to realize our vision. And yeah, this year is going to be tough for everybody, but what I'm seeing at Boden especially is a huge desire to learn and to accelerate, because sometimes it takes a big change in people's lives to give you the confidence to start learning and to embrace even further change. That's what I'm seeing: we've got great vision, we've got really good strategy, and now I'm just excited to start making it happen.

Tim Berglund (40:54)

My guest today has been Matt Simpson. Matt, thanks for being a part of Streaming Audio.

Matt Simpson (40:57)

Thanks. Thanks, Tim. Thanks for having me.

Tim Berglund (40:59)

And there you have it. I hope this podcast was helpful to you. If you want to discuss it or ask a question, you can always reach out to me at @tlberglund on Twitter. That's @tlberglund, or you can leave a comment on the YouTube video or reach out in our Community Slack. There's a Slack signup link in the show notes if you want to register there. And while you're at it, please subscribe to our YouTube channel and to this podcast wherever fine podcasts are sold. And if you subscribe through iTunes, be sure to leave us a review there. That helps other people discover the podcast, which we think is a good thing. So thanks for your support, and we'll see you next time.

Apache Kafka® is a powerful toolset for microservice architectures. In this podcast, we'll cover how Boden, an online retail company specializing in high-end fashion favored by the royal family, used streaming microservices to modernize its business.

Matt Simpson (Solutions Architect, Boden) shares a real-life use case showing how Kafka has helped Boden digitize their business: transitioning from catalogs to online sales, tracking stock, and identifying buying patterns. Matt also shares what he's learned through using Kafka, as well as the challenges of being a product master. And lastly, what is Matt excited about for the future of Boden? Find out in this episode!

Continue Listening

Episode 103 | June 1, 2020 | 40 min

Introducing JSON and Protobuf Support ft. David Araujo and Tushar Thole

Confluent Platform 5.5 introduces long-awaited JSON Schema and Protobuf support in Confluent Schema Registry and across other platform components.

Episode 104 | June 8, 2020 | 51 min

Exploring Event Streaming Use Cases with µKanren ft. Tim Baldridge

Tim Baldridge joins us on Streaming Audio to talk about event streaming, stream processing use cases, and µKanren.

Episode 105 | June 17, 2020 | 40 min

From Monolith to Microservices with Sam Newman

Author Sam Newman catches up with Tim Berglund in the virtual studio on what microservices are, how they work, the drawbacks of microservices, what splitting the monolith looks like, and patterns to look for. The pair talk through Sam's book “Monolith to Microservices” chapter by chapter, looking at key components of microservices in more detail.
