My colleague and fellow developer advocate, Viktor Gamov, recently did an online Spring Boot workshop that turned out to be a lot of fun. Of course, this meant it was time to get him back on the show to talk about this all-important framework and how it integrates with Kafka and Kafka Streams. Listen in on today's episode of Streaming Audio, a podcast about Kafka, Confluent, and the Cloud.
Hello, and welcome to another episode of Streaming Audio. I am, as usual, your host, Tim Berglund, and I'm joined in the virtual studio today by repeat guest, once and future and current guest, Viktor Gamov. Viktor, welcome to the show.
It's good to be back. Thank you for having me, Tim. Hello everyone.
Always. Now, Viktor, I want to talk about Kafka and Spring Boot today. But before that, let's imagine, I know this sounds kind of crazy, but let's imagine somebody doesn't know who you are and they're new to the show, and they haven't met you, they haven't seen any of your stuff before. Tell us a little bit about yourself.
Yes, of course. It's always great to introduce myself, so my name is Viktor Gamov. I'm a developer advocate here with Confluent, and this is like my fourth time on this show. And the thing that I do for [inaudible 00:01:29] is helping developers build their apps, their stream processing apps, using different languages. And today we're going to be talking mostly Java and maybe a little bit of Kotlin, because this is where my current expertise lies. And somehow it's aligned with the topic of today's show.
Yes, it is aligned very well. You recently, we're recording this at the very beginning of-
Yes, that's the year, thank you. I was like, wait, what's the year? The very beginning of January. And a couple of months ago now, you did a workshop having to do with Spring Boot and Kafka, and I wanted to just talk through that. Like what [crosstalk 00:02:19] do in that workshop, and tell us the story of that integration.
I think we're getting into a very good habit or tradition of running the workshops on some official or unofficial holidays. The first workshop, around like [inaudible 00:02:43] Spring Boot and Kafka and Confluent Cloud, was on Halloween. And recently, just a couple of weeks ago, we did another one, where we were doing a Christmas-themed workshop. So, if you haven't seen those, go check them out at youtube.com/confluent, you can find those recordings there. But essentially, the idea of these workshops is to demonstrate how modern application developers, specifically event-driven or stream processing application developers, can use productivity tools like Spring Boot and integrate them with basically the best managed Kafka in the world, and how they can write their apps, and how they can think about their apps, how to deploy those.
The first workshop was mostly around writing apps, and the second workshop was mostly around running the things, say, Kafka running in Confluent Cloud and your Spring Boot application running in Kubernetes, somewhere also in the cloud, which allows you to not think about infrastructure and some other things. So, that's it in a nutshell. If you're interested in these recordings, there are even, how you call it, the thing that you give people so they can walk through, a document or something like that, so they can go through these tutorial steps at their own pace. So those things are also available. So if you missed that, you can go and check this out.
Yes. And you definitely should. Those links are in the show notes for both of those recorded workshops, the Halloween and the Christmas one. The first one was more, I think, of an API-centric discussion and less of an ops-centric discussion. But talk us through that. So, I would like to assume, by the way, basic familiarity with Kafka, basic familiarity with Spring, although while you're talking, I'll probably ask you to do just a little bit of clarification on Spring stuff in case anybody's not fully ramped up on that. But, what is the approach that the Spring Boot Kafka integration takes? Like where are the touch points?
That's a great question, and from the get-go, for the people who want to start, they might have some questions and they might run into some confusion, because there are multiple things that you need to know. And obviously, naming all the things is something that in 2021 is still one of the hardest problems in computer science. So when you start searching for things like "Spring Kafka integration," just like that, verbatim, in Google, you might see some results that you didn't anticipate, and for specific reasons. There is a framework called Spring Integration that implements some of the enterprise integration patterns, and that framework also has a connector to Kafka.
But essentially what people want to use, and what people want to try to use, is the part of the Spring project that's called Spring [inaudible 00:06:27]. And what it gives you is a very thin abstraction, a very thin wrapper around the native producer and consumer, around the Kafka clients. It brings some of the Spring and Spring Boot magic in terms of how things work, and some of the opinionated default configuration.
Spring, first of all, is a developer enabler, a booster tool that allows you to do more with less. So it's about focusing on the actual things and not thinking about some of the low-level details much. This is why Spring Kafka can give you a very meaningful place to start. You just go there if you want to produce some data into a Kafka topic, using this abstraction called KafkaTemplate. And if you are familiar with this template pattern, the pattern has been in the Spring Framework for a very long time.
Many people who use Spring might be familiar with the JdbcTemplate. Or if you need to interact with some RESTful web service, or you need a RESTful client, there is a RestTemplate that allows you to write the client for the service. And there are others. If you're interacting with some sort of system, and you need to read data from it or write data, in most of the cases write data, you use this template pattern. And this thing is a very thin wrapper around the Producer API.
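[Editor's note: a minimal sketch of what the template pattern looks like in code. This is not from the episode; it assumes Spring Kafka 3.x on the classpath, and the component and topic name "greetings" are invented for illustration. It won't run without a broker and the Spring runtime.]

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical producer component; "greetings" is an invented topic name.
@Component
public class GreetingProducer {

    private final KafkaTemplate<String, String> template;

    // Spring Boot auto-configures a KafkaTemplate from application properties.
    public GreetingProducer(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    public void greet(String name) {
        // Thin wrapper over the native Producer: topic, key, value.
        template.send("greetings", name, "Hello, " + name + "!");
    }
}
```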
That's the template?
Yes. And so if I know the JDBC template, is it going to feel pretty familiar? Is it-
Well, it will feel more familiar if you're familiar with the Kafka producer. So, same way as [crosstalk 00:08:37] the JdbcTemplate-
That kind of validates, you said it's a thin wrapper. And if knowing the producer is the best way for me to learn the template, then it must be a thin wrapper.
Yeah. Like if you think about this, the JdbcTemplate was a wrapper around JDBC, like prepared statements and JDBC statements and result sets that you're getting from the database, where you're still kind of operating on the knowledge of how the prepared statement works in JDBC. And after that you have results, and you need to iterate over the result set, and Spring provides you ways to simplify that. For example, there's easy mapping from the rows of the result set to objects.
Same thing for the KafkaTemplate. If you know that your producer needs to write the data into [inaudible 00:09:34] it provides you some similar methods, like there's a method that produces data to a topic based on the key and value. You can have different callbacks, because if you just do producer [inaudible 00:09:47] you actually have one method that will get the metadata as a result of this callback. Spring provides some of the more modern abstractions for handling these asynchronous responses, like a CompletableFuture or ListenableFuture as a result. And after that-
You don't like callback Viktor? Come on.
No, I don't. It's 2021, like, we're not animals here. So, yeah-
I had trouble remembering the month of the year when we were just getting started talking, but you're right, it is 2021. So maybe there are better ways to do that.
Yeah. So the way you can pass the futures around allows you to do better composition and better handling of the requests and things like that. Not to be confused with reactive Spring Kafka, which is a totally different thing. What we're talking about for now is Spring Kafka, which, like I said, is the wrapper that allows you to operate and not write all this [inaudible 00:10:56] yourself.
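[Editor's note: the future-based composition Viktor prefers over raw callbacks can be shown in plain Java. This is a standalone illustration, not from the episode; the simulated `sendAsync` stands in for a real `KafkaTemplate.send`, and the metadata string is made up.]

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSendDemo {

    // Simulated async send: stands in for KafkaTemplate.send(...),
    // completing with fake record metadata.
    static CompletableFuture<String> sendAsync(String key, String value) {
        return CompletableFuture.supplyAsync(
                () -> key + "=" + value + " @ partition-0/offset-42");
    }

    public static void main(String[] args) {
        // Compose follow-up steps instead of nesting callbacks.
        String ack = sendAsync("user-1", "clicked")
                .thenApply(metadata -> "acked: " + metadata)
                .join();
        System.out.println(ack);
    }
}
```

The point of the composition style is that each step (`thenApply`, `thenCompose`, and so on) chains onto the future instead of being buried in a nested callback body.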
Got you. The typical springy kind of way?
Yes. And Spring Kafka actually includes some of the components that enable some of the magic to work in Spring Boot. Like there's an annotation, @EnableKafka, that automatically will discover some of the properties in your application, the [inaudible 00:11:19] files, that will be responsible for interacting with Kafka. Or if there are no such properties, but you have the annotation @EnableKafka, it will assume that you're running this in development and you don't even need to provide your, how you call it? Like bootstrap server and some other connections. [crosstalk 00:11:39] It will just assume that it's going to be localhost and will just run this with some default settings.
Cool. You mentioned Spring Reactive Kafka. Could you give us a few minutes on that? I know it's not precisely on topic today, but, I don't want to let it go by without a little bit of definition.
Yeah. I think this particular topic of reactive programming will require its own episode, and, based on people's feedback, give us some, I don't know, we do have iTunes, put some of the good stars in iTunes, if you want to hear an episode about reactive programming, and Kafka, and other things. But essentially-
Or Viktor's Twitter handle will be in the show notes. You can just tweet.
Please by all means, yes, tweet.
[crosstalk 00:12:37] Like to hear about reactive programming specifically in Spring.
Yes. So yeah, the current state of reactive programming around Kafka is quite, what's the word I'm looking for? Interesting, because there are multiple other things around this. So essentially, there's Project Reactor, it's a runtime that implements the reactive specification, and Project Reactor also has a core that implements all this reactive machinery. So your code that runs using this will be reactive.
But there are some other bits around this. If you do an external call to a system where the underlying client, for example in Kafka, is not reactive per se, it doesn't support the reactive specification, there are ways you can wrap the existing API, and this is where the CompletableFuture, [inaudible 00:13:40] future things come in handy. Those things can be wrapped and transformed into these reactive primitives, like a Flux or Mono: a stream that will return multiple responses, that's a Flux, or one response, that's a Mono.
So the Reactor Kafka project, or Spring Kafka Reactor project, was an absolutely separate project that was developed by some folks from Pivotal, and our current colleague, she was at Pivotal at that time, [inaudible 00:14:23] she was actively involved in the development of the client. But the API is slightly different. It's more aligned with some of the reactive primitives rather than being just a wrapper around the Kafka producer and consumer.
So, yeah, like I said, that's a conversation maybe for another episode, which I would love to have in the future, but let's stick to the plan. Apart from the template pattern, a very famous and very actively used pattern is called the message-driven bean. It was infamously, or famously, introduced in some of the Java EE specifications.
Oh yes, it was.
And the idea was to have a container-managed component, where a method of this component would be invoked by the underlying framework when some event happened. In the particular case of J2EE, it was the MQ system. So if something happens on a JMS queue, this bean would have a method, listen or something, that's either annotated with a particular annotation, or configured, and that method would [inaudible 00:15:58]. So the same idea was translated to Spring back in the day when Rod Johnson was creating a better Java enterprise framework-
Re-writing it in a simpler way.
Exactly. So there you have just a simple Java bean, which is a Java object, that also annotates some method, and it will be invoked by some message that comes in through the pipe. The same pattern is available for JMS. The same pattern was available for, say, Rabbit, if you're using the Spring AMQP library. And not surprisingly, it was translated to Kafka.
One of the reasons is that the same team, the same people who wrote the Rabbit integration, also work on the Kafka integration. But also, if you think about Kafka as an advanced messaging system, you still can [inaudible 00:17:03]... you still can say that it is a messaging system. So the same pattern is also implemented for Kafka. There's an annotation called @KafkaListener. You annotate a particular method of your message-driven bean, and it will automatically create a consumer for you.
So in your application, you don't need to write this while loop, in this application you don't need to write anything. You will just get the message injected into the method. And it's quite powerful. If you need to write a small consumer real quickly, you can just do this in a matter of a few lines. The only things that you need to specify are the consumer group ID and the topic that you want to listen to, and after that, Spring will handle some of the bits around serialization, things like that.
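[Editor's note: a sketch of the listener pattern just described. Not from the episode; the topic and group ID are invented, and the code needs Spring Kafka and a running broker to actually consume anything.]

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical message-driven bean; "greetings" and "greeting-app"
// are illustrative names.
@Component
public class GreetingListener {

    // No while loop, no poll(): Spring creates the consumer and
    // injects each message into this method.
    @KafkaListener(topics = "greetings", groupId = "greeting-app")
    public void listen(String message) {
        System.out.println("received: " + message);
    }
}
```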
To become a message-driven bean, is there an interface you have to implement or is it just a matter of annotating?
Never was. Yeah. So if you remember Java EE times, you needed to implement some sort of message listener interface. This is where me and Tim are showing off our professional age-
That's history, but in modern systems, that's not required. We know that everything is annotation-driven, we just need to annotate this bean-
[crosstalk 00:18:37] Anyone was thinking it, you're just not thinking it.
Yeah. But even so, if you're not interested in using this approach, you still can have a similar wrapper to the one you have for a producer, a similar pattern for the consumer. You can even combine those, like you can create the Kafka listener with a method that will accept a Kafka consumer as a parameter. So inside this method, you will be able to call whatever consumer methods you have. So there is flexibility. Like I said, there is a productivity, not performance, performance is a different type of conversation, a productivity boost for people who want to write applications faster.
There you go. Which, I mean, that's sort of the spirit of Spring. So you've got the template for producing, you've got the listener, which is an annotation for consuming, and that's kind of bread and butter right there. Obviously I want to get on to Kafka Streams and all of that stuff, but what else are we missing?
So, in my opinion, one of the powers, and the reason why I like to use it and why I like to promote this, is definitely the speed with which you can create applications by not focusing on some of the lower-level things. Not like they don't matter, but it's just something like, okay, how many of you, and you can also ask me at [inaudible 00:20:16] on Twitter, or TL [inaudible 00:20:18] on Twitter, how many of you wrote an actual configuration framework in any language? At least like five of you, and usually, or used some configuration framework?
This is something that's not brilliant, not fun, and you need to come up with more and more flexibility. For example, reading from property files, reading from XML files, reading from YAML, or, now, reading from environment variables.
[crosstalk 00:20:49] The podcast now, careful with that.
Yeah, exactly. I know. And in this case, the thing is that Spring handles all these configuration things and the ability to inject or [inaudible 00:21:02] these properties through environment variables. So say you deploy this in one environment, and in order to connect to another environment, you need to redefine the bootstrap server. So what do you do? You can just inject a new environment variable that will automatically override whatever you have in your application properties. It's kind of nice. It just works and it's there. And one of the things, I don't remember if we discussed this on this podcast, but essentially Spring Boot was highly inspired by the twelve-factor app manifesto, which is about writing portable apps and extracting the configuration, do not include configuration as a part of your application, and the framework needs to help you do this.
And Spring provides the different profiles that allow me to customize my deployment. I can have a profile for running in development on my laptop. I have a profile that only runs when I run, say, Gradle test, or Maven test, or whatever CI server I'm running. Or another profile that will run against the cloud, another profile that runs against some Kubernetes deployment with the Confluent Operator, and things like that.
But your code, your code will be the same. It's just the configuration that Spring will handle. And this is quite powerful, and one of the reasons why I like to use it, because I don't need to worry about this. I can focus on showing people some of the Kafka bits, but the configuration will be simply injected. I just put this, like, let's assume things will go into this application profile, and that's it. That's very powerful. And like I said, even though in the social circles that me and Tim move in, people can slap you in the face when you mention YAML, you also can use YAML for configuring your application.
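[Editor's note: a hedged illustration of what such profile-specific configuration might look like in an application.yaml. The broker addresses and environment variable name are placeholders; the multi-document `on-profile` syntax assumes Spring Boot 2.4 or later.]

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092    # default: local development

---
spring:
  config:
    activate:
      on-profile: cloud                  # active only when the "cloud" profile is on
  kafka:
    bootstrap-servers: ${BOOTSTRAP_SERVERS}  # overridden via environment variable
```

The code never changes; activating a profile (or exporting `BOOTSTRAP_SERVERS`) swaps the connection details, as Viktor describes.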
In some cases it's actually, I don't believe I'm saying this myself, sometimes okay. And I can give you an example where it's okay. Yeah, that's the basics, but on top of this, we know that there's no conversation about Kafka and Java without a conversation about Kafka Streams. Spring Boot provides us with another annotation called @EnableKafkaStreams. It's also part of the Spring Kafka framework, and it provides an opinionated configuration and enables this opinionated configuration discovery for Kafka Streams. So for Kafka Streams, we do have an entry point, starting with the StreamsBuilder. The StreamsBuilder is the object that you start interacting with to create the topology that will define your data flow.
And after that, it's the boilerplate. You need to create the KafkaStreams object, put the topology there, put some properties there, and after that, there's another infinite loop of waiting for a signal to die, because a Kafka Streams application also runs [crosstalk 00:24:35]-
Well, processing new events and waiting to die?
A dark way of putting it. But, I don't think I'm really objective.
Also, every time you're doing the same thing. Again, remember the configuration. So you have these property files, you already probably have these property files defined in your configuration. If you need to write something, there is a way you can do that. The next thing is creating this KafkaStreams object, and again, do we need this? So this is why you only need to define the bean that will expect a StreamsBuilder to be injected. In this case, the StreamsBuilder also will be provided to you by the framework, for Kafka Streams, by the Spring Kafka library, and that's it. So you're only focusing on developing your own topology. That's super easy and super straightforward. The lifecycle, creating this KafkaStreams object, starting it and stopping it, and interacting with the underlying infrastructure, is already handled by Spring.
So, a similar approach. In this case, this wrapper is even thinner, because you are still defining your topology the same way as you like to do. There are ways you can customize some of the things, and Spring provides you entry points where you can customize it, but in general, it's just as simple as that. Annotate with @EnableKafkaStreams-
And you get a [crosstalk 00:26:23]-
Create a method that will be expecting the StreamsBuilder to be autowired, and define your topology, that's it.
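[Editor's note: a sketch of the pattern just described. Not from the episode; topic names are invented, and this needs Spring Kafka, the Kafka Streams library, and a broker to actually run. Spring builds, starts, and stops the KafkaStreams instance for you.]

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@Configuration
@EnableKafkaStreams
public class StreamsTopology {

    // The StreamsBuilder is injected by the framework; you only
    // define the topology, the lifecycle is Spring's problem.
    @Bean
    public KStream<String, String> shoutingStream(StreamsBuilder builder) {
        KStream<String, String> input = builder.stream("greetings");
        input.mapValues(v -> v.toUpperCase())
             .to("greetings-uppercased");
        return input;
    }
}
```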
Easy as that. So, wow. Yeah. What else is there to that? I guess nothing.
Well, it's nice that you asked Tim.
Yeah, that seems too easy. Go ahead.
So, no, I mean, this is where I think it's very low-hanging fruit to start using Spring and Kafka. There's an interesting thing that you might find. Remember, a few minutes ago, I was talking about you searching for Kafka, Spring Kafka integration, things like that. So the Spring Integration Kafka component was there for a while, because Spring Integration is similar to, what, Fuse, similar to Camel. Those frameworks allow you to define your data flow, and Kafka is just one step of this data flow. Usually, similar to a messaging system, it's used as a place where the systems interact. I'm simplifying right now, I understand that there are some other things.
So what about bringing this idea somewhere closer to the developer? And the first attempt to do this was a thing called Spring Cloud Stream. So, Spring Cloud Stream, it's another wrapper, a thicker one, I would say, and this wrapper might include some of the more opinionated things. Let me explain. So, Spring Integration operates on the concept of channels. There's an input channel, an output channel, and there's some processor. So it's very similar to a generic graph that describes any type of data flow system. [crosstalk 00:28:51].
I'm thinking of Apache Storm right now. I mean, it sounds like that sort of model.
Yes, but it's not only Apache Storm, it's just a generally accepted way to describe your data processing. So same thing with Spring Cloud Stream. There, you can define an input channel that would be represented as a Kafka topic, or it can be a RabbitMQ topic, or it can be a [inaudible 00:29:24] table, something like this. And you just define these data flows around this concept of channels, and you define the bindings, and the bindings would be materialized using a binder. The binder is the special component that will actually be dealing with and handling the hard work. It will be interacting with your messaging system, it knows all the intricacies of how to do this, but to the outside world, to the world of Spring Cloud Stream, it will provide the same interface, this channel, input and output channel.
So this is where you can declaratively define your ETLs using the notions or notations that Spring Cloud Stream uses. In this case, you only need to write these handlers, these processors. And there are three types of processors. One can get something, another one can produce something. And the third one is actually getting something and producing something new. Does it sound familiar, Tim?
Well, it does. I'm thinking of abstractions and Kafka Streams. I don't know what you're thinking of?
Also, you can think about the abstractions in Kafka, because some channel that produces data can be a Kafka producer. Like, imagine there's some generated data, or data retrieved from a database, and you push it into Kafka. There's some entity that will be like a [inaudible 00:31:09], for example, it will just receive some message and print it out to the console, or do something that will not affect an output channel. And then in between, you were exactly right, in between there would be something that processes, because you need to have something in and something out. And this is something that was demonstrated at Kafka Summit New York, 2019. Wow, that was a long time ago. By Tim and Josh. If you've missed the podcast with Tim and Josh, you can go back a few episodes and listen to Josh Long from Spring.
Yes, they did a presentation where they were explaining the concept of Spring Cloud Stream. [crosstalk 00:31:56] But this is not the end. So this concept was powerful, but I cannot say that it was very intuitive for people. So the Spring Cloud Stream folks thought, okay, how about we make this even more complicated? To be fair, I don't believe that was their intention. Their intention was to create something like a function as a service, or a function that performs certain logic, that would be totally abstracted from the underlying channel, the underlying transport.
So in the recent versions of Spring Cloud Stream, the concept of channels has just disappeared. Now, we still have a binder in order to configure the integration with the underlying message bus, the underlying system, but they switched to a more functional approach. So if you need to write something that will be producing data to a Kafka topic or wherever, there is a functional interface in Java that represents this type of pattern. This interface is called Supplier, and on the Supplier, you can always, when the [inaudible 00:33:33] method get, as far as I remember, you can get something.
Every time you call the get method of [inaudible 00:33:47] the Supplier, you'll get some result. The same concept they take for an interface called Consumer, and the interface Consumer, as we know, has a method, I don't remember the method, let me... it's the method accept. So, in Java we have another interface called Consumer that will receive something from an external system. Remember the thing that we talked about earlier in this episode, this event listener that has a method listen, into which something will be injected?
Now there's a standard interface for that, and it does exactly what it's supposed to do. So there's a Consumer interface, not to be confused with a Kafka consumer, which it happens to be if you are using the Kafka binder. But like I also mentioned this episode, there's a very difficult task in computer science called naming things. So that's why the Consumer. Now, for something in between, we both know the interface that allows you to get something in and get something out, it's the interface Function. We have a type for something going in, and something going out.
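[Editor's note: the three java.util.function interfaces Viktor walks through, in plain Java, no Spring required. The "quote" example data is invented.]

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class SamInterfacesDemo {

    // Source: every call to get() produces something.
    static final Supplier<String> source = () -> "quote";

    // Processor: something in, something out.
    static final Function<String, String> processor = s -> s.toUpperCase();

    // Sink: accept() receives something; nothing goes downstream.
    static final Consumer<String> sink = s -> System.out.println("got: " + s);

    public static void main(String[] args) {
        // Wire source -> processor -> sink by hand; Spring Cloud Stream
        // does this wiring for you via configuration.
        sink.accept(processor.apply(source.get()));
    }
}
```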
So if we use these same single abstract method interfaces as our programming model, we're getting this new version of Spring Cloud Stream, where in your code, it's just a Java interface. You can do tests without bringing in any testing frameworks, you're testing a Java interface. You have a configuration that can be a property file, or it can be YAML. In this particular case, YAML is okay, because it fits into this kind of structure. You don't need to repeat yourself multiple times, because in the property files you would need to invoke the binder properties multiple times, so there would be multiple copies of the same line. But you still can do that if you're into this type of [inaudible 00:36:12].
So your programming model now fully operates on the concepts of functional interfaces, or single abstract method interfaces. For a producer, you need to define the method that returns a Supplier. For consumers, you need to define a method that returns a Consumer. And for a function, you can define an interface that will [inaudible 00:36:43] and as the input and output types for this function, you can have KStreams, you can have KTables, and all these kinds of things. And in your configuration, in your YAML or your application properties, you actually do this wiring. You're saying, okay, the output of this will go into the input of this. And this is where your text definition allows you to juggle the components and connect them just using configuration, using whatever binders are there.
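[Editor's note: a sketch of what the functional model looks like as Spring Cloud Stream beans. Not from the episode; the bean names, the quote, and the composition shown are invented, and this needs Spring Cloud Stream with a binder on the classpath to actually bind to anything.]

```java
import java.util.function.Function;
import java.util.function.Supplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Each bean becomes a binding; no channels appear in the code.
@Configuration
public class FlowDefinition {

    @Bean
    public Supplier<String> quotes() {        // produces to an output destination
        return () -> "Roads? Where we're going we don't need roads.";
    }

    @Bean
    public Function<String, String> shout() { // processes: something in, something out
        return s -> s.toUpperCase();
    }
}
```

The wiring then lives in configuration rather than code, for example composing the two with something like `spring.cloud.function.definition: quotes|shout`, with binder-specific destination properties mapping each binding to a topic.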
There it is.
So, yeah, and that's why it might create some of the confusion when people try to Google something, because there are some old examples that use the old notion of using the annotations, binders and channels and the bindings and things like that. There's a new version of Spring Cloud Stream that still supports that, it's kind of deprecated, but backward compatible. Plus this functional approach. Or, if it's too much abstraction for many of you, you still can use Spring Kafka. And as a matter of fact, I demonstrated all these techniques in some of the tutorials and workshops, and even in live streams.
Specifically, I think I did a video where I'm showing a Kubernetes operator, the Confluent Operator. And in this example, I used Spring Cloud Stream apps. One application was just spitting out records. It was either Chuck Norris quotes, or no, it was Back to the Future quotes. And another application was using Spring Cloud Stream, but it only used Kafka Streams to do the processing. So in this case, it's building functional, or function-based, logic for microservices. And thanks to a framework like Spring Boot, it also handles some of the hard stuff, like building images, building the correct version of images using the right JVM, and not recreating the image layers that didn't change, because we have layers, and all this kind of stuff. So I am excited about this-
Images there, you mean Docker images, right?
Yes. Like, well, Docker is becoming more-
Yeah. So we're talking about [inaudible 00:39:29] Container Initiative-compliant images, because it's not only Docker.
Which is probably Docker, but an image. And you're saying that tooling is there if you're using Spring Boot, you've got the momentum of all that kind of tooling behind you?
Yeah, exactly. So the things-
[crosstalk 00:39:47] Kubernetes.
Absolutely. So the things that I mentioned in the show, the underlying important things, not sexy, not fun things, but important things, are handled by the framework. That's how it's supposed to be, in order to make a developer productive and enable them to build some awesome things. The framework focuses on providing you the things that you would be doing yourself eventually. You would be responsible for creating the build that produces your jar with your dependencies. And after that, you need to put this into the correct place inside the Docker image, and you need to make sure that you're putting in the top layer only the things that change constantly, your code, because your dependencies are not changing very often, so you're putting them in a different layer. All these things are there, there are tools, but Spring Boot and the Spring tooling already handle this for you.
So, same thing with configuration. You still need to configure your app somehow: connection strings, bootstrap servers, configuration of your serializers, or even, if you go into this crazy world of Spring Cloud Stream, configuration of your data flow. All these things are important things, not fun to program, and Spring has got you covered.
The best way to get started is of course to watch one or both livestream episodes, links in the show notes. But after that, what if somebody wants to start getting hands on keyboard, how do we do that?
Well, we do have, we acknowledge, we as a team at Confluent acknowledge that this is an important part of the client infrastructure. Even though it's a Java client, we do have a native integration on the configuration side for Spring Boot inside Confluent Cloud. So if you go there and you're trying to connect your application, you can copy a snippet and place it directly into your Spring Boot application, and these applications should work with Confluent Cloud. I'm working on bringing the Spring Cloud Stream, like, YAML-based configuration also into Confluent Cloud, so that people can just copy, paste it, and use it.
We do have a very cool project that my colleagues on the developer relations team are developing. I believe Rick and Alison were on one of the past episodes, where they were talking about some of the cool stuff they do. And recently Rick released a kind of Spring-flavored service that you can integrate in the overall Kafka DevOps project. If you want to look at how all these things work in real life, or closer to real life, or how real life should look, you want to check this out, this Kafka DevOps project. It has an order service that is like a Spring Bootized version. And you can learn how you can integrate this in a more advanced way.
I showed how we can integrate this with Kubernetes in one of my workshops; it's the next step. My tutorial was more like a 101 or 102 version, not fully a 101, but Rick's version was more like a 201. So you need the background from my workshops, and after that, you can go into the world of real continuous integration and continuous delivery practices.
My guest today has been Viktor Gamov, Viktor, thanks for being a part of Streaming Audio.
Thank you very much. And as always have a nice day.
Hey, you know what you get for listening to the end? Some free Confluent Cloud. Use the promo code 60PDCAST, that's 6-0-P-D-C-A-S-T, to get an additional $60 of free Confluent Cloud usage. Be sure to activate it by December 31st, 2021, and use it within 90 days after activation. Any unused promo value on the expiration date will be forfeited, and there are a limited number of codes available, so don't miss out.
Anyway, as always, I hope this podcast was helpful to you. If you want to discuss it or ask a question, you can always reach out to me at @tlberglund on Twitter. That's T-L-B-E-R-G-L-U-N-D. Or you can leave a comment on a YouTube video or reach out in our community Slack. There's a Slack signup link in the show notes, if you'd like to join. And while you're at it, please subscribe to our YouTube channel, and to this podcast, wherever fine podcasts are sold. And if you subscribe through Apple Podcasts, be sure to leave us a review there that helps other people discover us, which we think is a good thing. So thanks for your support. And we'll see you next time.
Viktor Gamov (Developer Advocate, Confluent) joins Tim Berglund on this episode to talk all about Spring and Apache Kafka®. Viktor’s main focus lately has been helping developers build their apps with stream processing, and helping them do this effectively with different languages. Viktor recently hosted an online Spring Boot workshop that turned out to be a lot of fun. This means it was time to get him back on the show to talk about this all-important framework and how it integrates with Kafka and Kafka Streams.
Spring Boot enables you to do more with less. Its features offer numerous benefits, making it easy to create standalone, production-grade Spring-based applications that you can just run. These patterns have also existed inside the Spring Framework for a long time: the Spring Integration framework implements many enterprise integration patterns and also has a pre-built Kafka connector.
Spring Boot was highly inspired by the twelve-factor app manifesto, which allows you to write portable apps and extract the configuration, providing different profiles for you to customize your deployment.
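As a small sketch of the profiles idea, a single `application.yaml` can carry per-environment overrides that Spring Boot activates with `spring.profiles.active`; the profile name and broker addresses below are placeholders:

```yaml
# defaults, used when no profile is active
spring:
  kafka:
    bootstrap-servers: localhost:9092
---
# overrides applied when the 'prod' profile is active,
# e.g. via SPRING_PROFILES_ACTIVE=prod
spring:
  config:
    activate:
      on-profile: prod
  kafka:
    bootstrap-servers: broker-1.internal:9092,broker-2.internal:9092
```

Because the active profile can come from an environment variable, the same artifact stays portable across environments, which is exactly the twelve-factor separation of config from code.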
This is a critical part of the Kafka client infrastructure. Even though it's a Java client, Confluent offers a native Spring for Apache Kafka integration on the configuration side for Spring Boot inside Confluent Cloud. If you try to connect your application, you can copy a snippet and place it directly into your Spring application, and it works with Confluent Cloud. Viktor is now working on bringing Spring Cloud Stream-style YAML-based configuration into Confluent Cloud too, so folks can easily copy and paste it and have it work out of the box.
To close, Viktor shares about an interesting new project that the Confluent Developer Relations team is working on. Stick around to hear all about it and learn how Spring and Kafka work together.
If there's something you want to know about Apache Kafka, Confluent, or event streaming, please send us an email with your question and we'll hope to answer it on the next episode of Ask Confluent.