Serverless. It's one of the buzzwords of the moment, but is there some substance behind that buzzword? That's what we're going to discuss in this episode of Streaming Audio. Streaming Audio is brought to you by Confluent Developer, which you'll find at developer.confluent.io. It's our site teaching you everything you need to know about event streaming, event-driven architectures, and of course, Apache Kafka. You can find everything from getting started guides that will teach you how to connect to Kafka with your favorite language, all the way up to the high-level abstract stuff like best practices for event streaming architectures, and plenty of information in the middle, like courses that will get you started with understanding how to build event-driven systems. If you check out one of our courses, you'll probably want to sign up for Confluent Cloud. You can get a free account there. And if you use the code PODCAST100, you'll get a hundred dollars of extra free credit, so you can kick those tires a little harder.
My guest today is Bill Bejeck, member of the DevX team at Confluent, former member of the streams team, author of Kafka Streams in Action, and a colleague of mine, quite happily. Hi, Bill. How's it going?
I'm doing great, Kris.
Good. Enjoying the timezone difference. It's like breakfast time over there.
Yeah, exactly. Exactly. I guess this is, what mid-afternoon for you?
Mid-afternoon. Ready for a cup of tea and a chat. Ideal time. So our topic today is serverless. And as soon as I hear that word, I desperately want to say, "Surely there are servers." So I'm going to start you with that question. What does it mean, serverless? What is this buzzword?
Exactly. That was the thing. When I first heard it, I was like, "What do you mean, serverless? You need a computer somewhere in the equation." I've come to believe, and this might be a crude definition, that to me it's literally someone else's computer. You're not worried about it. You write your code, but you're not worried about deployment. I should say, you've got a deployment piece where you actually have to deploy it. But maintaining it, if something goes wrong, making sure the health of the servers and everything like that, it's not your concern anymore. So really, you're just focused on code and that's it.
You're not the guy running apt-get to get your packages and all that stuff.
Exactly.
You don't have to be a Linux expert to get going.
Yeah. And maybe more importantly, you're not the person watching the PagerDuty dashboard, you're not the one on the hook for something's wrong and you get called at 3:00 AM. That's not you anymore.
Yeah. We've all been there. And if you haven't, as soon as you are, you realize you don't want to be there anymore.
Exactly, exactly.
So, okay. Someone else is worrying about the service. And other people worrying about problems is a good thing, but why else is it interesting?
I think once you get away from, okay, you're not worried about the servers anymore, from a business standpoint, at least in my opinion, it makes a lot of sense. Let's take a step back. When you're a business and you have an application, and let's say you're self hosting, you spend a lot of money and time on the care and feeding, so to speak, of those servers. And there's a lot of things that you have to do that don't directly impact your bottom line, but they're things that need to be done, [crosstalk 00:04:01]
Necessary evils. Right?
Yeah, exactly. Yeah. And then it also requires hiring people to maintain that. I won't name them, but I used to work at a place where they were using Hadoop, and you hired a huge team of people just to maintain this Hadoop cluster. And eventually the place went to AWS and used the AWS cloud version. And for a while, they needed to maintain the internal thing. But you quickly came to see the economy, I don't know if economies of scale is the proper term, but all of that resource, all that money. Okay, you paid AWS for the service, but now you were solely focused, or more focused, not solely, more focused on things that directly relate to, I don't want to use the term bottom line, but your business purpose, so to speak.
Yeah. The reason you are actually going to customers in the first place. Right?
Exactly.
And I think economies of scale might be the right phrase, but it's not scale of servers per se, it's scale of expertise.
Yeah. I guess maybe comparative and relative advantage. I was an economics major in undergrad.
Oh, were you?
Yeah. And so I guess if you think about it in terms of comparative advantage, yes, you can do those things to maintain the servers, but you have your expertise in, sort of like you were saying, serving your customers. So let someone else that has the expertise, and whose business purpose is maintaining all the computers and the servers and everything, do it. Pay them for that, and then you gain back more by focusing solely on serving the customer and the things towards that end.
Yeah. I worked for a while for a company that did social media network analysis for marketing people. And they spent half their time writing that kind of software, and the other half taming the beast that was this Hadoop cluster. In hindsight, they probably could have got half their week back, or at least close to it.
Yeah, exactly.
And that's what they were supposed to be the experts in, the social media network graph theory type stuff.
That's a perfect example. Perfect example. That's a highly specialized field in and of itself that requires a decent amount of expertise. And you want to spend your time on that, not, "Hey, the name node is not looking so healthy."
Yeah. Yeah. Okay. So making that a bit more concrete, what are some examples of this? Who do you think is doing this service well for different types of serverless deployment? Is it all databases? Is it other stuff? Tell me.
Well, I'm a Confluent employee, but I think Confluent Cloud is an excellent example of that.
In what way?
Well, okay, you've got Kafka. Apache Kafka is well known in software circles now. And you use it as the central nervous system, if you will, of your incoming data. But it is a distributed application; it works over a network. And anytime you have something distributed, that just brings a whole host of things that could happen.
And then installing it locally is one thing, but to install it, and going back to care and feeding, to maintain a cluster takes time and expertise. So with Confluent Cloud, it's the same thing: you step away from that. You don't have to be a networking expert. You don't have to be an expert in brokers. You go to a webpage and with a couple clicks you have a cluster up and running. And then you have your endpoints that you point your clients to, whether it's producing or consuming. And then that's it, you have some setup work, you have your configuration, and then you plug in your code. Your applications are now talking to your cluster.
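As a concrete sketch of what "you have your configuration and then you plug in your code" looks like: the property names below follow the common librdkafka-style client convention that managed Kafka services typically require, and the endpoint and credentials are placeholders, not real values.

```python
# Minimal sketch: once the cluster exists, your application only needs an
# endpoint and credentials, not server administration. The endpoint, key,
# and secret here are made-up placeholders.
def cloud_client_config(bootstrap_url, api_key, api_secret):
    """Build the minimal config a Kafka client needs to talk to a hosted
    cluster over the internet (managed services typically use SASL_SSL)."""
    return {
        "bootstrap.servers": bootstrap_url,  # the endpoint you point clients at
        "security.protocol": "SASL_SSL",     # encrypted, authenticated connection
        "sasl.mechanisms": "PLAIN",
        "sasl.username": api_key,
        "sasl.password": api_secret,
    }

config = cloud_client_config(
    "pkc-xxxxx.us-east-1.aws.example.com:9092", "MY_KEY", "MY_SECRET")
```

You'd hand this dictionary to your producer or consumer, and the same config works whether your application runs in your data center or someone else's cloud.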
Yeah. And you're not freed from thinking about clusters entirely. You just have to focus on the application level cluster issues. Right?
Exactly. Exactly, yeah. You still have to be aware. To borrow a term, and I don't know if this is quite the meaning of it, you need some sort of mechanical sympathy. There's a school of thought that you need to understand what's going on under the hood, because you don't want to ask a machine or an application to do something it's not well suited for. And so you need some sense of mechanical sympathy with what it is that you're pointing to. You don't want to spin up a Kafka cluster and then start piping 10 megabyte videos across to it.
Yeah. You still have like the responsibility to make good decisions. It's just not going to take you a team of people to try and make that decision. Right?
Exactly. And then to take it a step further, so you've got a couple clicks, you've got your Kafka cluster there going. You've hooked up your clients to it and your producers, and you've got data flowing in, but data in and of itself isn't going to answer your problems. You need to query it, you need to ask questions of it, you need to look at it. And for that we've got something, ksqlDB, which is a streaming database. And again, this takes serverless another step, in the fact that you click on a link and you bring up a console, and then you write some SQL, and then you can start getting real results from what is flowing into your Kafka cluster.
Yeah. Start to get some of that analysis and processing layer.
Yeah, exactly.
I know we're going to talk about an example of that later, but let's just ask the other side of the question, which is, okay, so you've got a database data store in the cloud serverless, what about computing? What about serverless processing? Where do you go for that?
Well, again, there's lots of options. But with AWS, and depending what you want to do, I'll talk about Lambda first. I would say a lot of developers are familiar with the term Lambda, and to me that comes from the functional programming world, where [crosstalk 00:11:51]
It's a world I like living in.
A lambda's a way of passing in behavior: you've got a function that accepts a function as a parameter that does something, and that allows you to really build composable code. I probably shouldn't go any further myself, because I'm going to get this wrong.
I think you could say even simpler. And you've got just a world of anonymous functions that could have a name, but maybe don't, and you connect them together. Right?
Yeah. And each one has a purpose. It has a single purpose, which makes things easier to test and reason about. You chain them together to get a much larger scope of functionality, if you will. But AWS Lambda has something that's like that, where you write code, but it's a singular function. I'll talk about it in Java terms: it implements a single interface, RequestHandler I think is the interface name. And it accepts a parameter of ... Oh, I won't go into that level of detail. But you're handed off a batch of records and you do something with them.
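As a hedged sketch of that handler shape, here it is in Python rather than Java (in Java you'd implement the RequestHandler interface Bill mentions). The event layout below mirrors what a Kafka-triggered Lambda is handed, a batch of records grouped by topic-partition with base64-encoded values, and the "do something" step is just an illustrative uppercase.

```python
import base64

# Sketch of a Lambda handler: you're handed a batch of records and you do
# something with them. The event shape and the processing are illustrative.
def handler(event, context):
    processed = []
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            # Record values arrive base64-encoded; decode before processing.
            value = base64.b64decode(record["value"]).decode("utf-8")
            processed.append(value.upper())  # the "do something" step
    return processed
```

The single-purpose shape is the point: one function, one batch in, one result out, which keeps it easy to test in isolation.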
Yeah. We can keep it high level. I think we're going to put some links to GitHub in the show notes.
So you've got AWS Lambda, and basically, you literally write a function that does something, and then you package it up and deploy it. And then you just define what triggers this function. You can connect it to any number of event sources. An event source is something that captures when an event happens. I think the canonical example is a user uploads a photo to an S3 bucket, and you automatically want to resize it. Well, your Lambda "listens" to that S3 bucket. And when a new file arrives, it's going to handle that image, resize it, do whatever you want to with it, and then put it back somewhere else. I'm sorry, go ahead.
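That canonical S3-trigger example can be sketched like this. The event layout follows the shape of an S3 notification, and resize_image is a made-up stand-in for real image work, since a real function would fetch the object, resize it, and write it back elsewhere.

```python
# Sketch of a Lambda "listening" to an S3 bucket: when a new object
# arrives, the function receives a notification describing it.
def resize_image(data):
    return data  # placeholder: real code would do actual image resizing

def on_upload(event, context):
    handled = []
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        # A real function would fetch the object from the bucket here,
        # resize it, and put the result back somewhere else.
        handled.append((bucket, key))
    return handled
```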
So it could be listening to an S3 bucket, and presumably you're going to tell me it could equally be listening to a topic in Kafka, or any number of external sources.
Exactly.
So you've got data ingress, some kind of processing layer, maybe some ksql to chew through the data as well. And then do you host a web server? I always think in terms of web server, because it's a nice set of tiers and it's a nice [crosstalk 00:14:36]. So you stick maybe a web service somewhere on AWS to then spit out the end of that pipe of functions and topics.
You could. You deploy your Lambda to AWS, and then basically you host it on AWS, you don't have to deploy your own server. And then it can connect across the internet to write your final result. I guess an example would be, you can have an AWS Lambda that is triggered off of, say, a Kafka topic. And where it's hosted doesn't really matter too much, because you give the Lambda a bootstrap server, that's a Kafka-specific term. You just give it a URL for it to connect to a broker somewhere, and it doesn't necessarily care where that is. So it's going to listen to that topic. Well, actually, under the covers you're going to have a consumer, and the consumer associated with that Lambda is going to read in records and pass them off to the Lambda, which does whatever with them. And then within that Lambda, you could have it actually produce back to a different topic in that same Kafka cluster.
So it's almost like this classic factory model of conveyor belts and people churning the thing and putting it on a next conveyor belt. Right?
Yeah, exactly. And the interesting thing about that is, okay, so I've talked about how you go in, you have a few clicks, and you spin up your Kafka cluster. You write some SQL to do some processing of the records coming into the topics you have, maybe stateful processing, but again, that's just SQL. You're just writing SQL statements. Then you've written some code for your Lambda, and you upload that. And that is the extent of what your development effort has been. So you haven't expended too much energy in that area.
So to process the raw data, you're using a mixture of SQL queries, different SQL statements and Lambdas. What's the intuition for which one suits what kind of use case? Where am I using one versus the other?
Lambdas excel at stateless processing. Because Lambdas come alive, they're triggered, they do work on the records they've been handed, and then they're done, they go back to sleep. So they're best suited for stateless processing. ksqlDB really excels at stateful. It's reading records in from the topic and it's designed to-
Like [crosstalk 00:18:11]
Exactly. You can do joins, you've got two topics. You've got a table and a stream. Your stream is events as they happen, and your table is things that can be changing, but it's got a primary key, and you're interested in the latest change for that key. Whereas with your event stream, each record is important on its own. They're independent of each other, even if the key is the same. I always like to use the example of a stock ticker, well, I guess that's a better example of a table. Think of a table where you've got a stock ticker coming through, and you're only interested in the latest price, and the symbol is the key. An event stream could be, think about IoT. Every record, you're interested in the trend, [crosstalk 00:19:25] like whether it's a temperature gauge or something like that. Every one matters, because it's not just the latest, they all matter, because you want to see a trend.
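That ticker-versus-IoT distinction reduces to a small piece of code: a table keeps only the latest value per key, while a stream keeps every event so the trend survives. The symbols and prices below are made up.

```python
# Table semantics: later records for the same key overwrite earlier ones.
def as_table(events):
    latest = {}
    for key, value in events:
        latest[key] = value        # only the latest price per symbol survives
    return latest

# Stream semantics: every record matters on its own, so keep them all.
def as_stream(events):
    return list(events)

ticks = [("ACME", 101.0), ("ACME", 102.5), ("ACME", 101.8)]
table = as_table(ticks)    # {"ACME": 101.8} -- just the latest price
stream = as_stream(ticks)  # all three ticks -- the full trend
```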
That kind of relates back to one of the things I like about event streaming architectures. Because one guy on the trading floor says, "I only care about the latest price," and everyone agrees, "I only care about the latest price." But then next year someone comes along and says, "Well, actually I was interested in all the historical prices. What did you do with those?" And you say, "Oh, I threw them away. I just updated the table every time." And that's when you start thinking about an immutable event-driven architecture, where you don't throw data away by habit.
Exactly. You've got that full log. You've got that full history there and you can replay it if you need to.
You adapt to different purposes, yeah.
Exactly. But getting back to ksql. So let's say you've got this event stream of trades coming in and you've got a customer ID, but you'd like more information. You're a person, you're not just some combination of letters and numbers identifying you, there's a person behind that. So you could easily do a join against a user table, if you will, that has more information about you. Maybe it's demographic, you want to look at more demographics, whatever. But you do that join. And the user information is not going to change very frequently, but it changes. Or it could be information like whether the person is logged in, and that would change quite a bit, someone's logging in, logging out. But as this event stream's coming in, you want to join it against this user table. And that's stateful, because you have to keep the state of the current users.
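That enrichment join can be sketched as plain Python: each trade carries a customer ID, and the join looks up the (mutable) user table to attach more detail. In ksqlDB this would be a stream-table join in SQL; the field names here are purely illustrative.

```python
# Sketch of a stream-table join: enrich each trade event with details
# from a user table keyed by customer ID. Field names are made up.
def stream_table_join(trades, users):
    for trade in trades:
        user = users.get(trade["customer_id"])  # stateful lookup into the table
        if user is not None:
            yield {**trade, "name": user["name"], "region": user["region"]}

users = {"c1": {"name": "Ada", "region": "EU"}}        # the "table" side
trades = [{"customer_id": "c1", "symbol": "ACME", "qty": 10}]  # the "stream" side
enriched = list(stream_table_join(trades, users))
```

The stateful part is exactly the `users` lookup: the join has to hold the current contents of the table to enrich each event as it arrives.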
Yeah. So join stateful stuff would be the right place to move to ksql. And then Lambda's stateless. Would you be doing stuff like accessing third party APIs where things may or may not succeed? Is that the right place to switch over to Lambdas?
Well, the nice thing is, you've got your stateful processing going in ksql, you've made your join, and then you can write that out to a topic. And you might want to do some sort of additional processing on that. And that's where a Lambda figures in really nicely, because you're writing results to a topic, and it's very loosely coupled. So ksql writes the topic, then your Lambda is going to consume from that. And if that involves heavier processing, or like you're saying, having to reach out to another party, that level of decoupling is great, because the ksql query is just going to continue to churn. This lookup, this contacting an external service, the heavier processing you're doing, doesn't affect the query. It just keeps running.
And additionally, let's say, like I said, it's heavy processing. If it becomes too slow, then Lambdas will automatically scale. You get this horizontal scaling. In the case of using a Lambda with a Kafka topic as the external mechanism for triggering it, it monitors the lag. The consumer lag is: how far is your consumer, which has been consuming records, from the latest record that's come into the topic? So for a simple example, you've got a hundred records in your topic. It's a zero-based offset, so the latest offset would be 99. And the latest offset you've consumed is 80. So you've got a lag of 19 records, because you've only consumed up to 80, but 99 is in the topic. That's the latest record that's been written. So you want to monitor that, because there's always going to be what I like to call a frictional lag. Because producing, you really don't do much, you just keep appending to the log. But consuming, you want to do something with it. Inherently, you're interested in it, so you're going to do something with it once you consume. So in my mind, consuming takes a little more work than producing.
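The lag arithmetic from that example is worth pinning down, since it's the number the scaling decision watches:

```python
# Consumer lag: distance between the newest offset written to the topic
# and the newest offset the consumer has processed. Offsets are zero-based,
# so a topic holding 100 records ends at offset 99.
def consumer_lag(latest_offset, last_consumed_offset):
    return latest_offset - last_consumed_offset

lag = consumer_lag(99, 80)  # -> 19 records behind
```

When that number keeps growing, the triggering machinery takes it as a signal that the consumer can't keep up, and that's when it scales out.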
That's interesting, because that's kind of counterintuitive. Because you think of the consumer as the reader, but it's reading for a purpose.
Yeah. And that's just my own personal take on it.
Yeah. But it's a nice way of thinking about [crosstalk 00:24:44]
Yeah. So at any rate, the Lambda service is going to monitor that lag. And if it continues to grow, it's going to automatically scale out and spin up another Lambda instance.
Right. Which being stateless is more scalable.
Exactly. Lambdas can scale out to thousands of instances, and in the particular case that we're talking about, the most you would end up with would be a Lambda per input partition of the topic you're using.
Okay. So by thinking about the number of input partitions, you get a sense of the upper threshold, how much you're going to spend on the processing too, right?
Yeah. So if you've got 10 partitions, the most that would scale out to would be 10 Lambdas. But that's just that particular case. Now, as a detour on my part: outside of the Lambda world, if you have just a Kafka consumer and you don't want to be bound by input partitions, there is a parallel consumer written by a colleague of ours, Antony Stubbs, and we'll put the link in the show notes. If you need your consumer to process more, to not be bound by partitions, that would be your ticket there. But getting back to Lambdas: yes, it would automatically scale out and pick up the slack. And the converse is true; if that level of processing power is not needed, it would reduce the instances back down. So it's always looking for a level of equilibrium, I guess, is what I want to say.
Yeah. And that's something that's really well trodden in AWS that you wouldn't want to reinvent on your own servers, that kind of auto-scaling.
Exactly. Exactly. It's built in. It happens automatically. And the thing is, I don't know about most important thing, but if you were to do it yourself, that's a tricky thing to get right. So you might be tempted to over provision all the time, or to cut corners, "It can wait," if you will. But this way it's nice because you're only going to pay for what you use. So let's just say the topic that you're using to trigger your Lambda, if it's spotty, if it's not constant data-
Bursty.
If it's bursty, you will not pay. You don't pay anything for uploading your Lambda and letting it sit there. You only incur a cost when that traffic is flowing and triggering the Lambda to work.
Yeah. Plus you never incur a cost for writing that mechanism yourself. You get the mechanism for free, which I think is nice.
Related to that, it makes me think of a system from my own past, where we had customers coming in, and they would register their address, as you'd expect. And you'd stick that in a table, and you'd look up the postcode, or zip code for different listeners, by going off to a third party service, getting the address, and storing all the details there. So we did that in the system I'm thinking of, but it was a pain and it was a reliability thing. In this world, you would stick the user's address in a topic, and then there'd be a Lambda that did the third party lookup. And that's sounding nice and scalable. The piece that makes me wonder is, what about transactions? Do we have the kind of transactional guarantees for if the lookup fails?
Well, I guess it depends on what you're doing with the result of the Lambda, but if the Lambda fails, its internal consumer is not going to commit those offsets. So whatever offsets it consumed, or the records corresponding to those offsets, it would try again, it would consume those same records.
So as they say in the Erlang world, you'd just let it crash. And someone else would try that same offset again.
Yeah, exactly. For whatever reason, say a bad network connection and it timed out, and you've consumed records five through 10. Or say the Lambda instance itself failed. It would fire up again, but it would pick up right where it left off. That's the nature of working with Kafka.
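That "let it crash, pick up where it left off" behavior comes from committing offsets only after processing succeeds. Here's a toy illustration of those semantics, with an in-memory list standing in for Kafka itself; the batching and failure are simulated.

```python
# Toy model of commit-after-processing offset semantics: a failed batch is
# simply consumed again from the last committed position. The list "topic"
# is a stand-in for a real Kafka partition.
def consume(topic, committed, batch_size, process):
    """Read a batch from the last committed offset; commit only on success."""
    batch = topic[committed : committed + batch_size]
    try:
        process(batch)
        return committed + len(batch)   # success: advance the committed offset
    except Exception:
        return committed                # crash: offsets stay put, batch replays

topic = ["r0", "r1", "r2", "r3"]

def flaky(batch):
    raise RuntimeError("bad network connection, timed out")

offset = consume(topic, 0, 2, flaky)   # fails -> offset stays at 0
offset = consume(topic, offset, 2, list)  # retry succeeds -> offset moves to 2
```

The design choice is that processing is at-least-once: a crash never loses records, it just means the same batch may be handed to the function more than once.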
That sounds a lot easier than the system I built way back when, but there were fewer pieces. We've got a lot of infrastructure for free if we want to just go and grab it. So is there anything else I should know? We're going to put a link to an example, and I think you've done a blog post about this in the show notes, but is there anything else I should know, we should know before going off to that link?
No, I think it's a pretty simplistic example, but I think it showcases what you can do. So don't drill into the details too much. It's a join between an event stream and a table that writes to a topic, the Lambda picks that up, does some simulated processing, and then writes back to a topic which then ksql does some further analysis on. That's the end to end demo.
Right. Yeah. So it shows you all the bits of scaffolding for a system.
Yeah. Yeah, exactly.
Okay. Okay. So let me think. I know what I'm trying to do, I know where I can go and do it, and I've got a good example of how to get started. Let's step back to the bigger picture then. If I decided that in my company we were going to go serverless or pilot serverless, I know which providers I'm going to start talking to, but what does the world look like when you are writing software that way? How does my day to day change as a software developer? Tell me a bit about that.
I think it becomes a little easier, because you're more focused on the code you've written. And the testing, you still have to test, but your tests are more focused on: does your code work properly? And you still have to factor in all the error conditions and things like that. But then, I think we touched on this before, you deploy it. And there's still going to be, I would guess, some level of monitoring. Because you deployed to a service, and the service is going to worry about keeping the servers up, but it's not their responsibility to keep an eye on your code. You still need to monitor it, because let's just say, going back to your postal code example, you've always expected it to be six digits and you get one that's five and you did not anticipate that, and that causes an error. That's not the responsibility of the service provider. So there's still a level of monitoring you have to do.
But I feel it's a bit simpler, because you put the hooks in, like for looking at logs and things like that. And then you can monitor latency: how long is it taking for your code to respond, and where is it spending most of its time? You don't necessarily get to drill down like you would if you were going to do a profile locally, but you still get some good metrics as to where it's spending its time mostly. And then you can get logging for: are there errors, what's happening within your code.
Yeah. I can imagine you get a different level of detail than you're used to, but you also get it much more easily than you're used to. Right?
Yeah. It's easier. All of those things are already built in, you just have to go in and enable them. Speaking in AWS terms, and I'm sure the other cloud providers offer the same thing, you can enable tracing, if you will, that'll, like I said, show you the timing. From the time that it came online and accepted the request, how much time was spent at the various stages of the life cycle of the Lambda? And then you can have logging, and that's something you have to write, but you can write in the various steps of where you are. And of course, any place that there's a potential for error, you want to capture that and log that [crosstalk 00:35:34] tracing.
So still some time spent deploying, but much less time setting up that deployment pipeline. Is that fair?
Yeah, I would say that's fair. You're always going to have that. You built it, you have to deploy it. And I think what's nice is you get a high level of repeatability without writing that tooling infrastructure yourself. Again, if you're managing it yourself, there's a lot you have to write to do all of that. And in this case, a lot of the tooling is provided for you, you just need to kind of stitch it together, if you will.
Yeah. Yeah. Okay. What about switching from my developer hat to my business hat, how do things change now I'm the CTO of my imaginary company?
I think you're looking at, there's always a cost, but you really are only paying for what you use, which I think is big. This is going back a few years, but I've worked in a few places where you had an in-house Hadoop cluster. And one of the concerns at one place I worked, we had a Hadoop cluster and we were only using it eight to 10 hours out of the day. We'd get these data sets, do this massive join, push it out somewhere. And then there was an application that would do queries on this data, and then the data would get refreshed and we'd do the massive joins again, then push it out. But from a business perspective, it's sitting there idle a huge amount of time.
And so the department I worked in was actively going out and trying to find other people that could use the cluster, because otherwise it was just sitting there, and it costs money. And I guess they were also looking at the cost of it: you spent X amount of dollars per server or whatever, and you're paying for people to maintain it. And you're paying that regardless of whether you are using it for an hour a day or 24 hours a day. So now you're not concerned about that, because from the Lambda perspective, you only pay for when it comes up. And on the Confluent Cloud side, I believe you're only paying for the data that's flowing through. You don't pay for the servers themselves. You pay for the usage as the records are flowing through.
Yeah. I think you have to pay something for the ksql side.
Yeah, ksql, there's a charge there. But again, it's minimal when you compare it to what you would pay to have all of that in house and maintaining it.
Yeah. And you don't end up, as you say, going outside trying to find someone else to use your accidental cloud service.
Yeah, exactly. Exactly. Yeah. And that is a great way to put it as an accidental cloud service.
Oh, God. The things we've done along the way over the past, what, 20 years of internet land, trying to make life easier.
Yeah.
Okay. Well, I think I'm going to go and take this for a spin myself. Because I have much smaller scale problem as a kind of bedroom hacker at times, that I want to deploy something and I don't want to maintain it, even though it's tiny. I want to get away from the problem of maintaining service myself. So it's not just a big or medium sized company that cares about getting rid of the server problem, it's also little old me.
Yeah. I like your term bedroom hacker. Yeah, and that's ideal in those situations as well, especially for prototyping, because you can quickly get a clean environment, have things running to flesh ideas out. And then when you're done, it's just a couple of clicks and you get rid of it all.
Yeah. Nice.
And then, like you said, if your bedroom hacking turned into something longer lived that would keep running, then you can do that. But again, you're just focused on the piece that you've created yourself.
Yeah. That's the dream of every bedroom hacker, that it develops its own legs and they never need to do anything to scale it.
Yeah, exactly. Exactly.
Brilliant. Okay. Well, I look forward to looking through the code base.
Oh, great. Hopefully it's instructive.
I'm sure it will be. I know you're a good writer. Bill's book is available in all good shops. Bill, thank you very much for talking to me. It's been a pleasure.
It's been great talking to you, Kris. Thanks for having me.
Thanks for [crosstalk 00:40:51] We'll see you again.
And that brings us to the end of another episode of Streaming Audio, the Confluent Developer podcast. Find us at developer.confluent.io, the one stop shop for everything you need to know about learning Kafka and event driven architectures, from the low level getting started guides to the high level architectural walkthroughs. I hope you've enjoyed this week's episode. I've certainly enjoyed talking to Bill. I hope you found it as instructive as I did. If you have any thoughts or comments, then you can get in touch with us. You'll find my Twitter handle in the show notes. If you are watching this, there are comment boxes that you can enter a comment in now. And I'm probably legally required to say, don't forget to like and subscribe and click that notification bell, because that's what all the cool kids say these days. But I hope one way or another you'll join us on the next podcast. And if you have any thoughts, get in touch. It remains for me to say thank you to our guest, Bill Bejeck of Confluent, for talking to us this episode. And thank you to you for listening. We'll see you next time.
What is serverless?
Having worked as a software engineer for over 15 years and as a regular contributor to Kafka Streams, Bill Bejeck (Integration Architect, Confluent) is an Apache Kafka® committer and author of “Kafka Streams in Action.” In today’s episode, he explains what serverless and the architectural concepts behind it are.
To clarify, serverless doesn’t mean you can run an application without a server—there are still servers in the architecture, but they are abstracted away from your application development. In other words, you can focus on building and running applications and services without any concerns over infrastructure management.
Using a cloud provider such as Amazon Web Services (AWS) enables you to allocate machine resources on demand while handling provisioning, maintenance, and scaling of the server infrastructure.
There are a few important terms to know when implementing serverless functions with event stream processors:
Serverless commonly falls into the function-as-a-service (FaaS) cloud computing category—for example, AWS Lambda is the classic definition of a FaaS offering. You have a greater degree of control to run a discrete chunk of code in response to certain events, and it lets you write code to solve a specific issue or use case.
Stateless processing is simpler in comparison to stateful processing, which is more complex as it involves keeping the state of an event stream and needs a key-value store. ksqlDB allows you to perform both stateless and stateful processing, but its strength lies in stateful processing to answer complex questions while AWS Lambda is better suited for stateless processing tasks.
Integrating ksqlDB with AWS Lambda delivers serverless event streaming and analytics at scale.
EPISODE LINKS
If there's something you want to know about Apache Kafka, Confluent or event streaming, please send us an email with your question and we'll hope to answer it on the next episode of Ask Confluent.