When we say app modernization, what do we really mean? Usually, containers and microservices. That's definitely what Google's Jay Smith means. We talk about both on today's episode of Streaming Audio, a podcast about Kafka, Confluent and the cloud.
Hello and welcome to another episode of Streaming Audio. I am as ever your host, Tim Berglund. I'm joined in the virtual studio today by Jay Smith. Jay is an app modernization specialist at Google. Jay, welcome to the show.
Thank you for having me. It's a pleasure.
We're going to talk about serverless eventing today. There's a great deal to unpack in those two words, more than 40 minutes' worth. Good luck to us. Before we do that, as always, I love to ask my guests to talk a little bit about what they do and, more importantly, how they got there. What was your career journey that got you to where you are?
Yeah. My main goal here, what I do at Google Cloud, is I kind of help our customers understand how to modernize their apps, how to move from more legacy, monolithic applications to being cloud native, which obviously includes a lot of Kubernetes and Kubernetes-related products, and best practices around that.
The way I got to where I am, a little bit of a non-traditional path, I guess you could say. I did go to school for business, not tech. However, I was always a tech geek, if you will. I remember I built my first home computer out of parts that I found at garage sales when I was 11 or 12. I kind of always had my head in that. I've always been self-taught, self-directed and that's kind of what drove me through my career.
I ran a small tech firm in San Antonio, Texas while I was in college. After college, I decided to move on to another venture and moved to Austin. Then, I started working in the corporate world, working as a server monitor for a hosting company, then as a tier-three support rep at a CRM company, and now I'm at Google.
I've always considered myself to be not so much forward thinking as much as curious, which kind of led to forward thinking, because I was always curious about what was on the edge. How can things be done better? What's this cool new technology? All of that stuff, and that led me to learning more about containers when they first started to become popularized by Docker, then cloud.
Once the orchestration wars began between Kubernetes and every other platform and a lot of startups and whatnot, I got [crosstalk 00:03:07]. Yeah. I really got into Kubernetes, and that's kind of what led me to joining Google, my knowledge there. Yeah. It's been a very fun path. Just a lot of self-led education and learning from other people who've traveled down the path, and learning from my own mistakes, sometimes, too.
Sometimes, you just got to not be afraid to fall and get back up and try again.
Yeah. Always good advice. Always good advice. Where you are now, when you say app modernization specialist for Google Cloud, you said it, of course that's going to involve Kubernetes. It sounds like it probably means just ... Okay, when I hear people say app modernization, I don't think containers first, I think microservices first. Is that fair in your view of what you do?
Oh, absolutely. Containers are kind of just an implementation of the concept of microservices. The idea of microservices, you could argue, predates the idea of containers. I think people were starting to decompose services into smaller services in the 2000s, if not before, when PaaS was becoming bigger: App Engine, Heroku, whatnot. The idea of microservices predates that.
I think what we see today, is that the most popular implementation of microservices is through containers. Will that be the case 10 years from now? Who knows? I'm pretty sure we'll have something new. Yeah, for now, whenever people say microservices, they kind of see it as synonymous with containers just due to the nature of how we talk about it.
Yes, my main focus is more toward telling people how to be microservices first, because that's how you're going to develop your application. Developers aren't really worried about, oh, am I using a container? Am I using this or that? Am I deploying on whatever? They want to write code. We've got to attack that portion first, that architecture, that mindset of how we write code. That's the microservice mindset.
That makes a lot of sense. I would absolutely agree that microservices predated containers. It was just awful, then. Containers are a good fit and trying to do this without a standard container orchestration platform, I think, didn't feel very good. It's good that we have one now. Cool, that makes a lot of sense.
As I said before, our main subject for the day is what you call serverless eventing. I want to give you an opportunity to define that. If I could, before you define serverless eventing, could you give me your account of what the word serverless means? This is a word that admits of more than one definition, so I want to hear yours.
Right. Yeah. We often have a lot of people, when we ask about serverless, who each have their own kind of definition of what that means. I've seen people refer to a managed server or managed service as being serverless. In some cases, that is the case, but it doesn't really follow the same model of serverless. A lot of people think serverless is simply functions. Just as we talked about containers being one implementation of microservices, functions are a small piece of the whole story of serverless.
To me, serverless is where you abstract all the servers, all the infrastructure, from the end user. In this case, the end user is the developer. Obviously, none of us have figured out how to run software on a literal cloud yet. Everything is somebody else's server. Obviously, there's no true serverless. The thing is, you abstract all that away from the developer, so all the developer has to focus on is writing code, packaging it and then deploying it. Then, our abstraction layer kind of automatically deploys the software and makes it work.
Obviously, there's a lot more going on behind the scenes. As far as the developer is concerned, it's like, hey, I've pushed my code and now it's running and everything's good to go. It also enables operators. A lot of times in that DevOps handoff, the developers will have to pass the code along to the operators and the operators will then do what they do to deploy, or the developers have to become operators themselves.
What we do is we take away all of that and make it easier, but then we also change the model, the implementation of how it works. Where people used to think about, I'm spending X amount for VMs this month, or X amount for this much storage, X amount for this, we stop thinking purely about memory, storage, CPU capacity. We think more about use. Requests. Instead of being billed by, I have 50 workers running right now and that's how much I'm going to be paying for.
Yes, they automated the deployment of the workers, but I'm still paying for them. Instead, you're just paying for request calls, or oftentimes also for how long that worker is running. Those workers are more elastic; let's say containers here, and the container or the pod spins up when a request comes in and spins back down when the request is done. That's truly a serverless model, in my mind: you've abstracted the server, the infrastructure, from the developer, but you've also simplified the pricing model, to where it's by request or purely by compute time.
Or some kind of usage quantum whatever it is?
Exactly. Exactly. You're paying for use. You're not paying for idle workers that may or may not be used.
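To make that pricing difference concrete, here is a rough back-of-the-envelope sketch in Python. Every price in it is an invented round number for illustration, not any provider's actual rate.

```python
# Hypothetical prices, made up for illustration only.
VM_PRICE_PER_HOUR = 0.10     # always-on worker: billed whether used or not
REQUEST_PRICE = 0.0000004    # serverless: billed per request
GB_SECOND_PRICE = 0.0000025  # serverless: billed per GB-second of compute


def monthly_vm_cost(num_workers, hours=730):
    """Always-on model: you pay for idle workers too."""
    return num_workers * hours * VM_PRICE_PER_HOUR


def monthly_serverless_cost(requests, avg_seconds=0.2, memory_gb=0.25):
    """Usage model: pay per request plus compute time actually consumed."""
    compute = requests * avg_seconds * memory_gb * GB_SECOND_PRICE
    return requests * REQUEST_PRICE + compute


# A low-traffic service: three always-on workers versus
# one million requests a month on a usage model.
print(monthly_vm_cost(num_workers=3))
print(monthly_serverless_cost(requests=1_000_000))
```

The point of the sketch is the shape of the two formulas, not the numbers: in the first, idle time costs the same as busy time; in the second, zero traffic costs roughly zero.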
Yeah. Ladies and gentlemen, in the listening audience, you should probably rewind and listen to all that again, because that's among the, in my opinion, among the better accounts of what serverless ought to mean that I've heard. Thanks, Jay.
I just want to recap a couple things, like functions, and that you said it's not just functions. I think we have to spend some time talking about this, because that is where serverless came into the popular consciousness of the developer: the first such service was AWS Lambda, and all cloud providers have a similar functions-as-a-service offering now.
Then that was "serverless". There were conferences and talks about serverless that were really about serverless functions, which is a way of doing serverless computation. You have a little piece of program that you need to run somewhere. Like you said, we haven't figured out how to run software on actual clouds. Everybody is aware that there are computers doing this, but what are they? Is there some kind of container management system going on there? How do the containers get spun up, and cold starts and all that?
You'd be vaguely aware, if you thought about it for 30 seconds, that to build a serverless function platform, you'd have to solve those problems, but you don't know; you pay for a request and the runtime. They figured it out for computation: here's how we bill for computation, abstracting all details of the infrastructure away. That made sense for functions. I think a lot of us just got functions stuck in our heads and think that's what serverless means.
As we move other things to serverlessness, like I occasionally talk about serverless Kafka, you kind of have to back up a little bit. It's sort of a process of inference, where you're reasoning from particulars to general things, and you're trying to find, what is that general thing? What is the right abstraction that I should expose? It's not brokers. It's something else. I don't even want to answer what that is yet, because we've got a lot to talk about.
Yeah, I like the way you put that. Okay. Wait, go ahead.
Yeah, you're exactly right. Bringing up serverless Kafka is a good point too, because serverless is more than just compute. Taking Google for example, I know a lot of people use our AI APIs, our ML APIs. You make a simple REST request and send some data, maybe some text, to our AI; our APIs will run an ML model against it, spit out an output, and that's that. The only thing you pay for is that usage time. That's not applications. It's not compute ... Or, I mean, yes, technically that is compute, but it's not compute in the traditional sense, like functions, where I'm running a piece of code. I'm just sending data to an API, it's spitting out information, and I'm good to go.
Right. Serverless eventing, you might even begin by telling me what you mean by eventing and then talk about how you make it serverless.
Yeah. Eventing, everybody has their own little definition of what data driven means. I kind of feel like it's become a bit of a marketing term, where people say, I want a data driven organization. I start some of my speeches saying, you might have heard a manager say it, or you might have heard it at an all-hands or something. Does anybody actually know what it means to be data driven?
I literally get that phrase by the way.
Yeah. Go on.
Just something people are throwing around. I think data driven means event driven, because data in and of itself is kind of useless. If I'm storing tons of data about something but I'm not using it, I'm not pulling any kind of value out of it, then why am I storing it, unless at some point, I think, I can exchange megabytes for dollars or something? What you need is to start using that data.
Now, just the very nature of the world we live in is so instantaneous. Because of IoT, because of mobile applications, so many things are driven by real time data. I order food and I get an update on my app saying, such and such pizza place has got your order and they're cooking it. Your order is done, it's been picked up, it's on the way. It's four minutes away. It's three minutes away.
I can look and see if my lamp is on while I am three states away. If it is, I can turn it off. That happens instantaneously. It's not waiting for some cycle to come by every hour. These are all events that are happening in the real world that our applications are responding to.
I think of it kind of as a verb. You're doing something, you are doing something with the events, or the events are doing something, so that's why I go with the term eventing, to kind of turn it into a verb. We've all done it.
That's because [crosstalk 00:15:20]. You turned it into a verb. Right, that's it.
You take a class on that and I respect it but go on.
Exactly. No, I just like the term. Quite frankly, I also kind of borrowed it from Knative Eventing. I always liked it because I'm like, oh, yeah, events doing something, that makes sense.
Yeah, we've all been working with eventing forever; obviously, it's just becoming more popular. You work at Confluent, you know all about Kafka and ingesting real time events for social media and whatnot, or different ... I mean, so many use cases. I always tell this story from about a decade ago, maybe a little longer than that. I was at a bar with some friends. I guess my credit card was swiped by somebody.
About two or three days later, I get a call from the bank saying, "Hey, I noticed that you have some suspicious charges." I have to go back and look, because I've made other charges since then so I need to kind of parse which ones were the real ones and which ones are the fake ones. Nowadays, if that happens, I get a text within seconds saying, "Hey, we noticed some suspicious activity on your credit card."
All of that is eventing. All of that is on some kind of server. It's either on prem, it's on VMs, it's somewhere. That's all, in my mind, eventing.
Got it. Let me drill into that a little bit more, because a lot of those things, being broadly data driven, I could imagine doing with data infrastructure that did not put events at the center of things, right? Like, you could do that with state-based databases. I feel like a lot of the discussion about event driven architecture is new, even though, like you said, you've never written a program that didn't process events.
If you write a quine or something like that, those programs that print themselves, little self-reproducing programs ... I'll put a link in the show notes. There's some kind of artistic programming that does not process events, but anything that you do for a company, stuff comes in, a thing happens and you do stuff with it. We always process events, but I feel like the pivot to making events first class citizens, and not materialized representations of the state of entities, is a recent pivot.
When I talk to people, wrapping our minds around that transition seems like a difficult thing, and part of the process we're all going through. A paradigm shift is happening and we're trying to get through this thing. Do you think that's fair?
Yeah. I think when people are saying data driven organization, that's kind of the idea they're trying to put out there. It's the idea that events, ingesting events, doing something with events in near real time, if not real time, is becoming more front and center. Unless maybe you're a financial institution or something like that, where getting real time information was very important, most people didn't care.
Even if there was a minor lag to getting a notification or a post or something like that, most people would just deal with it. Now, the way our world's changed, and I mainly say this is largely due to IoT and us having a lot of things ... I mean, even my fan has a little chip in it that I can use an app to control. Everything is connected right now.
For a business to remain competitive nowadays, it is very important for them to at least have a story around how we are handling events in our application. I would say, that is becoming more front and center these days when people are building software.
Yeah. It comes as no surprise to you that I agree with that, being a guy who works for Confluent and does a podcast about Kafka. Cool. Tell me, with all that background, what is serverless eventing? I think you've laid it out, but give me ... there's probably products and services. Give me the whole view.
Yeah. When we talked about serverless, I used my long definition, but the net of it is we want to make life easier for developers. We don't want them to have to provision more than what's necessary for them to get the application working. Anybody who's written an application has had to connect to some message bus, which you should be using. I've had those arguments with customers before; not really arguments, just fun discussions, where we talk about whether they should just make direct calls between their applications or whether they should use some kind of message bus in between.
I'm like, you want to make sure you don't lose your messages or anything like that; let's get a message bus going. A lot of times, when you're writing those applications, you're having to do some imperative building there. You're having to say, okay, my application, send data to this Kafka broker or RabbitMQ or whatever tool you want to use, or whatever homegrown solution, which, if you're using one, more power to you.
You're making these direct connections, passing along certs, all that kind of stuff. That's fine. But when we start implementing microservices, when we start wanting to think serverless: microservices and serverless, by nature, are supposed to be decoupled. If you have that stuff hard coded to connect to a specific broker or a specific bus or a specific queue, it kind of defeats the purpose. It kind of makes it harder to scale.
What happens if you need to change an IP address for something? Or you move providers? How much of that code has to be revised? The idea is to declaratively bind the event sources to their event sinks. From a developer's perspective, the only thing they need to care about when they're writing their application, when they're writing their microservice, is either ingressing or egressing data.
A simple requests call if you're using Python, or something collecting POST data. That's all you have to do; any developer can do it. You don't need it to connect to a specific message bus. The serverless eventing tool, ideally, is the abstraction layer that's actually handling the connections and where things are supposed to go.
You can connect a serverless eventing tool to an existing system, let's say Kafka in this case. You can use that, and it will connect to Kafka. It will know what the right brokers are, how to authenticate to them, et cetera, et cetera. It's the point of ingestion for your application, and then it will produce things to the right topic. Then, it can also act as a subscriber, and as push events come through, or pull events for that matter, it can just go ahead and say, oh, there's something in this topic, let me go ahead and send it to this service.
The service will simply ingest the data as a regular REST call and take it from there. The idea behind serverless eventing, I guess, a quick recap, is simplifying how we bind event sources and their sinks, really making it more declarative, rather than developers having to focus specifically on, I need to connect to this broker, I need to connect to this queue, whatever the case might be.
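As a sketch of what "ingest the data as a regular REST call" can look like from the developer's side, here's a minimal Python handler. It assumes the eventing layer delivers each event as an HTTP POST with CloudEvents-style ce-* headers (which is how Knative Eventing delivers them); the event and source names are made up for the example.

```python
import json


def handle_event(headers: dict, body: bytes) -> dict:
    """The whole microservice, from the developer's point of view.

    No broker addresses, no certs, no client libraries: the eventing
    layer delivers each event as a plain HTTP POST, with CloudEvents
    attributes carried in ce-* headers.
    """
    event_type = headers.get("ce-type", "unknown")
    source = headers.get("ce-source", "unknown")
    payload = json.loads(body) if body else {}
    # Real business logic would go here; we just echo what we understood.
    return {"type": event_type, "source": source, "data": payload}


# Simulating one delivery from the eventing layer:
result = handle_event(
    {"ce-type": "order.created", "ce-source": "/storefront"},
    b'{"order_id": 42}',
)
print(result)
```

The point is what is absent: nothing in this function names a broker, a topic, or a queue, so swapping the substrate behind the eventing layer never touches this code.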
Got it. There is a framework to declare end-to-end connections. There's this event source and my intent is that it go to this sink. More like legacy messaging, or is it necessarily that?
It's kind of like that. Yeah, it's very similar to that. You're still able to use a lot of the real time technology; you're able to use message buses like, say, Kafka, or whatnot. Let's say, for the sake of example, I have a mobile app and I want to be able to push real time alerts. I'm a news organization; I want to push real time alerts to all of my subscribers.
My application will push it to the Kafka cluster, and then the mobile app will start pulling that data from the Kafka cluster for the millions of users that I have who have my app. From a developer ... Yeah, go, go ahead [crosstalk 00:24:36]. Sorry, you can go. You can go. You can go.
There is a little bit of delay in Zencastr here, and Jay and I are stepping on each other. Sorry about that. To clarify, one of the points of the framework is there's some messaging substrate, eventing substrate, and you darn sure don't want to be turning knobs on that. Like you've said, authentication and which broker to talk to, all the plumbing stuff, that's abstracted away.
Instead, you get a REST interface, because it's a cloud service and that's what you expect.
REST interface and they're declaring stuff in the middle.
Yup. REST or gRPC, I should say. Yeah, that's essentially what you get. It works. Yeah. Obviously, we're talking about abstraction. That message bus or that messaging system still exists, something still has to connect to it. From the developer perspective, they don't have to worry about it. That's your Kafka engineers' concern, the developer doesn't have to worry about it at all.
That's the benefit: the developer can just focus all of their energies on actually developing an application and figuring out how to egress or ingress data, and then the eventing layer takes care of all of the other stuff, making sure things go to the right place and whatnot. The benefit of that, too, is upgrading or scaling or making changes.
The developer doesn't have to go back to make code changes, because new topics are created or anything like that. We just make simple changes to the YAML of the eventing system.
Right. That's where those declarations are made. You've got a naming layer, a namespace, in that abstraction layer that allows you to keep underlying changes abstracted; the application is still talking to this named thing, and that name can remain constant.
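One concrete way that naming layer shows up, as a hedged example: Knative's SinkBinding injects the destination into the workload as a K_SINK environment variable, so producer code never names a broker or topic; when the declaration changes, the code doesn't. A minimal Python sketch (the URL here is a made-up stand-in):

```python
import os
import urllib.request


def build_emit_request(event_bytes: bytes) -> urllib.request.Request:
    """Build the egress call without naming any broker, topic, or queue.

    K_SINK is an environment variable injected by the eventing layer
    (Knative's SinkBinding does this), so if the topic, cluster, or even
    the messaging system behind the name changes, this code is
    untouched; only the binding's declaration changes.
    """
    sink = os.environ["K_SINK"]  # set by the platform, not the developer
    return urllib.request.Request(
        sink,
        data=event_bytes,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Stand-in value for what the platform would inject:
os.environ["K_SINK"] = "http://broker.example.internal/"
req = build_emit_request(b'{"event": "lamp.off"}')
print(req.full_url, req.get_method())
```

In a real service you would then hand the request to urllib.request.urlopen (or just use requests.post); the sketch stops short of the network call so the binding idea stays in focus.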
Question about sources and sinks: there's a subtle distinction that I bumped into between messaging and what I think of as event driven architectures. Sometimes you are producing a message and you know it is going to something and you want to know when it has been consumed. I'm producing it; I need to know when the consumer has consumed it, or have confidence that a particular consumer has consumed it. That's more classical messaging.
Then there's: I am producing events and they get remembered somewhere. In Kafka, you call it a topic, but we need not care about that; we can serverless that sort of thing out of existence. I'm producing events, and they go to some named collection, and somebody might consume them. The producer is, if you will, mentally decoupled from the responsibility of that consumer consuming it.
The scheme you've described is the former, where I produce it and somebody consumes it and I've got it on my mind that that particular consumer is going to get the message. Is there a way in serverless eventing simply to log things that happen and let consumers sprout out of that log as the application evolves?
Yes. Right now, in terms of serverless eventing, I think one of the best technologies out there that really brings this concept home is the Knative Eventing tool, which is open source. I encourage anybody to contribute to it if they want to, or use it. It's still pretty, I wouldn't say early stage, I know there are people who use it for enterprise, but there's some assembly required, like a lot of early-stage open source tools.
Basically, you can actually do the latter there, because it has concepts called brokers and channels, where the producer will simply send data somewhere and that's that, and it will just kind of rest in a channel until somebody requests it and picks it up.
At the end of the day, the producer doesn't really care if it gets there, because their job is to write to that channel, not necessarily to make sure it goes to a specific subscriber. That's the channel's job, to make sure it goes to the specific subscriber. Actually, that's one of the big reasons I always recommend message buses: what happens if one of the services gets a hiccup or something to that effect? How do you guarantee that the message got there? If the receiving service goes down, but the message first goes to, say, a Kafka topic, then when the service comes back up, it'll just consume it from the Kafka topic, rather than the message possibly being lost in the ether, or having a problem with message ordering because the sender has to try to resend it and the service doesn't come up for three minutes.
Or you just have a whole mess of things. Yeah, usually you are able to do both methods with serverless eventing, where you can do the straight message delivery to the sink, or have some kind of, whatever you want to call it, a topic, a channel, a broker. In Knative Eventing, it's brokers and channels. You're able to have that middle ground, something that brokers information between the senders and receivers.
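To make the "rests in a channel until somebody picks it up" behavior concrete, here's an in-memory toy in Python. It is not Knative or Kafka, just a sketch of why a durable middle layer means a receiving service's downtime is a resume, not a loss; all names are invented.

```python
class Channel:
    """Toy stand-in for a topic/channel: ordered, durable, replayable."""

    def __init__(self):
        self.log = []  # events rest here until subscribers pick them up

    def produce(self, event):
        self.log.append(event)  # the producer's job ends here


class Subscriber:
    """Tracks its own offset, so downtime means resuming, not losing."""

    def __init__(self, channel):
        self.channel = channel
        self.offset = 0  # position in the log we've consumed up to

    def consume(self):
        events = self.channel.log[self.offset:]
        self.offset = len(self.channel.log)
        return events


channel = Channel()
sub = Subscriber(channel)
channel.produce("payment.cleared")
first = sub.consume()
# The subscriber "goes down for three minutes"; producers keep producing,
# and nobody has to buffer, retry, or re-order anything.
channel.produce("order.shipped")
channel.produce("order.delivered")
# On restart, the subscriber picks up exactly where it left off, in order.
after_restart = sub.consume()
print(first, after_restart)
```

The producer never checks whether anyone consumed; it only appends. That is the decoupling the channel buys you.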
Right. I like that. I want to talk about application architecture in a little bit. When you begin to try to apply that vision of event driven applications, where you produce an event and of course somebody's going to consume it, of course you're thinking about the way the system works, and you've got guarantees, because the business has to make sure that the order gets shipped after payment has cleared, or whatever.
There's still this unnerving process you go through in designing and writing services where you have to not care. There's this little lecture I've given lots of times, and I feel like a dang life coach or something when I'm doing it, but it's like, you have to tell people, well, you've done your work and you've put it in the channel or the topic or whatever it's called. Don't over-function. Don't sit there and go check up on people and make sure they've done what you expect them to do. You have to let them do their work.
Really. It sounds goofy, but it's not goofy. It sounds like you're life coaching somebody, which is not a goofy thing. It really is a discipline, this kind of new discipline in application architecture, where you have to decouple yourself from the responsibility of the next service in the dance, in the choreographed set of operations.
Yes. Yup. Treat your services like adults is how I say it. In the real world, would you say, "Hey, Susie, tell John this," and then a few minutes later come up and say, "Susie, did you tell John this?" Or, even stranger, would you go up to John and say, "Hey, John, I need to tell you something," and then, "Hey, John, did you hear what I told you?"
You just want to be able to tell somebody, hey, do this and then it just gets done and you don't have to worry about it.
There's still accountability for whether we do what we do. That gets to observability, which is a whole different discipline: is the business outcome being achieved by the services that we've deployed? That's a monitoring question. It's not a go bug your friend and make sure she did what you just asked her to do; that's not how it goes.
A little bit of a different subject. I love serverless. I love things in the Cloud. Everything in this conversation is making me happy. The reality is, not everybody runs everything in the Cloud. There are a number of reasons for that. Ten years ago, it was this radical thing and you had to be super forward looking; you didn't really have to be all that conservative to say, no, I'm going to run on prem.
That's a very conservative position now, but there are regulatory reasons for it and other good reasons for infrastructure to be on prem. We must not, as sort of Cloud-first people, ever view on prem deployments as somehow second best. This raises the question of hybrid Cloud in general, and specifically hybrid Cloud with an on prem component.
How do you do this kind of thing? Because it sounds like stuff that's built into GCP and I get to know and love it and that's it.
Yeah. Actually, that's a great question. Taking a step back to where I was talking about Knative: Knative is an abstraction layer, or we can call it building blocks, that allows you to build a serverless platform on top of Kubernetes. At the end of the day, it is Kubernetes. On anything you could do with Kubernetes, any kind of Kubernetes installation, whether it's on prem or in the Cloud, you can install Knative.
If I'm running Kubernetes on bare metal in my data center, I can also run Knative and have that serverless Knative Eventing, that serverless eventing feature, in my data center. When we're talking about communicating, we're talking about hybrid Cloud. Hybrid Cloud means you have some workloads on prem and some workloads in the Cloud; maybe some workloads are less of a security risk or regulation risk.
Let's say you're a grocery store or something, and you want a curbside pickup app or something to that effect, or some kind of messaging app; that's less of a security or regulation risk, so running it on prem doesn't make much sense. However, your accounting and other kinds of stuff you might want to keep on prem rather than in the Cloud, for whatever reason.
We see a lot of use cases like that. You would have the serverless eventing tool or framework installed, ideally, on whatever platform you have. You would have it set up in the Cloud and set up on prem. That way, your developers have a similar experience; they're not having to do one thing one way and then another thing another way. The idea with hybrid Cloud is to make it easy, so you're not having to learn 30 different platforms because you're running in 30 different environments, or whatever the case might be.
Yeah, [crosstalk 00:35:47]. It's not viable if the APIs aren't the same.
Exactly. That's always been a big thing for me. I remember, in the early days, it's like, if I'm choosing this Cloud provider, I know I need to use this tool set, but if I'm on that Cloud provider, I need to use that tool set. Then, both of those are proprietary, so I need to find a completely different tool set to use in my data center. It's always fun. Then, licensing comes in, and that's the battle days, as I like to call them.
Yeah. You can absolutely do it by implementing something like Knative Eventing in your Cloud and on prem environments, or just your on prem environment if you want to. Then, in order to merge the environments, or get the best experience, you would also plug in a more, I don't want to say legacy, because when we're talking about Cloud native, legacy is kind of a dirty word.
I would say traditional. Let's say a more traditional message bus, or a more traditional system like Kafka. Connect the two, and now you're able to interact between your different environments, your external users, your internal users, and have kind of a similar platform across all of your setup, your developer environment.
Yeah. I appreciate that you've been speaking in sort of vendor-neutral terms. I want to ask you not to for a moment, because I want to understand stuff better. It's completely okay. You're an application modernization specialist at Google, working on GCP. We don't need to pretend you're not. Just tell me about the actual things. You're saying the on prem thing is Knative Eventing, and we agreed that, for a hybrid Cloud scenario, the APIs must be the same for the on prem and the Cloud thing, or it's death.
I would consider a system without that to be one without a hybrid Cloud option. So just tell me the name of the GCP service. Its APIs are basically hosted Knative Eventing, right?
Yeah. Taking a step back, I'm sure people have been following the hybrid news. Last year, when we were out and about, and you'd pick up a trade magazine or go to a conference or listen to a podcast, hybrid Cloud seemed to be the buzzword. Google created a product called Anthos. Actually, I should call it a platform, because it's not really a product in the sense of a binary that I can download and install.
It's more like a suite of tools that help you build this unified development platform on whatever Cloud you're using. It is all Kubernetes based. I once heard somebody say Kubernetes is the Linux of the Cloud, and I really agree with that. In the same vein of Linux having multiple distros, you have Ubuntu, you have Red Hat, you have SUSE, you have Arch, all of those different ones.
I would say, if Kubernetes is Linux, GKE, Google Kubernetes Engine, is like its own distribution of Kubernetes. What we've done is we've packaged GKE to where you can install it on multiple clouds. On top of that, we don't want to just give you Kubernetes and say, hey, you have Kubernetes and it runs everywhere and you have the single pane of glass that you can see all of your clusters across the globe, on any Cloud provider in one console.
We also give you development tools, because modernization is more than us just saying Kubernetes is magic, here you go. We want you to actually be able to use it. One of the tools for Knative that we have, managed Knative, is called Cloud Run. When we're talking about open APIs and standardization and whatnot, it is Knative. It is Knative API compliant. If you know how to use Knative, if you know how to use Knative objects, you can use Cloud Run, and vice versa.
If you've written stuff for Cloud Run, it will be backwards compatible with Knative. That's kind of nice, having that open platform. And with Anthos, you can install it on ... right now, we support on prem via VMware, and we also support AWS. You can install GKE on AWS and run Cloud Run, run Knative, and kind of get that whole serverless feel going.
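To make that portability concrete: a standard Knative Service manifest can be applied to any Knative-compliant cluster with kubectl, and the same manifest shape is what Cloud Run accepts, for example via `gcloud run services replace`. This is an illustrative sketch; the service name and image are hypothetical:

```yaml
# Hypothetical example: one Knative Service manifest that runs on any
# Knative-compliant platform, including Cloud Run.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-events        # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello-events:latest  # hypothetical image
          env:
            - name: TARGET
              value: "serverless eventing"
```

Because the API surface is the same, code and deployment artifacts written against Knative carry over to Cloud Run, and back, which is the compatibility Jay is describing.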
Very much, very much. Okay, that's great. That, to me, I think is what makes adopting a Cloud service feel safe for a developer or an architect who's making a big decision about what Cloud to go to or what big pieces of functionality to use when there is an open source API that is behind it. Because your investment is coding against the API, that's where the money happens. Developers spend time doing that and it's got its fingers all over your code.
That's where the transaction cost is switching away from that API. If your Cloud service is an implementation of that API, it doesn't feel like getting married. It feels like being friends and getting coffee occasionally, with the Cloud service. It's less of a commitment. In practice, I think, we really don't see people switching Cloud providers very often. We all want that sense that it would be okay if we did.
I talk this talk with Confluent Cloud: there are things in Confluent Cloud that are not in Apache Kafka. You better believe it. There's all kinds of cool things in there that open source Kafka by itself doesn't do. Can you tell by the APIs that you're using? Not really. For the most part, those are Kafka APIs. I guess ksqlDB would be a little bit of a different thing, but the actual Kafka stuff, it's an open source API, [crosstalk 00:41:49] implemented by open source software.
Exactly. Yeah, you could say the same thing with Anthos. You could say, yeah, I can just install Minikube or whatever. There's like a hundred thousand ways to install Kubernetes. If you're feeling bold, you can use Kelsey Hightower's Kubernetes the Hard Way tutorial and get that going. Yeah, you can do that. You can also deal with a lot of the headaches of managing masters, not having the greatest tools for management, having to build your own, or having to rely solely on a Slack channel for support, which is fine.
I've known people who've done that. More power to them. But a lot of enterprises will think: is there any value in me managing this? Is there any value in me creating a monitoring tool? Does that help my business at all? No, not really. Why not give that job to somebody else who will actually benefit from it, or who loves doing that? Then you can focus on the things that work for you. We've hardened our Kubernetes to where it's more secure than a lot of others. We use our container-optimized operating system, as well as a few other tools that we offer with GKE that you can only get with GKE.
Granted, at the end of the day, yes, it is Kubernetes, so you are running on Kubernetes. You're just getting a few extra goodies to make it enterprise ready.
Right. What do you see as you run the tape forward and people adopt tools like this, whatever the motivation is in your use case for wanting to start using a serverless eventing framework? You start doing it, you start building event driven applications. The choice to use that tool starts to make choices for you. The way you build your applications, your architecture, gets affected by that tool.
As you play that tape forward, how do you see this changing application architectures? That's a tremendously broad question but yeah ...
Yeah. That's a good question. I think what we have to do is we have to look at the two personas involved in development architecture, at least from a deployment perspective and that would be our DevOps or developers and our operators. Our developers are going to be completely agnostic in the sense that they're not going to care one bit. Maybe they'll have some opinion because they've done research or benchmarking.
They see that Kubernetes is so much better, or not Kubernetes, that one messaging system is better than another for their use case. Ultimately, they're not going to care. Their job is just to do simple egresses and ingresses, which is something they know how to do, and continue doing that. Now, their job has been simplified. From an operator perspective, when we're developing these architectures, it's going to be so much easier for them, because now they can focus on how to best optimize their streaming service, rather than focusing on making sure the developers' code is up to snuff or making sure the developers are connecting to the right brokers or whatever the case might be.
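A small sketch, not from the episode, of what "simple egresses and ingresses" can look like for the developer in a serverless eventing setup: wrapping a payload as a CloudEvent in HTTP binary content mode, where the context attributes ride as `ce-*` headers and the payload is the body. The event type, source URI, and payload here are all hypothetical:

```python
# Hypothetical sketch: build the headers and body for a binary-mode
# CloudEvents HTTP request. The developer emits events like this and
# lets the eventing platform handle routing and delivery.
import json
import uuid


def make_cloudevent_request(event_type: str, source: str, data: dict):
    """Return (headers, body) for a binary-mode CloudEvents HTTP request.

    In binary content mode, the CloudEvents context attributes travel
    as ce-* HTTP headers and the event payload is the request body.
    """
    headers = {
        "ce-specversion": "1.0",     # CloudEvents spec version
        "ce-id": str(uuid.uuid4()),  # unique event id
        "ce-type": event_type,       # e.g. "com.example.order.created"
        "ce-source": source,         # URI identifying the producer
        "Content-Type": "application/json",
    }
    body = json.dumps(data)
    return headers, body


headers, body = make_cloudevent_request(
    "com.example.order.created", "/orders-service", {"orderId": 42}
)
print(headers["ce-type"])  # com.example.order.created
print(json.loads(body))    # {'orderId': 42}
```

From the developer's side, this is the whole job: produce a well-formed event and hand it to a sink URL; the operators tune the messaging system underneath.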
The operators are now able to dedicate their time into improving the setup that they have. That's improving the messaging system and focus on that. I think, we're going to see better division of service, division of labor, which in turn will have, I think, better outcomes as time progresses, thanks to serverless eventing.
My guest today has been Jay Smith. Jay, thanks for being a part of Streaming Audio.
Thank you very much.
Hey, you know what you get for listening to the end? Some free Confluent Cloud. Use the promo code 60PDCAST, that's 60PDCAST, to get an additional $60 of free Confluent Cloud usage. Be sure to activate it by December 31st, 2021, and use it within 90 days after activation. Any unused promo value on the expiration date will be forfeited. There are a limited number of codes available, so don't miss out.
Anyway, as always, I hope this podcast was helpful to you. If you want to discuss it or ask a question, you can always reach out to me at tlberglund on Twitter, that's @tlberglund. Or you can leave a comment on a YouTube video or reach out in our community Slack. There's a Slack signup link in the show notes if you'd like to join. While you're at it, please subscribe to our YouTube channel and to this podcast wherever fine podcasts are sold.
If you subscribe through Apple podcast, be sure to leave us a review there. That helps other people discover us which we think is a good thing. Thanks for your support and we'll see you next time.
Jay Smith helps Google Cloud users modernize their applications with serverless eventing. This helps them focus on their code instead of managing infrastructure, as well as ultra-fast deployments and reduced server costs.
On today’s show, he discusses the definition of serverless, serverless eventing, data-driven vs. event-driven architecture, sources and sinks, and hybrid cloud with on-prem components. Finally, Jay shares how he sees application architecture changing in the future and where Apache Kafka® fits in.
If there's something you want to know about Apache Kafka, Confluent or event streaming, please send us an email with your question and we'll hope to answer it on the next episode of Ask Confluent.