June 10, 2021 | Episode 162

Confluent Platform 6.2 | What’s New in This Release + Updates


Tim Berglund:

Hi, I'm Tim Berglund with Confluent. We're not in any exotic outdoor location this time, but in my studio at the house of Berglund, to tell you all about Confluent Platform 6.2. To build really robust systems that harness data in motion, to deliver rich frontend customer experiences backed by scalable, performant real-time backend operations, you need specialized infrastructure different from what we've had in the past. That's Confluent Platform. Here, data is not static and at rest, but continuously moving and continuously being processed and analyzed in real time. That's the key point: the notion is that the data is moving, not just sitting there. For a modern software-defined business, a platform for data in motion is crucial to connecting every part of that business's large and non-trivial IT infrastructure together. It enables the business to process, react, and respond to its streams of event data in real time. And when I say the business, you and I both know that we mean software you're writing. What happens if we don't do this? Well, bad things. When mission-critical processes and applications fail, the business notices, and not in a good way. Mitigating the risk of business disruption is critical for you and the systems you build to be able to compete, maybe even innovate, and maybe win in the event-first digital world.

Tim Berglund:

With the release of Confluent Platform 6.2, we're introducing a feature called Health+. This provides the tools and visibility needed to ensure the health of your environment and minimize business disruption, which, as we discussed, we don't like. It gives us a few new things: intelligent alerts, cloud-based monitoring, and a streamlined support experience. Let me tell you about these things. First up: intelligent alerts. These allow you to get customizable, rule-based alerts to identify problems before they become critical issues that lead to things we don't like, namely downtime and business disruption. As of the time of this recording, we've managed over 5,000 clusters in Confluent Cloud, so we've had the ability to build up some interesting algorithms around alerting. Now that functionality is available to you, even if you're not a Confluent Cloud user.
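If you're wondering what turning this on looks like in practice, here's a minimal sketch of the broker settings for the Confluent Telemetry Reporter that feeds Health+, assuming the documented confluent.telemetry.* properties; the API key and secret placeholders are yours to fill in.

    # server.properties (sketch): enable the Confluent Telemetry Reporter for Health+
    confluent.telemetry.enabled=true
    # Confluent Cloud API credentials used only to ship metrics and metadata,
    # never topic data
    confluent.telemetry.api.key=<YOUR_API_KEY>
    confluent.telemetry.api.secret=<YOUR_API_SECRET>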

Tim Berglund:

Next, a cloud-based monitoring dashboard. Again, extracted from our development of Confluent Cloud, monitoring dashboards let you look at all the critical health metrics of your clusters in a single dashboard that we serve to you. You don't manage extra monitoring infrastructure of your own, because life is short. And it's not just a dashboard; it's Confluent-backed insights and recommendations, in the same vein as intelligent alerts. And of course you can integrate the monitoring data into your existing tools like Prometheus and Grafana and all that.
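On that last point, one common, generic way to get broker metrics into Prometheus (independent of Health+) is to run the Prometheus JMX exporter as a Java agent on each broker and scrape it. A minimal sketch, assuming the exporter listens on port 7071; the hostnames and job name are illustrative.

    # prometheus.yml (sketch)
    scrape_configs:
      - job_name: "kafka-brokers"          # illustrative job name
        static_configs:
          - targets:                       # brokers running the JMX exporter agent
              - "broker-1:7071"
              - "broker-2:7071"
              - "broker-3:7071"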

Tim Berglund:

Third is accelerated Confluent support. Because the Confluent support team has secure access to your cluster metadata (that's not data in topics, mind you, just metadata), you don't have to manually input information about your environment via support tickets in a moment when you're maybe possibly slightly, a little bit stressed and not at your best. This ensures a much smoother support experience and, hopefully, reduced time to issue resolution. Imagine if you could use a computer to write your Zendesk tickets for you. I mean, how great is that? Now, digging into intelligent alerts just a little bit more: there are 10 total alerts as of this release, including validation of disk usage, unused topics, offline partitions, and other things. And you know full well that product managers will not be able to contain themselves; they will add more alerts to this collection in the future, and that is a very good thing. So count on this list to grow. You can also customize the types of notifications you receive; right now, you're able to choose Slack, email, and webhook.

Tim Berglund:

Health+ also provides real-time and historical monitoring data, aggregated and made visible to you for easy troubleshooting and trend analysis. You can view broker throughput, topic throughput, under-replicated partitions, disk usage, network handler pool usage, request handler pool usage, and even more than that. Health+ surfaces all this stuff clearly and in an organized way, enabling you to dig into problems and analyze usage without having to run so much as a single local monitoring server. And remember the accelerated support feature I mentioned? Well, Health+ provides Confluent support engineers with critical context about your environment automatically. Again, why not use computers for these things? By enabling access to configuration and cluster metadata through Health+, we have a real-time view of the performance of your Confluent Platform deployment without having to access any of your clusters' data. And let me say that again, because it's important: the data that we can see only includes nonsensitive metadata, not payload or topic data. Sharing this context can help resolve Confluent Platform support tickets significantly faster, and it offloads many of the manual steps required to file a support ticket, which nobody really enjoys.

Tim Berglund:

But Health+ isn't all there is in CP 6.2. Let me tell you about the rest. We're introducing new enhancements to Cluster Linking, which is still in preview for Confluent Platform. There's a new failover command, which makes it even easier to do an HA/DR failover using Cluster Linking. If you create a disaster recovery cluster in another region and have synced data using Cluster Linking, you can call failover on a per-topic basis to stay up and running if your main cluster goes down. So, better disaster recovery and better cluster availability. In the ksqlDB department, we're enabling any serialization format to be used for keys in ksqlDB. Previously, you didn't have as many options; keys were typically just primitive types. If you want a big Avro object for your key, and a lot of people do, now you actually can. ksqlDB 0.17 is the version in Confluent Platform 6.2. It's also got support for query migrations (very important), lambda functions, ARRAY and STRUCT key types if you really like those complex and compound key types, and more. You wanna check the release notes on ksqlDB.io for full coverage of the release.
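To make the key-format point concrete, here's a hedged ksqlDB 0.17 sketch using an illustrative orders topic: a STRUCT key serialized as Avro via the KEY_FORMAT property, assuming Schema Registry is configured.

    -- Sketch: a non-primitive key serialized with Avro (illustrative names)
    CREATE STREAM orders (
      order_key STRUCT<region VARCHAR, id BIGINT> KEY,  -- structured key, not just a primitive
      amount DOUBLE
    ) WITH (
      KAFKA_TOPIC = 'orders',
      KEY_FORMAT = 'AVRO',     -- key serialization format, the new flexibility here
      VALUE_FORMAT = 'AVRO'
    );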

Tim Berglund:

Following the longstanding tradition of every Confluent Platform release, 6.2 is built on the most recent version of Kafka, in this case, 2.8. 2.8 does a lot, too much to cover here, but let me tell you about two things. Number one, it's the initial merge of KIP-500, which replaces ZooKeeper with a self-managed quorum. That's Kafka's own inter-broker quorum protocol. It's now possible to start a cluster without ZooKeeper and get through some basic producer and consumer use cases. It's a huge step for Kafka. Another one, KIP-700. This is an enhancement to an admin API: it adds a Describe Cluster API. The Kafka AdminClient has historically used the broker's metadata API to get information about the cluster. However, that API is primarily focused on supporting the consumer and producer clients, and they follow different patterns than the AdminClient. KIP-700 decouples the AdminClient from the metadata API by adding a new API to directly query brokers for information about the cluster. This change enables the addition of new admin features in the future without disruption to the producer and consumer, and it's also related, broadly, to the removal of ZooKeeper.
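For a sense of what that looks like from application code, here's a minimal Java sketch against the standard Admin API; the bootstrap address is a placeholder, and on a 2.8 broker the describeCluster() call is served by the new request type introduced in KIP-700.

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.DescribeClusterResult;

    public class DescribeClusterExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder bootstrap address; point this at your own cluster.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // Cluster-level info fetched through the admin path rather than
                // the producer/consumer metadata API (see KIP-700).
                DescribeClusterResult cluster = admin.describeCluster();
                System.out.println("Cluster ID: " + cluster.clusterId().get());
                System.out.println("Controller: " + cluster.controller().get());
                System.out.println("Brokers:    " + cluster.nodes().get());
            }
        }
    }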

Tim Berglund:

And there's so much more than that. We've got a release video as always, a blog post, and the Apache Kafka release notes, which are where you wanna go for full details. And that does it for Confluent Platform 6.2. Check it out today.

Based on Apache Kafka® 2.8, Confluent Platform 6.2 introduces Health+, which offers intelligent alerting, cloud-based monitoring tools, and accelerated support so that you can get notified of potential issues before they manifest as critical problems that lead to downtime and business disruption.

Health+ provides ongoing, real-time analysis of performance and cluster metadata for your Confluent Platform deployment, collecting only metadata so that you can continue managing your deployment, as you see fit, with complete control.

With cluster metadata continuously analyzed through an extensive library of expert-tested rules and algorithms, Health+ lets you quickly get insights into cluster performance and spot potential problems before they occur. To ensure complete visibility, organizations can customize the types of notifications they receive and choose to receive them via Slack, email, or webhook. Each notification you receive is aimed at avoiding larger downtime or data loss by helping identify smaller issues before they become bigger problems.

In today’s episode, Tim Berglund (Senior Director of Developer Experience, Confluent) highlights everything that’s new in Confluent Platform 6.2 and all the latest updates.

Continue Listening

Episode 163 | June 15, 2021 | 25 min

Boosting Security for Apache Kafka with Confluent Cloud Private Link ft. Dan LaMotte

Enabling private links on the cloud is increasingly important for security across networks and even the reliability of stream processing. Staff Software Engineer II Dan LaMotte and his team focus on making secure connections for customers to utilize Confluent Cloud. With the option of private links, you can now also build microservices that use new functionality that wasn’t available in the past. You no longer need to segment your workflow, thanks to completely secure connections between teams that are otherwise disconnected from one another.

Episode 164 | June 22, 2021 | 35 min

Chaos Engineering with Apache Kafka and Gremlin

The most secure clusters aren’t built on the hopes that they’ll never break. They are the clusters that are broken on purpose and with a specific goal. When organizations want to avoid systematic weaknesses, chaos engineering with Apache Kafka® is the route to go. Patrick Brennan (Principal Architect) and Tammy Butow (Principal SRE) from Gremlin discuss how they do their own chaos engineering to manage and resolve high-severity incidents across the company.

Episode 165 | June 29, 2021 | 27 min

Data-Driven Digitalization with Apache Kafka in the Food Industry at BAADER

Coming out of university, Patrick Neff (Data Scientist, BAADER) was used to “perfect” examples of datasets. However, he soon realized that in the real world, data is often either unavailable or unstructured. This compelled him to learn more about collecting data, analyzing it in a smart and automatic way, and exploring Apache Kafka as a core ecosystem while at BAADER, a global provider of food processing machines. After Patrick began working with Apache Kafka in 2019, he developed several microservices with Kafka Streams and used Kafka Connect for various data analytics projects. Focused on the food value chain, Patrick’s mission is to optimize processes specifically around transportation and processing.

