course: Confluent Cloud Networking

Connect to Confluent Cloud with Secure Public Endpoints

7 min

Justin Lee

Staff Solutions Engineer (Presenter)

Secure Public Endpoints


If you’re just getting started with Confluent Cloud, you’re probably going to start with a cluster that is accessed over a secure public endpoint.

In fact, it’s quite possible that this is the only network connectivity you’ll ever need—this is a secure production-grade solution that greatly simplifies your architecture.

At a very high level, a secure public endpoint lets you access your Confluent Cloud cluster from anywhere, in a secure fashion.

Security Features – Encryption in Transit


All communication from clients—whether that’s Kafka applications and services, Kafka Connect integrations, stream processing, or even just exploring Kafka with the UI or CLI—is encrypted, using industry-standard TLS encryption. This is the same encryption used when you access your bank account or make a credit card payment online—everything is secured in transit.

Security Features – AuthN/AuthZ

Additionally, everything in Confluent Cloud requires authentication: to talk to Kafka and access any data, you have to prove your identity using an API key and secret. We can also use your identity to validate that you should have access to a specific set of data.

So unless you have a specific infosec requirement that calls for more complex or advanced network connectivity, using a secure public endpoint may actually be both the best and easiest way to use Confluent Cloud.
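To make this concrete, here's a minimal sketch of the client-side configuration involved, written as a Java snippet. The bootstrap endpoint, API key, and secret are placeholders; your actual values come from your cluster settings and API keys in Confluent Cloud.

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class SecurePublicEndpointConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder public bootstrap endpoint for your cluster.
        props.put("bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092");
        // TLS encryption in transit, with SASL authentication over that TLS connection.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The API key and secret prove the client's identity; what that identity is
        // allowed to read or write is then governed by authorization in Confluent Cloud.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");

        // A successful describeCluster() call confirms the client could connect,
        // complete the TLS handshake, and authenticate.
        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println("Connected to cluster: " + admin.describeCluster().clusterId().get());
        }
    }
}
```

The same three security properties apply to producers, consumers, and Kafka Connect workers alike.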

Benefits of Secure Public Endpoints

With a secure public endpoint, we’re standing up a Kafka cluster that can be accessed from anywhere.

  • Kafka clients and services running on premises, on a workstation, or in the cloud can all access the Kafka cluster without any special network connectivity.
  • For example, if you want to access Kafka from an application running in AWS, you can do that regardless of where in AWS it’s running, as long as the application can reach the Confluent Cloud broker’s public endpoint on the internet.
    • Caveat: Because Kafka uses a TCP wire protocol, not HTTP or HTTPS, access through an HTTP(S) proxy is not supported. If you’re using a secure public endpoint Confluent Cloud cluster, your Kafka clients have to be able to access the internet without a proxy (see the reachability sketch after this list).
  • At the same time, if you’re running services in your datacenter, you can also access Kafka, as long as you can reach the internet (again, without a proxy).
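Since Kafka speaks its own binary protocol over TCP, a quick way to verify that a host can reach the cluster without a proxy is to attempt a direct TLS handshake against the public bootstrap endpoint. This is only a sketch; the hostname below is a placeholder.

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder bootstrap hostname; substitute your cluster's endpoint.
        String host = "pkc-xxxxx.us-east-1.aws.confluent.cloud";
        int port = 9092;

        // Kafka clients need a direct TCP/TLS path to this endpoint; an HTTP(S)
        // proxy in the middle will not work. If this handshake fails, a Kafka
        // client on the same host will fail to connect as well.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.startHandshake();
            System.out.println("Direct TLS connection succeeded ("
                + socket.getSession().getProtocol() + ")");
        }
    }
}
```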

Architecture: Datacenter and/or Cloud

Let’s expand on this a little bit more. With a secure public endpoint, your Confluent Cloud cluster is exposed securely on a set of public endpoints. You can access it from anywhere you need, whether from a cloud provider network or from your on-premises network.

Architecture: Fully Managed Connectors

Secure public endpoints allow clients to connect to Confluent Cloud. But there are also situations in which Confluent Cloud may need to connect back to your network. Fully managed connectors are one example.

When you’re using a secure public endpoint, managed connectors run in the Confluent Cloud network. Because managed connectors access your data sources and data sinks, those sources and sinks need to be accessible over the internet, just as Confluent Cloud is accessible via its secure public endpoints. This can be done in one of two ways:

  • If you’re using a SaaS provider data source or data sink, your provider may support exposing it directly on the internet. Common examples are AWS S3 and Snowflake.
  • If you’re using self-managed data sources or data sinks, you’ll need to work with your network team to expose them to the internet in a secure way. For example, you may have a TLS-secured database that you expose on a public DNS name so that it can be reached from the Confluent Cloud network.

Note that this applies to all Confluent Cloud network architectures. If your data sources and sinks can be exposed on internet endpoints, you can use fully managed connectors, regardless of whether you’re using public endpoints or one of the other options we discuss later in this course, such as a peering configuration or a Private Link network.

Architecture: Self-Managed Connectors


If you can’t expose your data sources and sinks to the internet, another option is self-managed connectors. In this architecture, you’ll run a connector in your on-premises network: a data integration that lives in your environment.

In the case of a self-managed source connector, you’ll run a data integration in your environment that reads from your data source and pushes (or produces) it up to Confluent Cloud using the Confluent Cloud secure public endpoint.

In the case of a self-managed sink connector, you’ll again run a data integration in your environment, this time one that consumes (or reads) from Confluent Cloud using the Confluent Cloud secure public endpoint and writes to your data sink.
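As an illustration of that source-side flow, here's a sketch of the same idea as a plain Java producer (not an actual Kafka Connect connector): it takes a record from your environment and produces it to Confluent Cloud over the public endpoint. The topic name and the `ccloud.properties` file holding the security settings from the earlier sketch are assumptions.

```java
import java.io.FileReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OnPremSourceIntegration {
    public static void main(String[] args) throws Exception {
        // ccloud.properties is assumed to contain the same bootstrap.servers,
        // security.protocol, sasl.mechanism, and sasl.jaas.config values shown earlier.
        Properties props = new Properties();
        try (FileReader reader = new FileReader("ccloud.properties")) {
            props.load(reader);
        }
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // In a real integration this record would be read from your on-prem
            // data source; here it is a hard-coded placeholder.
            producer.send(new ProducerRecord<>("inventory-events", "item-1", "{\"qty\": 7}")).get();
        }
    }
}
```

A self-managed sink integration is the mirror image: a consumer that reads from Confluent Cloud over the same public endpoint and writes to your data sink.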

Because these self-managed connectors run in your environment, this approach works regardless of your network architecture: you can do this with secure public endpoints, peered networks, or even Private Link.

Why Not Secure Public Endpoints?

Here are a few reasons why you wouldn’t be able to use secure public endpoints:

  • Infosec/security requirements.
    • In certain environments, services don’t have direct access to the internet. As a reminder, Kafka doesn’t use HTTP, and doesn’t work through an HTTP proxy. So if you aren’t able to access the internet directly, you may end up needing a private endpoint. This often comes from an infosec requirement—certain environments require private network connectivity for everything, and we support that in Confluent Cloud. More on this in the next few modules.
  • Another situation we occasionally encounter is when customers want to run fully managed connectors. These connectors run in the Confluent network, and in a secure public endpoint environment they can only access endpoints on the public internet. If you’re trying to read from or write to a data source or data sink that’s only accessible from within your network, you may want to consider one of the peering options.
  • Note that as an alternative, if you’re trying to read from or write to a data source or data sink that’s only accessible from within your network, you can choose to run a separate Kafka Connect cluster in your datacenter (or cloud account) and configure it to read data from or write data to your Confluent Cloud cluster. Remember that because we’re using secure public endpoints, your Confluent Cloud cluster can be accessed from anywhere that has access to the internet. With this solution you have two clusters: a local Kafka Connect cluster that runs the connectors, and the Kafka cluster running in Confluent Cloud that it connects to.

