Course: Confluent Cloud Networking

Hands On: Configuring a Cluster with Public Endpoints

11 min

Dennis Wittekind

Customer Success Technical Architect (Presenter)

Configuring a Cluster with Public Endpoints

  1. Sign up for Confluent Cloud if you have not already done so and respond to the verification email to confirm your account.

  2. Log into Confluent Cloud.

confluent-cloud-sign-in

  3. Create an environment called “Cloud-Networking.”

confluent-cloud-environment

new-confluent-cloud-networking-environment

cloud-networking-environment

  4. Click Create a cluster on my own, select a Basic cluster, and click Begin configuration.

confluent-cloud-networking-create-cluster

  5. Select AWS as the cloud provider, choose a region, leave availability set to Single zone, and click Continue.

confluent-cloud-networking-create-cluster

  6. Give the cluster the name “Public Endpoint Cluster” and click Launch cluster.

confluent-cloud-networking-create-launch-cluster

confluent-cloud-networking-you-are-all-done-creating-your-cluster

  7. Review the cluster networking. Click Let me explore, then go to Cluster overview -> Networking. (An optional way to check that the public endpoint is reachable is sketched below.)

confluent-cloud-networking-cluster-networking
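If you want to confirm that the cluster's public endpoint is reachable over the internet from your workstation, a minimal sketch is shown below. It assumes a placeholder bootstrap server; substitute the bootstrap endpoint shown in your cluster settings. This step is optional and not part of the guided exercise.

```
# Optional reachability check against the public bootstrap endpoint (TLS handshake only).
# Replace the host below with your cluster's bootstrap server; the pkc-... value is a placeholder.
openssl s_client -connect pkc-xxxxx.us-east-2.aws.confluent.cloud:9092 </dev/null
```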

Test connecting a client to a public cluster.

  1. First, get some data into a topic. We can use the Datagen Source connector for this purpose. Click Data integration, then Connectors.

confluent-cloud-networking-connectors

  2. Search for the Datagen Source connector and select it.

confluent-cloud-networking-datagen-source-connector

confluent-cloud-networking-add-datagen-source-connector

  3. Create a topic by clicking Add a new topic, name the topic clickstream, and then click Create with defaults. (A hypothetical CLI equivalent is shown after the screenshot below.)

confluent-cloud-networking-add-new-topic
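The UI is all you need here, but for reference, the same topic could be created with the Confluent CLI that you install later in this exercise. A minimal sketch, relying on the cluster's default topic settings:

```
# Hypothetical CLI equivalent of "Create with defaults"
# (requires the Confluent CLI setup performed later in this exercise).
confluent kafka topic create clickstream
```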

  4. Select the newly created topic and click Continue.

confluent-cloud-networking-add-datagen-source-connector

  5. For the purposes of this demo, enable Global access for the connector. In production, it is recommended to use service accounts and grant granular access. Click Generate API key and Download, and be sure to save the API key and secret pair somewhere safe, as you will need them later.

confluent-cloud-networking-add-datagen-source-connector-global-access

  6. Once you have saved the credentials, click Continue. Choose the Clickstream template, select JSON as the output format, and click Continue.

confluent-cloud-networking-clickstream-json

  7. Choose one task for the connector and click Continue.

confluent-cloud-networking-datagen-source-connector-topic-summary

  8. Leave the name as the default and click Launch.

confluent-cloud-networking-connectors-page

Set up a cloud environment to test connectivity.

  1. Log in to the AWS Management Console, go to the VPC dashboard, and click Create VPC. Select VPC and more, give the VPC a name (for example, networking-course-vpc), verify the CIDR block, choose three availability zones (later exercises use a multi-availability-zone setup), and click Create VPC. (A rough AWS CLI sketch follows the screenshots below.)

confluent-cloud-networking-aws-console

confluent-cloud-networking-create-vpc
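If you prefer the command line, a rough AWS CLI sketch of creating just the VPC is shown below. Note that the console's VPC and more wizard also creates the subnets, route tables, and internet gateway for you, so a pure CLI path would need additional commands. The CIDR block and name tag are assumptions for illustration.

```
# Create only the VPC itself (the console wizard does more than this).
# CIDR block and Name tag are placeholder values.
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=networking-course-vpc}]'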

Launch an EC2 instance in the VPC to simulate client traffic.

  1. Name the instance and select the Ubuntu AMI.

confluent-cloud-networking-vpc-instance

  2. Select a key pair. You may first need to create one if you haven’t done so previously.

confluent-cloud-networking-select-a-key-pair

  3. Under Network settings, select the networking-course-vpc you just created, choose a public subnet, and configure a security group that allows SSH access from your local workstation. (A hypothetical AWS CLI equivalent of the SSH rule follows the screenshots below.)

confluent-cloud-networking-configure-instance-networking

confluent-cloud-networking-add-storage

confluent-cloud-networking-configure-security-group

confluent-cloud-networking-review-instance-launch
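The same SSH rule can also be expressed with the AWS CLI. A minimal sketch, assuming a placeholder security group ID and using checkip.amazonaws.com to discover your workstation's public IP:

```
# Allow SSH (port 22) only from your current public IP address.
# The security group ID below is a placeholder; use the one created for your instance.
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr "${MY_IP}/32"
```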

  4. After completing the EC2 instance configuration, click Launch instance.

confluent-cloud-networking-launch-status

  5. Associate an Elastic IP address with your EC2 instance so you can SSH in and configure it from your local workstation (example command below).

confluent-cloud-networking-elastic-ip-address
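With the Elastic IP associated, you can connect from your workstation using the key pair selected at launch and the instance's public DNS name or IP. A minimal example, with placeholder key file and address (the Ubuntu AMI's default user is ubuntu):

```
# SSH to the instance; replace the key path and address with your own values.
ssh -i ~/.ssh/networking-course-key.pem ubuntu@ec2-203-0-113-10.us-east-2.compute.amazonaws.com
```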

Download the Confluent CLI to consume the records.

  1. Click CLI and tools, then follow the instructions to install the CLI on your EC2 instance. (The general form of the install command is shown below.)

confluent-cloud-networking-cli-and-tools
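The CLI and tools page shows the exact install command for the current CLI version; copy it from the UI. At the time of writing it takes roughly this form:

```
# Install the Confluent CLI (verify the exact command against the "CLI and tools" page).
curl -sL --http1.1 https://cnfl.io/cli | sh -s -- latest

# The script installs into ./bin by default. Either add it to your PATH for this shell,
# or log out and back in if your shell profile already picks it up, then verify:
export PATH="$PWD/bin:$PATH"
confluent version
```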

  2. Log in with the Confluent CLI, set the environment and cluster as defaults, and store the API key for use when consuming (commands sketched below).

confluent-cloud-networking-cli-prompt-1
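The sequence of commands looks roughly like the following. The environment ID, cluster ID, and API key values are placeholders to replace with your own, and flag names can vary slightly between CLI versions:

```
# Log in with the same credentials you use for the Confluent Cloud UI.
confluent login

# Set the Cloud-Networking environment as the default (copy the real ID from the list output).
confluent environment list
confluent environment use env-xxxxx

# Set the Public Endpoint Cluster as the default cluster.
confluent kafka cluster list
confluent kafka cluster use lkc-xxxxx

# Store the API key and secret generated for the Datagen connector,
# and make that key the default for this cluster.
confluent api-key store <API_KEY> <API_SECRET> --resource lkc-xxxxx
confluent api-key use <API_KEY> --resource lkc-xxxxx
```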

  3. Consume the clickstream data from your public endpoint cluster (see the example command below).

confluent-cloud-networking-cli-prompt-2
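With the defaults set in the previous step, a single command starts consuming; the --from-beginning flag replays the topic from the earliest offset:

```
# Consume records from the clickstream topic over the public endpoint.
confluent kafka topic consume clickstream --from-beginning
```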

Clean up.

  1. Delete the Datagen connector by going to Data integration -> Connectors, selecting the connector, and clicking Delete. Confirm the deletion by entering the name of the connector.

confluent-cloud-networking-add-connectors-page

confluent-cloud-networking-data-source-connector-0

confluent-cloud-networking-connector-deletion

  2. Delete the API key by clicking Data integration -> API keys. Select the API key you created, click Delete API key, and confirm the deletion.

confluent-cloud-networking-api-keys

confluent-cloud-networking-api-key-window

confluent-cloud-networking-confirm-api-key-deletion

  3. Delete the cluster. Click Cluster overview -> Cluster settings, then Delete cluster. Enter the cluster name to confirm the deletion.

confluent-cloud-networking-cluster-settings

confluent-cloud-networking-confirm-deletion-page
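For reference, part of this cleanup can also be done from the Confluent CLI installed earlier. A sketch with placeholder identifiers; the connector itself is easiest to delete from the UI as described above:

```
# Delete the API key and the cluster from the CLI (values are placeholders).
confluent api-key delete <API_KEY>
confluent kafka cluster delete lkc-xxxxx
```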

Use the promo code NETWORKING101 to get $25 of free Confluent Cloud usage


Hands On: Configuring a Cluster with Public Endpoints

Welcome to the first exercise of the Confluent Cloud Networking Course. In this exercise, we will create a public endpoint Confluent Cloud cluster along with a Datagen connector to populate the cluster with data. Next, we'll configure an AWS VPC and EC2 instance to consume the data from our cluster over the public internet.

Let's begin by signing in to Confluent Cloud. Once we've signed in, we can click on View environments, and then Add cloud environment. We'll give the environment a name of Cloud-Networking, and click Create. Next, let's go ahead and create a cluster by clicking Create cluster on my own. We'll select a Basic cluster for this exercise, and provision it inside of AWS in the Ohio region. We'll hit Continue, and give the cluster a name. In this case, we'll name it Public Endpoint Cluster, and click Launch.

Now that the cluster is provisioned, we'll create a Datagen Source connector to populate our cluster with some data. We'll click on Data integration, and Connectors, and select the Datagen Source connector. We'll add a new topic, and give it a name of clickstream. And we'll create it with the default settings. Now, we can select the topic, and hit Continue. The next step is to generate some credentials for the connector. We will give this connector Global access, as we will be using these credentials to also consume from the cluster. In production situations, you would want to use granular access. Now that we've created the credentials, we'll go ahead and select a template and the output format. We'll select one task, and hit Continue, and finally, give the connector a name.

Once the connector is finished provisioning, we'll go ahead and provision our AWS resources. So we'll switch over to the AWS Management Console, and we'll look for the VPC dashboard, and we'll click on Create VPC. We'll make sure that we've selected VPC and more, give our VPC a name, and validate the CIDR block. We'll provision three availability zones for this VPC, as we'll be doing some multi-availability zone setup in one of the later exercises, and we'll click Create VPC. Once the VPC has finished provisioning, we can click View to view some of the details, including the VPC ID, the CIDR block, and some of the route table information.

We'll now want to go ahead and provision an EC2 instance inside of our VPC. So we'll go to the EC2 dashboard, and click Launch instance. We'll go ahead and give the instance a name. In this case, we'll call it the networking-course-ec2-instance, and we'll select the Ubuntu AMI. We can leave the instance type as t2.micro, and we can select a key pair that we would like to use to SSH to the instance. Under Network settings, we'll want to edit and select the VPC that we just created, our networking-course-vpc, and make sure that we are provisioning the EC2 instance inside of a public subnet. We'll edit the name of the security group for the instance, call it the networking-course-ec2-security-group, and we'll make sure that we have SSH access enabled from our local machine's IP address. Next, we can click Launch instance, and the instance should successfully launch.

Once the instance state is running, we can go ahead and configure an Elastic IP. We can do this by clicking on Elastic IPs, Allocate Elastic IP address. We'll give it a name. In this case, we'll give it the networking-course-elastic-ip. And click Allocate. And now, we'll associate this IP address with our instance that we just provisioned. So we'll go to Actions, Associate Elastic IP address, and then type in the instance name here, networking-course-ec2-instance, and select the private IP of it. Optionally, you can allow this Elastic IP to be re-associated later, and then click Associate.

Now that we've configured the Elastic IP, we have a public DNS name that we can use to access our EC2 instance. So we'll take that public DNS, and switch over to our terminal, and we'll go ahead and SSH into the instance, using the key pair that we selected during the provisioning process, and the user ubuntu@ our public IP address. And we'll get logged in. The next step is to install the CLI. So we'll go back into the Confluent Cloud UI, and select CLI and tools. We'll copy the install script from the UI, and paste it into our terminal. Next, we'll log out and log back in to our EC2 instance, so that we have the Confluent CLI in our path.

Now that we've logged back into our EC2 instance, we can just issue a confluent login command, and we'll enter our Confluent Cloud credentials, including our email address and our password. These are the same credentials you used to log in to the Confluent Cloud UI. Next, we'll go ahead and set our environment. So we'll run confluent environment list to list out the current environments, and then we'll use the confluent environment use command to set the Cloud-Networking environment as the default. Next, we'll identify the Kafka cluster that we just created and set it as the default. So we'll run confluent kafka cluster list, get the logical cluster ID, and run confluent kafka cluster use, specifying that ID. Now that we've set our defaults, we'll go ahead and import the credentials that we generated earlier, the API key and secret pair for the Datagen connector, as we'll be using these credentials to consume from the topic. We will import them using the confluent api-key store command, being sure to specify the resource as the Kafka cluster ID, and we'll supply the API key and secret. Next, we want to set this key as the default. So we'll use the confluent api-key use command, again specifying the resource as the cluster ID.

And with that, we're ready to start consuming from our topic. So we'll issue a confluent kafka topic consume command, and we can see that our records are being consumed. These records are being consumed over internet routing, as our Confluent Cloud cluster has a public endpoint. That means the client is reaching out of its VPC through an internet gateway to Confluent Cloud.

As a final step, let's clean up the resources that we don't need for the remaining exercises in this course. First, we'll delete the Datagen connector. So we'll go back to the Confluent Cloud UI, click on Connectors, and then delete the Datagen connector. We'll confirm the deletion by entering its name. Next, we'll go ahead and delete the credentials. So we'll go to API keys, select the API key, and click Delete, again confirming the deletion. And finally, we'll delete the cluster itself by clicking Cluster overview, Cluster settings, and then Delete cluster, again confirming the deletion by entering the name. And we're done. You've successfully completed the first exercise of this course. In the next exercise, we'll be exploring VPC-peered clusters. See you there.