
Apache Kafka is one of the fastest-growing technologies of the last 10 years. AWS, always vigilant for new tech to incorporate into its offering, launched its own managed Kafka service in February 2019: Amazon MSK.
MSK follows the RDS model: customers choose how much hardware to provision (number of nodes, CPU, memory, etc.) and AWS manages the service for them. Since Kafka is a complex piece of software that is not easy to operate, having AWS deal with that for you is quite appealing, especially at the beginning of your journey with it.
In this article, we are going to look at how to provision your first MSK cluster, the different options you will encounter (and what they mean), and how to run a quick-and-dirty performance test to understand how much the most humble cluster can actually process.
Creating the smallest cluster possible
The process starts with logging into the AWS Console, selecting/searching for MSK and clicking on “Create Cluster”. That leads you to a typically dry first screen with lots of options to select. Don’t worry, we will go through what they mean one by one.
Step 1 – Software version

Firstly, we are asked for a name for the new cluster. Choose your own adventure here.
Anybody familiar with AWS will recognize the option for VPC. The easiest (and least safe) option is to choose your default VPC, which leaves everything open to everybody. After all, we are just testing here, right?
Finally, a more important choice: the Apache Kafka version. Since AWS MSK launched, they have consistently supported only x.y.1 versions, meaning 1.1.1, 2.1.1, 2.2.1, etc. Personally, I try to stay away from x.y.0 versions, especially for the less mature components like Kafka Connect or Kafka Streams. Beyond that rule, choose the newest version available to stay away from annoying bugs that have already been fixed.
Step 2 – Network options

MSK offers the option to deploy your Kafka brokers across as many as 3 availability zones, which is also the recommended setup for high availability.
Obviously, the more availability zones, the more brokers will be provisioned (and the more expensive your cluster will be). For simplicity, let’s go with “3” and assign the default AZs and subnets that exist in your default VPC.
Step 3 – Broker configuration

Every Kafka broker requires configuration for a number of properties. Apache Kafka comes with defaults for pretty much all of them; however, AWS MSK overrides some of them with its own defaults. In this section, it is possible to choose your own custom configuration for Apache Kafka, assuming you have created one. In a future post, we will see how to do that in detail. For now, let’s run with the defaults.
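For the curious, here is a minimal sketch of what creating such a custom configuration could look like with the AWS CLI (the configuration name and property values below are just examples, not recommendations):

# Write a couple of broker properties to a file...
cat > custom.properties <<'EOF'
auto.create.topics.enable=false
log.retention.hours=72
EOF
# ...and register them as an MSK configuration.
aws kafka create-configuration \
  --name "my-custom-config" \
  --kafka-versions "2.3.1" \
  --server-properties fileb://custom.properties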
Step 4 – Hardware and tags

Things are getting interesting now. You have to choose the hardware family/size of the EC2 instances that will power your Kafka cluster, plus how many of them to run per AZ (remember, we have chosen 3 AZs). For this example, let’s go with 1 broker per AZ.
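As an aside, everything we are clicking through here can also be scripted. A rough sketch of the equivalent AWS CLI call (the cluster name and subnet IDs are placeholders; use the three default subnets selected in Step 2):

aws kafka create-cluster \
  --cluster-name "testing-cluster" \
  --kafka-version "2.3.1" \
  --number-of-broker-nodes 3 \
  --broker-node-group-info '{
      "InstanceType": "kafka.m5.large",
      "ClientSubnets": ["subnet-aaaaaaaa", "subnet-bbbbbbbb", "subnet-cccccccc"],
      "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 1}}
    }'

Note that --number-of-broker-nodes is the total across all AZs, not per AZ.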
Time to look at MSK pricing. For this example, I’m going to choose the cheapest options for both instance type and storage. That would cost me, on a monthly basis (eu-west-1 region, 30-day month):
- kafka.m5.large: 168.48$ / month
- Storage: 0.11$ per GB-month (let’s provision the minimum, 1 GB per broker)
- Total: (168.48 * 3) + (0.11 * 3) = 505.77$ / month
For reference, an m5.large EC2 instance costs 77.04$/month. AWS is charging you approx. 2x for managing your Kafka cluster.
UPDATE: I got good feedback on this point. When considering all the costs involved (EC2 instances for the Zookeeper nodes, EC2 instances for the broker nodes, replication traffic, etc.), the overall cost of MSK is almost the same as running the cluster yourself (assuming your DevOps team works for free… which they don’t).
AWS has published a Pricing Calculator to size your MSK cluster correctly for your expected traffic; it also compares its cost with a self-managed option. Spoiler alert: you shouldn’t self-manage unless you really know what you’re doing (ample experience with both AWS and Kafka), and even then it is unclear to me why you would do that to yourself 🙂
WARNING: remember to delete your cluster once you are done with the tutorial or you will regret having followed it!!
Step 5 – Security options

In this section you choose a bunch of security-related options:
- Do you want to encrypt the communication between brokers? Yeah, why not!
- Do you want to force your clients to use SSL/TLS? For testing, allowing both TLS and plaintext is probably the best option. For production, you might want to restrict it to TLS (see the client config sketch after this list).
- Should I encrypt my data at rest? Definitely yes.
- Should I use TLS to authenticate clients? You probably want some form of authentication for production environments, although it depends on your security requirements. For testing your first cluster, leave it unticked.
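If you do restrict clients to TLS later on, the Kafka command-line tools accept a client config file. A small sketch, assuming the default JVM truststore (which already trusts the certificates MSK brokers present):

cat > client-tls.properties <<'EOF'
security.protocol=SSL
EOF
./kafka-topics.sh --bootstrap-server [first-broker TLS url] \
  --command-config client-tls.properties --list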
We are almost there… one more step!
Step 6 – Monitoring

You definitely want to monitor your cluster, even if this is a “managed” service. At the end of the day, your clients might have an impact on your cluster (load, resource consumption, etc.) that AWS will definitely not monitor or alert on.
You have two choices to make here:
- What level of monitoring do you need? There are three options: basic, cluster level or topic level. Basic is enough to get started, but cluster- and topic-level metrics can save your day if the cluster starts acting weird, for instance when one of your topics turns out to be really hot (lots of writes and/or reads).
- Where do you want to send your metrics? For a test cluster, CloudWatch can be good enough. For a production cluster, consider Prometheus, especially if you are already using it.
Step 7 – Have a coffee (or tea)

Just fly past the “Advanced Settings” section and click “Create Cluster” and… wait, a lot. Like 15 minutes… or more.
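If you would rather poll than stare at the console, the AWS CLI can report the cluster state (assuming the CLI is installed and configured):

aws kafka list-clusters --query 'ClusterInfoList[].[ClusterName,State]' --output table

The state will move from CREATING to ACTIVE when the cluster is ready.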
Step 8 – Provision the client
You are going to need a client that can connect to your newly created cluster, just to play a bit with it. Let’s provision a small EC2 instance, install the Kafka command-line tools and give it a spin. I won’t go into too much detail here; I assume you already know how to do this with EC2:
- Navigate to EC2.
- Click on “Launch Instance” button.
- Select “Amazon Linux 2 AMI (HVM), SSD Volume Type”.
- Select “t2.micro” from the free tier.
- Keep all defaults for the rest of the options.
- Make sure that you have the key needed to SSH into the instance. Otherwise, create a new one.
Click on “View Instances” to go back to the EC2 Instances section. You should see your instance there. Select it and copy its public IP.

Let’s SSH into this instance (don’t bother trying to connect to the IP in this example; by the time I publish this post, I will have already deleted it :)). Make sure you have the key located and do:
ssh -i [path-to-your-key] ec2-user@[ec2-public-ip]
If ssh complains about the key permissions being too open, just do a chmod 600 [key-path] to make sure they are restricted enough to make ssh happy.
Step 9 – Installing Kafka command-line tools
We are going to need the command-line tools to connect to our cluster. Luckily, you can easily curl all versions of Kafka from the official download page.
curl https://downloads.apache.org/kafka/2.3.1/kafka_2.12-2.3.1.tgz -o kafka.tgz
tar -xvzf kafka.tgz
Once the file is decompressed, you have a new folder like kafka_2.12-2.3.1. Navigate to the bin subfolder to find all the command-line tools there.
However, if we try to run any of the tools here, they will all fail because we don’t have Java installed on the machine. Let’s get that too:
sudo yum install java
You will be prompted with a summary of what is going to be installed. Accept and wait.
Step 10 – Connecting to your cluster
Once the installation is finished, let’s try to connect to our cluster. Head back to MSK main page, choose your cluster and click on the “View client information” button on the top-right side of the screen. A pop-up window opens with the details to connect to your cluster (TLS and/or plaintext) like the one in the picture below.

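If you prefer the command line, the same connection details can be fetched with the AWS CLI (replace the ARN with your cluster’s):

aws kafka get-bootstrap-brokers --cluster-arn [your-cluster-arn]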
Let’s go back to the EC2 instance and try to list topics with the following command:
./kafka-topics.sh --bootstrap-server [first-broker plaintext url] --list
We launch the command, we wait, we wait a little bit more, even more… and eventually we get a timeout error.
Step 11 – Opening the Kafka port
The EC2 instance is running in its own security group, created when the instance was launched. This group allows SSH traffic to the instances that belong to it, which is why we can connect from our computers to the instance.
The MSK cluster, on the other hand, is running in the VPC default security group. This group allows incoming traffic to any port when it originates in the group itself. However, it rejects the traffic coming from the security group where the EC2 is running.

The good news is that there is an easy solution: change the default security group to accept traffic from the EC2 instance’s security group. Follow these steps (or use the CLI sketch after the list):
- Head to the “Security Groups” section under EC2.
- Choose the “default” security group.
- Click on the “Edit” button.
- In the pop-up window, click on the “Add Rule” button.
- Choose:
- Type: Custom TCP Rule
- Protocol: TCP
- Port: 9092 (Kafka port)
- Source: Custom + the name of the EC2 security group
- Click on the “Save” button.
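The same rule can be added from the command line; a sketch with placeholder security group IDs:

aws ec2 authorize-security-group-ingress \
  --group-id [default-sg-id] \
  --protocol tcp \
  --port 9092 \
  --source-group [ec2-instance-sg-id]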
That’s it. Go back to the EC2 instance console and try the kafka-topics command again. This time it should return quickly, but without yielding any results (there isn’t any topic in the cluster yet).
Step 12 – Launching the performance producer tool
Let’s put some load through the system, just for fun. Firstly, we need to create a topic that we will use for performance testing.
./kafka-topics.sh --bootstrap-server [first-broker plaintext url] --create --topic performance-topic --partitions 4 --replication-factor 2
With this command, we are saying we want a topic with four partitions, where each partition is replicated to two brokers.
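To double-check how the partitions and replicas were spread across the brokers, you can describe the topic:

./kafka-topics.sh --bootstrap-server [first-broker plaintext url] --describe --topic performance-topic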

Once it is created, we can launch the performance producer.
./kafka-producer-perf-test.sh --topic performance-topic --num-records 1000000 --throughput 100 --producer-props bootstrap.servers=b-1.testing-cluster.34vag9.c4.kafka.eu-west-1.amazonaws.com:9092 acks=all --record-size 10240
What this command does is:
- Sends 1 million records of 10KB each.
- Throttles the producer to 100 records/second (the --throughput flag).
- Awaits replication to complete (acks=all) up to the min.insync.replicas number (2 in this case).
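If you want to find out how much the cluster can really take, the same tool accepts --throughput -1 to disable the throttle; a variation of the command above:

./kafka-producer-perf-test.sh --topic performance-topic --num-records 1000000 \
  --throughput -1 --producer-props bootstrap.servers=[first-broker plaintext url] acks=all \
  --record-size 10240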

Step 13 – Launching the performance consumer tool
How can we know that these records are going somewhere? Well, we can obviously consume them back.
Run the following command from a separate SSH session.
./kafka-consumer-perf-test.sh --broker-list b-1.testing-cluster.34vag9.c4.kafka.eu-west-1.amazonaws.com:9092 --messages 1000000 --print-metrics --show-detailed-stats --topic performance-topic
What this command does is:
- Consumes 1 million records.
- Prints detailed stats while it does so.
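Under the hood, the tool joins a consumer group with an autogenerated name. If you are curious, you can list the groups and inspect their offsets and lag (the group name will vary per run):

./kafka-consumer-groups.sh --bootstrap-server [first-broker plaintext url] --list
./kafka-consumer-groups.sh --bootstrap-server [first-broker plaintext url] \
  --describe --group [group-name-from-the-list]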

Step 14 – Watching the metrics
We can also look at the CloudWatch metrics and watch them grow live with the load we are sending to the cluster. Head to CloudWatch in your AWS Console. Once there:
- Click on “Metrics”.
- Choose “AWS/Kafka”.
- Choose “Broker ID, Cluster Name, Topic”.
You will see that the only topic-level metrics available are for the topic we just created (the cluster does not have any other topics at the moment). Click on “BytesInPerSec” for the 3 brokers. You will see a growing graph like this one.

Make sure to configure the “Metrics Period” to 1 minute (under “Graphed Metrics”) to have a more accurate visualization.
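The same data points can be pulled from the command line; a sketch with aws cloudwatch get-metric-statistics (the cluster name and time range are placeholders):

aws cloudwatch get-metric-statistics \
  --namespace AWS/Kafka \
  --metric-name BytesInPerSec \
  --dimensions Name="Cluster Name",Value=[your-cluster-name] Name="Broker ID",Value=1 Name=Topic,Value=performance-topic \
  --start-time 2019-12-01T10:00:00Z --end-time 2019-12-01T11:00:00Z \
  --period 60 --statistics Average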
Step 15 – Delete everything if you don’t want to pay
Once you are satisfied with all your tests, it’s time to clean everything up and avoid nasty surprises when the AWS bill arrives at the end of the month.
Head to the EC2 section first to kill your EC2 instance and follow these steps:
- Select “Instances” on the left menu.
- Select your EC2 instance to kill.
- Click on the “Actions” button.
- Choose “Instance State” -> “Terminate”.
- In the pop-up window, click on “Yes, terminate”.
In a few minutes, your instance will be dead. Take this opportunity to also remove its orphaned security group.
- Select “Security Groups” on the left menu.
- Select the old EC2 instance security group (something like launch-wizard).
- Click on the “Actions” button.
- Choose “Delete security group”.
- A pop-up window informs you that you can’t delete the security group because it is being referenced from another group (the default group, remember step 11).
- Choose the default group, click the “Edit” button and delete the Kafka related rule (TCP port 9092).
- Try to delete the EC2 security group again; this time the pop-up window displays a “Yes, Delete” button. Click it to remove the security group.
Last but not least, remove the Kafka cluster. Head to MSK and choose your cluster there.

Type “delete” in the pop-up window. Your cluster status will change to “deleting”. A minute later, it will be gone for good.
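Alternatively, the cluster can be deleted from the command line (again, replace the ARN with your own):

aws kafka delete-cluster --cluster-arn [your-cluster-arn]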
Conclusions
Fifteen steps is not the simplest process possible, but if we think about it, we have covered a lot of ground:
- Created the cheapest possible cluster.
- Provisioned an EC2 instance with the Kafka command-line tools to test the cluster.
- Ran performance producers and consumers.
- Monitored the cluster load with CloudWatch.
Even more importantly, with a really small cluster, we were sending 100 messages/s with a total load of ~1 MB/s from a single client, and our cluster didn’t even blink.
That is the power of Kafka, one of the fastest tools available in the market when it comes to moving data. And now, with AWS MSK, it is really easy to get a cluster up and running.