Dec 17, 2020
5 benefits of an Apache Kafka®-centric microservice architecture
Microservices are the most flexible way to provide an online service today. Find out why Apache Kafka is a good choice for connecting microservices.
When you’re setting up a microservice architecture, one of your first major decisions is whether to have the services communicate directly with each other, or whether to use a broker system. Since the broker model is more flexible and more resilient to failures, you’ve probably already leaned towards it. However, you may be concerned that the broker will become a bottleneck when traffic is heavy.
Allow me to set your mind at ease by introducing you to Apache Kafka, a distributed, partitioned commit log service that acts as a messaging system but with a flair all its own. Developed at LinkedIn to ingest event data, Kafka is designed to collect, hold and dispense enormous amounts of data.
Kafka-centric microservice architecture
Apache Kafka aims to solve the scaling and reliability issues that hold older messaging queues back. A Kafka-centric microservice architecture refers to an application setup where microservices communicate with each other using Kafka as an intermediary.
This is made possible by Kafka’s publish-subscribe model for handling the writing and reading of records. The publish-subscribe model (pub-sub) is a communication method where the sender simply publishes events as they occur, and each receiver chooses, asynchronously, which events to consume.
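To make that concrete, here’s a minimal sketch of the pattern using the open-source kafka-python client. The broker address and the `order-events` topic are placeholders for illustration, not part of any real setup:

```python
# A minimal pub-sub sketch using the kafka-python client.
# The broker address and the "order-events" topic are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

# Producer side: a microservice publishes events whenever it has something to say.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order-events", {"order_id": 42, "status": "created"})
producer.flush()

# Consumer side: another microservice subscribes and processes events at its own pace.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers="localhost:9092",
    group_id="billing-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)
```

Note that the producer neither knows nor cares which services end up consuming the events; that loose coupling is exactly what makes the broker model so flexible.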
Kafka-centric microservice architectures are often more scalable, reliable, and secure than traditional monolithic application architectures — where one big database is used to store everything in an application.
Let’s take a brief look at why Kafka is a particularly good choice for microservices.
Why use Kafka at the heart of your microservice architecture
1. The Kafka ecosystem
Apache Kafka is designed for easy connectivity to a number of open-source systems, and the ecosystem grows year by year. We’d go so far as to say Kafka is at its best when paired with complementary systems such as analytics engines or search platforms.
With Kafka Connect, you can easily plug Kafka into other data systems, letting data streams flow with low latency for consumption or further processing. This opens your architecture up to a whole ecosystem of ready-made, mostly free and open-source connectors for a wide range of services.
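As a sketch of how little glue this takes, a connector can be registered with a single call to the Kafka Connect REST API. The example below registers Kafka’s example FileStreamSink connector (assuming it’s available on the Connect worker’s plugin path); the Connect host, topic name and output file are placeholders:

```python
# Registering a connector through the Kafka Connect REST API (a sketch;
# the Connect host, topic name and output file are placeholders).
import requests

connector = {
    "name": "orders-file-sink",
    "config": {
        # FileStreamSink is Kafka's example connector; it writes records to a local file.
        "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "tasks.max": "1",
        "topics": "order-events",
        "file": "/tmp/order-events.txt",
    },
}

response = requests.post("http://localhost:8083/connectors", json=connector)
response.raise_for_status()
print(response.json())
```

In a real setup you’d swap in a production-grade connector (JDBC, OpenSearch, S3 and so on), but the registration step looks the same.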
2. Integration with existing systems
Choosing Apache Kafka doesn’t mean throwing out everything you’ve set up before. The ability to combine Kafka with other systems applies equally to the ones you already run.
You can use Kafka to transport either all of your data, or just a subset, to legacy systems, maintaining backwards compatibility and letting you keep using systems you’ve already invested in.
3. Fault tolerance and scaling through clustering
Kafka’s clustered design makes it very fault-tolerant and scalable. Data is replicated across broker nodes, and when consumers join or leave a consumer group, Kafka automatically rebalances the partitions, and with them the processing load, across the group. To increase throughput and scale up your services, just add broker nodes and consumer instances, as in the sketch below.
Note also that this removes the need for any external high-availability arrangements.
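For example, scaling out a consuming service is usually just a matter of starting more instances that share the same consumer group id; Kafka then spreads the topic’s partitions across them. A minimal sketch, with placeholder topic and group names:

```python
# Scaling by consumer group: start more instances of this same script and
# Kafka rebalances the topic's partitions across them automatically.
# (Topic and group names are placeholders.)
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers="localhost:9092",
    group_id="billing-service",  # all instances share this group id
    enable_auto_commit=True,
)

for message in consumer:
    # Each partition is handled by exactly one consumer in the group, so
    # adding instances increases throughput up to the partition count.
    print(message.partition, message.offset, message.value)
```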
4. Advanced access control
The data you process may go to different endpoints, but with Kafka you can manage access to it through one centralized mechanism. Producers and consumers are configured to write to and read from only specified topics, which by itself acts as an efficient access control mechanism, without your having to build a separate authorization layer on top.
You can improve your system’s security by using ACLs to restrict access to business-critical or classified parts of your data. At the same time, you can empower people by giving them access to data they might not otherwise see. For example, data scientists could use error reports and website analytics to analyze their impact on customer satisfaction.
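As a rough sketch, topic-level ACLs can also be managed programmatically. The example below assumes kafka-python’s admin client and a cluster that already has an authorizer and authentication configured; the principal, topic name and broker address are made up for illustration (in practice you might instead use the kafka-acls command-line tool or your provider’s console):

```python
# A sketch of creating a topic ACL with kafka-python's KafkaAdminClient.
# The principal, topic name and broker address are placeholders, and the
# cluster must have an authorizer and authentication configured.
from kafka.admin import (
    KafkaAdminClient, ACL, ACLOperation, ACLPermissionType,
    ResourcePattern, ResourceType,
)

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Allow the analytics team's service account to read the website-analytics topic.
read_acl = ACL(
    principal="User:analytics",
    host="*",
    operation=ACLOperation.READ,
    permission_type=ACLPermissionType.ALLOW,
    resource_pattern=ResourcePattern(ResourceType.TOPIC, "website-analytics"),
)

admin.create_acls([read_acl])
```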
5. Store and process any content
Kafka doesn’t care what the messages it stores contain, so you can use it for any type of content. This means that when your business needs change, you have the freedom to add new types of producers and consumers into the mix. Your business can grow and diversify without further infrastructure investment in data processing for your services.
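Because Kafka only ever sees bytes, one cluster can carry plain text from one service and JSON from another without any special configuration. A small sketch, again with placeholder topic names:

```python
# Kafka stores whatever bytes you give it, so different services can use
# whatever format suits them. Topic names here are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Plain text from a legacy logging service...
producer.send("app-logs", b"payment service restarted")

# ...and structured JSON from a newer service, on another topic.
event = {"user_id": 7, "action": "checkout", "total": 49.90}
producer.send("user-events", json.dumps(event).encode("utf-8"))

producer.flush()
```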
Wrapping up
The robust and expandable Apache Kafka is, in our humble opinion, the best choice for most microservice use cases. It’s true that Kafka is notoriously complex, and the learning curve can be steep before you have people qualified to maintain it. So why not consider a managed solution? Aiven offers a free 30-day trial of Kafka as a service, full-featured and easy to get started with. Give it a spin!
Want to know more about Apache Kafka®?
We've got what you need. Everything you always wanted to know about Apache Kafka but were afraid to ask.
Get our free ebook
In the meantime, make sure you follow our changelog and blog RSS feeds or our LinkedIn and Twitter accounts to stay up-to-date with product and feature-related news.