Hello, my name is Florent Ramière, and I am one of the authors of Conduktor Gateway. In this blog post, I want to share the history of Gateway and how I came to realize that Apache Kafka was not complete and needed further development.
My beginnings with Kafka
Like many others, I joined a project mid-development, where Kafka had already been introduced by a predecessor. I was working in the banking sector, in an exciting and fast-paced environment full of new technologies and highly skilled people. When I started, Kafka was used only as a high-quality buffer for collecting large amounts of logs and metrics.
Our small team had to implement monitoring, alerting, and processing using technologies such as Storm, then Spark, and eventually Kafka Streams. As we gained experience, I picked up the skills I needed to do my job, and everything seemed to be going well.
However, I soon discovered that I didn't know Kafka as well as I thought. I knew only what was strictly necessary to do my job, but not much more. The reality of production issues and of managing Kafka's many requirements forced me to re-evaluate my knowledge, a bit too late.
This situation is the root of many IT problems: addressing the most pressing issues and then moving on. Although I was considered the local Kafka expert, there was still so much I didn't know. I had to learn from my mistakes and correct numerous errors, most of which were avoidable.
This led me to dive deeper into Kafka and its ecosystem. My growing interest in Kafka and its potential prompted me to join Confluent, the company founded by the original creators of Apache Kafka.
My beginnings at Confluent
As a Customer Success Technical Architect at Confluent, I was responsible for supporting major clients across Europe. Armed with the lessons from my previous experience, I worked with a variety of clients, each with different levels of maturity and unique contexts, and every day I discovered new ways teams could run into challenges with Kafka.
I wanted to share my experience with a broader audience, so I began giving talks at conferences on Kafka patterns and anti-patterns. I presented the problems that some people had already encountered, as well as those they hadn't yet faced but were likely to experience during their journey with Kafka.
In essence, I was fixing problems after Kafka had already been integrated. So I decided to shift from a reactive approach to a proactive one by joining the pre-sales team. The goal was to set teams up for success from the very beginning, helping them achieve their business objectives.
Despite good intentions, I observed that Kafka implementations were often plagued by production bugs or reliant on a few experienced people. There were no automated safeguards to rely on, and governance was only as effective as the least informed person.
In short, there was no simple way to:
address common configuration problems
address common bugs or development issues
simulate real-world scenarios to ensure application resilience
enforce rules where they matter
encrypt data at the field level
provide multi-tenancy or namespacing
enforce RBAC
audit properly who is doing what
send large messages
reduce networking costs
add real lineage
and much, much more
The arrival of cloud technology required me to learn and stay current on subjects like connectivity, VPC/VNet peering, and private links. Peering is not transitive, which is a significant issue for Kafka: clients must be able to reach every broker at its advertised address, not just the bootstrap endpoint. One way to address this is to use proxies that natively speak the Kafka protocol and can rewrite the addresses brokers advertise.
At that time, I began to see the Kafka protocol as a means of solving networking or security problems, recommending and implementing Grepplabs' Kafka-proxy for our clients.
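To make this concrete, here is a minimal sketch of what a protocol-aware proxy looks like from the client's side. The proxy address, topic, and key/value below are hypothetical; the point is that the client is configured against the proxy alone, and the proxy rewrites the broker metadata Kafka returns so that every broker stays reachable.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProxiedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The client only knows about the proxy (hypothetical address). A plain
        // TCP proxy would fail here: Kafka's first metadata response advertises
        // the brokers' real addresses, which are unreachable across
        // non-transitive peering. A Kafka-protocol-aware proxy intercepts that
        // metadata and rewrites it so each broker appears behind a reachable
        // proxy address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-proxy.internal:32400");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // From the application's point of view, nothing changes.
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}
```

This is also why a protocol-aware proxy can do more than routing: since it understands every request and response, it becomes a natural place to enforce rules, encrypt fields, or audit traffic.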
Things progressed rapidly, and I discovered that interest in proxies within the Kafka universe was booming. Apache Pulsar was developing Kafka on Pulsar, a compatibility layer for using Pulsar with the Kafka protocol; Microsoft was working on Kafka compatibility for Event Hubs; and products like Redpanda were starting to speak the Kafka protocol natively.
For me, the most significant catalyst was discovering the discussion about Kafka protocol support in Envoy. Eye-opening!
A fascinating array of new features was being introduced to address Kafka's shortcomings: filters, rate limiting, transformations, fallbacks, validation, and more.
Gradually, I realized that these features provided concrete, indispensable solutions to most of the problems I encountered daily with my clients. In short, they addressed Kafka's limitations.
I wanted to provide tangible solutions quickly to the teams I worked with daily. So, I decided to create this product myself.
Enter Conduktor
I was prepared to leave Confluent to build my own team and my own company, with the goal of helping teams address the most common Kafka governance issues. As it happened, some people I had once tried to recruit for Confluent contacted me and tried to get me to join their company instead. That company was Conduktor, which was building a desktop user interface for troubleshooting Apache Kafka, one I had seen adopted by many clients with resounding success.
The Conduktor team was in the process of expanding their existing products to create a web version and build testing and governance tools. We shared a common interest in governance, so I decided to join the team to add an essential feature: the Gateway.
To lift the Kafka limitations listed above, Conduktor Gateway will let you:
Create and enforce rules to address common configuration problems
Simulate real-world scenarios to address common bugs or development issues before production
Encrypt data down to the field level without changing your applications (see the sketch after this list)
Have multiple virtual Kafka clusters on a single physical cluster
Define RBAC permissions for applications
Understand and audit who is doing what, when
and much, much more
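To give a feel for what field-level encryption means, here is a minimal, standalone sketch using only the JDK's crypto APIs: one sensitive field is encrypted while the record keeps its shape. This is purely illustrative and is not how Gateway implements the feature (Gateway operates transparently at the protocol level); the field names and key handling are hypothetical, and a real setup would fetch keys from a KMS.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical key handling: a real deployment would use a KMS.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Encrypt only the sensitive field, using AES-GCM with a random IV.
        String cardNumber = "4111-1111-1111-1111";
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(cardNumber.getBytes(StandardCharsets.UTF_8));

        // The record keeps its structure; only the one field is opaque.
        String record = String.format(
            "{\"name\":\"alice\",\"card\":\"%s\",\"iv\":\"%s\"}",
            Base64.getEncoder().encodeToString(ciphertext),
            Base64.getEncoder().encodeToString(iv));
        System.out.println(record);
    }
}
```

Doing this in a gateway rather than in each application means no client code changes and a single place to manage keys and policies.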
We'll be launching Conduktor Gateway at Kafka Summit 2023. So Conduktor won't only be the best user interface to troubleshoot and manage Kafka; it'll also provide the critical features you feel are missing from Kafka to be successful.
My journey with Kafka
As you can see, like everyone else, I have gradually matured through my experiences. I started as a beginner, completed my first projects, made my first mistakes, corrected them, and fundamentally reconsidered what Kafka is and how to approach it. I then worked on governance and change management, and ultimately focused on adding Kafka's missing features through a Gateway.
I'm excited for us to release Gateway at Kafka Summit 2023. If you're at Kafka Summit and want to see Gateway in action, please book time with us here.
We'd love to hear from you!
If you want to accelerate your project delivery, fortify your security, and federate your Kafka ecosystem, you know where to find us.