Aiven Blog

Nov 9, 2022

Solving problems with event streaming

Find out about the challenges some of the amazing innovators in our data community are facing, and how event streaming has helped them meet those challenges (with a little bit of help from their friends).

Introducing James Job AG

I’m Nick, and I work at James Job, a startup that’s revolutionizing the way you find job openings to match your skills.

It can be tedious combing through job ads to find the right job for you, so we decided to make it easier. With James Job you define your SkillSet and we calculate the matching job ads for you, with no more searching needed.

As a result, we have an architecture that calculates a lot of things behind the scenes to show you only the relevant job ads. Changes that happen (job ad changes, you gaining a new skill, and so on) again trigger many calculations in our systems. It’s absolutely vital for us to have a solid infrastructure to fulfill this need - and Apache Kafka® gives us the perfect backbone.

As a startup, it is also very important to us that new ideas, use cases and more can be built fast and in a flexible way. This is where data in motion and infinite retention (where needed) play right into our hands.

And if this wasn’t enough, we are also huge fans of Aiven’s Crabby ;)

The communication challenge

The rise of micro and nano services ultimately means that more components need to be connected. While we can achieve this over APIs, RPC, and so on, it also means tight coupling between components: a change to one service can cause ripples of subsequent changes. I like to think that with events, everything is freer and more flexible.

Any service can start sending and listening to events. It's an open world of data flowing through your system, and that opens up many possibilities. I’ll outline some of the benefits and challenges in the following sections.

While we already have a lot of data, in most cases it is at rest - in databases, indices, files, and so on. To get the data we need, we have to overcome certain hurdles.

We need to know:

  • Where to find it (many data systems and structures)
  • How to access it (integration)
  • How the data structure works (there’s generally no documentation)

Using events:

  • Access to your data is streamlined
  • Data structure is documented (Avro, Protobuf, etc. - see the sketch below)
  • Data is easy to find, with good naming policies
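
For example (as a sketch, not our actual schema), a signup event could be described with an Avro record. Here it’s parsed with the fastavro library, and every name and field is an illustrative assumption:

```python
# A sketch of documenting an event's structure with an Avro schema.
import fastavro

signup_schema = fastavro.parse_schema({
    "type": "record",
    "name": "UserSignedUp",             # hypothetical record name
    "namespace": "com.example.events",  # hypothetical namespace
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "email", "type": "string"},
        {"name": "created_at",
         "type": {"type": "long", "logicalType": "timestamp-millis"}},
    ],
})
```

Any consumer can read the schema and know exactly what to expect, and a schema registry can enforce compatible evolution over time.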

The system is never really down

Let's start with a simple example: a user signup.

Signup processes can vary from a couple of input fields to several pages. Most frustrating of all, when you reach the end, you click the Signup button and get an error telling you to try again later! This is not only frustrating, it’s also vague and not exactly helpful.

If we’re using events, we can minimize impact for the user when the signup service is down.

All we need to do is send the event to Apache Kafka®, which is built for high availability.
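
Here’s a minimal sketch of what that could look like in Python with the confluent-kafka client. The broker address, topic name, and payload fields are all assumptions for illustration:

```python
# A sketch of publishing a signup event instead of calling the signup
# service directly. Broker, topic, and fields are hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "kafka:9092"})  # hypothetical broker

def handle_signup(form_data: dict) -> None:
    # Durably record the intent to sign up; Kafka's replication keeps the
    # event safe even if the signup service is unavailable right now.
    producer.produce(
        "user.signup",                        # hypothetical topic name
        key=form_data["email"].encode(),
        value=json.dumps(form_data).encode(),
    )
    producer.flush()  # block until the broker acknowledges the event
```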

As soon as the signup service recovers, the signup event is processed and the user receives an email confirming that the account was successfully created.

While it is still bothersome that the account was not created immediately, the user doesn’t have to go through the signup process again. A definite win for all.
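
As a companion sketch, the signup service itself is just a consumer: when it comes back up, it resumes at its last committed offset and works through the backlog. The group id, topic, and the two handler functions are hypothetical:

```python
# A sketch of the signup service draining its backlog after recovery.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",  # hypothetical broker
    "group.id": "signup-service",       # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["user.signup"])     # hypothetical topic

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    signup = json.loads(msg.value())
    create_account(signup)      # hypothetical account-creation logic
    send_welcome_email(signup)  # hypothetical confirmation email
```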

History repeats itself

History repeating itself is very real - and absolutely beneficial in event streaming.

With Apache Kafka® you can choose infinite retention for your events. How can this be useful?

Let’s say, after one year, we decide we want to reward our first hundred customers with a surprise gift, but we forgot to save the creation date for our users.

Without events this would be an insurmountable hurdle. But with events, we can replay our signup events and find our lucky winners.

While this isn't exactly a likely scenario, it shows how we can replay our events to build new use cases with historical data.
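
As a sketch, replaying can be as simple as reading the signup topic from the beginning with a brand-new consumer group (all names assumed):

```python
# A sketch of replaying historical signup events to find the first
# hundred customers. A fresh consumer group plus
# auto.offset.reset="earliest" starts reading from offset zero.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",     # hypothetical broker
    "group.id": "first-customers-replay",  # new group = full history
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["user.signup"])        # hypothetical topic

winners = []
while len(winners) < 100:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    winners.append(json.loads(msg.value())["email"])  # hypothetical field

consumer.close()
```

One caveat: Kafka only guarantees ordering within a partition, so for a multi-partition topic you’d collect the replayed events and sort them by timestamp before picking the first hundred.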

Building new services from the ground up gets easier once we no longer need to extract and transform data from existing systems. All we have to do is:

  • Identify the data streams we need
  • Replay historical events
  • Persist / Transform / React to these events as needed

This has saved me time in many use cases over the years. It encourages you to think more about your data and it makes many things easier.

Instead of trying to design one data structure that fits every need, it is easy to build a custom data projection whenever a component needs one. As a result, all your components become more independent along the way.
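
As a sketch of such a projection, a hypothetical component that only cares about how many job ads mention each skill could consume the ad stream and keep exactly that view, nothing more (topic and field names assumed):

```python
# A sketch of a component-specific projection built from the event stream.
import json
from collections import Counter
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",       # hypothetical broker
    "group.id": "ads-per-skill-projection",  # hypothetical group
    "auto.offset.reset": "earliest",         # build the view from history
})
consumer.subscribe(["job.ad.published"])     # hypothetical topic

ads_per_skill = Counter()
while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    ad = json.loads(msg.value())
    ads_per_skill.update(ad.get("skills", []))  # hypothetical field
```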

Being late

Sending events means that something will happen at a certain point in time. In some cases you’ll be dependent on other things happening in your system. Since everything is decoupled, there is no way to communicate with components directly.

So, let’s assume a user registers with an email and a postal address. Let’s also say we choose two events for that:

  • User account event
  • User address event

After signing up, the user is redirected to their profile. Since these events are processed by different systems, the following could happen:

  • The user account is still being created
  • The address service is down
  • The address service is up but faster than the user service

These kinds of problems always exist, even when data is at rest - but with events, we tend to think about them more often, which is definitely a good thing.

Uncertainty poses challenges for our systems, so how can we solve this in an event-driven architecture?

Any of the following things (and many more) help us build better systems:

  • Resiliency concerning missing data
  • Flexible data loading (e.g. loading skeletons)
  • Live events about what’s happening in other components (push notifications)

In the case of the address service being slower, for example, we’d already see our account information, while the address would show a loading state and refresh as soon as the data becomes available (push vs. poll).
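
One way to sketch the “resiliency concerning missing data” idea: the component that builds the profile view can park an address event that arrives before its account event, and apply it once the account shows up. All names here are hypothetical:

```python
# A sketch of tolerating out-of-order events in a profile projection.
accounts: dict[str, dict] = {}           # user_id -> profile projection
pending_addresses: dict[str, dict] = {}  # addresses seen before the account

def on_account_event(user_id: str, account: dict) -> None:
    accounts[user_id] = {"account": account, "address": None}
    # Apply any address that arrived ahead of its account.
    if user_id in pending_addresses:
        accounts[user_id]["address"] = pending_addresses.pop(user_id)

def on_address_event(user_id: str, address: dict) -> None:
    if user_id in accounts:
        accounts[user_id]["address"] = address
    else:
        pending_addresses[user_id] = address  # park until the account exists
```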

Apache Kafka® is built for very high throughput, so in many scenarios the delay won’t even be noticeable, but it is always good to keep the unexpected in mind.

In conclusion

As with any technology, there are benefits and challenges. Over the years I’ve found that the benefits of an event streaming architecture by far outweigh the challenges.

There is a learning curve to Apache Kafka®, but don’t let that discourage you! Once you embrace the “event first” mindset, your application becomes more resilient, stable, and flexible. You’ll see many new opportunities and use cases open up, as Apache Kafka has a great ecosystem of integrations.

It has been an incredible journey so far and I can't wait to see where it will take me next.

I want to thank the whole Aiven team for being an awesome and reliable partner that helped us grow our architecture over the years 💖

About Nick

Nick Chiu is the CTIO of James Job AG, a startup that provides job seekers with the best job matches with minimal effort. He’s worked in building software services for over 20 years and has had the opportunity to work in many different industries, which has given him a lot of insight into solution architecture. He’s been focusing on event-driven systems for 5 years now, which has proven to be a great blueprint for the many solutions he’s built. He loves food and manga.

To get the latest news about Aiven and our services, plus a bit of extra around all things open source, subscribe to our monthly newsletter! Daily news about Aiven is available on our LinkedIn and Twitter feeds.
