Apache Kafka is a distributed system. At its heart is a set of brokers that host topics, and each topic is split into partitions. Dividing topics into these smaller pieces lets us process data in parallel and achieve higher throughput.
Such parallelization is the key to a performant cluster, but it comes at a price. First of all, Kafka guarantees ordering only within a single partition, so records read from multiple partitions may arrive in a different order from the one in which they were produced.
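A common way to keep related records in order is to give them the same key, so the default partitioner routes them to the same partition. Below is a minimal sketch of that idea using the Java producer client; the broker address, topic name, and key are illustrative placeholders, not part of the talk.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");              // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All records share the key "customer-42", so the default partitioner
            // sends them to the same partition and their order is preserved there.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("orders", "customer-42", "event-" + i));
            }
        }
    }
}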
Another big challenge is uneven distribution of data across partitions. Overloaded partitions hurt the performance of every involved party, but especially of brokers and consumers.
Therefore, when designing our product architecture we should carefully weigh how many partitions we need, how to ensure proper message ordering, how to balance records evenly across partitions, and how the data load will be distributed over time, all while maintaining good performance of the cluster.
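One of those decisions, the partition count, is set when the topic is created, so it is worth choosing deliberately. As a minimal sketch, here is how a topic with an explicit partition count could be created with the Kafka AdminClient; the broker address, topic name, partition count, and replication factor are example values only.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions and replication factor 3 are example values; the right
            // numbers depend on throughput targets, ordering needs, and cluster size.
            NewTopic ordersTopic = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(Collections.singleton(ordersTopic)).all().get();
        }
    }
}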
If you are new to Apache Kafka, or you're looking for good practices to design your topic partitions and avoid common pitfalls, you'll find this session useful!
OLENA KUTSENKO
Olena is a seasoned expert in data, sustainable software development, and teamwork. With a background in software engineering, she's led teams and developed mission-critical applications at Nokia, HERE Technologies, and AWS. Currently, she works at Aiven, where she supports developers and customers in using open-source data technologies such as Apache Kafka, ClickHouse, and OpenSearch. She is also an international public speaker and regularly presents at conferences around the world. She holds AWS Developer and Solutions Architect certifications, and is also a Confluent Catalyst.