
Resolve the Apache Kafka startup issue on a single/multi-node cluster


This short article explains how to resolve the error “ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)kafka.common.InconsistentClusterIdException:” that can appear when starting Apache Kafka installed and configured on a multi-node cluster. You can read the steps for setting up a multi-node Apache Kafka cluster here. Without integrating Apache ZooKeeper, Kafka alone cannot form a complete cluster, because ZooKeeper handles the leadership election of Kafka brokers and manages service discovery as well as cluster topology. It also tracks when topics are created or deleted from the cluster and maintains the topic list. Overall, ZooKeeper provides an in-sync view of the Kafka cluster.

The above-mentioned error is quite common and can be encountered while updating the server.properties file on each Kafka broker in an existing cluster to add an additional ZooKeeper server entry after a new Kafka broker has been added to the cluster. The same issue can also appear on a single-node Kafka broker after a system reboot if configuration changes were made to ZooKeeper's zoo.cfg or Kafka's server.properties file to replace/update the /tmp directory location.
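For context, the entry being edited in that scenario is the zookeeper.connect property in server.properties. A minimal sketch is shown below; the hostnames are purely illustrative and should be replaced with the actual ZooKeeper nodes in your ensemble.

```
# server.properties on each Kafka broker (hostnames are illustrative)
# When a new ZooKeeper server joins the ensemble, its host:port is appended
# here, and every broker must be updated and restarted with the new list.
zookeeper.connect=zk-node-1:2181,zk-node-2:2181,zk-node-3:2181
```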

The root cause of the issue is that the ZooKeeper data is stored in a temporary folder while the Kafka logs are stored in a persistent folder, or vice versa. By default, dataDir in zoo.cfg is set to /tmp/zookeeper, where ZooKeeper stores its snapshots, and similarly log.dirs in server.properties defaults to /tmp/kafka-logs for the Kafka broker. After the system restarts, files stored in the temporary directory are cleaned up and a new cluster id is generated, because the Kafka cluster registers itself completely afresh. This configuration mismatch, or variation in the cluster id, eventually causes the above-mentioned error.
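To make the defaults mentioned above concrete, here is a minimal sketch of the two settings involved, with /var/lib/zookeeper and /var/lib/kafka-logs used as illustrative persistent locations in place of the /tmp defaults:

```
# zoo.cfg -- ZooKeeper snapshot directory (default: /tmp/zookeeper)
dataDir=/var/lib/zookeeper

# server.properties -- Kafka log directory (default: /tmp/kafka-logs)
log.dirs=/var/lib/kafka-logs
```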

There are two quick ways to resolve the above error, but neither is recommended for a production environment.

Option 1: Update the cluster ID in the meta.properties file

If the log.dirs key in server.properties was not updated with a persistent folder location, the meta.properties file can be found at /tmp/kafka-logs. Open meta.properties; its contents will look similar to what is shown below. Replace the cluster.id value with the new cluster id reported in the error log and restart the Kafka server.
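A minimal sketch of a typical meta.properties file; the broker id and cluster id values below are placeholders, not values from a real cluster:

```
# /tmp/kafka-logs/meta.properties (values are placeholders)
version=0
broker.id=0
# Replace this value with the cluster id reported in the error log,
# then restart the Kafka broker.
cluster.id=REPLACE_WITH_CLUSTER_ID_FROM_ERROR_LOG
```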

Option 2: Delete /tmp/kafka-logs/meta.properties, which holds the wrong cluster.id from the last/failed session.
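As a sketch, assuming the default /tmp/kafka-logs location and a standard Kafka installation directory, the steps would look like this:

```
# Stop the Kafka broker first, then remove the stale metadata file
rm /tmp/kafka-logs/meta.properties

# Restart the broker; the installation path shown here is illustrative
bin/kafka-server-start.sh config/server.properties
```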

Hope you have enjoyed this read. Please like and share if you feel this composition is valuable.


Written by
Gautam Goswami
