
Basic Understanding of Stateful Data Streaming Supported by Apache Flink


Technologies in the Big Data processing space are steadily maturing so that streaming data can be processed efficiently; such data is becoming a major focal point for making instant business decisions, especially in the telecom and retail sectors. Data collected continuously from sensors fitted to heavy industrial equipment, or the clickstream produced while navigating an e-commerce application, are typical examples of streaming data sources. By leveraging a streaming application, we can process/analyze this continuous flow of data without storing it (the data is in motion) to find discrepancies, issues, errors, various behavioural patterns, etc., which directly helps to avoid a complete breakdown and to take instant business decisions.


A high number of clicks on a specific product on an e-commerce site indicates popularity among buyers, and subsequently offering a promotion can boost sales and drive revenue growth; this is another use case that shows the value of streaming data analysis. Data arriving from multiple sources in an infinite succession with the same pattern can be denoted as a data stream, and analyzing and acting on it using continuous queries is known as stream processing. Stream processing engines provide a number of built-in operations that can be leveraged to ingest, transform and output data.
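To make the ingest-transform-output pattern concrete, here is a minimal sketch of a Flink DataStream job in Java. The socket source on localhost:9999 and the simple normalization step are assumptions chosen for illustration; in a real job any source (Kafka, files, etc.) and any sink could take their place.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SimplePipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Ingest: a socket text source stands in for any real streaming source
        DataStream<String> clicks = env.socketTextStream("localhost", 9999);

        // Transform: normalize each raw click record
        DataStream<String> normalized = clicks.map(line -> line.trim().toLowerCase());

        // Output: print to stdout; in practice this would be a Kafka/file/DB sink
        normalized.print();

        env.execute("simple-click-pipeline");
    }
}
```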
Operations or computations can be stateless or stateful. A stateless computation does not maintain or depend on any previous event; every event is considered individually, a computation is applied to it, and output is produced based on that event alone. For example, a clickstream (mouse clicks on products on an e-commerce site) passes through a streaming program, and an alarm is raised if the number of clicks on a specific product/item within an hour exceeds 10,000. A stateful operation, on the other hand, maintains state that gets updated with every input. To produce output, the current input and the current value of the state are used together; ideally, the output is created from the accumulation of multiple events/inputs over a period. Compared with the previous clickstream example, the application could raise an alarm if the number of clicks changes very little within half an hour, something it can only detect by remembering earlier counts. Stateful computation comes with many challenges, such as handling concurrent updates and maintaining parallelism. Both flavours are illustrated in the sketch below.
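The following sketch contrasts a stateless filter with a stateful, keyed click counter built on Flink's ValueState. The class and state names (ClickCounter, clickCount), the socket source, and the plain running count (instead of the one-hour window from the example above) are simplifying assumptions, not the article's prescribed implementation.

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ClickAlertJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Each incoming record is assumed to be a product id, one click per record
        DataStream<String> clicks = env.socketTextStream("localhost", 9999);

        clicks
            // Stateless step: each event is handled on its own, no state involved
            .filter(productId -> !productId.isEmpty())
            // Stateful step: partition by product id so state is kept per key
            .keyBy(productId -> productId)
            .flatMap(new ClickCounter())
            .print();

        env.execute("click-alert-job");
    }

    /** Keeps one running counter per product id in Flink-managed keyed state. */
    public static class ClickCounter extends RichFlatMapFunction<String, String> {

        private transient ValueState<Long> countState;

        @Override
        public void open(Configuration parameters) {
            countState = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("clickCount", Long.class));
        }

        @Override
        public void flatMap(String productId, Collector<String> out) throws Exception {
            Long current = countState.value();
            long updated = (current == null ? 0L : current) + 1;
            countState.update(updated);

            // Alert exactly once when the running count reaches the threshold
            if (updated == 10_000) {
                out.collect("ALERT: product " + productId + " reached 10,000 clicks");
            }
        }
    }
}
```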
Apache Flink has been developed to overcome those challenges. The Flink feature known as a 'checkpoint' ensures that the correct state can be recovered even after a program interruption while processing streaming data, and it is the backbone of stateful operation. A consistent checkpoint of a stateful streaming application is a copy of the state of each of its operators at a point when all operators have processed exactly the same input. Flink allows plugging in a distributed storage mechanism, such as HDFS, where the state can be persisted. In many cases, Flink can partition the state by a key and manage the state of each partition independently. A 'savepoint', or versioned state, is another feature provided by Flink; it is essentially the same as a checkpoint but has to be triggered manually by the user. Operators such as keyBy combined with a stateful map can be used programmatically to better understand how Flink periodically takes consistent checkpoints to protect a streaming application from failure, as the sketch below shows.
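As a rough sketch, the snippet below enables periodic checkpointing and points checkpoint storage at an HDFS path. The 10-second interval, the HDFS directory, and the tiny placeholder pipeline are assumptions for illustration only.

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a consistent checkpoint of all operator state every 10 seconds
        env.enableCheckpointing(10_000L);
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);

        // Persist checkpoints to a pluggable distributed store, e.g. HDFS
        env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");

        // Minimal placeholder pipeline: keyBy plus a (here trivial) map,
        // standing in for the stateful map/flatMap of a real job
        env.fromElements("p1", "p2", "p1")
           .keyBy(id -> id)
           .map(id -> id)
           .print();

        env.execute("checkpointed-stateful-job");
    }
}
```

A savepoint, in contrast, is not taken automatically on this interval; it is typically triggered manually, for example via the Flink CLI's savepoint command against a running job ID.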

Written by
Gautam Goswami

