
Proof of concept to analyse huge application log files using a Hadoop cluster on the IBM Cloud Platform

Analysing the application log files generated in a production environment is very challenging. The data in these files is unstructured, so it cannot be stored in an RDBMS or other traditional database system, and queried there, without first being converted to a structured format. As a result, when an application misbehaves for a very short duration, troubleshooting it from the information recorded in a large log file, possibly hundreds of terabytes in size, is nearly impossible.

During our POC development, we found that an e-commerce application running on the Oracle Web Commerce platform (ATG) sometimes failed to establish asynchronous communication with a third-party vendor for order fulfilment. The JMS messaging protocol was responsible for delivering order-submission messages from ATG to the third-party vendor and vice versa, but it periodically failed to do so. Using a Hadoop cluster with a customised MapReduce programming model, we extracted the exact warnings and errors recorded in the log files produced by the out-of-the-box ATG components. After analysing the reports produced by the Hadoop framework, we concluded that the issue lay within the ATG framework itself. We communicated this to the software vendor and subsequently received a patch from them.
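To make the approach concrete, below is a minimal sketch of the kind of MapReduce job involved, assuming plain-text log files stored in HDFS. The class names, the "Error"/"Warning"/"JMS" line markers, and the message-signature heuristic are illustrative assumptions, not the exact filters we applied to the ATG log format.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical job: count warning/error log lines that mention JMS,
// keyed by a crude message signature, so recurring failures surface quickly.
public class JmsErrorCount {

    public static class JmsErrorMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text signature = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String text = line.toString();
            // Assumed log markers; the real ATG log format may differ.
            boolean warnOrError = text.contains("Error") || text.contains("Warning");
            if (warnOrError && text.contains("JMS")) {
                // Use the line trimmed to 120 characters as a rough signature,
                // so near-identical messages aggregate under one key.
                String sig = text.length() > 120 ? text.substring(0, 120) : text;
                signature.set(sig.trim());
                context.write(signature, ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();
            }
            total.set(sum);
            context.write(key, total);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "jms error count");
        job.setJarByClass(JmsErrorCount.class);
        job.setMapperClass(JmsErrorMapper.class);
        job.setCombinerClass(SumReducer.class); // safe: summing is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS directory of log files
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, such a job would be submitted with hadoop jar jms-error-count.jar JmsErrorCount /logs/input /logs/output (paths are placeholders); the reducer output ranks the recurring JMS warnings and errors by frequency, which is the kind of report we then analysed.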

Recent Posts

The Significance of Complex Event Processing (CEP) with RisingWave for Delivering Accurate Business Decisions

Complex event processing (CEP) is a highly effective and optimized mechanism that combines several sources…

2 months ago

Principles of Data Science

Source: www.PacktPub.com. This book focuses on data science, a rapidly expanding field of study and…

3 months ago

Integrating Apache Kafka in KRaft Mode with RisingWave for Event Streaming Analytics

Over the past few years, Apache Kafka has emerged as the top event streaming platform…

3 months ago

Criticality in Data Stream Processing and a Few Effective Approaches

In the current fast-paced digital age, many data sources generate an unending flow of information,…

4 months ago

Partitioning Hot and Cold Data Tier in Apache Kafka Cluster for Optimal Performance

At first, data tiering was a tactic used by storage systems to reduce data storage…

5 months ago

Exploring Telemetry: Apache Kafka’s Role in Telemetry Data Management with OpenTelemetry as a Fulcrum

With the use of telemetry, data can be remotely measured and transmitted from multiple sources…

6 months ago