Analysing the application log files generated in a production environment is very challenging. The data in log files is unstructured, so it cannot be stored in an RDBMS or other traditional database system, and its query capabilities leveraged, without first being converted to a structured format. Consequently, when an application misbehaves for only a very short duration, troubleshooting it from the information recorded in a large log file, possibly hundreds of terabytes in size, is nearly impossible.

As part of our POC development, we found that an e-commerce application running on the Oracle Web Commerce platform (ATG) sometimes failed to establish the asynchronous communication with a third-party vendor needed for order fulfilment. The JMS messaging protocol was responsible for delivering the order-submission message from ATG to the third-party vendor and vice versa, but periodically it failed to do so. Using a Hadoop cluster with a customised MapReduce programming model, we extracted the exact warnings and errors recorded in the log files produced by the out-of-the-box ATG components. After performing an intricate analysis of the framework components, based on the reports produced by the Hadoop framework, we concluded that the issue lay within the ATG framework itself. We communicated this to the software vendor and subsequently received a patch from them.
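To give a concrete idea of how such a job can look, below is a minimal MapReduce sketch using the plain Hadoop Java API. It scans raw log files on HDFS, keeps only lines flagged as warnings or errors, and counts them per severity and logging component. The class names (LogSeverityCount, SeverityMapper, SumReducer) and the component-path heuristic are illustrative assumptions, not the exact job used in the POC; the real implementation parsed the specific ATG log layout.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogSeverityCount {

    // Mapper: keeps only warning/error lines and emits (severity \t component, 1).
    public static class SeverityMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String severity = null;
            if (line.contains("Error") || line.contains("ERROR")) {
                severity = "ERROR";
            } else if (line.contains("Warning") || line.contains("WARN")) {
                severity = "WARN";
            }
            if (severity == null) {
                return; // skip info/debug noise
            }
            // Hypothetical component extraction: take the first token that looks
            // like a component path (starts with '/'); real parsing would follow
            // the application's actual log layout.
            String component = "unknown";
            for (String token : line.split("\\s+")) {
                if (token.startsWith("/")) {
                    component = token;
                    break;
                }
            }
            outKey.set(severity + "\t" + component);
            context.write(outKey, ONE);
        }
    }

    // Reducer (also used as combiner): sums the counts per (severity, component) key.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "log severity count");
        job.setJarByClass(LogSeverityCount.class);
        job.setMapperClass(SeverityMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS directory holding the raw logs
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory for the summary
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a JAR, the job would be launched with something like `hadoop jar log-severity-count.jar LogSeverityCount /logs/atg /reports/severity` (paths are illustrative). The per-component warning and error counts written to the output directory then point the analysis toward the misbehaving component, which is the kind of report we used to narrow the issue down to the ATG framework itself.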