It is sometimes observed that when we configure and deploy a multi-node Hadoop cluster, or add new DataNodes, an SSH permission issue blocks communication with the Hadoop daemons. In a fully distributed environment, the master node relies on SSH to reach every configured slave or DataNode: most visibly, the start scripts run on the NameNode use SSH to launch the Hadoop daemons (HDFS and YARN services such as the NodeManager) on each of those machines. The error appears in the terminal console as "Permission denied (publickey,password)" as soon as we start the Hadoop daemons from the NameNode ($ sbin/start-dfs.sh) in that cluster.
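For context, a minimal reproduction of the failure looks like the following. The hostnames datanode1 and datanode2 are hypothetical placeholders for the entries in your own slaves file, and the output is abridged:

$ sbin/start-dfs.sh
...
datanode1: Permission denied (publickey,password).
datanode2: Permission denied (publickey,password).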
Most of the time we suspect an issue in the public/private RSA key-pair generation, or in the file permissions granted afterwards, and we keep repeating those steps to resolve the issue.
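For reference, those commonly repeated steps look roughly like the sketch below. It assumes a Hadoop user named hduser and a placeholder DataNode hostname datanode1; substitute your own:

# generate the RSA key pair on the NameNode (empty passphrase for passwordless SSH)
$ ssh-keygen -t rsa -P ""
# copy the public key to each DataNode
$ ssh-copy-id hduser@datanode1
# tighten permissions, which sshd requires before it will accept the key
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
# verify passwordless login works before starting any Hadoop daemon
$ ssh hduser@datanode1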
Yet even when the key-pair generation and permission grants are correct, and we can ssh to all of the systems designated as DataNodes from the NameNode terminal without a passphrase (without starting the Hadoop daemons), the permission-denied error mentioned above can still occur. This issue typically stems from unnoticed modifications to the sshd_config file on any DataNode, or on the Secondary NameNode if it is configured on a separate system, anywhere in the cluster. On Ubuntu 14.04 this file is available in /etc/ssh/. The following parameters in sshd_config need to be checked (a quick verification sketch follows the list).
1. The "PubkeyAuthentication" key should be uncommented with the value "yes"
2. The "PasswordAuthentication" key should be uncommented with the value "yes"
3. The "UsePAM" key should be uncommented with the value "no"
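A quick way to verify the current values on each node is to grep the file directly. This is a sketch; the expected output shown in the comments assumes the corrected settings:

# inspect the three relevant keys on a DataNode
$ grep -E "^(PubkeyAuthentication|PasswordAuthentication|UsePAM)" /etc/ssh/sshd_config
# expected after correction:
# PubkeyAuthentication yes
# PasswordAuthentication yes
# UsePAM no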
After verifying the file and making the necessary corrections, restart the ssh service or reboot the systems.
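On Ubuntu 14.04 the restart is a one-liner; a full reboot achieves the same effect:

# restart the OpenSSH server so the edited sshd_config takes effect
$ sudo service ssh restart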
And finally, restart the cluster after a successful format of the NameNode. The error will disappear and all the DataNodes in the cluster will start successfully. We used Ubuntu 14.04 as the OS across the multi-node cluster.
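The closing steps are sketched below. Note that formatting the NameNode wipes existing HDFS metadata, so it is only appropriate on a fresh or expendable cluster, as in the walkthrough above:

# format the NameNode (destructive: clears existing HDFS metadata)
$ hdfs namenode -format
# start the HDFS daemons; each DataNode should now come up without the SSH error
$ sbin/start-dfs.sh
# confirm the expected daemons are running on each node
$ jps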
Gautam can be contacted for real-time POC development and hands-on technical training, as well as to develop or support any Hadoop-related project. Email: gautam@onlineguwahati.com, gautambangalore@gmail.com. Gautam is a consultant as well as an educator. Prior to that, he worked as a Sr. Technical Architect across multiple technologies and business domains in many countries. Currently, he specializes in Big Data processing and analysis, data lake creation, and architecture using HDFS. He is also involved in HDFS maintenance, loading multiple types of data from different sources, and the design and development of real-time use cases on client/customer demand to demonstrate how data can be leveraged for business transformation and profitability. He is passionate about sharing knowledge through blogs, training, seminars, and presentations on various Big Data technologies, methodologies, real-time projects and their architecture/design, procedures for high-volume data ingestion, basic data lake creation, and more.