
Resolving the permission issue between DataNodes and the NameNode to establish passphrase-less Secure Shell (SSH)

Sometimes, when we configure and deploy a multi-node Hadoop cluster or add new DataNodes, an SSH permission issue disrupts communication with the Hadoop daemons. In a fully distributed environment, while the cluster is alive and running, the NameNode (along with core Hadoop services such as YARN) uses SSH very frequently to communicate with the DataNodes; in other words, it monitors the heartbeats of every configured slave or DataNode. The error appears in the terminal console as “Permission denied (publickey,password)” once we start the Hadoop daemons on the NameNode ($ sbin/start-dfs.sh) in that cluster.

Most of the time we suspect that something went wrong in generating the public/private RSA key pair, or in granting the correct permissions afterwards, and we keep repeating those steps to resolve the issue. Yet even when the key pair was generated and the permissions were granted correctly for connecting via SSH,

    $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod 0600 ~/.ssh/authorized_keys

and we are able to ssh from the NameNode terminal to every system designated as a DataNode without a passphrase (without starting the Hadoop daemons), the permission-denied error described above can still occur. This issue typically arises from an unknowing modification of the sshd_config file on any DataNode, or on the Secondary NameNode if it is configured on a separate system, in the cluster. On Ubuntu 14.04 this file is available in /etc/ssh/. Here are the parameters in sshd_config that we need to be careful about:

1. The “PubkeyAuthentication” key should be uncommented, with the value “yes”

2. The “PasswordAuthentication” key should be uncommented, with the value “yes”

3. The “UsePAM” key should be uncommented, with the value “no”
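
Taken together, after editing, the relevant lines in /etc/ssh/sshd_config on each DataNode (and the Secondary NameNode, if it is on a separate system) should read:

```
PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM no
```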

After verifying these settings and making the necessary corrections, restart the ssh service or reboot the systems.

    sudo service network-manager restart
    sudo service ssh restart
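
Before restarting, a quick sanity check can confirm that all three settings are present and uncommented. The sketch below runs against a sample file in /tmp so it is safe to try anywhere; the sample path and the CONF variable are illustrative assumptions, not part of the original setup. On a real node, point CONF at /etc/ssh/sshd_config instead:

```shell
# Illustrative sample file; on a real node set CONF=/etc/ssh/sshd_config
CONF=/tmp/sshd_config.sample
printf '%s\n' 'PubkeyAuthentication yes' \
              'PasswordAuthentication yes' \
              'UsePAM no' > "$CONF"

# Each required setting must appear uncommented at the start of a line
for setting in 'PubkeyAuthentication yes' 'PasswordAuthentication yes' 'UsePAM no'; do
  if grep -q "^${setting}\$" "$CONF"; then
    echo "OK: ${setting}"
  else
    echo "MISSING: ${setting}"
  fi
done
```

With the real file, any “MISSING” line points at the directive that still needs to be uncommented or corrected.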


Finally, restart the cluster after a successful format of the NameNode. The error will disappear and all the DataNodes in the cluster will start successfully. We used Ubuntu 14.04 as the OS on the multi-node cluster.

Written by
Gautam Goswami    

Can be contacted for real-time POC development and hands-on technical training, as well as to develop/support any Hadoop-related project. Email: gautam@onlineguwahati.com, gautambangalore@gmail.com. Gautam is a consultant as well as an educator. Prior to that, he worked as a Sr. Technical Architect across multiple technologies and business domains in many countries. Currently he specializes in Big Data processing and analysis, data lake creation, and architecture using HDFS. He is also involved in HDFS maintenance, loading multiple types of data from different sources, and the design and development of real-time use cases on client/customer demand to demonstrate how data can be leveraged for business transformation and profitability. He is passionate about sharing knowledge through blogs, training, seminars, and presentations on various Big Data technologies, methodologies, and real-time projects, including their architecture/design, procedures for high-volume data ingestion, and basic data lake creation.

