This is a set of steps I followed on Ubuntu when facing exactly the same problem, but with 2.7.1; the steps shouldn't differ much for previous and future versions (I'd believe).

1) Format of my /etc/hosts file. To find your system IP, type this in a terminal:

ifconfig

or type this:

ifdata -pa eth0

(find your IP address here). Your final /etc/hosts file should look like:

127.0.0.1        localhost
<your-system-IP> marta-komputer

# The following lines are desirable for IPv6 capable hosts

A complete example hosts file is sketched after this answer.

2) *.xml configuration files (displaying contents inside the <configuration> tag). This is a good opportunity to verify that you are indeed using the configuration you think you are; a sample single-node configuration is sketched below.

3) In the directory where the *.xml files reside, view the contents of the hadoop-env.sh script and make sure $HADOOP_CONF_DIR is pointing at the right directory (a quick check is sketched below).

4) NameNode binds port 50070 on my standard distribution, and DataNode binds ports 50010, 50020 and 50075. Run sudo lsof -i to be certain no other services are using them for some reason (a targeted check is sketched below).

5) Format if necessary: at this point, if you have changed the directory values in the *.xml files, you should reformat the NameNode with hdfs namenode -format. If not, remove the temporary files already present in the tmp directory you are using (default /tmp/); the exact commands are sketched below.

6) Start Nodes and Yarn: in /sbin/, start the name and data nodes using the start-dfs.sh script and YARN with start-yarn.sh, then evaluate the output of jps (the full sequence is sketched below). At this point, if NameNode, DataNode, NodeManager and ResourceManager are all running, you should be set to go! Your jps command should list these processes: NameNode, DataNode, NodeManager and ResourceManager. If it does not list all of them, check the respective logs for errors; if any of these hasn't started, share the log output for us to re-evaluate.

If the problem persists, it might be due to a permission issue. Create a directory and change permissions for the namenode and datanode:

sudo chown -R hduser:hadoop /usr/local/hdfs/namenode

A fuller version that creates both directories first is sketched at the end.
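Here is a minimal sketch of the complete /etc/hosts file for step 1. The 192.168.1.10 address is a placeholder assumption; substitute whatever ifconfig (or ifdata -pa eth0) reports for your machine. The IPv6 entries are the standard Ubuntu defaults:

127.0.0.1       localhost
192.168.1.10    marta-komputer

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters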
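For step 2, a minimal single-node configuration might look like the sketch below. The hdfs://localhost:9000 address, the replication factor of 1 and the /usr/local/hdfs paths are assumptions (the paths simply mirror the chown command in the last step); compare this against your own files rather than copying it blindly.

core-site.xml:

<configuration>
  <!-- where clients find the NameNode; hostname and port are assumptions -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <!-- single-node setup, so no replication -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- assumed paths matching the chown command in the final step -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hdfs/datanode</value>
  </property>
</configuration>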
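For step 3, a quick way to cross-check the variable, assuming Hadoop lives under /usr/local/hadoop (adjust to your installation path):

echo $HADOOP_CONF_DIR
grep HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop/hadoop-env.sh

If the echo prints nothing, the variable is not set in your shell, and whatever hadoop-env.sh exports decides where the configuration is read from.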
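For step 4, sudo lsof -i can be narrowed down to the individual ports; this sketch uses the default 2.x port numbers named above:

sudo lsof -i :50070    # NameNode web UI
sudo lsof -i :50010    # DataNode data transfer
sudo lsof -i :50020    # DataNode IPC
sudo lsof -i :50075    # DataNode web UI

If a port is taken, lsof prints the owning command and PID, which tells you what to stop before retrying.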
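For step 5, the commands would look like this. The /tmp/hadoop-* pattern assumes the default hadoop.tmp.dir of /tmp/hadoop-<username>, and keep in mind that formatting wipes the HDFS metadata, so only do it on a fresh or disposable setup:

rm -rf /tmp/hadoop-*      # clear stale temporary files (assumes the default tmp dir)
hdfs namenode -format     # only if you changed the directory values in the *.xml files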
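For step 6, the full sequence, assuming $HADOOP_HOME points at your installation root:

cd $HADOOP_HOME/sbin
./start-dfs.sh     # starts NameNode, DataNode and SecondaryNameNode
./start-yarn.sh    # starts ResourceManager and NodeManager
jps                # should list all of the above, plus Jps itself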
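For the final permission fix, a fuller sketch that creates both directories before changing ownership. The hduser:hadoop owner comes from the chown command above; the /usr/local/hdfs/datanode path is an assumption mirroring the namenode path, so adapt both to your own user, group and directories:

sudo mkdir -p /usr/local/hdfs/namenode /usr/local/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hdfs/namenode /usr/local/hdfs/datanode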