After configuring Hadoop, running start-dfs.sh fails to start HDFS

2024-12-25 18:53:23
Recommended answers (2)
Answer 1:

step1:
Check hdfs-site.xml for the paths where the NameNode metadata and the DataNode metadata are stored:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/casliyang/hadoop2/hadoop-2.2.0/metadata/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/casliyang/hadoop2/hadoop-2.2.0/metadata/data</value>
</property>
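If you would rather not read the XML by hand, the same values can be queried with the hdfs getconf utility shipped with Hadoop 2.x (a sketch; it prints the effective value of a configuration key):

hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir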


step2:
Open the current/VERSION file under the NameNode directory:
casliyang@singlehadoop:~/hadoop2/hadoop-2.2.0/metadata/name/current$ cat VERSION
#Thu May 15 14:46:39 CST 2014
namespaceID=1252551786
clusterID=CID-2cc69ada-3730-4c79-8384-c725fa85859a
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2020521428-192.168.0.166-1397704506565
layoutVersion=-47

Open the current/VERSION file under the DataNode directory:
casliyang@singlehadoop:~/hadoop2/hadoop-2.2.0/metadata/data/current$ cat VERSION
#Thu Apr 17 11:15:57 CST 2014
storageID=DS-432251277-192.168.0.166-50010-1397704557407
clusterID=CID-3e649eb6-cdb3-4a0c-aad8-5948c66bf282
cTime=0
storageType=DATA_NODE
layoutVersion=-47
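A quick way to compare just the two IDs side by side, using the paths configured above:

grep clusterID ~/hadoop2/hadoop-2.2.0/metadata/name/current/VERSION \
               ~/hadoop2/hadoop-2.2.0/metadata/data/current/VERSION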

We can see that the clusterID in the NameNode metadata no longer matches the clusterID in the DataNode metadata, which lines up exactly with the startup error (typically reported as "Incompatible clusterIDs" in the DataNode log).
Next, change the DataNode's clusterID to match the NameNode's clusterID and restart the cluster.
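A minimal sketch of that fix, assuming the metadata paths from step1 (substitute your own dfs.datanode.data.dir path and your own NameNode clusterID):

# Overwrite the DataNode's clusterID with the NameNode's value shown above
sed -i 's/^clusterID=.*/clusterID=CID-2cc69ada-3730-4c79-8384-c725fa85859a/' \
  ~/hadoop2/hadoop-2.2.0/metadata/data/current/VERSION

# Restart HDFS so the DataNode re-registers with the NameNode
stop-dfs.sh
start-dfs.sh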

Answer 2:

First run stop-all.sh.
If the prompt "Are you sure you want to continue connecting (yes/no)?" appears,
type yes,
then run start-dfs.sh again and it should work.
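That prompt comes from SSH connecting to a worker host for the first time. As a sketch, you can pre-accept the host keys so the start scripts never block on the prompt (the hostnames here are assumptions; substitute the hosts in your own slaves file):

ssh-keyscan -H localhost 0.0.0.0 >> ~/.ssh/known_hosts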