Hadoop: jps is missing processes

My Hadoop cluster is somehow missing processes in the jps output — what is going on?
2024-12-26 00:51:05
Recommended answers (1)
Answer 1:

1. Hostname does not match the configuration files

Startup reports success, but jps does not show the expected five processes:

  hadoop@node1:~/hadoop$ bin/start-all.sh
  This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
  starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
  node3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
  node2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
  node1: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node1.out
  starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
  node3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
  node2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out

  hadoop@node1:~/hadoop$ jps
  16993 SecondaryNameNode
  17210 Jps
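On a healthy classic (pre-YARN) master, jps should also list NameNode and JobTracker alongside SecondaryNameNode. A quick sketch for spotting which master-side daemons are absent (the daemon names assume the pre-YARN layout used in this post):

```shell
# Report the expected master daemons that do not appear in jps.
# jps ships with the JDK; stderr is silenced in case it is not on PATH.
for d in NameNode SecondaryNameNode JobTracker; do
  jps 2>/dev/null | grep -qw "$d" || echo "missing: $d"
done
```

In the session above this would flag NameNode and JobTracker, which points straight at their logs.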

Configuration and logs:

  hadoop@node1:~/hadoop/conf$ cat core-site.xml
  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hadoop/hadoop/tmp</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://masternode:54310</value>
    </property>
  </configuration>
  hadoop@node1:~/hadoop/conf$ cat hdfs-site.xml
  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
  </configuration>
  hadoop@node1:~/hadoop/conf$ cat mapred-site.xml
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>masternode:54311</value>
    </property>
  </configuration>
The jobtracker log shows:

  2006-03-11 23:54:44,348 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to masternode/122.72.28.136:54311 : Cannot assign requested address
          at org.apache.hadoop.ipc.Server.bind(Server.java:218)
          at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
          at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
          at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
          at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
          at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
          at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
          at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
          at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
          at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
          at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
          at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
          at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
  Caused by: java.net.BindException: Cannot assign requested address
          at sun.nio.ch.Net.bind(Native Method)
          at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
          at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
          at org.apache.hadoop.ipc.Server.bind(Server.java:216)
          ... 12 more
  2006-03-11 23:54:44,353 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down JobTracker at node1/192.168.10.237
  ************************************************************/

The namenode log shows:

  2006-03-11 23:54:37,009 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to masternode/122.72.28.136:54310 : Cannot assign requested address
          at org.apache.hadoop.ipc.Server.bind(Server.java:218)
          at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
          at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
          at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
          at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
          at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
          at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
          at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
          at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
  Caused by: java.net.BindException: Cannot assign requested address
          at sun.nio.ch.Net.bind(Native Method)
          at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
          at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
          at org.apache.hadoop.ipc.Server.bind(Server.java:216)
          ... 12 more
  2006-03-11 23:54:37,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.237
  ************************************************************/

The hosts files:

  hadoop@node1:~/hadoop/conf$ cat masters
  node1
  hadoop@node1:~/hadoop/conf$ cat slaves
  node2
  node3
  hadoop@node1:~/hadoop/conf$ cat /etc/hosts
  127.0.0.1       localhost
  192.168.10.237  node1.node1     node1
  192.168.10.238  node2
  192.168.10.239  node3

Cause: the hostname does not match the configuration files. fs.default.name and mapred.job.tracker both point at masternode, but /etc/hosts has no entry for that name, so it resolves through DNS to 122.72.28.136 — an address not bound to any local interface. That is exactly why the NameNode and JobTracker die with "Cannot assign requested address" while the daemons that do not bind to masternode survive.
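A mismatch like this can be confirmed without digging through the logs. A minimal sketch (the name masternode comes from the configs above; getent and ip are standard Linux tools):

```shell
# 1. What the configs expect vs. what this machine calls itself.
expected=masternode
echo "this host is: $(hostname)"

# 2. How the expected name resolves here. A public address from DNS
#    (e.g. 122.72.28.136) instead of a LAN entry in /etc/hosts is the
#    classic cause of "Cannot assign requested address".
getent hosts "$expected" || echo "$expected has no local /etc/hosts entry"

# 3. Addresses actually bound to local interfaces; the resolved address
#    must appear in this list for the daemons to be able to bind.
ip -4 addr show | grep 'inet '
```

If step 2 prints an address that never appears in step 3, the BindException above is guaranteed.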


Fix:


1. Change the machine's hostname to masternode (for the exact steps, see a guide on changing the hostname in Ubuntu), and update conf/masters to match.

After the change:

  hadoop@node1:~/hadoop/conf$ cat masters
  masternode
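However the rename is done (hostnamectl on systemd releases, editing /etc/hostname on older ones), it is worth checking that the new name resolves to a locally bound address before restarting the daemons. A hedged sketch, reusing the names from this post:

```shell
#!/bin/sh
# Verify that a hostname resolves to an address bound on this machine
# before restarting Hadoop. "masternode" is the name from the post above.
name=masternode
addr=$(getent hosts "$name" | awk '{print $1; exit}')

if [ -z "$addr" ]; then
  echo "$name does not resolve; add '192.168.x.x  $name' to /etc/hosts"
elif ip -4 addr show | grep -q "inet $addr/"; then
  echo "$name -> $addr is bound locally; safe to run bin/start-all.sh"
else
  echo "$name -> $addr is NOT a local address; daemons would fail to bind"
fi
```

Only when the check reports a locally bound address will the NameNode and JobTracker be able to listen on ports 54310/54311.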


---------------------------------------------------------------------------------------------------------------------------


2. Wrong hostname






Symptom:

start-all.sh reports no errors, but no services actually start on either the master or the slaves.