[hadoop-1.0]: When starting Hadoop, the log shows: java.io.IOException: NameNode is not formatted.

 

1. Start Hadoop.

    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
    starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
    localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
    localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
    starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
    localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out

2. Accessing localhost:50070 failed, which means the NameNode did not start.
3. Check the NameNode startup log:

    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ cd ../logs
    ubuntu@ubuntu:~/hadoop-1.0.4/logs$ view hadoop-ubuntu-namenode-ubuntu.log
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    2013-01-24 07:05:46,936 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
    2013-01-24 07:05:47,053 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
    2013-01-24 07:05:47,058 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
    2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
    2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
    2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    2013-01-24 07:05:47,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
    2013-01-24 07:05:47,169 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
    java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
    2013-01-24 07:05:47,175 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

其中"2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted."一行显示,namenode未初始化。
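When hunting for a failure like this, grepping the daemon log for ERROR entries is quicker than reading it top to bottom. A minimal sketch: the helper reads a log on stdin, and the sample lines are taken from the log above (`show_errors` is just an illustrative name, not a Hadoop command):

```shell
# Print ERROR-level lines plus the following line, which usually
# carries the exception message, from a Hadoop daemon log on stdin.
show_errors() {
    grep -A 1 ' ERROR '
}

# Example against a fragment of the NameNode log shown above:
printf '%s\n' \
    '2013-01-24 07:05:47,169 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times' \
    '2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.' \
    'java.io.IOException: NameNode is not formatted.' \
    | show_errors
```

Against the real file you would run something like `show_errors < logs/hadoop-ubuntu-namenode-ubuntu.log`.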
4. Format the NameNode. The command asked whether to re-format the filesystem, and a lowercase y was entered. As the output below shows, this aborted the format: the prompt is case-sensitive and accepts only an uppercase Y.

    ubuntu@ubuntu:~/hadoop-1.0.4$ bin/hadoop namenode -format
    13/01/24 07:05:08 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    Re-format filesystem in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name ? (Y or N) y
    Format aborted in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name
    13/01/24 07:05:12 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/

5. Restarting Hadoop after this, localhost:50070 still could not be reached, and the NameNode log still reported that the NameNode was not formatted. (That is consistent with the "Format aborted" message above: the filesystem was never actually re-formatted.)
6. So: delete everything under the tmp directory configured in core-site.xml, stop all Hadoop services, re-format the NameNode, and start Hadoop again. This time localhost:50070 was reachable.

 

    ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ rm -rf *
    ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ cd ../bin
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./stop-all.sh
    stopping jobtracker
    localhost: stopping tasktracker
    no namenode to stop
    localhost: stopping datanode
    localhost: stopping secondarynamenode
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ hadoop namenode -format
    13/01/24 07:10:45 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    13/01/24 07:10:46 INFO util.GSet: VM type       = 32-bit
    13/01/24 07:10:46 INFO util.GSet: 2% max memory = 19.33375 MB
    13/01/24 07:10:46 INFO util.GSet: capacity      = 2^22 = 4194304 entries
    13/01/24 07:10:46 INFO util.GSet: recommended=4194304, actual=4194304
    13/01/24 07:10:46 INFO namenode.FSNamesystem: fsOwner=ubuntu
    13/01/24 07:10:46 INFO namenode.FSNamesystem: supergroup=supergroup
    13/01/24 07:10:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
    13/01/24 07:10:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    13/01/24 07:10:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    13/01/24 07:10:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
    13/01/24 07:10:46 INFO common.Storage: Image file of size 112 saved in 0 seconds.
    13/01/24 07:10:46 INFO common.Storage: Storage directory /home/ubuntu/hadoop-1.0.4/tmp/dfs/name has been successfully formatted.
    13/01/24 07:10:46 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
    starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
    localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
    localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
    starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
    localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out

7. The root cause: the tmp directory in the configuration file had been changed without formatting HDFS afterward, so starting Hadoop failed with the "NameNode is not formatted" error.
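For reference, the tmp directory in question is the one set by hadoop.tmp.dir in core-site.xml. A minimal sketch of a pseudo-distributed Hadoop 1.x core-site.xml using the path from this walkthrough; the fs.default.name value is a typical assumption, not taken from this setup:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- NameNode RPC address (typical pseudo-distributed value; an assumption here) -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- Base directory for HDFS data; changing it requires re-formatting HDFS -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/ubuntu/hadoop-1.0.4/tmp</value>
  </property>
</configuration>
```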
