1. Start Hadoop
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
- starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
- localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
- localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
- starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
- localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out
2. Accessing localhost:50070 fails, which means the NameNode did not start.
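A quick way to confirm which daemons actually came up (not shown in the original run, just a common check) is jps, which lists the running JVM processes:
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ jps
If DataNode, SecondaryNameNode, JobTracker and TaskTracker appear in the list but NameNode does not, the web UI on port 50070 will not respond, which matches what is seen here.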
3. Check the NameNode startup log
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ cd ../logs
- ubuntu@ubuntu:~/hadoop-1.0.4/logs$ view hadoop-ubuntu-namenode-ubuntu.log
- /************************************************************
- STARTUP_MSG: Starting NameNode
- STARTUP_MSG: host = ubuntu/127.0.1.1
- STARTUP_MSG: args = []
- STARTUP_MSG: version = 1.0.4
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
- ************************************************************/
- 2013-01-24 07:05:46,936 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
- 2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
- 2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
- 2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
- 2013-01-24 07:05:47,053 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
- 2013-01-24 07:05:47,058 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
- 2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
- 2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
- 2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
- 2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
- 2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
- 2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
- 2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
- 2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
- 2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
- 2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
- 2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
- 2013-01-24 07:05:47,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
- 2013-01-24 07:05:47,169 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
- 2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
- java.io.IOException: NameNode is not formatted.
- at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
- 2013-01-24 07:05:47,175 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
- at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
- at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
其中"2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted."一行显示,namenode未初始化。
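Rather than paging through the whole file, the failure can also be located directly by grepping the log for ERROR entries (a convenience, not part of the original post):
- ubuntu@ubuntu:~/hadoop-1.0.4/logs$ grep -n ERROR hadoop-ubuntu-namenode-ubuntu.log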
4. Format the NameNode. The command prompted whether to re-format the existing filesystem, and I answered y. Note in the output below that this actually aborted the format: the Hadoop 1.x prompt only accepts an uppercase Y, so the lowercase y cancelled it.
- ubuntu@ubuntu:~/hadoop-1.0.4$ bin/hadoop namenode -format
- 13/01/24 07:05:08 INFO namenode.NameNode: STARTUP_MSG:
- /************************************************************
- STARTUP_MSG: Starting NameNode
- STARTUP_MSG: host = ubuntu/127.0.1.1
- STARTUP_MSG: args = [-format]
- STARTUP_MSG: version = 1.0.4
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
- ************************************************************/
- Re-format filesystem in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name ? (Y or N) y
- Format aborted in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name
- 13/01/24 07:05:12 INFO namenode.NameNode: SHUTDOWN_MSG:
- /************************************************************
- SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
- ************************************************************/
5. After this (aborted) format attempt, I restarted Hadoop, but localhost:50070 still could not be reached, and the NameNode startup log again reported that the NameNode was not formatted.
6. So I deleted everything under the tmp directory configured in core-site.xml, stopped all Hadoop services, formatted the NameNode again (this time no re-format prompt appeared, since the old directory was gone), and restarted Hadoop. Accessing localhost:50070 now succeeded.
- ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ rm -rf *
- ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ cd ../bin
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./stop-all.sh
- stopping jobtracker
- localhost: stopping tasktracker
- no namenode to stop
- localhost: stopping datanode
- localhost: stopping secondarynamenode
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ hadoop namenode -format
- 13/01/24 07:10:45 INFO namenode.NameNode: STARTUP_MSG:
- /************************************************************
- STARTUP_MSG: Starting NameNode
- STARTUP_MSG: host = ubuntu/127.0.1.1
- STARTUP_MSG: args = [-format]
- STARTUP_MSG: version = 1.0.4
- STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
- ************************************************************/
- 13/01/24 07:10:46 INFO util.GSet: VM type = 32-bit
- 13/01/24 07:10:46 INFO util.GSet: 2% max memory = 19.33375 MB
- 13/01/24 07:10:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
- 13/01/24 07:10:46 INFO util.GSet: recommended=4194304, actual=4194304
- 13/01/24 07:10:46 INFO namenode.FSNamesystem: fsOwner=ubuntu
- 13/01/24 07:10:46 INFO namenode.FSNamesystem: supergroup=supergroup
- 13/01/24 07:10:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
- 13/01/24 07:10:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
- 13/01/24 07:10:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
- 13/01/24 07:10:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
- 13/01/24 07:10:46 INFO common.Storage: Image file of size 112 saved in 0 seconds.
- 13/01/24 07:10:46 INFO common.Storage: Storage directory /home/ubuntu/hadoop-1.0.4/tmp/dfs/name has been successfully formatted.
- 13/01/24 07:10:46 INFO namenode.NameNode: SHUTDOWN_MSG:
- /************************************************************
- SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
- ************************************************************/
- ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
- starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
- localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
- localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
- starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
- localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out
7. The root cause: the tmp directory in the configuration file had been changed, but HDFS was never formatted afterwards, so starting Hadoop produced the "NameNode is not formatted" error.
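For reference, this is the kind of setting involved. The sketch below uses the standard Hadoop 1.x property names; the hadoop.tmp.dir value is inferred from the paths in the logs (the name directory /home/ubuntu/hadoop-1.0.4/tmp/dfs/name is the default ${hadoop.tmp.dir}/dfs/name), and the fs.default.name value is an assumption, not copied from the original configuration:
- <?xml version="1.0"?>
- <!-- core-site.xml (sketch): dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, so changing hadoop.tmp.dir requires re-formatting the NameNode -->
- <configuration>
-   <property>
-     <name>hadoop.tmp.dir</name>
-     <value>/home/ubuntu/hadoop-1.0.4/tmp</value>
-   </property>
-   <property>
-     <!-- NameNode URI for a single-node setup; assumed value, not from the original post -->
-     <name>fs.default.name</name>
-     <value>hdfs://localhost:9000</value>
-   </property>
- </configuration>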