Help needed: Hadoop 2.x distributed (HA) setup — neither NameNode will start

 

Scenario: a Hadoop 2.5.2 distributed HA deployment, set up by following day 5 of the Chuanzhi Podcast (传智播客) 7-day video course. After deployment, starting HDFS from the master node fails to bring up the NameNode daemon on either of the two NameNode hosts (on the very first start one of them came up, but since then neither will start). All DataNode daemons start successfully and data can be uploaded, and MapReduce runs normally. Could someone help diagnose this? The NameNode error log follows.
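For context, the HA topology implied by the log — nameservice ns1, NameNode RPC at hadoopcs01:9000 and hadoopcs02:9000, JournalNodes at hadoopcs05/06/07 on port 8485 — would correspond to hdfs-site.xml entries roughly like the sketch below. This is reconstructed from the log output, not copied from the actual config files, and the logical IDs nn1/nn2 are placeholders:

    <!-- hdfs-site.xml (sketch inferred from the log; nn1/nn2 are placeholder IDs) -->
    <property><name>dfs.nameservices</name><value>ns1</value></property>
    <property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
    <property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>hadoopcs01:9000</value></property>
    <property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>hadoopcs02:9000</value></property>
    <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoopcs05:8485;hadoopcs06:8485;hadoopcs07:8485/ns1</value></property>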

2015-07-20 17:35:12,528 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopcs01/192.168.1.201
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.2
STARTUP_MSG:   classpath = /iwisdom/hadoop-2.5.2/etc/hadoop:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hadoop-annotations-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jets3t-0.9.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/paranamer-2.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jsch-0.1.42.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hadoop-auth-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-net-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hamcrest-core-1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/avro-1.7.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-collections-3.2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/activation-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-el-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jsp-api-2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-httpclient-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/junit-4.11.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/xmlenc-0.52.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/mockito-all-1.8.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-digester-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-json-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/httpclient-4.2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/httpcore-4.2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/zookeeper-3.4.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jettison-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-nfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2-tests.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-el-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-nfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-2.5.2-tests.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/activation-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jline-0.9.94.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guice-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-json-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-client-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jettison-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/javax.inject-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-api-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-tests-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-client-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/hadoop-annotations-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/junit-4.11.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2015-07-20 17:35:12,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-07-20 17:35:12,558 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-07-20 17:35:12,986 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-07-20 17:35:13,226 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-20 17:35:13,226 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-07-20 17:35:13,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://ns1
2015-07-20 17:35:13,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use ns1 to access this namenode/service.
2015-07-20 17:35:13,422 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-07-20 17:35:13,594 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2015-07-20 17:35:13,594 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoopcs01:50070
2015-07-20 17:35:13,668 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-07-20 17:35:13,673 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-07-20 17:35:13,687 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-07-20 17:35:13,691 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-07-20 17:35:13,691 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-07-20 17:35:13,692 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-07-20 17:35:13,746 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-07-20 17:35:13,748 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-07-20 17:35:13,773 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-07-20 17:35:13,773 INFO org.mortbay.log: jetty-6.1.26
2015-07-20 17:35:14,006 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2015-07-20 17:35:14,073 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@hadoopcs01:50070
2015-07-20 17:35:14,134 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-20 17:35:14,184 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-07-20 17:35:14,238 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-07-20 17:35:14,238 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-07-20 17:35:14,241 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-07-20 17:35:14,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Jul 20 17:35:14
2015-07-20 17:35:14,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-07-20 17:35:14,246 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,248 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-07-20 17:35:14,248 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: ns1
2015-07-20 17:35:14,317 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2015-07-20 17:35:14,318 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-07-20 17:35:14,587 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-07-20 17:35:14,600 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-07-20 17:35:14,600 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,601 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-07-20 17:35:14,601 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-07-20 17:35:14,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-07-20 17:35:14,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-07-20 17:35:14,619 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-07-20 17:35:14,619 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,620 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-07-20 17:35:14,620 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2015-07-20 17:35:14,627 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2015-07-20 17:35:14,628 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2015-07-20 17:35:14,628 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2015-07-20 17:35:14,661 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /iwisdom/hadoop-2.5.2/tmp/dfs/name/in_use.lock acquired by nodename 8525@hadoopcs01
2015-07-20 17:35:14,935 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2015-07-20 17:35:16,310 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:16,311 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:16,311 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,312 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,313 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,313 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,315 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,331 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,332 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,353 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,333 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,337 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,355 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,151 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:21,334 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,340 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,356 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,153 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7003 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:22,335 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,358 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,519 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9369 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:24,519 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,520 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,521 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,520 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10370 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:25,520 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,524 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,523 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,521 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11371 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:26,598 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,598 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,599 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,603 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.1.205:8485, 192.168.1.206:8485, 192.168.1.207:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.1.205:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs05:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.1.206:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs06:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.1.207:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs07:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:260)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1430)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1450)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:636)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2015-07-20 17:35:26,605 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2015-07-20 17:35:26,655 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-07-20 17:35:26,835 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-07-20 17:35:26,835 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /iwisdom/hadoop-2.5.2/tmp/dfs/name/current/fsimage_0000000000000000000
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 12214 msecs
2015-07-20 17:35:27,207 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to hadoopcs01:9000
2015-07-20 17:35:27,232 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-07-20 17:35:27,250 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-07-20 17:35:27,295 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 13 secs
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-07-20 17:35:27,380 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-07-20 17:35:27,382 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2015-07-20 17:35:27,384 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: hadoopcs01/192.168.1.201:9000
2015-07-20 17:35:27,384 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for standby state
2015-07-20 17:35:27,389 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on active node at hadoopcs02/192.168.1.202:9000 every 120 seconds.
2015-07-20 17:35:27,395 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
Checkpointing active NN at http://hadoopcs02:50070
Serving checkpoints at http://hadoopcs01:50070
2015-07-20 17:35:28,397 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:28,398 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:28,400 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:29,401 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:29,401 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,403 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,403 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,659 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@14a469d expecting start txid #1
2015-07-20 17:35:30,661 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826
2015-07-20 17:35:30,663 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:30,663 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:31,623 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826 of size 1048576 edits # 1 loaded in 0 seconds
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@161f335 expecting start txid #2
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:32,519 FATAL org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unknown error encountered while tailing edits. Shutting down standby NN.
java.io.IOException: There appears to be a gap in the edit log.  We expected txid 2, but got txid 36.
        at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:209)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:797)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:230)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2015-07-20 17:35:32,534 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-07-20 17:35:32,545 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopcs01/192.168.1.201
************************************************************/
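From my reading of the log there seem to be two distinct problems. First, during startup the NameNode cannot reach any of the three JournalNodes on port 8485 (Connection refused from hadoopcs05/06/07, so the 2/3 quorum fails and selectInputStreams gives up). Second, once edits do become readable, the standby tailer aborts on a gap in the edit log (expected txid 2, got txid 36) and shuts the NameNode down. A quick sanity check on the JournalNode hosts might look like the sketch below — hostnames and paths are taken from the log, and the commands are an untested outline, not output from my cluster:

    # Run on each of hadoopcs05, hadoopcs06, hadoopcs07:
    jps | grep JournalNode          # is a JournalNode process running at all?
    netstat -nltp | grep 8485       # is anything listening on the QJM port?

    # If the JournalNodes are down, start them before starting the NameNodes:
    /iwisdom/hadoop-2.5.2/sbin/hadoop-daemon.sh start journalnode

If the JournalNodes come up but the txid gap remains, the journal edits and the local name directory may simply be out of sync (for example, from reformatting one NameNode after the journals already held edits); in that case re-initializing the standby with hdfs namenode -bootstrapStandby, after backing up the metadata, might be the direction to investigate.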

 

