
Configuration Issues Encountered When Installing CDH5 Hadoop 2.2.0 (Part 2)

Hadoop version: hadoop-2.2.0-cdh5.0.0-beta-1

Today, after installing Hadoop and starting the NameNode, running the command hadoop fs -put /tmp/test.dat /test produced the following error:
13/11/05 23:40:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/11/05 23:40:37 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/test.dat._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2478)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
put: File /test/test.dat._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
13/11/05 23:40:37 ERROR hdfs.DFSClient: Failed to close file /test/test.dat._COPYING_
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/test.dat._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2478)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)


The message essentially says there are 0 DataNodes, i.e. no DataNode has registered with the NameNode. But when I ran jps on the DN hosts, the DataNode process was running; checking again with hadoop dfsadmin -report confirmed that the NameNode really did see 0 DNs...
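For reference, the two checks above look roughly like this (a minimal sketch; jps runs on each DataNode host, the report command on the NameNode host):

# On each DN host: confirm the DataNode JVM is actually running
jps | grep DataNode

# On the NN host: list the DataNodes the NameNode has registered
# (with this problem it reports zero live DataNodes)
hadoop dfsadmin -report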

At that point, the only thing left to do was check the DN logs:
2013-11-06 16:42:04,222 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:04,223 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master1/10.95.3.100:8020
2013-11-06 16:42:10,226 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:11,228 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:12,230 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:13,232 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:14,234 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:15,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:16,241 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:17,243 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-11-06 16:42:18,245 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master1/10.95.3.100:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)


The logs confirmed that the DN could never connect to the NN. Suspecting a firewall issue, I disabled the firewall on every node;
after that, the error above disappeared and HDFS worked normally.
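For completeness, a sketch of the diagnosis and fix, assuming RHEL/CentOS-style nodes with iptables (the distro is an assumption; the host and port come from the DN log above):

# From a DN host, verify the NameNode RPC port is reachable
telnet master1 8020

# If the connection is refused or times out, stop the firewall
# on every node and keep it from starting at boot
service iptables stop
chkconfig iptables off

In a production cluster it is safer to open just the required Hadoop ports (typically 8020 for NameNode RPC and 50010/50020/50075 for DataNodes in this release line) rather than disabling the firewall outright.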