Hadoop put throws the exception "could only be replicated to 0 nodes, instead of 1"
Blog category: HBase
$ bin/hadoop fs -put /home/lighttpd/hadoop-0.20.2/hadoop-0.20.2-tools.jar f.jar
WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
10/03/25 08:13:25 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/03/25 08:13:25 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/lighttpd/f.jar" - Aborting...
put: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
10/03/25 08:13:25 ERROR hdfs.DFSClient: Exception closing file /user/lighttpd/f.jar : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
OK. Pseudo-distributed mode on a single machine works fine; it is the real distributed setup that has the problem, and it looks like the datanodes are at fault. So I ran the following from one of the slave machines:
$ bin/hadoop fs -put /home/lighttpd/hadoop-0.20.2/hadoop-0.20.2-tools.jar c.jar
10/03/24 18:53:10 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 0 time(s).
10/03/24 18:53:11 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 1 time(s).
10/03/24 18:53:12 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 2 time(s).
10/03/24 18:53:13 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 3 time(s).
10/03/24 18:53:14 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 4 time(s).
10/03/24 18:53:15 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 5 time(s).
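Before digging into Hadoop itself, it is worth checking from the slave whether the namenode's RPC port is reachable at all, for example (hostname and port taken from the log above):

$ telnet home0.hadoop 9000

If the connection hangs or is refused even though the namenode process is running, something between the machines (DNS, routing, or a firewall) is blocking the port.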
At this point you can probably guess the problem: why would the connection fail? I remembered hitting a port issue during some socket programming a while back, and it suddenly dawned on me: the firewall!
So add every port your nodes use for communication to the firewall rules, or, if this is just a test setup, turn the firewall off entirely.
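As a rough sketch of both options (assuming iptables on the nodes; the exact ports depend on your configuration: 9000 here comes from fs.default.name in this setup, and 50010 is the default datanode data-transfer port):

$ iptables -I INPUT -p tcp --dport 9000 -j ACCEPT    # namenode RPC
$ iptables -I INPUT -p tcp --dport 50010 -j ACCEPT   # datanode data transfer
# or, on a throwaway test cluster, simply stop the firewall:
$ service iptables stop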
Then restart Hadoop:
$ bin/stop-all.sh
$ bin/start-all.sh
and everything is OK.
Whenever ports are involved, keep firewalls in mind. (Here it was the local firewall on a LAN machine; some companies also block certain ports on the network itself. Just make sure the ports the machines use to talk to each other stay open.)
Comments
There is an official answer for this, haha.
2010-11-11 17:33:59,406 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2010-11-11 17:33:59,420 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2010-11-11 17:34:00,581 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
2010-11-11 17:34:01,582 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
2010-11-11 17:34:02,583 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
2010-11-11 17:34:03,583 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
2010-11-11 17:34:04,584 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
2010-11-11 17:34:05,585 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
But port 9000 could be reached via telnet from other machines at first, and then later it couldn't.
No idea what happened there.
On Ubuntu 10, the default firewall is ufw.
6.1.1 Is the firewall still on? Check it. I had indeed forgotten to turn off the firewall: I recently switched machines, from redhat in a VM to ubuntu 8.04, so this is very likely related. After shutting down iptables there were far fewer error messages, but some errors remained, as follows:
6.1.2 Is there disk space left?
df -hl
6.1.3 Is the number of datanodes normal?
Check the datanode processes with jps.
6.1.4 Is HDFS in safe mode?
hadoop dfsadmin -safemode leave
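Turning that checklist into concrete commands (a quick sketch using the Hadoop 0.20-era CLI):

$ df -hl                               # 6.1.2: free disk space on each node
$ jps                                  # 6.1.3: every slave should show a DataNode process
$ bin/hadoop dfsadmin -report          # live/dead datanodes as seen by the namenode
$ bin/hadoop dfsadmin -safemode get    # 6.1.4: check whether HDFS is in safe mode
$ bin/hadoop dfsadmin -safemode leave  # and leave it if so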