
Hadoop put fails with "could only be replicated to 0 nodes, instead of 1"

 $ bin/hadoop fs -put /home/lighttpd/hadoop-0.20.2/hadoop-0.20.2-tools.jar  e.jar
 

WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

10/03/25 08:13:25 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
10/03/25 08:13:25 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/lighttpd/f.jar" - Aborting...
put: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
10/03/25 08:13:25 ERROR hdfs.DFSClient: Exception closing file /user/lighttpd/f.jar : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lighttpd/f.jar could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

 

 

OK, so pseudo-distributed mode on a single machine works fine, but the real distributed setup fails. This looked like a DataNode problem, so I ran the same command from a slave machine:

$ bin/hadoop fs -put /home/lighttpd/hadoop-0.20.2/hadoop-0.20.2-tools.jar c.jar

 

10/03/24 18:53:10 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 0 time(s).
10/03/24 18:53:11 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 1 time(s).
10/03/24 18:53:12 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 2 time(s).
10/03/24 18:53:13 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 3 time(s).
10/03/24 18:53:14 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 4 time(s).
10/03/24 18:53:15 INFO ipc.Client: Retrying connect to server: home0.hadoop/192.168.0.129:9000. Already tried 5 time(s).

 

You can probably guess the problem from this: why would the connection fail at all?

I remembered hitting a port problem during some socket programming a while back, and it suddenly dawned on me: the firewall.

 

So add the ports the cluster needs to your firewall rules, or, if you are just testing, turn the firewall off entirely.
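As a sketch, opening the ports might look like this. This assumes an iptables-based firewall; 9000 is the NameNode RPC port from the log above, and the other port numbers are the usual 0.20.x defaults, which may differ in your configuration:

```shell
# Allow the cluster's ports through iptables (run as root on each node).
iptables -A INPUT -p tcp --dport 9000  -j ACCEPT   # NameNode RPC (per the log above)
iptables -A INPUT -p tcp --dport 9001  -j ACCEPT   # JobTracker RPC
iptables -A INPUT -p tcp --dport 50010 -j ACCEPT   # DataNode data transfer
iptables -A INPUT -p tcp --dport 50070 -j ACCEPT   # NameNode web UI

# Or, for a quick test only, stop the firewall entirely:
service iptables stop    # RedHat/CentOS
# ufw disable            # Ubuntu
```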

 

Then restart Hadoop:

$ bin/stop-all.sh
$ bin/start-all.sh

and everything works again.

 

Whenever ports are involved, pay attention to the firewall. Here it was the local firewall on a LAN, but some companies also block certain ports on their networks, so just make sure the communication ports between your machines are open.
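To confirm whether a port is actually reachable from another machine before digging into Hadoop itself, a small check like the following can help. This is a sketch: port_open is a hypothetical helper built on bash's /dev/tcp pseudo-device, and the hostname matches the log above:

```shell
#!/bin/bash
# port_open HOST PORT: succeed if a TCP connection to HOST:PORT
# can be opened within 2 seconds (uses bash's /dev/tcp redirection).
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: verify the NameNode RPC port from a slave before blaming Hadoop.
if port_open home0.hadoop 9000; then
  echo "9000 reachable"
else
  echo "9000 blocked; check iptables/ufw on the NameNode host"
fi
```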

Comments:
#4 store88 2010-11-11
http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A1

The official wiki has an answer, haha.
#3 store88 2010-11-11
2010-11-11 17:33:59,406 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2010-11-11 17:33:59,420 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2010-11-11 17:34:00,581 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
2010-11-11 17:34:01,582 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
2010-11-11 17:34:02,583 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
2010-11-11 17:34:03,583 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
2010-11-11 17:34:04,584 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
2010-11-11 17:34:05,585 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).

Port 9000 could be reached by telnet from other machines at first, but later it couldn't.
Not sure what's going on.
Ubuntu 10's default firewall is ufw.
#2 iammonster 2010-09-03
kenl6 wrote:
How was the first problem, "could only be replicated to 0 nodes, instead of 1", solved?


6.1.1 Check whether the firewall was left on. Sure enough, I had forgotten to disable it: I had switched machines (previously RedHat in a VM, now Ubuntu 8.04), so this was very likely the cause. After stopping iptables there were far fewer errors, but some remained.
6.1.2 Check whether there is free disk space:
df -hl
6.1.3 Check that the number of DataNodes is normal:
run jps on each DataNode to verify the process is up
6.1.4 Check whether HDFS is in safe mode:
hadoop dfsadmin -safemode leave
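The checks above can be sketched as one quick script. This is a hypothetical helper, not part of Hadoop: iptables, jps, and hadoop are only invoked if they are present on the machine, and each check only reports, never changes anything:

```shell
#!/bin/bash
# hdfs_checklist: run the four diagnostic checks from the comment above.
hdfs_checklist() {
  echo "== 6.1.1 firewall rules (empty output means none visible) =="
  command -v iptables >/dev/null && iptables -L -n 2>/dev/null | head -n 5
  echo "== 6.1.2 free disk space =="
  df -hl
  echo "== 6.1.3 Hadoop daemons on this node =="
  command -v jps >/dev/null && jps
  echo "== 6.1.4 safe mode status =="
  command -v hadoop >/dev/null && hadoop dfsadmin -safemode get
  echo "checklist done"
}

hdfs_checklist
```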
#1 kenl6 2010-09-01
How was the first problem, "could only be replicated to 0 nodes, instead of 1", solved?
