09/01/19 17:32:41 WARN fs.FileSystem: "10.5.57.81:9000" is a deprecated filesystem name. Use "hdfs://10.5.57.81:9000/" instead.
09/01/19 17:32:42 WARN fs.FileSystem: "10.5.57.81:9000" is a deprecated filesystem name. Use "hdfs://10.5.57.81:9000/" instead.
09/01/19 17:32:42 WARN fs.FileSystem: "10.5.57.81:9000" is a deprecated filesystem name. Use "hdfs://10.5.57.81:9000/" instead.
09/01/19 17:32:42 WARN fs.FileSystem: "10.5.57.81:9000" is a deprecated filesystem name. Use "hdfs://10.5.57.81:9000/" instead.
09/01/19 17:32:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/1.test could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1120)
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
at org.apache.hadoop.ipc.Client.call(Client.java:715)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2448)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2331)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1920)
09/01/19 17:32:42 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/input/1.test retries left 4
09/01/19 17:32:43 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/1.test could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1120)
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
at org.apache.hadoop.ipc.Client.call(Client.java:715)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2448)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2331)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1920)
09/01/19 17:32:43 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/input/1.test retries left 3
09/01/19 17:32:44 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/1.test could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1120)
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
at org.apache.hadoop.ipc.Client.call(Client.java:715)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2448)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2331)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1920)
09/01/19 17:32:44 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/input/1.test retries left 2
09/01/19 17:32:45 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/1.test could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1120)
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
at org.apache.hadoop.ipc.Client.call(Client.java:715)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2448)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2331)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1920)
09/01/19 17:32:45 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/input/1.test retries left 1
09/01/19 17:32:48 WARN dfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/1.test could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1120)
at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:888)
at org.apache.hadoop.ipc.Client.call(Client.java:715)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2448)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2331)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1920)
09/01/19 17:32:48 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
put: Could not get block locations. Aborting...
Exception closing file /user/hadoop/input/1.test
java.io.IOException: Could not get block locations. Aborting...
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2151)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1743)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1897)
Hadoop DFSClient warning: the NotReplicatedYetException message
Sometimes, when you try to upload files to HDFS right after allocating a HOD cluster, the DFSClient warns with a NotReplicatedYetException. You will usually see a message like this -
WARN hdfs.DFSClient: NotReplicatedYetException sleeping <filename> retries left 3
08/01/25 16:31:40 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException:java.io.IOException: File <filename> could only be replicated to 0 nodes, instead of 1
This happens when you upload files to a cluster whose DataNodes are still in the process of contacting the NameNode. Waiting a while before uploading new files to HDFS fixes the problem, because it gives enough DataNodes time to start up and register with the NameNode.
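Rather than waiting a fixed amount of time, you can check that the cluster is actually ready before retrying the upload. A minimal sketch, assuming the standard dfsadmin and fs shell commands of the Hadoop release in use and the example file 1.test from the log above:

# Block until the NameNode has left safe mode. Note that on a freshly
# formatted HDFS with no blocks this can return immediately, so also
# verify the DataNode count below.
bin/hadoop dfsadmin -safemode wait

# Print the cluster report; "Datanodes available" should be at least 1
# before a write with replication factor 1 can succeed.
bin/hadoop dfsadmin -report

# Retry the upload once at least one DataNode is reported as available.
bin/hadoop fs -put 1.test /user/hadoop/input/1.test

The -report check is the more reliable of the two here: the error in the log above ("could only be replicated to 0 nodes, instead of 1") means the NameNode knew about zero live DataNodes at the time of the write, which the report makes visible directly.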