
Exceptions in HDFS

These are exceptions collected from HDFS DataNode logs; logging them here for later analysis.
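A quick way to tally these for later analysis — a minimal sketch in Java, with a hypothetical default log path; point it at the actual DataNode log (e.g. hadoop-*-datanode-*.log):

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Counts occurrences of exception class names (e.g. java.io.IOException,
// java.net.SocketException) in a DataNode log, as a starting point for analysis.
public class DataNodeExceptionTally {
    // Matches fully-qualified names ending in Exception or Error,
    // such as org.apache.hadoop.util.Shell$ExitCodeException.
    private static final Pattern EXCEPTION =
            Pattern.compile("([\\w.$]+(?:Exception|Error))");

    public static void main(String[] args) throws IOException {
        // Hypothetical default path; pass the real log file as the first argument.
        String logPath = args.length > 0 ? args[0] : "hadoop-datanode.log";
        Map<String, Integer> counts = new TreeMap<>();
        try (BufferedReader in = Files.newBufferedReader(Paths.get(logPath))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = EXCEPTION.matcher(line);
                while (m.find()) {
                    counts.merge(m.group(1), 1, Integer::sum);
                }
            }
        }
        counts.forEach((name, n) -> System.out.println(n + "\t" + name));
    }
}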

2009-08-26 01:17:37,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for block blk_5223350282761282817_281131 java.nio.channels.ClosedByInterruptException
2009-08-26 01:17:37,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_5223350282761282817_281131 received exception java.io.IOException: Interrupted receiveBlock
2009-08-26 01:17:37,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_5223350282761282817_281131 1 Exception java.net.SocketException: Socket closed
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at java.io.DataInputStream.readLong(DataInputStream.java:399)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
        at java.lang.Thread.run(Thread.java:619)

 

2009-08-26 01:17:37,827 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.9:50010, storageID=DS-951226019-10.0.0.9-50010-1251209172987, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: Interrupted receiveBlock
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:569)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:619)

 

2009-08-26 01:17:37,840 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020, call updateBlock(blk_5223350282761282817_281131, blk_5223350282761282817_281136, false) from 10.0.0.16:54613: error: java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1259)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.tryUpdateBlock(FSDataset.java:898)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.updateBlock(FSDataset.java:810)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.updateBlock(DataNode.java:1384)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

2009-08-26 01:10:48,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.8:50010, storageID=DS-1793106907-10.0.0.8-50010-1251209173521, infoPort=50075, ipcPort=50020):Failed to transfer blk_-3457816871186697703_281034 to 10.0.0.16:50010 got java.net.SocketException: Connection reset
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:336)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:421)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1111)
        at java.lang.Thread.run(Thread.java:619)

 

2009-08-26 00:41:04,430 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.14:50010, storageID=DS-1239116510-10.0.0.14-50010-1251209186514, infoPort=50075, ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block blk_-8656937491228549459_162680 is valid, and cannot be written to.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:619)

2009-08-26 00:55:10,250 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing datanode Command
java.io.IOException: Error in deleting blocks.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1353)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:849)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:811)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:691)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1144)
        at java.lang.Thread.run(Thread.java:619)

2009-08-26 01:27:44,783 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.14:50010, storageID=DS-1239116510-10.0.0.14-50010-1251209186514, infoPort=50075, ipcPort=50020):Got exception while serving blk_-2856096768554983549_281092 to /10.0.0.15:
java.io.IOException: Block blk_-2856096768554983549_281092 is not valid.
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:726)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:714)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:100)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:172)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
        at java.lang.Thread.run(Thread.java:619)

 

2009-08-26 01:50:50,095 WARN org.apache.hadoop.util.Shell: Could not get disk usage information
org.apache.hadoop.util.Shell$ExitCodeException: du: cannot access `/mnt/DP_disk4/tao/hadoop-tao/dfs/data/current/subdir61/blk_1441044640010723064_32156.meta': No such file or directory
du: cannot access `/mnt/DP_disk4/tao/hadoop-tao/dfs/data/current/subdir61/blk_1441044640010723064': No such file or directory

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:195)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DU.access$200(DU.java:29)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:84)
        at java.lang.Thread.run(Thread.java:619)

761282817_281136, datanode=10.0.0.9:50010)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1259)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.tryUpdateBlock(FSDataset.java:898)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.updateBlock(FSDataset.java:810)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.updateBlock(DataNode.java:1384)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

        at org.apache.hadoop.ipc.Client.call(Client.java:697)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at $Proxy5.updateBlock(Unknown Source)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1513)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1482)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1548)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
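The last trace shows the datanode-side error coming back to the recovering node wrapped in org.apache.hadoop.ipc.RemoteException. A minimal sketch of unwrapping it on the caller side, assuming the 0.20-era RemoteException API (getClassName() / unwrapRemoteException()):

import java.io.IOException;
import org.apache.hadoop.ipc.RemoteException;

// Sketch: pulling the server-side exception class and message out of a
// RemoteException, as seen by the caller of updateBlock()/recoverBlock() above.
public class UnwrapRemoteException {
    static void report(IOException e) {
        if (e instanceof RemoteException) {
            RemoteException re = (RemoteException) e;
            System.err.println("remote class:   " + re.getClassName());
            System.err.println("remote message: " + re.getMessage());
            // Re-instantiates the original exception type where possible.
            IOException unwrapped = re.unwrapRemoteException();
            System.err.println("unwrapped as:   " + unwrapped.getClass().getName());
        } else {
            System.err.println("local error: " + e);
        }
    }

    public static void main(String[] args) {
        // Message reconstructed from the log entry above, purely for illustration.
        report(new RemoteException("java.io.IOException",
                "Block blk_5223350282761282817_281136 length is 1105408"
                + " does not match block file length 1560576"));
    }
}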

 

Comments

月光杯 (2014-07-03): Has this problem been solved?
