Exceptions in HDFS
These exceptions showed up in our HDFS DataNode logs; I log them here for later analysis.
2009-08-26 01:17:37,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for block blk_5223350282761282817_281131 java.nio.channels.ClosedByInterruptException
2009-08-26 01:17:37,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_5223350282761282817_281131 received exception java.io.IOException: Interrupted receiveBlock
2009-08-26 01:17:37,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_5223350282761282817_281131 1 Exception java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at java.io.DataInputStream.readLong(DataInputStream.java:399)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:853)
at java.lang.Thread.run(Thread.java:619)
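The first trace is standard NIO behavior rather than an HDFS-specific bug: when the write pipeline is aborted, the thread blocked on the socket is interrupted, and interrupting a thread blocked on an interruptible channel closes the channel and raises `ClosedByInterruptException`. A minimal standalone demonstration, using a `Pipe` instead of a real DataNode socket:

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Thread reader = new Thread(() -> {
            ByteBuffer buf = ByteBuffer.allocate(8);
            try {
                // Blocks forever: nothing is ever written to the pipe.
                pipe.source().read(buf);
            } catch (ClosedByInterruptException e) {
                System.out.println("caught ClosedByInterruptException");
            } catch (Exception e) {
                System.out.println("caught " + e.getClass().getSimpleName());
            }
        });
        reader.start();
        Thread.sleep(200);
        // Interrupting a thread blocked on an InterruptibleChannel closes the
        // channel and makes the blocked read throw ClosedByInterruptException.
        reader.interrupt();
        reader.join();
    }
}
```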
2009-08-26 01:17:37,827 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.9:50010, storageID=DS-951226019-10.0.0.9-50010-1251209172987, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: Interrupted receiveBlock
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:569)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
at java.lang.Thread.run(Thread.java:619)
2009-08-26 01:17:37,840 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020, call updateBlock(blk_5223350282761282817_281131, blk_5223350282761282817_281136, false) from 10.0.0.16:54613: error: java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
at org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1259)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.tryUpdateBlock(FSDataset.java:898)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.updateBlock(FSDataset.java:810)
at org.apache.hadoop.hdfs.server.datanode.DataNode.updateBlock(DataNode.java:1384)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
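The `updateBlock` failure above happens during block recovery, when the block length the recovering node proposes disagrees with what is actually on disk. A hypothetical sketch of that kind of check (`validateLength` is an illustrative helper, not the actual `FSDataset` code), reusing the two lengths from the log:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class BlockLengthCheck {
    // Illustrative only: compares an expected block length against the
    // on-disk file length, as the error message above implies.
    static void validateLength(File blockFile, long expectedLen) throws IOException {
        long onDisk = blockFile.length();
        if (onDisk != expectedLen) {
            throw new IOException("Block length " + expectedLen
                    + " does not match block file length " + onDisk);
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("blk_", ".data");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(new byte[1560576]); // on-disk length from the log above
        }
        try {
            validateLength(f, 1105408); // proposed length from the log above
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```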
2009-08-26 01:10:48,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.8:50010, storageID=DS-1793106907-10.0.0.8-50010-1251209173521, infoPort=50075, ipcPort=50020):Failed to transfer blk_-3457816871186697703_281034 to 10.0.0.16:50010 got java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:336)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:421)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1111)
at java.lang.Thread.run(Thread.java:619)
2009-08-26 00:41:04,430 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.14:50010, storageID=DS-1239116510-10.0.0.14-50010-1251209186514, infoPort=50075, ipcPort=50020):DataXceiver
org.apache.hadoop.hdfs.server.datanode.BlockAlreadyExistsException: Block blk_-8656937491228549459_162680 is valid, and cannot be written to.
at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:975)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:97)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
at java.lang.Thread.run(Thread.java:619)
2009-08-26 00:55:10,250 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing datanode Command
java.io.IOException: Error in deleting blocks.
at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1353)
at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:849)
at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:811)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:691)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1144)
at java.lang.Thread.run(Thread.java:619)
2009-08-26 01:27:44,783 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.0.0.14:50010, storageID=DS-1239116510-10.0.0.14-50010-1251209186514, infoPort=50075, ipcPort=50020):Got exception while serving blk_-2856096768554983549_281092 to /10.0.0.15:
java.io.IOException: Block blk_-2856096768554983549_281092 is not valid.
at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:726)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:714)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:100)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:172)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
at java.lang.Thread.run(Thread.java:619)
2009-08-26 01:50:50,095 WARN org.apache.hadoop.util.Shell: Could not get disk usage information
org.apache.hadoop.util.Shell$ExitCodeException: du: cannot access `/mnt/DP_disk4/tao/hadoop-tao/dfs/data/current/subdir61/blk_1441044640010723064_32156.meta': No such file or directory
du: cannot access `/mnt/DP_disk4/tao/hadoop-tao/dfs/data/current/subdir61/blk_1441044640010723064': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:195)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DU.access$200(DU.java:29)
at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:84)
at java.lang.Thread.run(Thread.java:619)
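The "Could not get disk usage information" warning is usually a benign race: a block file was deleted between the directory scan and the `du` run, so `du` exits nonzero complaining about the now-missing file. A simplified sketch of shell-based disk usage collection in that spirit (not the actual `org.apache.hadoop.util.Shell` code):

```java
import java.io.IOException;

public class DuCheck {
    // Simplified stand-in for Hadoop's Shell-based du: run the command,
    // treat a nonzero exit code as an error, otherwise parse the size.
    static long du(String path) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("du", "-sk", path).start();
        byte[] out = p.getInputStream().readAllBytes();
        int code = p.waitFor();
        if (code != 0) {
            // This is the case in the log: the file vanished under du's feet.
            throw new IOException("du exited with code " + code);
        }
        return Long.parseLong(new String(out).trim().split("\\s+")[0]) * 1024;
    }

    public static void main(String[] args) throws Exception {
        try {
            du("/nonexistent/path"); // simulates the deleted-block race
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```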
761282817_281136, datanode=10.0.0.9:50010)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Block blk_5223350282761282817_281136 length is 1105408 does not match block file length 1560576
at org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1259)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.tryUpdateBlock(FSDataset.java:898)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.updateBlock(FSDataset.java:810)
at org.apache.hadoop.hdfs.server.datanode.DataNode.updateBlock(DataNode.java:1384)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
at org.apache.hadoop.ipc.Client.call(Client.java:697)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at $Proxy5.updateBlock(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1513)
at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1482)
at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1548)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)
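This last trace shows both halves of an RPC failure: the server-side frames (`RPC$Server.call` through `Server$Handler.run`) followed by the client-side frames (`Client.call`, `RPC$Invoker.invoke`). Only the exception's class name and message travel over the wire; the client rewraps them as a `RemoteException`, which is why the message repeats the server-side `IOException` verbatim. A toy sketch of that wrapping (simplified, not Hadoop's actual IPC code):

```java
public class RemoteExceptionDemo {
    // Toy stand-in for org.apache.hadoop.ipc.RemoteException: carries only
    // the remote class name and message, as those are all that crossed the wire.
    static class RemoteException extends java.io.IOException {
        RemoteException(String className, String msg) {
            super(className + ": " + msg);
        }
    }

    public static void main(String[] args) {
        // Server side: the real failure.
        Exception serverSide = new java.io.IOException(
                "Block blk_5223350282761282817_281136 length is 1105408"
                + " does not match block file length 1560576");
        // Client side: rewrap from the serialized class name and message.
        RemoteException clientSide = new RemoteException(
                serverSide.getClass().getName(), serverSide.getMessage());
        System.out.println(clientSide.getMessage());
    }
}
```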