Version information (note that behavior may differ between versions):
HBase Version : 0.90.3, r1100350
Hadoop Version : 0.20.3-SNAPSHOT, r1057313
Problem description
When using HFileOutputFormat.configureIncrementalLoad(), the MapReduce job runs for about 20 minutes, and the Master frequently shuts itself down; some excerpts from the logs are attached at the end of this post. The logs show that the main problem is a ZooKeeper (ZK) session expiration, followed by a string of KeeperException errors. In the end the Master aborts itself, while the regionservers and ZK keep running.
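For context, the sketch below shows roughly what a configureIncrementalLoad()-based bulk-load job looks like against the 0.90 API. It is illustrative only: the input format (tab-separated text), the column family "cf", and the class names are assumptions; only the table name "BaseLog" comes from the logs in this post.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSketch {

  // Hypothetical mapper: each input line is "rowkey<TAB>value",
  // emitted as a Put against the assumed column family "cf".
  static class LineToPutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text line, Context ctx)
        throws IOException, InterruptedException {
      String[] fields = line.toString().split("\t", 2);
      byte[] row = Bytes.toBytes(fields[0]);
      Put put = new Put(row);
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes(fields[1]));
      ctx.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulkload-BaseLog");
    job.setJarByClass(BulkLoadSketch.class);
    job.setMapperClass(LineToPutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // configureIncrementalLoad() wires up HFileOutputFormat, the sorting
    // reducer, and a TotalOrderPartitioner matching the table's regions.
    HTable table = new HTable(conf, "BaseLog");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    if (job.waitForCompletion(true)) {
      // Move the generated HFiles into the table (the 'completebulkload' step).
      new LoadIncrementalHFiles(conf).doBulkLoad(new Path(args[1]), table);
    }
  }
}

The job itself only writes region-aligned HFiles; it is this MapReduce phase, running for around 20 minutes, during which the Master was observed to abort.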
Troubleshooting attempts
I sent a mail to the HBase mailing list describing the problem and quickly got a reply from Stack, who confirmed that the Master failure was caused by a ZK session timeout. Two things commonly cause such timeouts: first, MapReduce tasks running on the ZK nodes and consuming almost all of the I/O (disk and network); second, Java GC taking up most of the resources.
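As a quick check of the GC hypothesis, one common approach is to turn on GC logging for the HBase daemons and look for long pauses approaching the ZK session timeout. The snippet below is a sketch for the stock hbase-env.sh; the log path is a placeholder, not something from this cluster.

# hbase-env.sh: enable GC logging for the HBase daemons (restart required).
# Long full-GC pauses close to the ZK session timeout would point to GC as the cause.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"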
Indeed, the ZK nodes in my cluster also run a TaskTracker and a DataNode, so they see heavy I/O while jobs are running, whereas GC overhead is not significant. I tried separating the ZK nodes from the TaskTracker nodes and increasing zookeeper.session.timeout; my original setting was already fairly large at 300000 ms, and I raised it to 600000 ms.
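For reference, the change amounts to the following property in hbase-site.xml (a sketch using the values mentioned above). One caveat: the timeout a client actually gets is negotiated with the ZooKeeper server, and when ZooKeeper is managed separately the server caps it at maxSessionTimeout (20x tickTime by default), so the server-side configuration may also need to be raised for a value this large to take effect.

<!-- hbase-site.xml -->
<property>
  <name>zookeeper.session.timeout</name>
  <!-- previously 300000 ms; raised to 600000 ms after the Master aborts -->
  <value>600000</value>
</property>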
After the change I kept running bulk imports; a dozen or so bulk-load jobs ran without any timeout exceptions being reported. So the problem is temporarily solved, if in a somewhat inelegant way.
I don't know what side effects setting the ZooKeeper timeout too high might have, nor whether there is a better approach. With more experimentation I may gain a deeper understanding and find a better solution.
If anyone has a better approach, please let me know; and if there are mistakes in this post, corrections are welcome.
Appendix: some log excerpts from when the Master exited:
2011-06-29 02:26:40,454 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 200000ms for sessionid 0x130d6fba3cc0001, closing socket connection and attempting reconnect
2011-06-29 02:26:40,557 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Unable to get data of znode /hbase/unassigned/3624b719752997d87cbba6529edcd2a7
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/3624b719752997d87cbba6529edcd2a7
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:921)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataNoWatch(ZKUtil.java:586)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.getDataNoWatch(ZKAssign.java:770)
    at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1709)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
2011-06-29 02:26:40,557 FATAL org.apache.hadoop.hbase.master.HMaster: Unexpected ZK exception creating/setting node OFFLINE
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/f3523f7dd3eeaae3bb3c66907227e8b0
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:637)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndWatch(ZKUtil.java:856)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.createOrForceNodeOffline(ZKAssign.java:246)
    at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:967)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:918)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:746)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:726)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:154)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2011-06-29 02:26:40,557 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2011-06-29 02:26:40,557 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/3624b719752997d87cbba6529edcd2a7
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:921)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataNoWatch(ZKUtil.java:586)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.getDataNoWatch(ZKAssign.java:770)
    at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1709)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
2011-06-29 02:26:40,558 ERROR org.apache.hadoop.hbase.master.AssignmentManager: Unexpected ZK exception timing out CLOSING region
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/3624b719752997d87cbba6529edcd2a7
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:921)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataNoWatch(ZKUtil.java:586)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.getDataNoWatch(ZKAssign.java:770)
    at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1709)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
2011-06-29 02:26:40,558 INFO org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed out: BaseLog,C110,1309156850234.bfcc3150f5b6ebd169de3266ce49e764. state=OPENING, ts=1309285400162
2011-06-29 02:26:40,558 INFO org.apache.hadoop.hbase.master.AssignmentManager: Region has been OPENING for too long, reassigning region=BaseLog,C110,1309156850234.bfcc3150f5b6ebd169de3266ce49e764.
2011-06-29 02:26:41,063 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2011-06-29 02:26:41,064 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2011-06-29 02:26:41,064 INFO org.apache.hadoop.hbase.master.HMaster$1: dev-199-121:60000-BalancerChore exiting
2011-06-29 02:26:41,064 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2011-06-29 02:26:41,064 INFO org.apache.hadoop.hbase.master.CatalogJanitor: dev-199-121:60000-CatalogJanitor exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2011-06-29 02:26:41,065 INFO org.apache.hadoop.hbase.master.LogCleaner: master-dev-199-121:60000.oldLogCleaner exiting
2011-06-29 02:26:41,065 INFO org.apache.hadoop.hbase.master.HMaster: Stopping infoServer
2011-06-29 02:26:41,065 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2011-06-29 02:26:41,065 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2011-06-29 02:26:41,064 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2011-06-29 02:26:41,066 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2011-06-29 02:26:41,073 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2011-06-29 02:26:41,390 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server dev-197-149/10.249.197.149:61000
2011-06-29 02:26:41,391 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to dev-197-149/10.249.197.149:61000, initiating session
2011-06-29 02:26:41,394 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server dev-197-149/10.249.197.149:61000, sessionid = 0x130d6fba3cc0001, negotiated timeout = 300000
2011-06-29 02:27:21,996 DEBUG org.apache.hadoop.hbase.catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@7bd33a6b
2011-06-29 02:27:21,997 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Unable to get data of znode /hbase/unassigned/bfcc3150f5b6ebd169de3266ce49e764
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:485)
    at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1317)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:919)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:549)
    at org.apache.hadoop.hbase.master.AssignmentManager$TimeoutMonitor.chore(AssignmentManager.java:1734)
    at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
2011-06-29 02:27:21,997 INFO org.apache.hadoop.hbase.master.AssignmentManager: Successfully transitioned region=BaseLog,C110,1309156850234.bfcc3150f5b6ebd169de3266ce49e764. into OFFLINE and forcing a new assignment
2011-06-29 02:27:21,997 INFO org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed out: BaseLog,C2,1309156850235.f3523f7dd3eeaae3bb3c66907227e8b0. state=OFFLINE, ts=1309285399926
2011-06-29 02:27:21,997 INFO org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x30d6fc37f40006
2011-06-29 02:27:21,997 INFO org.apache.hadoop.hbase.master.AssignmentManager: Region has been OFFLINE for too long, reassigning BaseLog,C2,1309156850235.f3523f7dd3eeaae3bb3c66907227e8b0. to a random server
2011-06-29 02:27:22,001 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Handling transition=RS_ZK_REGION_OPENING, server=dev-195-151,60020,1309276923241, region=6aec4056cb58be626bc1686cb77b5557, which is more than 15 seconds late
2011-06-29 02:27:22,001 INFO org.apache.zookeeper.ZooKeeper: Session: 0x30d6fc37f40006 closed
2011-06-29 02:27:22,002 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2011-06-29 02:27:22,200 INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [dev-192-19,60020,1309277114541]
2011-06-29 02:27:22,200 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Creating (or updating) unassigned node for 4510ea062f3aee7fea9c8c034e6085af with OFFLINE state
2011-06-29 02:27:22,200 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; was=BaseLog,C110,1309156850234.bfcc3150f5b6ebd169de3266ce49e764. state=OPENING, ts=1309285400162
2011-06-29 02:27:22,201 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Server stopped; skipping assign of BaseLog,C110,1309156850234.bfcc3150f5b6ebd169de3266ce49e764. state=OFFLINE, ts=1309285642201
2011-06-29 02:27:22,201 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; was=BaseLog,C2,1309156850235.f3523f7dd3eeaae3bb3c66907227e8b0. state=OFFLINE, ts=1309285399926
2011-06-29 02:27:22,201 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Unable to set watcher on znode (/hbase/unassigned/4510ea062f3aee7fea9c8c034e6085af)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/4510ea062f3aee7fea9c8c034e6085af
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:260)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.createOrForceNodeOffline(ZKAssign.java:244)
    at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:967)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:918)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:746)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:726)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:154)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2011-06-29 02:27:22,201 ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: master:60000-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001-0x130d6fba3cc0001 Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/unassigned/4510ea062f3aee7fea9c8c034e6085af
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:260)
    at org.apache.hadoop.hbase.zookeeper.ZKAssign.createOrForceNodeOffline(ZKAssign.java:244)
    at org.apache.hadoop.hbase.master.AssignmentManager.setOfflineInZooKeeper(AssignmentManager.java:967)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:918)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:746)
    at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:726)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:154)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)