Environment:
hdfs: built on Hadoop 1.2.1
hbase: version 0.98.7
flume: version 1.5.0.1
Flume picks up the logs and sinks them into HBase. The problem is this:
after about 100 rows have been written into the table, Flume starts throwing this error:
2014-11-01 11:18:35,168 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:353)] Failed to commit transaction.Transaction rolled back.
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Increment.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Increment;
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:408)
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.runPrivileged(HBaseSink.java:427)
at org.apache.flume.sink.hbase.HBaseSink.putEventsAndCommit(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:344)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)
2014-11-01 11:18:35,170 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:356)] Failed to commit transaction.Transaction rolled back.
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Increment.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Increment;
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:408)
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.runPrivileged(HBaseSink.java:427)
at org.apache.flume.sink.hbase.HBaseSink.putEventsAndCommit(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:344)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Increment.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Increment;
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:408)
at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.runPrivileged(HBaseSink.java:427)
at org.apache.flume.sink.hbase.HBaseSink.putEventsAndCommit(HBaseSink.java:391)
at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:344)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)
After digging through the source of Flume 1.5.0.1 and HBase 0.98.7, I confirmed that the Increment class in HBase 0.98.7 really does not have a setWriteToWAL method. How do I get around this?
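To confirm which side of the API change a given classpath is on, a small reflection probe can be run with the same jars Flume sees. The class and method names below are the real ones from the stack trace; the classpath in the comment and everything else is my own sketch. If I remember correctly, setWriteToWAL was deprecated in the 0.94 line and replaced by setDurability(Durability) from HBase 0.96 onward, which is why a Flume build compiled against an older client blows up with NoSuchMethodError at runtime:

```java
// Sketch: probe the HBase client on the current classpath via reflection,
// e.g. run as `java -cp "$FLUME_HOME/lib/*:." CheckIncrementApi`
// (that classpath is an assumption about your layout).
public class CheckIncrementApi {

    // Returns a short verdict instead of throwing, so it also behaves
    // sensibly on a classpath with no hbase-client jar at all.
    static String checkIncrementApi() {
        try {
            Class<?> inc = Class.forName("org.apache.hadoop.hbase.client.Increment");
            inc.getMethod("setWriteToWAL", boolean.class);
            return "setWriteToWAL present: pre-0.96 client API, Flume 1.5 HBaseSink should work";
        } catch (ClassNotFoundException e) {
            return "hbase-client not on classpath";
        } catch (NoSuchMethodException e) {
            return "setWriteToWAL missing: 0.96+ client, expect NoSuchMethodError from HBaseSink";
        }
    }

    public static void main(String[] args) {
        System.out.println(checkIncrementApi());
    }
}
```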
Found a workaround for the problem.
The old Flume config was:
agent0.sinks.log-sink0.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
Replace it with:
agent0.sinks.log-sink0.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
I'm not sure whether this counts as a bug in SimpleHbaseEventSerializer: Flume 1.5.0.1's SimpleHbaseEventSerializer calls a method that no longer exists in the HBase version it runs against. In practice, though, most people end up writing their own serializer instead of using this class, so I'll skip over that here.
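For completeness, a sketch of the full sink section around that one changed line. The agent and sink names match the config above; the table name, column family, channel name, and regex are placeholders from my own setup, not from the original problem:

```properties
agent0.sinks.log-sink0.type = hbase
# table and column family are placeholders; substitute your own
agent0.sinks.log-sink0.table = flume_logs
agent0.sinks.log-sink0.columnFamily = cf
agent0.sinks.log-sink0.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
# capture the whole event body into one column; adjust the regex to your log format
agent0.sinks.log-sink0.serializer.regex = (.*)
agent0.sinks.log-sink0.serializer.colNames = payload
agent0.sinks.log-sink0.channel = ch0
```

RegexHbaseEventSerializer avoids the failing Increment call path, which is why swapping the serializer works as a workaround.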