No internet access at the office, so I'm catching up on these notes at home. Rough times.
The main steps all live in Store.compact, under HRegion:

Store.compact(final List<StoreFile> filesToCompact, final boolean majorCompaction, final long maxId)
1. From filesToCompact, create the StoreFileScanner for each HFile.
2. Create a StoreScanner, which wraps the StoreFileScanners and traverses all the files being compacted in order; its main method is next().
2.1 Create a ScanQueryMatcher, which decides whether a KeyValue should be filtered out because of deletes.
2.2 Create a KeyValueHeap that holds the StoreFileScanners, heap-ordered by each HFile's start row key (each HFile is itself sorted).
2.3 Maintain a "current" StoreFileScanner.
3. Call StoreScanner.next():
3.1 Take the smallest row key from the current StoreFileScanner and poll it.
3.2 Run the polled KeyValue through ScanQueryMatcher to decide which branch it takes.
3.3 Call KeyValueHeap.next() to update the current StoreFileScanner: if current's next key > heap.peek()'s key, push current back into the heap and poll the new top as current, so the next call again returns the globally smallest KeyValue (KeyValueHeap is backed by a PriorityQueue); see the sketch after this list.
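To make step 3.3 concrete, here is a minimal, self-contained sketch of the "current scanner + heap" pattern (my own simplification with assumed names, not the real KeyValueHeap, which compares whole KeyValues with the store's KVComparator). Each scanner walks an already-sorted list of keys, just as each HFile is internally sorted, and a PriorityQueue keeps whichever scanner currently points at the smallest key on top:

import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class MiniKeyValueHeap {
    // Stand-in for StoreFileScanner: iterates one already-sorted key list.
    static class MiniScanner {
        private final Iterator<String> it;
        private String current;
        MiniScanner(List<String> sortedKeys) {
            this.it = sortedKeys.iterator();
            this.current = it.hasNext() ? it.next() : null;
        }
        String peek() { return current; }
        String next() {                 // return the current key, then advance
            String prev = current;
            current = it.hasNext() ? it.next() : null;
            return prev;
        }
    }

    private MiniScanner current;        // the "current" scanner of step 2.3
    private final PriorityQueue<MiniScanner> heap =
        new PriorityQueue<>((a, b) -> a.peek().compareTo(b.peek()));

    public MiniKeyValueHeap(List<MiniScanner> scanners) {
        for (MiniScanner s : scanners) {
            if (s.peek() != null) heap.add(s);
        }
        current = heap.poll();
    }

    // Step 3.3: after taking a key from current, if current's next key is now
    // larger than the heap's top, push current back and pull the new minimum,
    // so the next call again returns the globally smallest key. (Only current
    // is mutated while outside the heap, so heap ordering stays valid.)
    public String next() {
        if (current == null) return null;
        String kv = current.next();
        if (current.peek() == null) {
            current = heap.poll();      // current scanner is exhausted
        } else if (!heap.isEmpty()
                && current.peek().compareTo(heap.peek().peek()) > 0) {
            heap.add(current);
            current = heap.poll();
        }
        return kv;
    }
}

Feeding it, say, scanners over [a, c, e] and [b, d], successive next() calls yield a, b, c, d, e: one merged, globally sorted stream, which is exactly what the compaction writer needs.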
Pasting in the StoreScanner next() path that calls into ScanQueryMatcher; this is where each kv is either written to the new tmp file or skipped.
StoreScanner:

public boolean next(List<Cell> outResult, int limit) throws IOException {
    lock.lock();
    try {
        if (checkReseek()) {
            return true;
        }
        // if the heap was left null, then the scanners had previously run out anyways, close and
        // return.
        if (this.heap == null) {
            close();
            return false;
        }
        KeyValue peeked = this.heap.peek();
        if (peeked == null) {
            close();
            return false;
        }
        // only call setRow if the row changes; avoids confusing the query matcher
        // if scanning intra-row
        byte[] row = peeked.getBuffer();
        int offset = peeked.getRowOffset();
        short length = peeked.getRowLength();
        if (limit < 0 || matcher.row == null
            || !Bytes.equals(row, offset, length, matcher.row, matcher.rowOffset, matcher.rowLength)) {
            this.countPerRow = 0;
            matcher.setRow(row, offset, length);
        }
        KeyValue kv;
        // Only do a sanity-check if store and comparator are available.
        KeyValue.KVComparator comparator = store != null ? store.getComparator() : null;
        int count = 0;
        LOOP: while ((kv = this.heap.peek()) != null) {
            if (prevKV != kv) ++kvsScanned; // Do object compare - we set prevKV from the same heap.
            checkScanOrder(prevKV, kv, comparator);
            prevKV = kv;
            // ask ScanQueryMatcher whether this kv should be included or skipped
            ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
            switch (qcode) {
                case INCLUDE:
                    // ... (snippet truncated here in the original)
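For context, the compaction driver around this next() looks roughly like the sketch below (a simplification; KvScanner and KvWriter are stand-ins I'm assuming for HBase's InternalScanner and StoreFile.Writer). Only the kvs the matcher lets through come back in the out list and reach the tmp file:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

interface KvScanner { boolean next(List<String> out, int limit) throws IOException; }
interface KvWriter  { void append(String kv) throws IOException; }

class CompactionLoop {
    // Drain the merged scanner into the new tmp file, kvMax kvs per batch.
    static void drain(KvScanner scanner, KvWriter tmpWriter, int kvMax)
            throws IOException {
        List<String> kvs = new ArrayList<>();
        boolean hasMore;
        do {
            // next() only hands back kvs the matcher answered INCLUDE for;
            // anything it answered SKIP never reaches the tmp file.
            hasMore = scanner.next(kvs, kvMax);
            for (String kv : kvs) {
                tmpWriter.append(kv);
            }
            kvs.clear();
        } while (hasMore);
    }
}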
The delete-handling logic:

/*
 * The delete logic is pretty complicated now.
 * This is corroborated by the following:
 * 1. The store might be instructed to keep deleted rows around.
 * 2. A scan can optionally see past a delete marker now.
 * 3. If deleted rows are kept, we have to find out when we can
 *    remove the delete markers.
 * 4. Family delete markers are always first (regardless of their TS)
 * 5. Delete markers should not be counted as version
 * 6. Delete markers affect puts of the *same* TS
 * 7. Delete marker need to be version counted together with puts
 *    they affect
 */
byte type = bytes[initialOffset + keyLength - 1];
if (kv.isDelete()) {
    if (!keepDeletedCells) {
        // first ignore delete markers if the scanner can do so, and the
        // range does not include the marker
        //
        // during flushes and compactions also ignore delete markers newer
        // than the readpoint of any open scanner, this prevents deleted
        // rows that could still be seen by a scanner from being collected
        boolean includeDeleteMarker = seePastDeleteMarkers ?
            tr.withinTimeRange(timestamp) :
            tr.withinOrAfterTimeRange(timestamp);
        if (includeDeleteMarker && kv.getMvccVersion() <= maxReadPointToTrackVersions) {
            this.deletes.add(bytes, offset, qualLength, timestamp, type);
        }
        // Can't early out now, because DelFam come before any other keys
    }
    if (retainDeletesInOutput
        || (!isUserScan && (EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= timeToPurgeDeletes)
        || kv.getMvccVersion() > maxReadPointToTrackVersions) { // minor compaction path
        // always include or it is not time yet to check whether it is OK
        // to purge deltes or not
        if (!isUserScan) {
            // if this is not a user scan (compaction), we can filter this deletemarker right here
            // otherwise (i.e. a "raw" scan) we fall through to normal version and timerange checking
            return MatchCode.INCLUDE;
        }
    } else if (keepDeletedCells) {
        if (timestamp < earliestPutTs) {
            // keeping delete rows, but there are no puts older than
            // this delete in the store files.
            return columns.getNextRowOrNextColumn(bytes, offset, qualLength);
        }
        // else: fall through and do version counting on the
        // delete markers
    } else { // major compaction path: drop the marker
        return MatchCode.SKIP;
    }
    // note the following next else if...
    // delete marker are not subject to other delete markers
} else if (!this.deletes.isEmpty()) {
    DeleteResult deleteResult = deletes.isDeleted(bytes, offset, qualLength, timestamp);
    switch (deleteResult) {
        case FAMILY_DELETED:
        case COLUMN_DELETED:
            return columns.getNextRowOrNextColumn(bytes, offset, qualLength);
        case VERSION_DELETED:
        case FAMILY_VERSION_DELETED:
            return MatchCode.SKIP;
        case NOT_DELETED:
            break;
        default:
            throw new RuntimeException("UNEXPECTED");
    }
}
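Points 4-6 in that comment are the easy ones to get wrong, so here is a toy model (my own simplification, nothing like the real DeleteTracker) of how a column delete marker masks puts: a put is masked when its timestamp is <= the marker's timestamp, which is why a put with the *same* TS as the marker is affected (point 6):

import java.util.HashMap;
import java.util.Map;

class ToyDeleteTracker {
    // qualifier -> newest column-delete-marker timestamp seen so far
    private final Map<String, Long> columnDeletes = new HashMap<>();

    void addColumnDelete(String qualifier, long ts) {
        columnDeletes.merge(qualifier, ts, Math::max);
    }

    boolean isDeleted(String qualifier, long putTs) {
        Long delTs = columnDeletes.get(qualifier);
        return delTs != null && putTs <= delTs;  // same-TS put is masked too
    }
}

For example, after addColumnDelete("q", 5), a put at ("q", 5) is deleted, but a put at ("q", 7) written after the marker is not.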
The scan types:

/**
 * Enum to distinguish general scan types.
 */
@InterfaceAudience.Private
public enum ScanType {
  COMPACT_DROP_DELETES,
  COMPACT_RETAIN_DELETES,
  USER_SCAN
}
How deleted data is handled:

/*
 * The following three booleans define how we deal with deletes.
 * There are three different aspects:
 * 1. Whether to keep delete markers. This is used in compactions.
 *    Minor compactions always keep delete markers.
 * 2. Whether to keep deleted rows. This is also used in compactions,
 *    if the store is set to keep deleted rows. This implies keeping
 *    the delete markers as well.
 *    In this case deleted rows are subject to the normal max version
 *    and TTL/min version rules just like "normal" rows.
 * 3. Whether a scan can do time travel queries even before deleted
 *    marker to reach deleted rows.
 */
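Tying the two snippets together: a minor compaction runs the matcher with COMPACT_RETAIN_DELETES and a major compaction with COMPACT_DROP_DELETES, which is the minor/major split marked in the match() code above. A hypothetical helper (not actual HBase code) makes the mapping explicit:

class ScanTypeChooser {
    static ScanType scanTypeFor(boolean majorCompaction) {
        // A minor compaction rewrites only a subset of the store's files, so a
        // delete marker may still mask puts in files it cannot see and must be
        // retained. A major compaction rewrites every file and may drop the
        // markers (subject to the time-to-purge-deletes window seen above).
        return majorCompaction ? ScanType.COMPACT_DROP_DELETES
                               : ScanType.COMPACT_RETAIN_DELETES;
    }
}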