
5. HBase advanced part: compact/split/balance and other maintenance principles -- the flow of creating a table

 

1. Principle

 

 

 

 

2. Note

-As of hbase-0.94.2, the .META. table is not splittable; therefore there is only ONE region recorded in the -ROOT- table.

That is, if you try to split -ROOT- it will complete without errors, but it is meaningless and has no effect, since there is ONLY one row in the -ROOT- table (see the quick check below).
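
A quick way to confirm this is to count the rows in -ROOT- from a client. This is only a minimal sketch, assuming an hbase-0.94.x client jar on the classpath and an hbase-site.xml that points at the cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class CountRootRows {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable root = new HTable(conf, "-ROOT-");        // the catalog table that points at .META.
    ResultScanner scanner = root.getScanner(new Scan());
    int rows = 0;
    for (Result r : scanner) {
      rows++;                                        // each row describes one .META. region
    }
    scanner.close();
    root.close();
    System.out.println("rows in -ROOT-: " + rows);   // expected: 1, since .META. never splits
  }
}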

 

3. Flow on the master side

Preconditions:

-1. verify that the .META. table is online

-2. check whether the table already exists (checked in the CreateTableHandler constructor)

-3. set a znode '/hbase/table/new-table' with state 'ENABLING' (different from the previous step)

Real operations:

-4. create the tableinfo.xxx file

-5. create the new regions belonging to this table

-6. add them to .META. so they can be tracked

-7. assign the regions to their respective regionservers (see the assignment algorithm in [1])

-8. set the table state to 'ENABLED' (see the client-side sketch right after this list)
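
For context, everything above is triggered by a single client call. Below is a minimal client-side sketch, assuming an hbase-0.94.x client; the table/family names 't1'/'f1' match the ones in the logs further down:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("t1");
    HColumnDescriptor family = new HColumnDescriptor("f1");
    family.setMaxVersions(5);          // matches VERSIONS => '5' in the master log below
    desc.addFamily(family);

    // The synchronous createTable() returns only after the master has created the
    // region(s), added them to .META. and assigned them (steps 4-7 above).
    admin.createTable(desc);

    // After step 8 the table state znode is ENABLED, so this should print true.
    System.out.println("table enabled: " + admin.isTableEnabled("t1"));
    admin.close();
  }
}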

 

[1] assignment algorithm (round-robin)

/**
   * Generates a bulk assignment plan to be used on cluster startup using a
   * simple round-robin assignment.
   * <p>
   * Takes a list of all the regions and all the servers in the cluster and
   * returns a map of each server to the regions that it should be assigned.
   * <p>
   * Currently implemented as a round-robin assignment.  Same invariant as
   * load balancing, all servers holding floor(avg) or ceiling(avg).
   *
   * TODO: Use block locations from HDFS to place regions with their blocks
   *
   * @param regions all regions
   * @param servers all servers
   * @return map of server to the regions it should take, or null if no
   *         assignment is possible (ie. no regions or no servers)
   */
  public Map<ServerName, List<HRegionInfo>> roundRobinAssignment(
      List<HRegionInfo> regions, List<ServerName> servers) {
    if (regions.isEmpty() || servers.isEmpty()) {
      return null;
    }
    Map<ServerName, List<HRegionInfo>> assignments =
      new TreeMap<ServerName,List<HRegionInfo>>();
    int numRegions = regions.size();
    int numServers = servers.size();
    int max = (int)Math.ceil((float)numRegions/numServers); // ceil: the smallest integer greater than or equal to numRegions/numServers
    int serverIdx = 0;
    if (numServers > 1) {
      serverIdx = RANDOM.nextInt(numServers); // used to randomly pick the first regionserver to start from
    }
    int regionIdx = 0;
    for (int j = 0; j < numServers; j++) { // note: iterating all regions and assigning each by index % numServers would be simpler
      ServerName server = servers.get((j + serverIdx) % numServers);
      List<HRegionInfo> serverRegions = new ArrayList<HRegionInfo>(max); // max: pre-sizing the list is the only optimization in this algorithm
      for (int i=regionIdx; i<numRegions; i += numServers) {
        serverRegions.add(regions.get(i % numRegions));
      }
      assignments.put(server, serverRegions);
      regionIdx++;
    }
    return assignments;
  }
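
To see what the resulting distribution looks like without a running cluster, here is a simplified, self-contained re-implementation of the same round-robin idea, using plain strings instead of HRegionInfo/ServerName (all names are made up for illustration). It preserves the floor(avg)/ceil(avg) invariant from the javadoc; and, as the TODO notes, block locality is not considered at all:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

public class RoundRobinSketch {

  // Same chunking scheme as above: the j-th server (starting from a random offset)
  // takes regions j, j+numServers, j+2*numServers, ...
  static Map<String, List<String>> roundRobin(List<String> regions, List<String> servers) {
    if (regions.isEmpty() || servers.isEmpty()) return null;
    Map<String, List<String>> assignments = new TreeMap<String, List<String>>();
    int numServers = servers.size();
    int serverIdx = numServers > 1 ? new Random().nextInt(numServers) : 0;
    for (int j = 0; j < numServers; j++) {
      String server = servers.get((j + serverIdx) % numServers);
      List<String> serverRegions = new ArrayList<String>();
      for (int i = j; i < regions.size(); i += numServers) {
        serverRegions.add(regions.get(i));
      }
      assignments.put(server, serverRegions);
    }
    return assignments;
  }

  public static void main(String[] args) {
    List<String> regions = Arrays.asList("r1", "r2", "r3", "r4", "r5", "r6", "r7");
    List<String> servers = Arrays.asList("rs1", "rs2", "rs3");
    // 7 regions over 3 servers: every server ends up with floor(7/3)=2 or ceil(7/3)=3 regions.
    System.out.println(roundRobin(regions, servers));
  }
}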

=============

Below are the logs from the master when creating the table:

2014-07-18 16:39:14,334 DEBUG [IPC Server handler 18 on 60000] ClientScanner.java:90 Creating scanner over .META. starting at key 't1,,'
2014-07-18 16:39:14,334 DEBUG [IPC Server handler 18 on 60000] ClientScanner.java:198 Advancing internal scanner to startKey at 't1,,'
2014-07-18 16:39:14,379 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] CreateTableHandler.java:125 Attemping to create the table t1
2014-07-18 16:39:14,425 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:3685 creating HRegion t1 HTD == {NAME => 't1', FAMILIES => [{NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '5', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]} RootDir = hdfs://host03:54310/hbase Table name == t1
2014-07-18 16:39:14,430 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:455 FileSystem doesn't support getDefaultBlockSize
2014-07-18 16:39:14,433 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:426 HLog configuration: blocksize=128 MB, rollsize=121.6 MB, enabled=true, optionallogflushinternal=1000ms
2014-07-18 16:39:14,446 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] SequenceFileLogWriter.java:193 using new createWriter -- HADOOP-6840
2014-07-18 16:39:14,447 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] SequenceFileLogWriter.java:204 Path=hdfs://host03:54310/hbase/t1/7846f70ff612136e14a1c48283deb102/.logs/hlog.1405672754433, syncFs=true, hflush=false, compression=false
2014-07-18 16:39:14,447 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:650  for /hbase/t1/7846f70ff612136e14a1c48283deb102/.logs/hlog.1405672754433
2014-07-18 16:39:14,448 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:498 Using getNumCurrentReplicas--HDFS-826
2014-07-18 16:39:14,456 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:425 Setting up tabledescriptor config now ...
2014-07-18 16:39:14,464 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:419 Instantiated t1,,1405672754325.7846f70ff612136e14a1c48283deb102.
2014-07-18 16:39:14,465 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] FSUtils.java:154 Creating file:hdfs://host03:54310/hbase/t1/7846f70ff612136e14a1c48283deb102/.tmp/.regioninfowith permission:rwxrwxrwx
2014-07-18 16:39:14,484 INFO [StoreOpenerThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] Store.java:226 time to purge deletes set to 0ms in store null
2014-07-18 16:39:14,508 INFO [StoreOpenerThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] ChecksumType.java:73 org.apache.hadoop.util.PureJavaCrc32 not available.
2014-07-18 16:39:14,508 INFO [StoreOpenerThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] ChecksumType.java:80 Checksum can use java.util.zip.CRC32
2014-07-18 16:39:14,508 INFO [StoreOpenerThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] ChecksumType.java:116 org.apache.hadoop.util.PureJavaCrc32C not available. 
2014-07-18 16:39:14,523 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:577 Onlined t1,,1405672754325.7846f70ff612136e14a1c48283deb102.; next sequenceid=1
2014-07-18 16:39:14,530 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] MetaEditor.java:163 Added 1 regions in META
2014-07-18 16:39:14,530 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:870 Closing t1,,1405672754325.7846f70ff612136e14a1c48283deb102.: disabling compactions & flushes
2014-07-18 16:39:14,531 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:900 Updates disabled for region t1,,1405672754325.7846f70ff612136e14a1c48283deb102.
2014-07-18 16:39:14,532 DEBUG [StoreCloserThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] Store.java:639 closed f1
2014-07-18 16:39:14,533 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HRegion.java:948 Closed t1,,1405672754325.7846f70ff612136e14a1c48283deb102.
2014-07-18 16:39:14,534 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0.logSyncer] HLog.java:1240 MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0.logSyncer exiting
2014-07-18 16:39:14,534 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:1002 closing hlog writer in hdfs://host03:54310/hbase/t1/7846f70ff612136e14a1c48283deb102/.logs
2014-07-18 16:39:14,545 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] HLog.java:970 Moved 1 log files to /hbase/t1/7846f70ff612136e14a1c48283deb102/.oldlogs
2014-07-18 16:39:14,546 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] AssignmentManager.java:2258 Bulk assigning 1 region(s) round-robin across 14 server(s)
2014-07-18 16:39:14,548 DEBUG [cluster-03,60000,1402881344300-StartupBulkAssigner-0] AssignmentManager.java:1377 Bulk assigning 1 region(s) to cluster-05,60020,1405411368652
2014-07-18 16:39:14,550 DEBUG [cluster-03,60000,1402881344300-StartupBulkAssigner-0] ZKAssign.java:169 master:60000-0x545ef23b01a0ad5-0x545ef23b01a0ad5-0x545ef23b01a0ad5 Async create of unassigned node for 7846f70ff612136e14a1c48283deb102 with OFFLINE state
2014-07-18 16:39:14,550 DEBUG [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] AssignmentManager.java:2393 Timeout-on-RIT=1000
2014-07-18 16:39:14,553 DEBUG [main-EventThread] AssignmentManager.java:1483 rs=t1,,1405672754325.7846f70ff612136e14a1c48283deb102. state=OFFLINE, ts=1405672754550, server=null, server=cluster-05,60020,1405411368652
2014-07-18 16:39:14,554 DEBUG [main-EventThread] AssignmentManager.java:1514 rs=t1,,1405672754325.7846f70ff612136e14a1c48283deb102. state=OFFLINE, ts=1405672754550, server=null
2014-07-18 16:39:14,554 INFO [cluster-03,60000,1402881344300-StartupBulkAssigner-0] AssignmentManager.java:1410 cluster-05,60020,1405411368652 unassigned znodes=1 of total=1
2014-07-18 16:39:14,554 DEBUG [cluster-03,60000,1402881344300-StartupBulkAssigner-0] ServerManager.java:549 New connection to cluster-05,60020,1405411368652
2014-07-18 16:39:14,560 DEBUG [cluster-03,60000,1402881344300-StartupBulkAssigner-0] AssignmentManager.java:1454 Bulk assigning done for cluster-05,60020,1405411368652
2014-07-18 16:39:14,564 DEBUG [main-EventThread] AssignmentManager.java:702 Handling transition=RS_ZK_REGION_OPENING, server=cluster-05,60020,1405411368652, region=7846f70ff612136e14a1c48283deb102
2014-07-18 16:39:14,633 DEBUG [main-EventThread] AssignmentManager.java:702 Handling transition=RS_ZK_REGION_OPENING, server=cluster-05,60020,1405411368652, region=7846f70ff612136e14a1c48283deb102
2014-07-18 16:39:14,639 DEBUG [main-EventThread] AssignmentManager.java:702 Handling transition=RS_ZK_REGION_OPENED, server=cluster-05,60020,1405411368652, region=7846f70ff612136e14a1c48283deb102
2014-07-18 16:39:14,639 DEBUG [MASTER_OPEN_REGION-cluster-03,60000,1402881344300-3] OpenedRegionHandler.java:147 Handling OPENED event for t1,,1405672754325.7846f70ff612136e14a1c48283deb102. from cluster-05,60020,1405411368652; deleting unassigned node
2014-07-18 16:39:14,639 DEBUG [MASTER_OPEN_REGION-cluster-03,60000,1402881344300-3] ZKAssign.java:483 master:60000-0x545ef23b01a0ad5-0x545ef23b01a0ad5-0x545ef23b01a0ad5 Deleting existing unassigned node for 7846f70ff612136e14a1c48283deb102 that is in expected state RS_ZK_REGION_OPENED
2014-07-18 16:39:14,642 DEBUG [main-EventThread] AssignmentManager.java:1136 The znode of region t1,,1405672754325.7846f70ff612136e14a1c48283deb102. has been deleted.
2014-07-18 16:39:14,642 DEBUG [MASTER_OPEN_REGION-cluster-03,60000,1402881344300-3] ZKAssign.java:512 master:60000-0x545ef23b01a0ad5-0x545ef23b01a0ad5-0x545ef23b01a0ad5 Successfully deleted unassigned node for region 7846f70ff612136e14a1c48283deb102 in expected state RS_ZK_REGION_OPENED
2014-07-18 16:39:14,642 INFO [main-EventThread] AssignmentManager.java:1148 The master has opened the region t1,,1405672754325.7846f70ff612136e14a1c48283deb102. that was online on cluster-05,60020,1405411368652
2014-07-18 16:39:14,643 INFO [MASTER_TABLE_OPERATIONS-cluster-03,60000,1402881344300-0] AssignmentManager.java:2263 Bulk assigning done
2014-07-18 16:40:03,986 DEBUG [326031821@qtp-47973429-17] MetaScanner.java:200 Scanning .META. starting at row=t1,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@679b2faf
2014-07-18 16:40:03,989 DEBUG [326031821@qtp-47973429-17] HConnectionManager.java:1241 Cached location for t1,,1405672754325.7846f70ff612136e14a1c48283deb102. is cluster-05:60020
2014-07-18 16:40:03,993 INFO [326031821@qtp-47973429-17] ZooKeeper.java:433 Initiating client connection, connectString=host06:2181,host05:2181,host02:2181,host03:2181,host07:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@679b2faf
2014-07-18 16:40:03,993 INFO [326031821@qtp-47973429-17-SendThread()] ClientCnxn.java:933 Opening socket connection to server /192.168.100.107:2181
2014-07-18 16:40:03,994 INFO [326031821@qtp-47973429-17] RecoverableZooKeeper.java:98 The identifier of this process is 18174@cluster08
2014-07-18 16:40:03,994 DEBUG [326031821@qtp-47973429-17] CatalogTracker.java:236 Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@75dcb817
2014-07-18 16:40:03,994 WARN [326031821@qtp-47973429-17-SendThread(cluster-02:2181)] ZooKeeperSaslClient.java:123 SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2014-07-18 16:40:03,994 INFO [326031821@qtp-47973429-17-SendThread(cluster-02:2181)] ZooKeeperSaslClient.java:125 Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2014-07-18 16:40:03,995 INFO [326031821@qtp-47973429-17-SendThread(cluster-02:2181)] ClientCnxn.java:846 Socket connection established to cluster-02/192.168.100.107:2181, initiating session
2014-07-18 16:40:03,998 INFO [326031821@qtp-47973429-17-SendThread(cluster-02:2181)] ClientCnxn.java:1175 Session establishment complete on server cluster-02/192.168.100.107:2181, sessionid = 0x245ef23b5f4027c, negotiated timeout = 40000
2014-07-18 16:40:04,000 DEBUG [326031821@qtp-47973429-17] MetaScanner.java:200 Scanning .META. starting at row= for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@679b2faf
2014-07-18 16:40:04,094 DEBUG [326031821@qtp-47973429-17] ClientScanner.java:90 Creating scanner over .META. starting at key 't1,,'
2014-07-18 16:40:04,094 DEBUG [326031821@qtp-47973429-17] ClientScanner.java:198 Advancing internal scanner to startKey at 't1,,'
2014-07-18 16:40:04,128 DEBUG [326031821@qtp-47973429-17] ClientScanner.java:90 Creating scanner over .META. starting at key 't1,,'
2014-07-18 16:40:04,128 DEBUG [326031821@qtp-47973429-17] ClientScanner.java:198 Advancing internal scanner to startKey at 't1,,'
2014-07-18 16:40:04,163 DEBUG [326031821@qtp-47973429-17] CatalogTracker.java:253 Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@75dcb817
2014-07-18 16:40:04,165 INFO [326031821@qtp-47973429-17] ZooKeeper.java:679 Session: 0x245ef23b5f4027c closed
2014-07-18 16:40:04,165 INFO [326031821@qtp-47973429-17-EventThread] ClientCnxn.java:511 EventThread shut down
2014-07-18 16:40:04,166 DEBUG [326031821@qtp-47973429-17] MetaScanner.java:200 Scanning .META. starting at row=t1,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@679b2faf

 

And the logs from the regionserver:

2014-07-18 16:39:15,755 INFO [PRI IPC Server handler 4 on 60020] HRegionServer.java:2801 Received request to open 1 region(s)
2014-07-18 16:39:15,756 INFO [PRI IPC Server handler 4 on 60020] HRegionServer.java:2755 Received request to open region: t1,,1405672754325.7846f70ff612136e14a1c48283deb102.
2014-07-18 16:39:15,760 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:757 regionserver:60020-0x145ef23b27d0d3b Attempting to transition node 7846f70ff612136e14a1c48283deb102 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-07-18 16:39:15,763 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:820 regionserver:60020-0x145ef23b27d0d3b Successfully transitioned node 7846f70ff612136e14a1c48283deb102 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-07-18 16:39:15,763 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] HRegion.java:3745 Opening region: {NAME => 't1,,1405672754325.7846f70ff612136e14a1c48283deb102.', STARTKEY => '', ENDKEY => '', ENCODED => 7846f70ff612136e14a1c48283deb102,}
2014-07-18 16:39:15,763 INFO [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] HRegion.java:425 Setting up tabledescriptor config now ...
2014-07-18 16:39:15,764 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] HRegion.java:419 Instantiated t1,,1405672754325.7846f70ff612136e14a1c48283deb102.
2014-07-18 16:39:15,826 INFO [StoreOpenerThread-t1,,1405672754325.7846f70ff612136e14a1c48283deb102.-1] Store.java:226 time to purge deletes set to 0ms in store null
2014-07-18 16:39:15,828 INFO [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] HRegion.java:577 Onlined t1,,1405672754325.7846f70ff612136e14a1c48283deb102.; next sequenceid=1
2014-07-18 16:39:15,829 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:757 regionserver:60020-0x145ef23b27d0d3b Attempting to transition node 7846f70ff612136e14a1c48283deb102 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2014-07-18 16:39:15,832 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:820 regionserver:60020-0x145ef23b27d0d3b Successfully transitioned node 7846f70ff612136e14a1c48283deb102 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENING
2014-07-18 16:39:15,832 INFO [PostOpenDeployTasks:7846f70ff612136e14a1c48283deb102] HRegionServer.java:1651 Post open deploy tasks for region=t1,,1405672754325.7846f70ff612136e14a1c48283deb102., daughter=false
2014-07-18 16:39:15,835 INFO [PostOpenDeployTasks:7846f70ff612136e14a1c48283deb102] MetaEditor.java:261 Updated row t1,,1405672754325.7846f70ff612136e14a1c48283deb102. with server=cluster-05,60020,1405411368652
2014-07-18 16:39:15,835 INFO [PostOpenDeployTasks:7846f70ff612136e14a1c48283deb102] HRegionServer.java:1676 Done with post open deploy task for region=t1,,1405672754325.7846f70ff612136e14a1c48283deb102., daughter=false
2014-07-18 16:39:15,835 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:757 regionserver:60020-0x145ef23b27d0d3b Attempting to transition node 7846f70ff612136e14a1c48283deb102 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-07-18 16:39:15,838 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] ZKAssign.java:820 regionserver:60020-0x145ef23b27d0d3b Successfully transitioned node 7846f70ff612136e14a1c48283deb102 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-07-18 16:39:15,838 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] OpenRegionHandler.java:284 region transitioned to opened in zookeeper: {NAME => 't1,,1405672754325.7846f70ff612136e14a1c48283deb102.', STARTKEY => '', ENDKEY => '', ENCODED => 7846f70ff612136e14a1c48283deb102,}, server: cluster-05,60020,1405411368652
2014-07-18 16:39:15,838 DEBUG [RS_OPEN_REGION-cluster-05,60020,1405411368652-0] OpenRegionHandler.java:140 Opened t1,,1405672754325.7846f70ff612136e14a1c48283deb102. on server:cluster-05,60020,1405411368652

 

 

 

ref:
HBase advanced part: compact/split/balance and other maintenance principles - delete table