
HBase cluster install


refers to HBase version 0.20.6

==================

1 ----- use HBase's built-in ZooKeeper instance as coordinator

2 ----- use a ZooKeeper cluster as coordinator

I ran into a puzzle during this installation. Here are the installation steps:


1) configure the HBase cluster

a) as we use HDFS to store the files generated by the HMaster and the HRegionServer(s), HBase must reference the hdfs-site.xml from the Hadoop config dir. You can either:

  copy hdfs-site.xml into the conf dir of HBase, OR

  make a symlink to it:

    cd conf-dir-of-hbase

    ln -s path-to-hadoop-conf/hdfs-site.xml

b) hbase-env.sh

    *set JAVA_HOME

    *set: HBASE_MANAGES_ZK=false   //do not use the ZooKeeper instance managed by HBase
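Those two hbase-env.sh settings look like the following fragment (the JDK path is a placeholder for wherever your JDK actually lives):

```shell
# hbase-env.sh (fragment) -- JAVA_HOME path is a placeholder
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HBASE_MANAGES_ZK=false    # do not let HBase manage ZooKeeper itself
```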

c) hbase-site.xml

 *hbase.rootdir

point it at the HDFS URL, for example: hdfs://url-to-hdfs:<port>/path-shared-by-regionservers

NOTE: this URL can NOT contain an IP address; using a domain name is a must.

*hbase.cluster.distributed=true

*hbase.zookeeper.quorum=comma-separated list of ZooKeeper servers

*hbase.zookeeper.property.clientPort=the port set in zoo.cfg (used by clients to connect)
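Putting subsection c) together, a minimal hbase-site.xml might look like this (all hostnames, the port, and the path are placeholders, not values from the original setup):

```xml
<configuration>
  <!-- must be a domain name, not an IP (see the note above) -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- comma-separated list of ZooKeeper ensemble hosts -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <!-- must match clientPort in zoo.cfg -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```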

d) regionservers

add the HRegionServer hosts to this file, one hostname per line


2) start the various clusters (note the order)

a) start the Hadoop cluster with start-all.sh. Yes, this is the same as in pseudo-distributed mode, and you can NOT use start-dfs.sh instead!

b) start the ZooKeeper cluster

  zkServer.sh start

run this command on every node in turn.

c) start the HBase cluster. Run this command on the node where the HMaster should live:

  start-hbase.sh

the other region servers will then be started up automatically.


this means the startup order for the whole stack is:

hadoop -> zk -> hbase

and stopping uses the opposite order:

hbase -> zk -> hadoop
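The ordering above can be sketched as a pair of command sequences. This is only a sketch: it assumes the Hadoop, ZooKeeper, and HBase bin directories are on PATH on the relevant machines, that zk1/zk2/zk3 are placeholder hostnames for the ZooKeeper nodes, and that passwordless ssh to them is configured.

```
# start: hadoop -> zookeeper -> hbase
start-all.sh                       # on the Hadoop master
for zk in zk1 zk2 zk3; do          # placeholder ZK hostnames
    ssh "$zk" zkServer.sh start
done
start-hbase.sh                     # on the HMaster node

# stop: hbase -> zookeeper -> hadoop (reverse order)
stop-hbase.sh
for zk in zk1 zk2 zk3; do
    ssh "$zk" zkServer.sh stop
done
stop-all.sh
```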


3) processes on the nodes:

a) master:

8027 HMaster                             //hbase's master
2542 SecondaryNameNode
7920 QuorumPeerMain           //zk process; note it was started outside of HBase, so it is not named HQuorumPeer
2611 JobTracker
2377 NameNode

b) region servers

3870 QuorumPeerMain
1156 TaskTracker
4019 HRegionServer              //slave of hbase master
1026 DataNode


4) status in zk

a)nodes in zk tree

[safe-mode, root-region-server, rs, master, shutdown]


*safe-mode

[zk: localhost:2181(CONNECTED) 26] get /hbase/safe-mode    // value is empty, meaning hbase has left safe mode

cZxid = 115964117007
ctime = Wed May 18 23:03:03 CST 2011
mZxid = 115964117007
mtime = Wed May 18 23:03:03 CST 2011
pZxid = 115964117007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0
dataLength = 0
numChildren = 0


*root-region-server

[zk: localhost:2181(CONNECTED) 17] get /hbase/root-region-server
192.168.0.2:60020

I think this is the server hbase considers to hold the "root" region, as it is one of the RS (region servers)


*rs (the parent znode for the region server entries)

[zk: localhost:2181(CONNECTED) 25] get /hbase/rs    //value is empty

cZxid = 115964117001
ctime = Wed May 18 23:02:56 CST 2011
mZxid = 115964117001
mtime = Wed May 18 23:02:56 CST 2011
pZxid = 115964117005
cversion = 2
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0
dataLength = 0
numChildren = 2


* master

[zk: localhost:2181(CONNECTED) 21] get /hbase/master
192.168.0.1:60000

this is the master's IP and port; the RS (region servers) use it to report in


*shutdown

[zk: localhost:2181(CONNECTED) 20] get /hbase/shutdown 
up


NOTE :

1. HBase uses the ZK cluster for heartbeating (HBase itself does not have this mechanism?), so ZK must be started before the HBase cluster runs!


2. how does HBase know to store data in HDFS?

as HBase is an HDFS-based table store, it must know the site-specific config before storing anything, so it also uses hdfs-site.xml (copied from hadoop/conf).

actually, we set hbase.rootdir to point to the HDFS file system; this is very important for a clustered HBase!

 

3. does HBase use cluster MapReduce?

there is no mapred-related config in the xml, so I think it uses the local job runner for MapReduce jobs instead of the JobTracker (i.e. cluster MapReduce mode).

BUT if you want to create a secondary index, HBase MAY start a job to do it, so MR needs to be running in that case.

 

4. how does HBase use ZooKeeper?

based on the properties HBASE_MANAGES_ZK, hbase.zookeeper.quorum, and hbase.zookeeper.property.clientPort, HBase learns how to connect to ZooKeeper. In this setup HBASE_MANAGES_ZK is false, so HBase uses the quorum and clientPort to connect to the ZooKeeper ensemble (leader or followers) to read and write its znodes.
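As a small illustration of that last point, this is roughly how a client combines the quorum list and the clientPort into a single ZooKeeper connect string (the hostnames and port are placeholders, and the sed one-liner is just a sketch, not HBase's actual code):

```shell
quorum="zk1,zk2,zk3"     # hbase.zookeeper.quorum (placeholder hostnames)
port=2181                # hbase.zookeeper.property.clientPort
# append :port to every host in the comma-separated list
connect=$(echo "$quorum" | sed "s/,/:$port,/g"):$port
echo "$connect"          # -> zk1:2181,zk2:2181,zk3:2181
```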

 

 

 

 


see also: HBase architecture
