
Setting Up a ZooKeeper Environment on Linux: Standalone and Cluster Configuration

Setting Up a ZooKeeper Environment on Linux
1. ZooKeeper
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
All of these kinds of services are used in some form or another by distributed applications. 
Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. 
Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which makes them brittle in the presence of change and difficult to manage.
Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.
 
ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and a key component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
  ZooKeeper's goal is to encapsulate these complex, error-prone core services and present users with a simple, easy-to-use interface backed by an efficient and stable system.
  ZooKeeper exposes a simple set of primitives through Java and C interfaces.
  The ZooKeeper source distribution ships recipes for distributed exclusive locks, leader election, and queues under zookeeper-3.4.3\src\recipes; the lock and queue recipes come in Java and C versions, while election is Java-only.
Its main use is solving data-management problems that distributed applications commonly face, such as cluster management, unified naming, distributed configuration management, distributed message queues, distributed locks, and distributed notification/coordination.
Official website: https://zookeeper.apache.org/
2. Downloading and Installing ZooKeeper
【1】Download the ZooKeeper package zookeeper-3.5.3-beta.tar.gz from the official site [http://zookeeper.apache.org/releases.html].
【2】Upload zookeeper-3.5.3-beta.tar.gz to the Linux server directory /usr/local/zookeeper (e.g. with Xftp).
【3】Log in to the Linux server (e.g. with XShell) and change into that directory: cd /usr/local/zookeeper
Connecting to 192.168.3.4:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
 
Last login: Wed Apr  4 10:29:33 2018 from 192.168.3.3
[root@marklin ~]# cd /usr/local/zookeeper
[root@marklin zookeeper]# ll
total 41632
-rw-r--r--. 1 root root 42630656 Apr  4 11:15 zookeeper-3.5.3-beta.tar.gz
[root@marklin zookeeper]#
【4】Extract the archive: tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin zookeeper]# tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin zookeeper]# ll
total 41636
drwxr-xr-x. 10  502 games     4096 Apr  3  2017 zookeeper-3.5.3-beta
-rw-r--r--.  1 root root  42630656 Apr  4 11:15 zookeeper-3.5.3-beta.tar.gz
[root@marklin zookeeper]#
【5】Configure the ZooKeeper environment variables: vim /etc/profile
#Setting ZOOKEEPER_HOME PATH
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_HOME}/bin
【6】Configure ZooKeeper: cd zookeeper-3.5.3/conf
[root@marklin zookeeper]# cd zookeeper-3.5.3/conf
[root@marklin conf]#
[root@marklin conf]# ll
total 12
-rw-r--r--. 1 502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1 502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
Copy zoo_sample.cfg in the conf directory to zoo.cfg: cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# ll
total 16
-rw-r--r--. 1  502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1  502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 root root   922 Apr  4 11:33 zoo.cfg
-rw-r--r--. 1  502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
【7】As an aside on deployment, ZooKeeper can be configured in standalone mode or in cluster mode [zookeeper-cluster]:
Standalone mode: follow the installation above and adjust the data and logs directories accordingly.
Cluster mode: comes in two main forms:
1) Multiple machines running one ZooKeeper server process each. This is the usual way to deploy a ZooKeeper service in production.
2) A single machine running several ZooKeeper server processes. This is common in practice and test environments.
【8】Standalone-mode deployment:
  In /usr/local/zookeeper, create a directory: mkdir zookeeper-standalone
 
Adjust the environment variables for standalone mode: vim /etc/profile
#Setting ZOOKEEPER_HOME PATH
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_HOME}/bin
Edit the ZooKeeper configuration: cd /usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3/conf
[root@marklin ~]# cd /usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3/conf
[root@marklin conf]# ll
total 12
-rw-r--r--. 1 502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1 502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
Copy zoo_sample.cfg in the conf directory to zoo.cfg: cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# ll
total 16
-rw-r--r--. 1  502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1  502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 root root   922 Apr  4 12:20 zoo.cfg
-rw-r--r--. 1  502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
Edit zoo.cfg for standalone mode: vim zoo.cfg
# The number of milliseconds of each tick
# (heartbeat interval, ZooKeeper's basic time unit, in ms; here a heartbeat every 2 s)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# (time allowed for followers to connect and sync with the leader: 10 ticks)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# (leader/follower request-acknowledgement timeout: 5 ticks)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
# data directory
dataDir=/usr/local/zookeeper/repository/zookeeper-standalone/data
# log directory
dataLogDir=/usr/local/zookeeper/repository/zookeeper-standalone/logs
# the port at which the clients will connect
clientPort=2181
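Since initLimit and syncLimit are expressed in ticks, the effective timeouts follow from multiplying by tickTime. A quick sanity check of the values above:

```shell
# Effective timeouts implied by the zoo.cfg values above.
tickTime=2000   # ms per tick
initLimit=10    # ticks allowed for follower<->leader initial connect/sync
syncLimit=5     # ticks allowed between a request and its acknowledgement
echo "init timeout: $(( tickTime * initLimit )) ms"   # prints 20000 ms
echo "sync timeout: $(( tickTime * syncLimit )) ms"   # prints 10000 ms
```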
Directing standalone ZooKeeper logs to a dedicated directory:
【1】Edit log4j.properties in the conf directory of the zookeeper-standalone install:
Change the default zookeeper.root.logger=INFO, CONSOLE to zookeeper.root.logger=INFO,ROLLINGFILE:
#zookeeper.root.logger=INFO, CONSOLE
zookeeper.root.logger=INFO,ROLLINGFILE
Change the default log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender to log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender (rolls the log over daily):
#log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
【2】Also edit bin/zkEnv.sh in the zookeeper-standalone install directory:
Change #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs" to the chosen log directory, ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-standalone/logs":
    #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs"
    ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-standalone/logs"
Change #ZOO_LOG4J_PROP="INFO,CONSOLE" to match the logger configured above, ZOO_LOG4J_PROP="INFO,ROLLINGFILE":
    #ZOO_LOG4J_PROP="INFO,CONSOLE"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
After the edits, the relevant blocks in zkEnv.sh read:
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs"
    ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-standalone/logs"
fi
 
if [ "x${ZOO_LOG4J_PROP}" = "x" ]
then
    #ZOO_LOG4J_PROP="INFO,CONSOLE"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
fi
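The two log4j.properties edits can also be applied non-interactively with sed. The sketch below runs the substitutions on a temporary copy containing the default lines, so it is safe to try anywhere; point the same sed expressions at the real conf/log4j.properties to apply them for real:

```shell
# Demonstrate the two log4j.properties substitutions on a temp copy.
tmp=$(mktemp)
printf '%s\n' \
  'zookeeper.root.logger=INFO, CONSOLE' \
  'log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender' > "$tmp"
sed -i \
  -e 's/^zookeeper\.root\.logger=INFO, CONSOLE/zookeeper.root.logger=INFO,ROLLINGFILE/' \
  -e 's/RollingFileAppender$/DailyRollingFileAppender/' "$tmp"
result=$(cat "$tmp")
printf '%s\n' "$result"
rm -f "$tmp"
```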
【3】Start and test:
Start the zookeeper-standalone service with zkServer.sh start:
[root@marklin conf]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@marklin conf]#
Check the status of the zookeeper-standalone service with zkServer.sh status:
[root@marklin conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: standalone
[root@marklin conf]#
 
Stop the zookeeper-standalone service with zkServer.sh stop:
[root@marklin conf]# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[root@marklin conf]#
 
Finally, inspect the data and logs directories to confirm that snapshot and log files are being written (directory listings omitted here).
 
【9】Cluster-mode deployment:
Extract the ZooKeeper archive in /usr/local/zookeeper/zookeeper-cluster/cluster-master: tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-master]# tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-master]# ll
total 41636
drwxr-xr-x. 10  502 games     4096 Apr  3  2017 zookeeper-3.5.3
-rw-r--r--.  1 root root  42630656 Apr  4 11:51 zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-master]#
 
 
Extract the archive in /usr/local/zookeeper/zookeeper-cluster/cluster-slave1: tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave1]# tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave1]# ll
total 41636
drwxr-xr-x. 10  502 games     4096 Apr  3  2017 zookeeper-3.5.3-beta
-rw-r--r--.  1 root root  42630656 Apr  4 11:51 zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave1]#
Extract the archive in /usr/local/zookeeper/zookeeper-cluster/cluster-slave2: tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave2]# tar -xvf zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave2]# ll
total 41636
drwxr-xr-x. 10  502 games     4096 Apr  3  2017 zookeeper-3.5.3
-rw-r--r--.  1 root root  42630656 Apr  4 11:52 zookeeper-3.5.3-beta.tar.gz
[root@marklin cluster-slave2]#
Cluster-mode environment variables: vim /etc/profile
#Setting ZOOKEEPER_HOME PATH
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-standalone/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_HOME}/bin
export ZOOKEEPER_MASTER_HOME=/usr/local/zookeeper/zookeeper-cluster/cluster-master/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_MASTER_HOME}/bin
export ZOOKEEPER_SLAVE1_HOME=/usr/local/zookeeper/zookeeper-cluster/cluster-slave1/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_SLAVE1_HOME}/bin
export ZOOKEEPER_SLAVE2_HOME=/usr/local/zookeeper/zookeeper-cluster/cluster-slave2/zookeeper-3.5.3
export PATH=${PATH}:${ZOOKEEPER_SLAVE2_HOME}/bin
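Note that these exports put four bin directories on PATH, each containing its own zkServer.sh, so which copy a bare `zkServer.sh` resolves to depends on PATH order. On a single machine it is safer to address each instance by full path. A sketch that prints the per-instance start commands (paths assumed from the layout above; the commands are printed rather than executed so the sketch runs anywhere — pipe the output to `sh` to actually start the instances):

```shell
# Print the full-path start command for each cluster instance.
BASE=/usr/local/zookeeper/zookeeper-cluster
starts=$(for inst in cluster-master cluster-slave1 cluster-slave2; do
  echo "$BASE/$inst/zookeeper-3.5.3/bin/zkServer.sh start"
done)
printf '%s\n' "$starts"
```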
For cluster mode, copy zoo_sample.cfg to zoo.cfg in each instance's conf directory:
In /usr/local/zookeeper/zookeeper-cluster/cluster-master/zookeeper-3.5.3/conf, run: cp zoo_sample.cfg zoo.cfg
[root@marklin cluster-master]# cd zookeeper-3.5.3/conf
[root@marklin conf]# ll
total 12
-rw-r--r--. 1 502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1 502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]# cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# ll
total 16
-rw-r--r--. 1  502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1  502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 root root   922 Apr  4 13:23 zoo.cfg
-rw-r--r--. 1  502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
In /usr/local/zookeeper/zookeeper-cluster/cluster-slave1/zookeeper-3.5.3/conf, run: cp zoo_sample.cfg zoo.cfg
[root@marklin cluster-slave1]# cd zookeeper-3.5.3/conf
[root@marklin conf]# ll
total 12
-rw-r--r--. 1 502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1 502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]# cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# ll
total 16
-rw-r--r--. 1  502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1  502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 root root   922 Apr  4 13:25 zoo.cfg
-rw-r--r--. 1  502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
In /usr/local/zookeeper/zookeeper-cluster/cluster-slave2/zookeeper-3.5.3/conf, run: cp zoo_sample.cfg zoo.cfg
[root@marklin cluster-slave2]# cd zookeeper-3.5.3/conf
[root@marklin conf]# ll
total 12
-rw-r--r--. 1 502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1 502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]# cp zoo_sample.cfg zoo.cfg
[root@marklin conf]# ll
total 16
-rw-r--r--. 1  502 games  535 Apr  3  2017 configuration.xsl
-rw-r--r--. 1  502 games 2712 Apr  3  2017 log4j.properties
-rw-r--r--. 1 root root   922 Apr  4 13:26 zoo.cfg
-rw-r--r--. 1  502 games  922 Apr  3  2017 zoo_sample.cfg
[root@marklin conf]#
Edit each instance's zoo.cfg for cluster mode:
For cluster-master, edit zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-master/data
dataLogDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-master/logs
# the port at which the clients will connect
clientPort=2182
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
 
For cluster-slave1, edit zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1/data
dataLogDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1/logs
# the port at which the clients will connect
clientPort=2183
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
For cluster-slave2, edit zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2/data
dataLogDir=/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2/logs
# the port at which the clients will connect
clientPort=2184
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
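The three zoo.cfg files above differ only in clientPort, dataDir, and dataLogDir, so they can be generated from a single template. A sketch following the paths and ports used in this guide (the files are written to the current directory as zoo-<instance>.cfg; copy each into the matching conf directory as zoo.cfg):

```shell
# Generate the three per-instance zoo.cfg files from one template.
REPO=/usr/local/zookeeper/repository/zookeeper-cluster
i=1
for inst in cluster-master cluster-slave1 cluster-slave2; do
  cat > "zoo-$inst.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$REPO/$inst/data
dataLogDir=$REPO/$inst/logs
clientPort=$(( 2181 + i ))
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
  i=$(( i + 1 ))
done
```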
 
PS: Because the cluster is simulated on a single machine, the ports must not collide; the client ports 2182~2184, quorum ports 2888~2890, and election ports 3888~3890 are therefore kept distinct. In addition, every ZooKeeper instance needs its own data and log directories, so the directories referenced by dataDir and dataLogDir must be created in advance:
cluster-master: create data and logs directories under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-master
cluster-slave1: create data and logs directories under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1
cluster-slave2: create data and logs directories under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2
One more critical setting: in the directory pointed to by each server's dataDir, create a file named myid whose content is the x of the matching server.x entry in zoo.cfg, i.e.:
cluster-master: under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-master/data, create a file named myid containing 1
[root@marklin ~]# cd /usr/local/zookeeper/repository/zookeeper-cluster/cluster-master
[root@marklin cluster-master]# ll
total 0
drwxr-xr-x. 2 root root 18 Apr  4 13:52 data
drwxr-xr-x. 2 root root  6 Apr  4 12:17 logs
[root@marklin cluster-master]# cd data
[root@marklin data]# ll
total 4
-rw-r--r--. 1 root root 3 Apr  4 13:52 myid
[root@marklin data]#
 
cluster-slave1: under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1/data, create a file named myid containing 2
[root@marklin zookeeper-cluster]# cd /usr/local/zookeeper/repository/zookeeper-cluster/cluster-salve1
[root@marklin cluster-salve1]# ll
total 0
drwxr-xr-x. 2 root root 6 Apr  4 12:17 data
drwxr-xr-x. 2 root root 6 Apr  4 12:17 logs
[root@marklin cluster-salve1]# cd data
[root@marklin data]# vim myid
[root@marklin data]# ll
total 4
-rw-r--r--. 1 root root 2 Apr  4 13:56 myid
[root@marklin data]#
 
cluster-slave2: under /usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2/data, create a file named myid containing 3
[root@marklin ~]# cd /usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2
[root@marklin cluster-slave2]# ll
total 0
drwxr-xr-x. 2 root root 6 Apr  4 12:17 data
drwxr-xr-x. 2 root root 6 Apr  4 12:17 logs
[root@marklin cluster-slave2]# cd data/
[root@marklin data]# vim myid
[root@marklin data]# ll
total 4
-rw-r--r--. 1 root root 2 Apr  4 13:59 myid
[root@marklin data]#
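The directory and myid creation above can be done in one pass. A sketch; REPO defaults to a scratch path here so it can be tried safely — set REPO=/usr/local/zookeeper/repository/zookeeper-cluster to produce the real layout:

```shell
# Create data/logs directories and the myid file for each instance.
# The myid content must match the x in the corresponding server.x line.
REPO=${REPO:-/tmp/zookeeper-cluster-demo}   # assumption: scratch default
id=1
for inst in cluster-master cluster-slave1 cluster-slave2; do
  mkdir -p "$REPO/$inst/data" "$REPO/$inst/logs"
  echo "$id" > "$REPO/$inst/data/myid"
  id=$(( id + 1 ))
done
```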
 
Directing cluster-mode ZooKeeper logs to dedicated directories:
【1】Edit log4j.properties in the conf directory of each of cluster-master, cluster-slave1, and cluster-slave2:
Change the default zookeeper.root.logger=INFO, CONSOLE to zookeeper.root.logger=INFO,ROLLINGFILE:
#zookeeper.root.logger=INFO, CONSOLE
zookeeper.root.logger=INFO,ROLLINGFILE
Change the default log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender to log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender (rolls the log over daily):
#log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
 
【2】Edit bin/zkEnv.sh in each of the cluster-master, cluster-slave1, and cluster-slave2 install directories:
cluster-master:
Change #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs" to the instance's log directory, ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-master/logs":
    #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs"
    ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-master/logs"

Change #ZOO_LOG4J_PROP="INFO,CONSOLE" to match the logger configured above, ZOO_LOG4J_PROP="INFO,ROLLINGFILE":
    #ZOO_LOG4J_PROP="INFO,CONSOLE"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
cluster-slave1:
Change #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs" to the instance's log directory, ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1/logs":
    #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs"
    ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave1/logs"
Change #ZOO_LOG4J_PROP="INFO,CONSOLE" to match the logger configured above, ZOO_LOG4J_PROP="INFO,ROLLINGFILE":
    #ZOO_LOG4J_PROP="INFO,CONSOLE"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
cluster-slave2:
Change #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs" to the instance's log directory, ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2/logs":
    #ZOO_LOG_DIR="$ZOOKEEPER_PREFIX/logs"
    ZOO_LOG_DIR="/usr/local/zookeeper/repository/zookeeper-cluster/cluster-slave2/logs"
Change #ZOO_LOG4J_PROP="INFO,CONSOLE" to match the logger configured above, ZOO_LOG4J_PROP="INFO,ROLLINGFILE":
    #ZOO_LOG4J_PROP="INFO,CONSOLE"
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
【3】Start and test:
In cluster mode, each of the three instances is started separately with its own bin/zkServer.sh start, checked with zkServer.sh status, and stopped with zkServer.sh stop, exactly as shown for the standalone service above. Once all three are up, zkServer.sh status should report Mode: leader for one instance and Mode: follower for the other two.
Test client connections to zookeeper-standalone, cluster-master, cluster-slave1, and cluster-slave2.
Connect a client to zookeeper-standalone: zkCli.sh -server 127.0.0.1:2181
[root@marklin conf]# zkCli.sh -server 127.0.0.1:2181
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Connecting to 127.0.0.1:2181
Welcome to ZooKeeper!
JLine support is enabled
 
WATCHER::
 
WatchedEvent state:SyncConnected type:None path:null
[zk: 127.0.0.1:2181(CONNECTED) 0]
Connect a client to cluster-master: zkCli.sh -server 127.0.0.1:2182
[root@marklin data]# zkCli.sh -server 127.0.0.1:2182
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Connecting to 127.0.0.1:2182
Welcome to ZooKeeper!
JLine support is enabled
[zk: 127.0.0.1:2182(CONNECTING) 0]
Connect a client to cluster-slave1: zkCli.sh -server 127.0.0.1:2183
[root@marklin data]# zkCli.sh -server 127.0.0.1:2183
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Connecting to 127.0.0.1:2183
Welcome to ZooKeeper!
JLine support is enabled
[zk: 127.0.0.1:2183(CONNECTING) 0]
Connect a client to cluster-slave2: zkCli.sh -server 127.0.0.1:2184
[root@marklin data]# zkCli.sh -server 127.0.0.1:2184
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Connecting to 127.0.0.1:2184
Welcome to ZooKeeper!
JLine support is enabled
[zk: 127.0.0.1:2184(CONNECTING) 0]
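As a quicker health probe than an interactive zkCli session, each instance can be queried with the `srvr` four-letter command over its client port (this requires `nc`; `srvr` is in the default four-letter-word whitelist of ZooKeeper 3.5.3). The sketch only prints the probe commands so it can run anywhere — pipe the output to `sh` to execute them against the running servers:

```shell
# Build the 'srvr' probe command for each client port used in this guide.
probes=$(for port in 2181 2182 2183 2184; do
  echo "echo srvr | nc 127.0.0.1 $port"
done)
printf '%s\n' "$probes"
```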
【4】Check the Java processes with jps:
jps (Java Virtual Machine Process Status Tool) lists the PIDs of all running Java processes.
[root@marklin ~]# jps
6832 ZooKeeperMain
6246 QuorumPeerMain
6679 ZooKeeperMain
6760 ZooKeeperMain
6907 ZooKeeperMain
7004 Jps
[root@marklin ~]#
 