Kafka 2017 Update(1)Zookeeper Cluster and Kafka Cluster
Install Zookeeper Cluster on 3 Nodes
>wget http://mirrors.advancedhosters.com/apache/zookeeper/zookeeper-3.5.3-beta/zookeeper-3.5.3-beta.tar.gz
>tar xf zookeeper-3.5.3-beta.tar.gz
>sudo ln -s /home/ec2-user/tool/zookeeper-3.5.3-beta /opt/zookeeper-3.5.3
>sudo ln -s /opt/zookeeper-3.5.3 /opt/zookeeper
Add the ZooKeeper bin directory to the PATH
export PATH=$PATH:/opt/zookeeper/bin
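To make the PATH change survive new shells, one option (a sketch, assuming a bash login shell for ec2-user) is:
>echo 'export PATH=$PATH:/opt/zookeeper/bin' >> ~/.bash_profile
>source ~/.bash_profile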
Prepare the Configuration file
>cp conf/zoo_sample.cfg conf/zoo.cfg
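For the single-node test the defaults from zoo_sample.cfg are enough; the settings that matter later look like this (dataDir is where the myid file used below will live):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181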
Try starting a local server
>zkServer.sh start conf/zoo.cfg
Port 8080 is used by the AdminServer, which is new in 3.5.0.
To move it to another port, add the property in zkServer.sh:
nohup "$JAVA" $ZOO_DATADIR_AUTOCREATE "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.admin.serverPort=8081"\
"-Dzookeeper.log.file=${ZOO_LOG_FILE}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
-XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p' \
-cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &
if [ $? -eq 0 ]
https://zookeeper.apache.org/doc/r3.5.1-alpha/zookeeperAdmin.html#sc_adminserver_config
Source code is here https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/server/admin/JettyAdminServer.java
The AdminServer reads the port from the JVM system property -Dzookeeper.admin.serverPort=8081.
Visit the Admin console
http://fr-stage-api:8081/commands/stats
Connect with Client
>zkCli.sh -server localhost:2181
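Once connected, a few basic commands serve as a smoke test (the /test znode is just an example):
ls /
create /test hello
get /test
delete /test
quit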
Stop the Service
>zkServer.sh stop
Prepare the Configuration for the Cluster in zoo1.cfg, zoo2.cfg and zoo3.cfg, adding these lines to each file (a complete example follows the list):
server.1=fr-stage-api:2888:3888
server.2=fr-stage-consumer:2888:3888
server.3=fr-perf1:2888:3888
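Putting it together, a minimal sketch of zoo1.cfg (the first five lines are the usual defaults carried over from zoo_sample.cfg; only the server.N entries are specific to this cluster):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=fr-stage-api:2888:3888
server.2=fr-stage-consumer:2888:3888
server.3=fr-perf1:2888:3888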
On each node, write that node's id into the myid file under dataDir.
On fr-stage-api:
>vi /tmp/zookeeper/myid
1
On fr-stage-consumer:
>vi /tmp/zookeeper/myid
2
On fr-perf1:
>vi /tmp/zookeeper/myid
3
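Equivalently, a one-liner per node (a sketch; the id must match that host's server.N entry, and dataDir may not exist yet on a fresh machine):
>mkdir -p /tmp/zookeeper
>echo 1 > /tmp/zookeeper/myid
Use echo 2 on fr-stage-consumer and echo 3 on fr-perf1.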
Copy the files to all the other servers and start them all, as shown below.
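On each node, start ZooKeeper with that node's configuration file (assuming zoo1.cfg is used on fr-stage-api, zoo2.cfg on fr-stage-consumer and zoo3.cfg on fr-perf1):
>zkServer.sh start conf/zoo1.cfg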
Exception:
2018-01-02 19:38:03,370 [myid:] - ERROR [main:QuorumPeerMain@86] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing conf/zoo1.cfg
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:138)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
Caused by: java.lang.IllegalArgumentException: myid file is missing
Solution:
https://github.com/31z4/zookeeper-docker/issues/13
In the system environment settings:
ZOO_MY_ID=1
export ZOO_MY_ID
This is not necessary when ZooKeeper is installed directly on the local machine.
Check the status on the server
>zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
Kafka Cluster
Building Kafka from source requires Gradle 2.0 or higher
Install Gradle Manually https://gradle.org/install/#manually
>wget https://downloads.gradle.org/distributions/gradle-4.4.1-bin.zip
>unzip gradle-4.4.1-bin.zip
>sudo ln -s /home/ec2-user/tool/gradle-4.4.1 /opt/gradle-4.4.1
>sudo ln -s /opt/gradle-4.4.1 /opt/gradle
>gradle --version
------------------------------------------------------------
Gradle 4.4.1
------------------------------------------------------------
Build time: 2017-12-20 15:45:23 UTC
Revision: 10ed9dc355dc39f6307cc98fbd8cea314bdd381c
Groovy: 2.4.12
Ant: Apache Ant(TM) version 1.9.9 compiled on February 2 2017
JVM: 1.8.0_60 (Oracle Corporation 25.60-b23)
OS: Linux 4.1.13-18.26.amzn1.x86_64 amd64
Run the gradle command in the Kafka source directory to bootstrap the Gradle wrapper
>gradle
Build Jar package
>./gradlew jar
Build the release tarball (artifact signing skipped)
>./gradlew releaseTarGz -x signArchives
Copy the built tarball to the install directory
>cp ./core/build/distributions/kafka_2.11-1.0.0.tgz ~/install/
Extract it and link it from the working directory
>sudo ln -s /home/ec2-user/tool/kafka-1.0.0 /opt/kafka-1.0.0
Prepare Single Kafka Configuration
>cat config/server.properties
zookeeper.connect=fr-stage-api:2181,fr-stage-consumer:2181,fr-perf1:2181
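The rest of server.properties stays at the stock defaults; the settings that matter in this setup look roughly like this:
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=fr-stage-api:2181,fr-stage-consumer:2181,fr-perf1:2181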
Start the Server
>kafka-server-start.sh config/server.properties
Prepare the Cluster Configuration
>cp config/server.properties config/server1.properties
>cp config/server.properties config/server2.properties
>cp config/server.properties config/server3.properties
Set a unique broker.id in each file:
In server1.properties: broker.id=1
In server2.properties: broker.id=2
In server3.properties: broker.id=3
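Since each broker runs on its own machine here, changing broker.id is enough. If the three brokers were started on a single host instead, each file would also need its own port and log directory, along these lines (illustrative values, not from this setup):
broker.id=2
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-2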
Start the first Kafka broker on the first machine
>nohup bin/kafka-server-start.sh config/server1.properties &
Exceptions:
[2018-01-03 18:17:02,882] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:615)
at kafka.server.KafkaServer.startup(KafkaServer.scala:201)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
Solution:
The meta.properties left in log.dirs by the earlier single-broker run still records broker.id 0, so clean the log directory:
>rm -fr /tmp/kafka-logs/*
Restart the brokers, each on its own machine:
>nohup bin/kafka-server-start.sh config/server1.properties &
>nohup bin/kafka-server-start.sh config/server2.properties &
>nohup bin/kafka-server-start.sh config/server3.properties &
Create the topic
>bin/kafka-topics.sh --create --zookeeper fr-stage-api:2181,fr-stage-consumer:2181,fr-perf1:2181 --replication-factor 2 --partitions 2 --topic cluster1
Created topic "cluster1".
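To check how the partitions and replicas were assigned, describe the topic with the same tool:
>bin/kafka-topics.sh --describe --zookeeper fr-stage-api:2181,fr-stage-consumer:2181,fr-perf1:2181 --topic cluster1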
Producer
>bin/kafka-console-producer.sh --broker-list fr-stage-api:9092,fr-stage-consumer:9092,fr-perf1:9092 --topic cluster1
Consumer
>bin/kafka-console-consumer.sh --zookeeper fr-stage-api:2181,fr-stage-consumer:2181,fr-perf1:2181 --topic cluster1 --from-beginning
References:
Kafka 1~6
http://sillycat.iteye.com/blog/1563312
http://sillycat.iteye.com/blog/1563314
http://sillycat.iteye.com/blog/2015175
http://sillycat.iteye.com/blog/2015181
http://sillycat.iteye.com/blog/2094688
http://sillycat.iteye.com/blog/2108042
http://sillycat.iteye.com/blog/2215237
http://sillycat.iteye.com/blog/2183932
https://kafka.apache.org/downloads
zookeeper
http://sillycat.iteye.com/blog/2397642
http://sillycat.iteye.com/blog/2397645