Kafka Cluster 2019 (2) Kafka Cluster 2.12-2.2.0
Last time, I got the source code and compiled and packaged it myself. This time, I will just download the binary directly.
> wget http://mirror.cc.columbia.edu/pub/software/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
Unzip it and place it in the right directory
> sudo ln -s /home/carl/tool/kafka_2.12-2.2.0 /opt/kafka-2.2.0
> sudo ln -s /opt/kafka-2.2.0 /opt/kafka
Prepare a single-node configuration first
> vi config/server.properties
zookeeper.connect=ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181
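For reference, the rest of my server.properties stays close to the defaults; the broker.id and listeners values below are my assumptions for the master node:
# assumed values for the master node; everything else stays default
broker.id=1
listeners=PLAINTEXT://ubuntu-master:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181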
Add Kafka to the PATH
export PATH="/opt/kafka/bin:$PATH"
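To make this persist across new shells, append it to ~/.bashrc (assuming bash is the login shell):
> echo 'export PATH="/opt/kafka/bin:$PATH"' >> ~/.bashrc
> source ~/.bashrc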
Start the Server and Give It a Try
> kafka-server-start.sh config/server.properties
Exception:
[2019-06-24 18:25:42,264] INFO Opening socket connection to server ubuntu-dev2/192.168.56.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-06-24 18:25:42,265] INFO Socket error occurred: ubuntu-dev2/192.168.56.3:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
Solution:
First of all, I found that the ZooKeeper cluster was not running.
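A quick way to confirm whether each ZooKeeper node is listening on its client port (assuming netcat is installed):
> nc -vz ubuntu-dev2 2181    # repeat for each ZooKeeper host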
I checked the ZooKeeper logs, and they said the /tmp/zookeeper/myid file was missing.
Haha, that is because my /tmp/zookeeper directory was wiped; /tmp is not a good place for a data directory.
Change the ZooKeeper configuration (dataDir in zoo.cfg)
dataDir=/opt/zookeeper/data
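The myid file then has to be recreated under the new dataDir; a minimal sketch, assuming this node's server id is 1 (each node writes its own id):
> sudo mkdir -p /opt/zookeeper/data
> echo "1" | sudo tee /opt/zookeeper/data/myid    # assumed id; use 2 and 3 on the other nodes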
ZooKeeper works perfectly now. Try Kafka again.
> kafka-server-start.sh config/server.properties
It works for a single node this time. Next, set up the cluster.
Do a similar installation on ubuntu-dev2 and ubuntu-dev4.
Copy the base configuration file from the master
> scp ubuntu-master:/opt/kafka/config/server1.properties ./config/
Rename it on each machine (server2.properties on ubuntu-dev2, server3.properties on ubuntu-dev4)
> mv config/server1.properties config/server2.properties
> mv config/server1.properties config/server3.properties
Change broker.id in each configuration file
# server1.properties on ubuntu-master
broker.id=1
# server2.properties on ubuntu-dev2
broker.id=2
# server3.properties on ubuntu-dev4
broker.id=3
Clean the log directory and then start Kafka again, one broker per machine.
> rm -fr /tmp/kafka-logs/*
> nohup bin/kafka-server-start.sh config/server1.properties &
> nohup bin/kafka-server-start.sh config/server2.properties &
> nohup bin/kafka-server-start.sh config/server3.properties &
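To confirm that all three brokers registered, list the broker ids in ZooKeeper with the zookeeper-shell.sh script that ships with Kafka; it should print [1, 2, 3] once everything is up:
> bin/zookeeper-shell.sh ubuntu-master:2181 ls /brokers/ids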
Check the installation.
Create the topic
>bin/kafka-topics.sh --create --zookeeper ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181 --replication-factor 2 --partitions 2 --topic cluster1
Created topic "cluster1".
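The topic layout can be double-checked with --describe, which prints the leader, replicas, and ISR for each partition:
> bin/kafka-topics.sh --describe --zookeeper ubuntu-master:2181 --topic cluster1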
Producer
>bin/kafka-console-producer.sh --broker-list ubuntu-master:9092,ubuntu-dev2:9092,ubuntu-dev4:9092 --topic cluster1
Consumer
>bin/kafka-console-consumer.sh --zookeeper ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181 --topic cluster1 --from-beginning
Exception:
zookeeper is not a recognized option
Solution:
The --zookeeper option was removed from the console consumer in Kafka 2.0, so connect through the brokers with --bootstrap-server instead.
>bin/kafka-console-consumer.sh --bootstrap-server ubuntu-master:9092,ubuntu-dev2:9092,ubuntu-dev4:9092 --topic cluster1 --from-beginning
Typing a few lines into the producer and seeing them echoed by the consumer confirms the cluster works end to end. The next step is to set up a client driver to connect to it.
References:
https://www.jianshu.com/p/5297773fcc1b