sillycat
Kafka Cluster 2019(2)Kafka Cluster 2.12-2.2.0

 

Last time, I got the source code and compiled and packaged it myself. This time, I will just download the binary directly.
> wget http://mirror.cc.columbia.edu/pub/software/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz

Unzip it and place it in the right directory
> sudo ln -s /home/carl/tool/kafka_2.12-2.2.0 /opt/kafka-2.2.0
> sudo ln -s /opt/kafka-2.2.0 /opt/kafka

Prepare a single-node configuration first
> vi config/server.properties
zookeeper.connect=ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181
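For reference, the minimal set of properties worth checking in server.properties looks roughly like this; only zookeeper.connect is from the step above, while the broker.id, listeners, and log.dirs values are illustrative assumptions:

```properties
# config/server.properties (sketch; adjust hosts and paths to your setup)
broker.id=1
# assumed example; Kafka defaults to PLAINTEXT://:9092
listeners=PLAINTEXT://ubuntu-master:9092
# log.dirs defaults to /tmp/kafka-logs, which does not survive reboots
log.dirs=/opt/kafka/logs
zookeeper.connect=ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181
```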

Add Kafka to path
export PATH="/opt/kafka/bin:$PATH"

Start the Server to Have a Try
> kafka-server-start.sh config/server.properties

Exception:
[2019-06-24 18:25:42,264] INFO Opening socket connection to server ubuntu-dev2/192.168.56.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-06-24 18:25:42,265] INFO Socket error occurred: ubuntu-dev2/192.168.56.3:2181: Connection refused (org.apache.zookeeper.ClientCnxn)

Solution:
First of all, I found the ZooKeeper cluster is not running.
I checked the ZooKeeper logs; they say that the /tmp/zookeeper/myid file is missing.

Haha, it is because my /tmp/zookeeper directory is gone; /tmp is not a good place for the data directory.
Change the ZooKeeper configuration in conf/zoo.cfg
dataDir=/opt/zookeeper/data
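After moving dataDir, the myid file has to be recreated there by hand, one per node. A minimal sketch (the ./zookeeper-data path stands in for /opt/zookeeper/data so it runs anywhere; id 1 is for the first node):

```shell
# Recreate the ZooKeeper data directory and its myid file.
# Each node in the ensemble writes its own id (1, 2, 3, ...).
DATA_DIR="./zookeeper-data"   # stand-in for /opt/zookeeper/data
mkdir -p "$DATA_DIR"
echo 1 > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"
```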

ZooKeeper works fine now. Try Kafka again.
> kafka-server-start.sh config/server.properties

For a single node, it works this time. Next, try setting up the cluster.

Do a similar installation on ubuntu-dev2 and ubuntu-dev4.
Copy the base configuration file from the master
> scp ubuntu-master:/opt/kafka/config/server1.properties ./config/

Rename it on each machine
> mv config/server1.properties config/server2.properties    (on ubuntu-dev2)

> mv config/server1.properties config/server3.properties    (on ubuntu-dev4)

Change the broker.id in each configuration file
broker.id=1
broker.id=2
broker.id=3
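Instead of editing each file by hand, the per-broker files can be derived from one base file. A sketch under this post's layout (the config/ path and zookeeper.connect value follow the steps above; the loop is illustrative):

```shell
# Generate server2/server3 configs from server1 by rewriting broker.id.
mkdir -p config
cat > config/server1.properties <<'EOF'
broker.id=1
zookeeper.connect=ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181
EOF
for id in 2 3; do
  sed "s/^broker\.id=.*/broker.id=${id}/" config/server1.properties \
    > "config/server${id}.properties"
done
grep '^broker.id=' config/server*.properties
```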

Clean the log directory and then start Kafka again.
> rm -fr /tmp/kafka-logs/*

> nohup bin/kafka-server-start.sh config/server1.properties &

> nohup bin/kafka-server-start.sh config/server2.properties &

> nohup bin/kafka-server-start.sh config/server3.properties &

Check the installation.
Create the topic
> bin/kafka-topics.sh --create --zookeeper ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181 --replication-factor 2 --partitions 2 --topic cluster1
Created topic "cluster1".

Producer
> bin/kafka-console-producer.sh --broker-list ubuntu-master:9092,ubuntu-dev2:9092,ubuntu-dev4:9092 --topic cluster1

Consumer
> bin/kafka-console-consumer.sh --zookeeper ubuntu-master:2181,ubuntu-dev2:2181,ubuntu-dev4:2181 --topic cluster1 --from-beginning

Exception:
zookeeper is not a recognized option

Solution:
The old --zookeeper option was removed from the console consumer in Kafka 2.x; use --bootstrap-server instead.
> bin/kafka-console-consumer.sh --bootstrap-server ubuntu-master:9092,ubuntu-dev2:9092,ubuntu-dev4:9092 --topic cluster1 --from-beginning


It should be working well. The next step is to set up a client driver to connect to it.



