Reference (official documentation):
http://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
Overview:
When expanding a Kafka cluster, two requirements must be satisfied:
- Migrate specified topics onto the nodes newly added to the cluster.
- Migrate specified partitions of a topic onto the newly added nodes.
1. Migrating a topic to newly added nodes
Suppose a Kafka cluster is running three brokers with broker.id values 101, 102, and 103. Business data then surges, so three new brokers are added with broker.id values 104, 105, and 106. The goal is to migrate push-token-topic onto the new nodes.
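Before generating a migration plan, it is worth confirming that the new brokers have registered themselves in ZooKeeper. A minimal check, assuming the default chroot and the ZooKeeper address used throughout this article:

# Lists the registered broker ids; after expansion expect [101, 102, 103, 104, 105, 106]
./bin/zookeeper-shell.sh 192.168.2.225:2183 ls /brokers/ids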
1. Create the file migration-push-token-topic.json with the following content:
{
  "topics": [
    {
      "topic": "push-token-topic"
    }
  ],
  "version": 1
}
2. Run the reassignment tool as follows:
- root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --topics-to-move-json-file migration-push-token-topic.json --broker-list "104,105,106" --generate
This prints the partition assignment JSON; keep the current assignment for backup and restore. (The sample output below happens to come from a different topic, cluster-switch-topic, but the procedure is identical.)
Current partition replica assignment
{"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[8]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":3,"replicas":[5]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":9,"replicas":[5]},{"topic":"cluster-switch-topic","partition":1,"replicas":[5]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[5]},{"topic":"cluster-switch-topic","partition":2,"replicas":[4]},{"topic":"cluster-switch-topic","partition":0,"replicas":[4]},{"topic":"cluster-switch-topic","partition":6,"replicas":[4]},{"topic":"cluster-switch-topic","partition":8,"replicas":[4]}]}
Save the proposed reassignment JSON to the following file:
migration-topic-cluster-switch-topic.json
{"version":1,"partitions":[{"topic":"cluster-switch-topic","partition":10,"replicas":[5]},{"topic":"cluster-switch-topic","partition":5,"replicas":[4]},{"topic":"cluster-switch-topic","partition":4,"replicas":[5]},{"topic":"cluster-switch-topic","partition":3,"replicas":[4]},{"topic":"cluster-switch-topic","partition":9,"replicas":[4]},{"topic":"cluster-switch-topic","partition":1,"replicas":[4]},{"topic":"cluster-switch-topic","partition":11,"replicas":[4]},{"topic":"cluster-switch-topic","partition":7,"replicas":[4]},{"topic":"cluster-switch-topic","partition":2,"replicas":[5]},{"topic":"cluster-switch-topic","partition":0,"replicas":[5]},{"topic":"cluster-switch-topic","partition":6,"replicas":[5]},{"topic":"cluster-switch-topic","partition":8,"replicas":[5]}]}
3. Execute the reassignment:
- root@localhost:$ bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file migration-topic-cluster-switch-topic.json --execute
Note that --execute does not itself generate a JSON file; the file passed to --verify in the next step must be the same reassignment JSON that was passed to --execute (the official documentation calls it expand-cluster-reassignment.json, hence the name below).
4. Check the execution status:
- bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file expand-cluster-reassignment.json --verify
On a normal run it returns the migration status of each partition involved, similar to the following:
- Reassignment of partition [push-token-topic,0] completed successfully
- Reassignment of partition [push-token-topic,1] is in progress
- Reassignment of partition [push-token-topic,2] is in progress
- Reassignment of partition [push-token-topic,1] completed successfully
- Reassignment of partition [push-token-topic,2] completed successfully
This procedure does not disrupt traffic for the existing topics on the cluster.
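To block until the whole reassignment has finished, you can poll --verify in a shell loop. A minimal sketch, reusing the ZooKeeper address and file name from the example above:

# Re-run --verify every 10 seconds while any partition is still in progress
while ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file expand-cluster-reassignment.json --verify | grep -q "in progress"; do
    sleep 10
done
echo "reassignment finished"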
2. Changing a topic's replication factor
Suppose push-token-topic initially has a single replica and, to improve availability, we want to raise the replication factor. This uses the same reassignment tool: list the full desired replica set for every partition. (The sample file below uses topic log.mobile_nginx with three replicas per partition; the mechanism is the same.)
1. Create the file replicas-update-push-token-topic.json with the following content:
{
"partitions":
[
{
"topic": "log.mobile_nginx",
"partition": 0,
"replicas": [101,102,104]
},
{
"topic": "log.mobile_nginx",
"partition": 1,
"replicas": [102,103,106]
}
],
"version":1
}
2. Execute:
- root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file replicas-update-push-token-topic.json --execute
The tool prints both the current partition assignment and the assignment after the change.
3. Verify:
- bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file replicas-update-push-token-topic.json --verify
The output looks like:
Status of partition reassignment:
Reassignment of partition [log.mobile_nginx,0] completed successfully
Reassignment of partition [log.mobile_nginx,1] completed successfully
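The first broker in each replicas list is that partition's preferred leader, so spreading the first entries across brokers keeps leadership balanced. The resulting replica sets can be confirmed with kafka-topics.sh:

./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --describe --topic log.mobile_nginx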
3. Custom partition assignment and migration
1. The first step is to hand-craft the custom reassignment plan in a JSON file:
> cat custom-reassignment.json
{"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
2. Then, use the JSON file with the --execute option to start the reassignment process:
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
Current partition replica assignment
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
{"topic":"foo2","partition":1,"replicas":[3,4]}]
}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
{"topic":"foo2","partition":1,"replicas":[2,3]}]
}
3. The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] completed successfully
Reassignment of partition [foo2,1] completed successfully
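On Kafka 0.10.1 and later, the same tool also accepts a --throttle option (bytes/sec) with --execute, which caps the replication bandwidth the move may consume; running --verify after completion lifts the throttle. For example:

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute --throttle 50000000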
4. Expanding a topic's partition count
a. First increase the number of partitions. For example, push-token-topic initially has 12 partitions and we raise it to 15:
root@localhost:$ ./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --alter --partitions 15 --topic push-token-topic
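The new partitions (12 through 14) are created immediately with automatically assigned replicas; this can be checked before reassigning them in step b:

root@localhost:$ ./bin/kafka-topics.sh --zookeeper 192.168.2.225:2183 --describe --topic push-token-topic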
b. Assign replica placement for the new partitions:
root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file partitions-extension-push-token-topic.json --execute
The file partitions-extension-push-token-topic.json contains:
{
"partitions":
[
{
"topic": "push-token-topic",
"partition": 12,
"replicas": [101,102]
},
{
"topic": "push-token-topic",
"partition": 13,
"replicas": [103,104]
},
{
"topic": "push-token-topic",
"partition": 14,
"replicas": [105,106]
}
],
"version":1
}
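As in the earlier sections, passing the same JSON file back with --verify confirms that the new partitions landed on the intended brokers:

root@localhost:$ ./bin/kafka-reassign-partitions.sh --zookeeper 192.168.2.225:2183 --reassignment-json-file partitions-extension-push-token-topic.json --verify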