The configuration file is config/server.properties.
The following are settings you will likely need to change.
| Property | Notes |
| --- | --- |
| broker.id | An integer; deriving it from the broker's IP is recommended so brokers are easy to tell apart. |
| log.dirs | Path where Kafka stores message files; default /tmp/kafka-logs. |
| port | Port on which the broker accepts producer messages. |
| zookeeper.connect | ZooKeeper connection string, in the form ip1:port,ip2:port,ip3:port. |
| message.max.bytes | Maximum size of a single message. |
| num.network.threads | Threads the broker uses to handle network requests; the built-in default is 3, but the shipped server.properties sets 2. |
| num.io.threads | I/O threads the broker uses to execute requests; the built-in default is 8, but the shipped server.properties sets 2. Can be increased as appropriate. |
| queued.max.requests | Number of requests queued waiting for the I/O threads; default 500. |
| host.name | The broker's hostname; default null. Setting it to the host's IP is recommended; otherwise consumers without matching hosts entries will run into trouble. |
| num.partitions | Default number of partitions per topic; default 1. |
| log.retention.hours | How many hours messages are kept before deletion; default 168 (one week). |
| auto.create.topics.enable | Whether clients may auto-create topics; default true, false is recommended. |
| default.replication.factor | Number of replicas per message; default 1 (no replication), changing it is recommended. |
| num.replica.fetchers | I/O threads used to replicate messages from leaders to followers; default 1. |
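Putting these recommendations together, a minimal server.properties might look like the sketch below. The broker id, IP addresses, and data path are placeholders for your own environment, not values from these notes.

```properties
# Unique non-negative integer per broker; e.g. derive it from the host IP.
broker.id=1
# Port for client connections.
port=9092
# The broker's own IP, so consumers do not need hosts entries.
host.name=10.0.0.1
# Keep data off /tmp, which may be wiped on reboot.
log.dirs=/data/kafka-logs
zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
num.partitions=3
# The default of 1 means no replication.
default.replication.factor=2
# Avoid topics being created implicitly by clients.
auto.create.topics.enable=false
# One week.
log.retention.hours=168
```

Note that in the Java properties format a `#` only starts a comment at the beginning of a line, so comments are kept on their own lines rather than trailing the values.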
The documentation below is the explanation given on the official site.
The essential configurations are the following:
broker.id
log.dirs
zookeeper.connect
| Property | Default | Description |
| --- | --- | --- |
| broker.id | | Each broker is uniquely identified by a non-negative integer id. This id serves as the broker's "name", and allows the broker to be moved to a different host/port without confusing consumers. You can choose any number you like so long as it is unique. |
| log.dirs | /tmp/kafka-logs | A comma-separated list of one or more directories in which Kafka data is stored. Each new partition that is created will be placed in the directory which currently has the fewest partitions. |
| port | 6667 | The port on which the server accepts client connections. |
| zookeeper.connect | null | Specifies the zookeeper connection string in the form hostname:port, where hostname and port are the host and port for a node in your zookeeper cluster. To allow connecting through other zookeeper nodes when that host is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. Zookeeper also allows you to add a "chroot" path which will make all kafka data for this cluster appear under a particular path. This is a way to set up multiple Kafka clusters or other applications on the same zookeeper cluster. To do this give a connection string in the form hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. |
| message.max.bytes | 1000000 | The maximum size of a message that the server can receive. It is important that this property be in sync with the maximum fetch size your consumers use, or else an unruly producer will be able to publish messages too large for consumers to consume. |
| num.network.threads | 3 | The number of network threads that the server uses for handling network requests. You probably don't need to change this. |
| num.io.threads | 8 | The number of I/O threads that the server uses for executing requests. You should have at least as many threads as you have disks. |
| queued.max.requests | 500 | The number of requests that can be queued up for processing by the I/O threads before the network threads stop reading in new requests. |
| host.name | null | Hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces, and publish one to ZK. |
| socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUFF buffer the server prefers for socket connections. |
| socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUFF buffer the server prefers for socket connections. |
| socket.request.max.bytes | 100 * 1024 * 1024 | The maximum request size the server will allow. This prevents the server from running out of memory and should be smaller than the Java heap size. |
| num.partitions | 1 | The default number of partitions per topic. |
| log.segment.bytes | 1024 * 1024 * 1024 | The log for a topic partition is stored as a directory of segment files. This setting controls the size to which a segment file will grow before a new segment is rolled over in the log. |
| log.segment.bytes.per.topic | "" | This setting allows overriding log.segment.bytes on a per-topic basis. |
| log.roll.hours | 24 * 7 | This setting will force Kafka to roll a new log segment even if the log.segment.bytes size has not been reached. |
| log.roll.hours.per.topic | "" | This setting allows overriding log.roll.hours on a per-topic basis. |
| log.retention.hours | 24 * 7 | The number of hours to keep a log segment before it is deleted, i.e. the default data retention window for all topics. Note that if both log.retention.hours and log.retention.bytes are set, we delete a segment when either limit is exceeded. |
| log.retention.hours.per.topic | "" | A per-topic override for log.retention.hours. |
| log.retention.bytes | -1 | The amount of data to retain in the log for each topic partition. Note that this is the limit per partition, so multiply by the number of partitions to get the total data retained for the topic. Also note that if both log.retention.hours and log.retention.bytes are set, we delete a segment when either limit is exceeded. |
| log.retention.bytes.per.topic | "" | A per-topic override for log.retention.bytes. |
| log.cleanup.interval.mins | 10 | The frequency in minutes that the log cleaner checks whether any log segment is eligible for deletion to meet the retention policies. |
| log.index.size.max.bytes | 10 * 1024 * 1024 | The maximum size in bytes we allow for the offset index for each log segment. Note that we will always pre-allocate a sparse file with this much space and shrink it down when the log rolls. If the index fills up we will roll a new log segment even if we haven't reached the log.segment.bytes limit. |
| log.index.interval.bytes | 4096 | The byte interval at which we add an entry to the offset index. When executing a fetch request the server must do a linear scan for up to this many bytes to find the correct position in the log to begin and end the fetch. Setting this value larger means larger index files (and a bit more memory usage) but less scanning. However, the server will never add more than one index entry per log append (even if more than log.index.interval.bytes worth of messages are appended). In general you probably don't need to change this value. |
| log.flush.interval.messages | 10000 | The number of messages written to a log partition before we force an fsync on the log. Setting this higher will improve performance a lot but will increase the window of data at risk in the event of a crash (though that is usually best addressed through replication). If both this setting and log.flush.interval.ms are used, the log will be flushed when either criterion is met. |
| log.flush.interval.ms.per.topic | "" | The per-topic override for log.flush.interval.ms, e.g., topic1:3000,topic2:6000. |
| log.flush.scheduler.interval.ms | 3000 | The frequency in ms that the log flusher checks whether any log is eligible to be flushed to disk. |
| log.flush.interval.ms | 3000 | The maximum time between fsync calls on the log. If used in conjunction with log.flush.interval.messages, the log will be flushed when either criterion is met. |
| auto.create.topics.enable | true | Enable auto creation of topics on the server. If this is set to true then attempts to produce, consume, or fetch metadata for a non-existent topic will automatically create it with the default replication factor and number of partitions. |
| controller.socket.timeout.ms | 30000 | The socket timeout for commands from the partition management controller to the replicas. |
| controller.message.queue.size | 10 | The buffer size for controller-to-broker channels. |
| default.replication.factor | 1 | The default replication factor for automatically created topics. |
| replica.lag.time.max.ms | 10000 | If a follower hasn't sent any fetch requests for this window of time, the leader will remove the follower from ISR and treat it as dead. |
| replica.lag.max.messages | 4000 | If a replica falls more than this many messages behind the leader, the leader will remove the follower from ISR and treat it as dead. |
| replica.socket.timeout.ms | 30 * 1000 | The socket timeout for network requests to the leader for replicating data. |
| replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data. |
| replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader. |
| replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader. |
| replica.fetch.min.bytes | 1 | Minimum bytes expected for each fetch response for the fetch requests from the replica to the leader. If not enough bytes are available, wait up to replica.fetch.wait.max.ms for this many bytes to arrive. |
| num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker. |
| replica.high.watermark.checkpoint.interval.ms | 5000 | The frequency with which each replica saves its high watermark to disk to handle recovery. |
| fetch.purgatory.purge.interval.requests | 10000 | The purge interval (in number of requests) of the fetch request purgatory. |
| producer.purgatory.purge.interval.requests | 10000 | The purge interval (in number of requests) of the producer request purgatory. |
| zookeeper.session.timeout.ms | 6000 | Zookeeper session timeout. If the server fails to heartbeat to zookeeper within this period of time it is considered dead. If you set this too low the server may be falsely considered dead; if you set it too high it may take too long to recognize a truly dead server. |
| zookeeper.connection.timeout.ms | 6000 | The max time that the client waits to establish a connection to zookeeper. |
| zookeeper.sync.time.ms | 2000 | How far a ZK follower can be behind a ZK leader. |
| controlled.shutdown.enable | false | Enable controlled shutdown of the broker. If enabled, the broker will move all leaders on it to some other brokers before shutting itself down. This reduces the unavailability window during shutdown. |
| controlled.shutdown.max.retries | 3 | Number of retries to complete the controlled shutdown successfully before executing an unclean shutdown. |
| controlled.shutdown.retry.backoff.ms | 5000 | Backoff time between shutdown retries. |
More details about broker configuration can be found in the Scala class kafka.server.KafkaConfig.
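As a quick sanity check against the recommendations above, a small script can parse a server.properties file and flag settings left at risky defaults. This is only a sketch: it handles plain `key=value` lines (not the full Java properties syntax with escapes or multi-line values), and the warning rules are just the recommendations from these notes, not anything enforced by Kafka itself.

```python
def parse_properties(text):
    """Parse simple key=value lines, skipping blanks and #/! comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "#!":
            continue
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props


def check_broker_config(props):
    """Return warnings based on the recommendations in these notes."""
    warnings = []
    if "broker.id" not in props:
        warnings.append("broker.id is not set; each broker needs a unique id")
    if props.get("log.dirs", "/tmp/kafka-logs").startswith("/tmp"):
        warnings.append("log.dirs points at /tmp; data may be lost on reboot")
    if props.get("auto.create.topics.enable", "true") == "true":
        warnings.append("auto.create.topics.enable is true; consider false")
    if int(props.get("default.replication.factor", "1")) < 2:
        warnings.append("default.replication.factor < 2; no redundancy")
    return warnings


if __name__ == "__main__":
    sample = """
    broker.id=0
    log.dirs=/tmp/kafka-logs
    zookeeper.connect=10.0.0.1:2181
    """
    for warning in check_broker_config(parse_properties(sample)):
        print("WARN:", warning)
```

Running it against the sample config above flags the /tmp data directory, the auto-creation default, and the replication factor of 1.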