Kafka 2.0.0 includes a number of significant new features. Here is a summary of some notable changes:
- KIP-290 adds support for prefixed ACLs, simplifying access control management in large secure deployments. Bulk access to topics, consumer groups or transactional ids with a prefix can now be granted using a single rule. Access control for topic creation has also been improved to enable access to be granted to create specific topics or topics with a prefix.
- KIP-255 adds a framework for authenticating to Kafka brokers using OAuth2 bearer tokens. The SASL/OAUTHBEARER implementation is customizable using callbacks for token retrieval and validation.
- Host name verification is now enabled by default for SSL connections to ensure that the default SSL configuration is not susceptible to man-in-the-middle attacks. You can disable this verification if required.
- You can now dynamically update SSL truststores without broker restart. You can also configure security for broker listeners in ZooKeeper before starting brokers, including SSL keystore and truststore passwords and JAAS configuration for SASL. With this new feature, you can store sensitive password configs in encrypted form in ZooKeeper rather than in cleartext in the broker properties file.
- The replication protocol has been improved to avoid log divergence between leader and follower during fast leader failover. We have also improved resilience of brokers by reducing the memory footprint of message down-conversions. By using message chunking, both memory usage and memory reference time have been reduced to avoid OutOfMemory errors in brokers.
- Kafka clients are now notified of throttling before any throttling is applied when quotas are enabled. This enables clients to distinguish between network errors and large throttle times when quotas are exceeded.
- We have added a configuration option for Kafka consumer to avoid indefinite blocking in the consumer.
- We have dropped support for Java 7 and removed the previously deprecated Scala producer and consumer.
- Kafka Connect includes a number of improvements and features. KIP-298 enables you to control how errors in connectors, transformations and converters are handled by enabling automatic retries and controlling the number of errors that are tolerated before the connector is stopped. More contextual information can be included in the logs to help diagnose problems, and problematic messages consumed by sink connectors can be sent to a dead letter queue rather than forcing the connector to stop.
- KIP-297 adds a new extension point to move secrets out of connector configurations and integrate with any external key management system. The placeholders in connector configurations are only resolved before sending the configuration to the connector, ensuring that secrets are stored and managed securely in your preferred key management system and not exposed over the REST APIs or in log files.
- We have added a thin Scala wrapper API for our Kafka Streams DSL, which provides better type inference and better type safety during compile time. Scala users can have less boilerplate in their code, notably regarding Serdes with new implicit Serdes.
- Message headers are now supported in the Kafka Streams Processor API, allowing users to add and manipulate headers read from the source topics and propagate them to the sink topics.
- Windowed aggregation performance in Kafka Streams has been greatly improved (sometimes by an order of magnitude) thanks to the new single-key-fetch API.
- We have further improved the unit testability of Kafka Streams with the kafka-streams-test-utils artifact.
- Online upgrades are more convenient: this 2.0.0 release adds more properties to support them; see KIP-268, KIP-279 and KIP-283, among others.
- The Kafka Streams upgrade process has been simplified; see KIP-268.
- Monitorability has been further strengthened, with many new static attributes and dynamic health metrics added; see KIP-223, KIP-237 and KIP-272, among others.
- The 2.0 release also adds a "lead" metric, defined as the distance between a consumer's position on a partition and the partition's log-start-offset; when this metric approaches zero, the consumer risks falling out of the consumable range and losing data.
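As a sketch of the SSL hostname-verification change above: since 2.0.0 the client property `ssl.endpoint.identification.algorithm` defaults to `https`, and setting it to an empty string restores the pre-2.0 behaviour. The broker address and truststore path below are placeholders, not values from this release.

```java
import java.util.Properties;

public class SslHostnameVerification {
    // Builds a client SSL config; hostname verification is the 2.0.0 default.
    public static Properties clientConfig(boolean verifyHostname) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");                    // placeholder address
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks"); // placeholder path
        // Since Kafka 2.0.0 the default is "https", which enables hostname
        // verification; an empty string disables it (the pre-2.0 behaviour).
        props.put("ssl.endpoint.identification.algorithm", verifyHostname ? "https" : "");
        return props;
    }
}
```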
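The consumer option mentioned above for avoiding indefinite blocking is `default.api.timeout.ms` (KIP-266). A minimal config sketch, with placeholder bootstrap address and group id:

```java
import java.util.Properties;

public class ConsumerTimeoutConfig {
    // Consumer config with a bounded API timeout (KIP-266, new in 2.0.0).
    public static Properties boundedConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "demo-group");              // placeholder
        // Caps how long blocking consumer calls (e.g. commitSync, position)
        // may wait, instead of blocking indefinitely as in earlier releases.
        props.put("default.api.timeout.ms", "60000");
        return props;
    }
}
```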
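The KIP-298 error-handling options for Kafka Connect described above map onto a set of `errors.*` connector properties. A sketch for a hypothetical sink connector (the connector name, class and DLQ topic are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectErrorHandling {
    // KIP-298 error-handling settings for a hypothetical sink connector.
    public static Map<String, String> sinkConfig() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("name", "example-sink");                                // hypothetical name
        cfg.put("connector.class", "com.example.ExampleSinkConnector"); // hypothetical class
        cfg.put("errors.retry.timeout", "300000");      // retry failed operations for up to 5 minutes
        cfg.put("errors.retry.delay.max.ms", "60000");  // back off at most 1 minute between retries
        cfg.put("errors.tolerance", "all");             // tolerate errors instead of stopping the task
        cfg.put("errors.log.enable", "true");           // log errors with contextual information
        cfg.put("errors.log.include.messages", "true"); // include the problematic record in the log
        cfg.put("errors.deadletterqueue.topic.name", "example-sink-dlq"); // route bad records to a DLQ
        return cfg;
    }
}
```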
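The KIP-297 extension point above is wired up through `config.providers` on the worker plus `${provider:path:key}` placeholders in connector configs; Kafka ships a built-in `FileConfigProvider`. A sketch, assuming a hypothetical secrets file at `/etc/kafka/secrets.properties`:

```java
import java.util.HashMap;
import java.util.Map;

public class SecretPlaceholders {
    // Worker-level settings registering the built-in FileConfigProvider (KIP-297).
    public static Map<String, String> workerConfig() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("config.providers", "file");
        cfg.put("config.providers.file.class",
                "org.apache.kafka.common.config.provider.FileConfigProvider");
        return cfg;
    }

    // Connector config referencing a secret by placeholder: the value is only
    // resolved when the config is handed to the connector, so the REST API and
    // log files never see the plain-text password.
    public static Map<String, String> connectorConfig() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("connection.password",
                "${file:/etc/kafka/secrets.properties:db.password}"); // hypothetical file and key
        return cfg;
    }
}
```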
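The arithmetic behind the lead metric above (and its complement, the familiar lag metric) can be sketched as follows; the offsets used are illustrative numbers, not values from this release:

```java
public class ConsumerLead {
    // lead = consumer position - log-start-offset:
    // how many retained records still sit behind the consumer.
    public static long lead(long position, long logStartOffset) {
        return position - logStartOffset;
    }

    // lag = log-end-offset - consumer position:
    // how many records the consumer has not yet read.
    public static long lag(long logEndOffset, long position) {
        return logEndOffset - position;
    }

    public static void main(String[] args) {
        // A consumer at offset 120 on a partition whose log starts at 100 and ends at 150:
        System.out.println(lead(120, 100)); // a lead near zero signals imminent data loss
        System.out.println(lag(150, 120));
    }
}
```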