Reference: https://issues.apache.org/jira/browse/HDFS-5338
Error:
org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode because hostname cannot be resolved (ip=172.31.34.70, hostname=172.31.34.70): DatanodeRegistration(0.0.0.0, datanodeUuid=e218bf57-41b5-46a6-8343-ced968272b9a, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-28174aeb-5c3e-4500-ba25-1d9ca001538f;nsid=122644145;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:904)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5085)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1140)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:27329)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
Analysis:
This error is thrown from org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
public void registerDatanode(DatanodeRegistration nodeReg)
    throws DisallowedDatanodeException, UnresolvedTopologyException {
  InetAddress dnAddress = Server.getRemoteIp();
  if (dnAddress != null) {
    // Mostly called inside an RPC, update ip and peer hostname
    String hostname = dnAddress.getHostName();
    String ip = dnAddress.getHostAddress();
    if (checkIpHostnameInRegistration && !isNameResolved(dnAddress)) {
      // Reject registration of unresolved datanode to prevent performance
      // impact of repetitive DNS lookups later.
      final String message = "hostname cannot be resolved (ip="
          + ip + ", hostname=" + hostname + ")";
      LOG.warn("Unresolved datanode registration: " + message);
      throw new DisallowedDatanodeException(nodeReg, message);
    }
    // ... rest of registration elided ...
  }
  // ...
}
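The hostname in the message comes from InetAddress.getHostName(), which quietly falls back to the textual IP when reverse lookup fails. A minimal sketch you can run on the namenode host to reproduce the lookup (the IP is the one from the error above; the class name is just for illustration):

import java.net.InetAddress;

public class ReverseLookupCheck {
  public static void main(String[] args) throws Exception {
    // getByName with a literal IP does no lookup; getHostName() then
    // attempts a reverse lookup and falls back to the textual IP if
    // it fails -- exactly the case the namenode rejects.
    InetAddress addr = InetAddress.getByName("172.31.34.70");
    System.out.println("ip       = " + addr.getHostAddress());
    System.out.println("hostname = " + addr.getHostName());
  }
}

If both lines print the same IP, the registration will be rejected while the check is enabled.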
checkIpHostnameInRegistration is controlled by the hdfs-site.xml property "dfs.namenode.datanode.registration.ip-hostname-check" (true by default), and isNameResolved(dnAddress) depends on reverse DNS and the /etc/hosts file.
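For reference, the check itself is small. A sketch of isNameResolved as it appears in DatanodeManager.java around this version (verify against your exact Hadoop release):

// A node counts as resolved if reverse lookup produced something other
// than the bare IP, or if the address is local to this machine.
private static boolean isNameResolved(InetAddress address) {
  String hostname = address.getHostName();
  String ip = address.getHostAddress();
  return !hostname.equals(ip) || NetUtils.isLocalAddress(address);
}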
Resolution:
Add the datanode's ip/hostname mapping to /etc/hosts on the namenode. Or enable reverse DNS (make sure the command "host <ip>" returns the datanode's hostname). Or disable the check by adding the following property to hdfs-site.xml:
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
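For the first two options, the fix lives outside Hadoop. A sketch of a hosts-file entry and a verification command (the hostname ip-172-31-34-70 is a placeholder; use the datanode's real name):

# /etc/hosts on the namenode: map the datanode's IP to its hostname
172.31.34.70    ip-172-31-34-70

# getent consults /etc/hosts as well as DNS, unlike the host command,
# which queries DNS only:
getent hosts 172.31.34.70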
http://blog.csdn.net/lulynn/article/details/46725683