2012-12-17 10:58:59,925 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure config value: 3
at org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:1025)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:305)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1606)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1546)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1564)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1690)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1707)
We set up a new cluster of several machines, and this error came up when starting the datanodes.
The root cause is that dfs.datanode.failed.volumes.tolerated was set to 3.
The meaning of this parameter: "The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown."
In other words, it is the number of failed disks a datanode will tolerate.
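For context, the offending settings looked roughly like this in hdfs-site.xml (a sketch; the directory path is hypothetical):

<property>
  <name>dfs.data.dir</name>
  <!-- only ONE storage directory configured (path hypothetical) -->
  <value>/data/hdfs/data</value>
</property>
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- 3 tolerated volume failures, but only 1 volume configured -->
  <value>3</value>
</property>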
On startup, the datanode uses the directories configured under dfs.data.dir (where blocks are stored); if some of them are unusable and the number of failed directories is greater than the value configured above, startup fails. See the code in org.apache.hadoop.hdfs.server.datanode.FSDataset:
public FSDataset(DataStorage storage, Configuration conf) throws IOException {
  this.maxBlocksPerDir = conf.getInt("dfs.datanode.numblocks", 64);
  // The number of volumes required for operation is the total number
  // of volumes minus the number of failed volumes we can tolerate.
  final int volFailuresTolerated =
    conf.getInt("dfs.datanode.failed.volumes.tolerated", 0);
  String[] dataDirs = conf.getTrimmedStrings(DataNode.DATA_DIR_KEY);
  int volsConfigured = (dataDirs == null) ? 0 : dataDirs.length;
  int volsFailed = volsConfigured - storage.getNumStorageDirs();
  validVolsRequired = volsConfigured - volFailuresTolerated;

  if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
    throw new DiskErrorException("Invalid volume failure "
        + " config value: " + volFailuresTolerated);
  }
  // ... (rest of constructor omitted)
Since dfs.data.dir had only one directory configured, volsConfigured was 1, and the guard volFailuresTolerated >= volsConfigured (3 >= 1) threw the exception. Setting dfs.datanode.failed.volumes.tolerated to 0 solved the problem.
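The corresponding hdfs-site.xml change is just this (a sketch):

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- 0 = any volume failure shuts the datanode down (the default) -->
  <value>0</value>
</property>

Alternatively, configure more directories in dfs.data.dir than the tolerated failure count, since the check requires volFailuresTolerated < volsConfigured.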