After finishing the Hadoop upgrade described in the previous post, the cluster started normally, but the NameNode web UI on port 50070 showed the warning captured in the screenshot below:
What does this warning mean? The official FAQ explains it roughly as follows: a few blocks are still recorded in the NameNode's metadata, but no longer have a single replica on any of the current DataNodes.

How do we fix this? Hadoop provides a health-check command, fsck, which verifies that the files in HDFS are intact and usable: it can detect blocks that have gone missing from the DataNodes, as well as blocks that are under- or over-replicated. The fsck usage is shown below.

Note: you must run it as the account that started HDFS, otherwise you are not authorized to see the full report.
Usage: DFSck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
        <path>                  start checking from this path
        -move                   move corrupted files to /lost+found
        -delete                 delete corrupted files
        -files                  print out files being checked
        -openforwrite           print out files opened for write
        -list-corruptfileblocks print out a list of missing blocks and the files they belong to
        -blocks                 print out a block report
        -locations              print out locations for every block
        -racks                  print out the network topology of the DataNodes

By default, fsck ignores files opened for write; use -openforwrite to report on such files as well.
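For scripted monitoring, the -list-corruptfileblocks flag gives a compact report that is easy to count. A minimal sketch, with the fsck call stubbed by sample output in an assumed "block, file" format so it runs without a cluster (the real call would be `bin/hadoop fsck / -list-corruptfileblocks`):

```shell
#!/bin/sh
# Count corrupt blocks from a -list-corruptfileblocks style report.
# hdfs_fsck_stub stands in for: bin/hadoop fsck / -list-corruptfileblocks
hdfs_fsck_stub() {
    # one "block<TAB>file" pair per line (assumed sample format)
    printf 'blk_-2026656432152987415\t/tmp/job.jar\n'
    printf 'blk_-7234718182174847600\t/tmp/jobtracker.info\n'
}
corrupt_count=$(hdfs_fsck_stub | grep -c '^blk_')
echo "corrupt blocks: $corrupt_count"
```

A count greater than zero would be the trigger to run the full report below and decide what to do with the affected files.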
Here is an example run:
[search@fse01 hadoop]$ bin/hadoop fsck / -files
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting to namenode via http://hadoop1:50070
FSCK started by search (auth:SIMPLE) from /127.0.0.1 for path / at Mon Jul 21 15:10:06 CST 2014
/ <dir>
/tmp <dir>
/tmp/hadoop-search <dir>
/tmp/hadoop-search/mapred <dir>
/tmp/hadoop-search/mapred/staging <dir>
/tmp/hadoop-search/mapred/staging/search <dir>
/tmp/hadoop-search/mapred/staging/search/.staging <dir>
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0001 <dir>
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0006 <dir>
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0006/job.jar 13393150 bytes, 1 block(s):
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0006/job.jar: CORRUPT blockpool BP-1264683061-172.16.70.21-1405911170103 block blk_-2026656432152987415
 MISSING 1 blocks of total size 13393150 B
/tmp/hadoop-search/mapred/staging/zhouming <dir>
/tmp/hadoop-search/mapred/staging/zhouming/.staging <dir>
/tmp/hadoop-search/mapred/staging/zhouming/.staging/job_201403111456_0028 <dir>
/tmp/hadoop-search/mapred/system <dir>
/tmp/hadoop-search/mapred/system/jobtracker.info 4 bytes, 1 block(s):
/tmp/hadoop-search/mapred/system/jobtracker.info: CORRUPT blockpool BP-1264683061-172.16.70.21-1405911170103 block blk_-7234718182174847600
 MISSING 1 blocks of total size 4 B
/tmp/hadoop-yarn <dir>
/tmp/hadoop-yarn/staging <dir>
/tmp/hadoop-yarn/staging/history <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate/search <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0001-1405911635075-search-random%2Dwriter-1405912101075-40-0-SUCCEEDED-default.jhist 340102 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0001.summary 342 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0001_conf.xml 79756 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0003-1405921787298-search-com.dhgate.search.fse.proinfo.dataload.NewLoadData-1405921809468-1-1-SUCCEEDED-default.jhist 32989 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0003.summary 390 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0003_conf.xml 83089 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0004-1405922486108-search-Rebuild+Index%3Afsesearch%2D%3Ecollection1%2D%3Eshard1-1405922501203-1-0-SUCCEEDED-default.jhist 20173 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0004.summary 369 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/history/done_intermediate/search/job_1405911270768_0004_conf.xml 86592 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/search <dir>
/tmp/hadoop-yarn/staging/search/.staging <dir>
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002 <dir>
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.jar 30472539 bytes, 1 block(s): Under replicated BP-1264683061-172.16.70.21-1405911170103:blk_1073742242_1099511720037. Target Replicas is 10 but found 4 replica(s).
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.split 116 bytes, 1 block(s): Under replicated BP-1264683061-172.16.70.21-1405911170103:blk_1073742243_1099511720038. Target Replicas is 10 but found 4 replica(s).
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.splitmetainfo 31 bytes, 1 block(s): OK
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.xml 72772 bytes, 1 block(s): OK
/user <dir>
/user/search <dir>
/user/search/fse <dir>
/user/search/fse/kms <dir>
/user/search/fse/kms/in <dir>
/user/search/fse/kms/in/0 3 bytes, 1 block(s): OK
/user/search/fse/kms/index <dir>
/user/search/fse/kms/index/index_shard_00_5e62b650 <dir>
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.fdt 11069 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.fdx 47 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.fnm 992 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.nvd 683 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.nvm 68 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0.si 376 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0_Lucene41_0.doc 50029 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0_Lucene41_0.pos 101757 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0_Lucene41_0.tim 62843 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/_0_Lucene41_0.tip 2319 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/segments.gen 20 bytes, 1 block(s): OK
/user/search/fse/kms/index/index_shard_00_5e62b650/segments_1 69 bytes, 1 block(s): OK
/user/search/fse/kms/out <dir>
/user/search/fse/kms/out/0 <dir>
/user/search/fse/kms/out/0/00 425334 bytes, 1 block(s): OK
/user/search/fse/proinfo <dir>
/user/search/fse/proinfo/schema.xml 32194 bytes, 1 block(s): OK
/user/search/rand <dir>
/user/search/rand/_SUCCESS 0 bytes, 0 block(s): OK
/user/search/rand/part-m-00000 1077280794 bytes, 9 block(s): OK
/user/search/rand/part-m-00001 1077294346 bytes, 9 block(s): OK
/user/search/rand/part-m-00002 1077276960 bytes, 9 block(s): OK
/user/search/rand/part-m-00003 1077280905 bytes, 9 block(s): OK
/user/search/rand/part-m-00004 1077288199 bytes, 9 block(s): OK
/user/search/rand/part-m-00005 1077290933 bytes, 9 block(s): OK
/user/search/rand/part-m-00006 1077270249 bytes, 9 block(s): OK
/user/search/rand/part-m-00007 1077290630 bytes, 9 block(s): OK
/user/search/rand/part-m-00008 1077279315 bytes, 9 block(s): OK
/user/search/rand/part-m-00009 1077294441 bytes, 9 block(s): OK
/user/search/rand/part-m-00010 1077288193 bytes, 9 block(s): OK
/user/search/rand/part-m-00011 1077299978 bytes, 9 block(s): OK
/user/search/rand/part-m-00012 1077274992 bytes, 9 block(s): OK
/user/search/rand/part-m-00013 1077295278 bytes, 9 block(s): OK
/user/search/rand/part-m-00014 1077280338 bytes, 9 block(s): OK
/user/search/rand/part-m-00015 1077285350 bytes, 9 block(s): OK
/user/search/rand/part-m-00016 1077276847 bytes, 9 block(s): OK
/user/search/rand/part-m-00017 1077285962 bytes, 9 block(s): OK
/user/search/rand/part-m-00018 1077280969 bytes, 9 block(s): OK
/user/search/rand/part-m-00019 1077285691 bytes, 9 block(s): OK
/user/search/rand/part-m-00020 1077292174 bytes, 9 block(s): OK
/user/search/rand/part-m-00021 1077292141 bytes, 9 block(s): OK
/user/search/rand/part-m-00022 1077285419 bytes, 9 block(s): OK
/user/search/rand/part-m-00023 1077285751 bytes, 9 block(s): OK
/user/search/rand/part-m-00024 1077282762 bytes, 9 block(s): OK
/user/search/rand/part-m-00025 1077282468 bytes, 9 block(s): OK
/user/search/rand/part-m-00026 1077283184 bytes, 9 block(s): OK
/user/search/rand/part-m-00027 1077281056 bytes, 9 block(s): OK
/user/search/rand/part-m-00028 1077292387 bytes, 9 block(s): OK
/user/search/rand/part-m-00029 1077283341 bytes, 9 block(s): OK
/user/search/rand/part-m-00030 1077288015 bytes, 9 block(s): OK
/user/search/rand/part-m-00031 1077276923 bytes, 9 block(s): OK
/user/search/rand/part-m-00032 1077296469 bytes, 9 block(s): OK
/user/search/rand/part-m-00033 1077288224 bytes, 9 block(s): OK
/user/search/rand/part-m-00034 1077278284 bytes, 9 block(s): OK
/user/search/rand/part-m-00035 1077297217 bytes, 9 block(s): OK
/user/search/rand/part-m-00036 1077289846 bytes, 9 block(s): OK
/user/search/rand/part-m-00037 1077284545 bytes, 9 block(s): OK
/user/search/rand/part-m-00038 1077290084 bytes, 9 block(s): OK
/user/search/rand/part-m-00039 1077282121 bytes, 9 block(s): OK
Status: CORRUPT
 Total size: 43136702998 B
 Total dirs: 32
 Total files: 71
 Total symlinks: 0
 Total blocks (validated): 390 (avg. block size 110606930 B)
  ********************************
  CORRUPT FILES: 2
  MISSING BLOCKS: 2
  MISSING SIZE: 13393154 B
  CORRUPT BLOCKS: 2
  ********************************
 Minimally replicated blocks: 388 (99.48718 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 2 (0.51282054 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 2.9897435
 Corrupt blocks: 2
 Missing replicas: 12 (1.0075567 %)
 Number of data-nodes: 4
 Number of racks: 1
FSCK ended at Mon Jul 21 15:10:06 CST 2014 in 48 milliseconds

The filesystem under path '/' is CORRUPT
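The Status line in the summary lends itself to automated checking. A minimal sketch that parses a captured report rather than calling a live cluster (in practice, the $report variable would be filled with `report=$(bin/hadoop fsck / 2>/dev/null)`):

```shell
#!/bin/sh
# Decide pass/fail from an fsck summary; $report would normally come from:
#   report=$(bin/hadoop fsck / 2>/dev/null)
report='Status: CORRUPT
 Total size: 43136702998 B
  CORRUPT FILES: 2'

# fsck prints "Status: HEALTHY" or "Status: CORRUPT" at the top of the summary
status=$(printf '%s\n' "$report" | awk '/^Status:/ {print $2}')
if [ "$status" = "HEALTHY" ]; then
    echo "HDFS is healthy"
else
    echo "HDFS needs attention: status=$status" >&2
fi
```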
With this command we can inspect the storage state of every file and spot the invalid or missing blocks. We can then re-run fsck with -delete to clear out the invalid blocks:
[search@fse01 hadoop]$ bin/hadoop fsck / -delete
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Connecting to namenode via http://hadoop1:50070
FSCK started by search (auth:SIMPLE) from /127.0.0.1 for path / at Mon Jul 21 15:11:39 CST 2014
.
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0006/job.jar: CORRUPT blockpool BP-1264683061-172.16.70.21-1405911170103 block blk_-2026656432152987415
/tmp/hadoop-search/mapred/staging/search/.staging/job_201312191115_0006/job.jar: MISSING 1 blocks of total size 13393150 B..
/tmp/hadoop-search/mapred/system/jobtracker.info: CORRUPT blockpool BP-1264683061-172.16.70.21-1405911170103 block blk_-7234718182174847600
/tmp/hadoop-search/mapred/system/jobtracker.info: MISSING 1 blocks of total size 4 B...........
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.jar: Under replicated BP-1264683061-172.16.70.21-1405911170103:blk_1073742242_1099511720037. Target Replicas is 10 but found 4 replica(s).
.
/tmp/hadoop-yarn/staging/search/.staging/job_1405911270768_0002/job.split: Under replicated BP-1264683061-172.16.70.21-1405911170103:blk_1073742243_1099511720038. Target Replicas is 10 but found 4 replica(s).
..........................................................
Status: CORRUPT
 Total size: 43136702998 B
 Total dirs: 32
 Total files: 71
 Total symlinks: 0
 Total blocks (validated): 390 (avg. block size 110606930 B)
  ********************************
  CORRUPT FILES: 2
  MISSING BLOCKS: 2
  MISSING SIZE: 13393154 B
  CORRUPT BLOCKS: 2
  ********************************
 Minimally replicated blocks: 388 (99.48718 %)
 Over-replicated blocks: 0 (0.0 %)
 Under-replicated blocks: 2 (0.51282054 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor: 3
 Average block replication: 2.9897435
 Corrupt blocks: 2
 Missing replicas: 12 (1.0075567 %)
 Number of data-nodes: 4
 Number of racks: 1
FSCK ended at Mon Jul 21 15:11:39 CST 2014 in 45 milliseconds
The filesystem under path '/' is CORRUPT
One final note: when the NameNode holds a large amount of file metadata, this command is expensive and can noticeably degrade Hadoop's performance, so use it with care. A good practice is to run it once during an idle period on the cluster to review the overall replica health of HDFS.
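Following that advice, the check can be scheduled for an idle window. A sketch of a cron-driven wrapper, where the paths, the schedule, and the log location are assumptions, and the fsck call is stubbed so the sketch runs standalone:

```shell
#!/bin/sh
# Periodic HDFS health check, meant to run from the HDFS account's crontab
# during an idle window, e.g.:  0 3 * * 0  /home/search/bin/hdfs-health-check.sh
LOG_DIR="${LOG_DIR:-/tmp/hdfs-fsck}"
mkdir -p "$LOG_DIR"
log="$LOG_DIR/fsck-$(date +%Y%m%d).log"

# run_fsck is a stand-in for the real call: bin/hadoop fsck /
run_fsck() { echo "The filesystem under path '/' is HEALTHY"; }

run_fsck > "$log" 2>&1
if grep -q "is HEALTHY" "$log"; then
    echo "fsck OK, full report in $log"
else
    echo "fsck reported problems, see $log" >&2
fi
```

Keeping one dated report per run makes it easy to compare replica health over time and to notice when under-replicated block counts start creeping up.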