My Hadoop version is CDH 5.3.3, installed in pseudo-distributed mode. Programs had always run fine, but suddenly a MapReduce program got stuck at "running job": the job had been submitted to YARN, yet it just sat there without progressing. After agonizing over this for days, I finally got it sorted out.
At first I thought I was short on memory. I had already set the memory to 6 GB and was running no other applications. Check the memory usage:
[ehp@hadoop-ehp hadoop-2.5.0-cdh5.3.3]$ free -m
             total       used       free     shared    buffers     cached
Mem:          5852       3585       2267          0        143       2567
-/+ buffers/cache:        873       4978
Swap:         4095          0       4095
Clearly memory was not the problem.
Use hadoop dfsadmin -report to check the state of HDFS:
[ehp@hadoop-ehp hadoop-2.5.0-cdh5.3.3]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/08/19 10:09:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 16702676992 (15.56 GB)
Present Capacity: 574803968 (548.18 MB)
DFS Remaining: 574779392 (548.15 MB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.1.120:50010 (hadoop-ehp.hyman.com)
Hostname: hadoop-ehp.hyman.com
Decommission Status : Normal
Configured Capacity: 16702676992 (15.56 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 16127873024 (15.02 GB)
DFS Remaining: 574779392 (548.15 MB)
DFS Used%: 0.00%
DFS Remaining%: 3.44%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Aug 19 10:09:28 EDT 2015
[ehp@hadoop-ehp hadoop-2.5.0-cdh5.3.3]$ df -hl
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        16G   15G  549M  97% /
tmpfs           2.9G   72K  2.9G   1% /dev/shm
/dev/sda1       194M   29M  156M  16% /boot
Clearly the cause was insufficient disk space: the root partition is 97% full, leaving HDFS only about 548 MB. On Hadoop 2.x, once a disk goes over the NodeManager's disk-health-checker utilization threshold (90% by default), its local dirs are marked bad and no containers can be launched, not even the ApplicationMaster, which is exactly why the job hangs at "running job" forever.
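If you want to confirm that this is what is happening before touching the disk, a quick check along these lines should show the node being reported unhealthy (the log path below is just an example for a tarball install under $HADOOP_HOME; adjust it to wherever your NodeManager logs actually live):

# List all nodes, including unhealthy ones; an UNHEALTHY node whose health
# report complains about local-dirs points at the disk usage check
yarn node -list -all

# Look for disk-health-checker complaints in the NodeManager log
# (example log location, adjust to your environment)
grep -i "local-dirs" $HADOOP_HOME/logs/yarn-*-nodemanager-*.log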
Solution:
1. Increase the disk capacity.
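Whichever way the space comes back (bigger disk or cleanup), it helps to first see what is actually eating the partition and then confirm usage is back under the NodeManager's 90% threshold; raising yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage in yarn-site.xml can also serve as a temporary stopgap. A rough sketch, with example paths rather than the actual ones on this machine:

# Find which top-level directories are filling the root partition
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head

# Empty the HDFS trash (only helps if files were deleted into .Trash)
hadoop fs -expunge

# After freeing space or adding disk, confirm usage is back below 90%
df -hl /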