I. Installation modes
Like Hadoop 1.x, Hadoop 2.x can be installed in several modes:
1. standalone
Everything runs on a single machine in one JVM, including MapReduce jobs.
2. pseudo-distributed
Set up with HDFS on a single node; this case has two variants:
a. HDFS only
In this case MapReduce jobs still run in local mode; you can tell because the job names look like job_localxxxxxx.
b. HDFS with YARN
This behaves the same as the distributed mode, just on one node.
3. distributed/cluster mode
Compared to item 2, this mode only adds a few more configuration properties and more than one node.
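The mode switch is ultimately driven by fs.defaultFS in core-site.xml; a minimal sketch, where host1 and port 9000 are the sample values used later in this article, not defaults:

```xml
<!-- core-site.xml: the value of fs.defaultFS selects the mode.
     file:///          (the default) -> standalone, local filesystem
     hdfs://host1:9000               -> pseudo-distributed or cluster mode -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://host1:9000</value>
  </property>
</configuration>
```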
II. Configuration for cluster mode

file | property | value | default val | summary
core-site.xml | hadoop.tmp.dir | /usr/local/hadoop/data-2.5.1/tmp | /tmp/hadoop-${user.name} | Path to a temporary directory; sub-directories such as filecache, usercache and nmPrivate will be created under it, so it should not point at /tmp in a production environment.
 | fs.defaultFS | hdfs://host1:9000 | file:/// | The name of the default file system; this value determines the installation mode (the deprecated name is fs.default.name). A URI whose scheme and authority determine the FileSystem implementation: the scheme selects the config property (fs.SCHEME.impl) naming the FileSystem implementation class, and the authority determines the host, port, etc.
hdfs-site.xml | dfs.nameservices | hadoop-cluster1 | | Comma-separated list of nameservices; here a single NameNode only, not HA.
 | dfs.namenode.secondary.http-address | host1:50090 | 0.0.0.0:50090 | The secondary namenode http server address and port.
 | dfs.namenode.name.dir | file:///usr/local/hadoop/data-2.5.1/dfs/name | file://${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
 | dfs.datanode.data.dir | file:///usr/local/hadoop/data-2.5.1/dfs/data | file://${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
 | dfs.replication | 1 | 3 | The replication factor assigned to data blocks.
 | dfs.webhdfs.enabled | true | true | Enable WebHDFS (REST API) in NameNodes and DataNodes.
yarn-site.xml | yarn.nodemanager.aux-services | mapreduce_shuffle | | The auxiliary service name; a valid service name should only contain a-zA-Z0-9_ and cannot start with a number.
 | yarn.resourcemanager.address | host1:8032 | ${yarn.resourcemanager.hostname}:8032 | The address of the applications manager interface in the RM.
 | yarn.resourcemanager.scheduler.address | host1:8030 | ${yarn.resourcemanager.hostname}:8030 | The scheduler address of the RM.
 | yarn.resourcemanager.resource-tracker.address | host1:8031 | ${yarn.resourcemanager.hostname}:8031 |
 | yarn.resourcemanager.admin.address | host1:8033 | ${yarn.resourcemanager.hostname}:8033 | The admin address of the RM.
 | yarn.resourcemanager.webapp.address | host1:50030 | ${yarn.resourcemanager.hostname}:8088 | The web UI address of the RM; here it is set to the JobTracker address used in Hadoop 1.x.
mapred-site.xml | mapreduce.framework.name | yarn | local | The runtime framework for executing MapReduce jobs; can be one of local, classic or yarn.
 | mapreduce.jobhistory.address | host1:10020 | 0.0.0.0:10020 | MapReduce JobHistory Server IPC host:port.
 | mapreduce.jobhistory.webapp.address | host1:19888 | 0.0.0.0:19888 | MapReduce JobHistory Server Web UI host:port.
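Of all these properties, the pair that actually moves MapReduce off the local runner and onto YARN is quite small; a minimal sketch using the sample values from the table (the two comments mark two separate files, not one document):

```xml
<!-- mapred-site.xml: run MR jobs through YARN instead of the local runner -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml: NodeManager aux service that serves map output to reducers -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```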
III. The results of running MR on YARN
Below are the logs from a MapReduce run in pseudo-distributed mode:
hadoop@ubuntu:/usr/local/hadoop/hadoop-2.5.1$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount wc wc-out
14/11/05 18:19:23 INFO client.RMProxy: Connecting to ResourceManager at namenode/192.168.1.25:8032
14/11/05 18:19:24 INFO input.FileInputFormat: Total input paths to process : 22
14/11/05 18:19:24 INFO mapreduce.JobSubmitter: number of splits:22
14/11/05 18:19:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1415182439385_0001
14/11/05 18:19:25 INFO impl.YarnClientImpl: Submitted application application_1415182439385_0001
14/11/05 18:19:25 INFO mapreduce.Job: The url to track the job: http://namenode:50030/proxy/application_1415182439385_0001/
14/11/05 18:19:25 INFO mapreduce.Job: Running job: job_1415182439385_0001
14/11/05 18:19:32 INFO mapreduce.Job: Job job_1415182439385_0001 running in uber mode : false
14/11/05 18:19:32 INFO mapreduce.Job: map 0% reduce 0%
14/11/05 18:19:44 INFO mapreduce.Job: map 9% reduce 0%
14/11/05 18:19:45 INFO mapreduce.Job: map 27% reduce 0%
14/11/05 18:19:54 INFO mapreduce.Job: map 32% reduce 0%
14/11/05 18:19:55 INFO mapreduce.Job: map 45% reduce 0%
14/11/05 18:19:56 INFO mapreduce.Job: map 50% reduce 0%
14/11/05 18:20:02 INFO mapreduce.Job: map 55% reduce 17%
14/11/05 18:20:03 INFO mapreduce.Job: map 59% reduce 17%
14/11/05 18:20:05 INFO mapreduce.Job: map 68% reduce 20%
14/11/05 18:20:06 INFO mapreduce.Job: map 73% reduce 20%
14/11/05 18:20:08 INFO mapreduce.Job: map 73% reduce 24%
14/11/05 18:20:11 INFO mapreduce.Job: map 77% reduce 24%
14/11/05 18:20:12 INFO mapreduce.Job: map 82% reduce 24%
14/11/05 18:20:13 INFO mapreduce.Job: map 91% reduce 24%
14/11/05 18:20:14 INFO mapreduce.Job: map 95% reduce 30%
14/11/05 18:20:16 INFO mapreduce.Job: map 100% reduce 30%
14/11/05 18:20:17 INFO mapreduce.Job: map 100% reduce 100%
14/11/05 18:20:18 INFO mapreduce.Job: Job job_1415182439385_0001 completed successfully
14/11/05 18:20:18 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=54637
FILE: Number of bytes written=2338563
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=59677
HDFS: Number of bytes written=28233
HDFS: Number of read operations=69
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=22
Launched reduce tasks=1
Data-local map tasks=22
Total time spent by all maps in occupied slots (ms)=185554
Total time spent by all reduces in occupied slots (ms)=30206
Total time spent by all map tasks (ms)=185554
Total time spent by all reduce tasks (ms)=30206
Total vcore-seconds taken by all map tasks=185554
Total vcore-seconds taken by all reduce tasks=30206
Total megabyte-seconds taken by all map tasks=190007296
Total megabyte-seconds taken by all reduce tasks=30930944
Map-Reduce Framework
Map input records=1504
Map output records=5727
Map output bytes=77326
Map output materialized bytes=54763
Input split bytes=2498
Combine input records=5727
Combine output records=2838
Reduce input groups=1224
Reduce shuffle bytes=54763
Reduce input records=2838
Reduce output records=1224
Spilled Records=5676
Shuffled Maps =22
Failed Shuffles=0
Merged Map outputs=22
GC time elapsed (ms)=1707
CPU time spent (ms)=14500
Physical memory (bytes) snapshot=5178937344
Virtual memory (bytes) snapshot=22517506048
Total committed heap usage (bytes)=3882549248
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=57179
File Output Format Counters
Bytes Written=28233
FAQs
1.2014-01-22 09:38:20,733 INFO [AsyncDispatcher event handler] rmapp.RMAppImpl (RMAppImpl.java:transition(788)) - Application application_1390354688375_0001 failed 2 times due to AM Container for appattempt_1390354688375_0001_000002 exited with exitCode: 127 due to: Exception from container-launch:
This usually occurs when JAVA_HOME is not set in yarn-env.sh and hadoop-env.sh; set it in both files, and remember to restart YARN :)
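A minimal sketch of the fix, assuming an OpenJDK install path (the JDK path below is an example; point it at your own installation):

```
# add to both etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# then restart YARN so the daemons pick up the new environment
sbin/stop-yarn.sh
sbin/start-yarn.sh
```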
2. Two jobs appear when running the 'grep' example
This is normal! At first I thought something was wrong, but when I ran wordcount again only one job appeared, so it is simply the nature of this example: grep chains two MapReduce jobs, one to search for matches and a second to sort the results.