cloudeagle (Hefei)

Using the YARN HistoryServer

 

Start the service:

sbin/mr-jobhistory-daemon.sh start historyserver

View the job history web UI:

http://202.117.10.25:19888/jobhistory
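The addresses and HDFS directories the HistoryServer uses come from mapred-site.xml. A minimal sketch of the relevant properties; the property names are standard Hadoop 2.x, but the values below are illustrative for this cluster, not copied from its actual config:

```xml
<!-- mapred-site.xml: HistoryServer-related properties (illustrative values) -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>202.117.10.25:10020</value>   <!-- RPC address clients use to fetch history -->
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>202.117.10.25:19888</value>   <!-- the web UI queried above -->
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>      <!-- where finished job history files land -->
</property>
```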


1. After a job completes, its history files are stored under mr-history/done/ on HDFS.

To analyze a job's history, run:

hadoop job -history all hdfs://202.117.10.26:9000/mr-history/done/2013/01/31/000000/job_1359637210567_0001-1359637233266-liuqiang-word+count-1359637716126-16-1-SUCCEEDED-default.jhist > result.txt

The output looks like the following:


Hadoop job: job_1361498589930_0003
=====================================
User: liuqiang
JobName: word count
JobConf: hdfs://202.117.10.26:9000/tmp/hadoop-yarn/staging/liuqiang/.staging/job_1361498589930_0003/job.xml
Submitted At: 22-Feb-2013 10:23:51
Launched At: 22-Feb-2013 10:27:42 (3mins, 51sec)
Finished At: 22-Feb-2013 10:32:00 (4mins, 17sec)
Status: SUCCEEDED
Counters:

|Group Name |Counter name |Map Value |Reduce Value|Total Value|
---------------------------------------------------------------------------------------
|File System Counters |FILE: Number of bytes read |66,490,968|18,230,892|84,721,860
|File System Counters |FILE: Number of bytes written |85,744,950|18,294,425|104,039,375
|File System Counters |FILE: Number of read operations|0 |0 |0
|File System Counters |FILE: Number of large read operations|0 |0 |0
|File System Counters |FILE: Number of write operations|0 |0 |0
|File System Counters |HDFS: Number of bytes read |1,035,668,512|0 |1,035,668,512
|File System Counters |HDFS: Number of bytes written |0 |947,869 |947,869
|File System Counters |HDFS: Number of read operations|48 |3 |51
|File System Counters |HDFS: Number of large read operations|0 |0 |0
|File System Counters |HDFS: Number of write operations|0 |2 |2
|Job Counters |Killed map tasks |0 |0 |10
|Job Counters |Launched map tasks |0 |0 |26
|Job Counters |Launched reduce tasks |0 |0 |1
|Job Counters |Data-local map tasks |0 |0 |20
|Job Counters |Rack-local map tasks |0 |0 |6
|Job Counters |Total time spent by all maps in occupied slots (ms)|0 |0 |4,317,327
|Job Counters |Total time spent by all reduces in occupied slots (ms)|0 |0 |196,704
|Map-Reduce Framework |Map input records |9,383,921 |0 |9,383,921
|Map-Reduce Framework |Map output records |177,316,480|0 |177,316,480
|Map-Reduce Framework |Map output bytes |1,753,046,400|0 |1,753,046,400
|Map-Reduce Framework |Map output materialized bytes |18,230,862|0 |18,230,862
|Map-Reduce Framework |Input split bytes |1,952 |0 |1,952
|Map-Reduce Framework |Combine input records |181,455,220|0 |181,455,220
|Map-Reduce Framework |Combine output records |5,311,383 |0 |5,311,383
|Map-Reduce Framework |Reduce input groups |0 |68,979 |68,979
|Map-Reduce Framework |Reduce shuffle bytes |0 |18,230,862|18,230,862
|Map-Reduce Framework |Reduce input records |0 |1,172,643 |1,172,643
|Map-Reduce Framework |Reduce output records |0 |68,979 |68,979
|Map-Reduce Framework |Spilled Records |5,449,341 |1,172,643 |6,621,984
|Map-Reduce Framework |Shuffled Maps |0 |16 |16
|Map-Reduce Framework |Failed Shuffles |0 |0 |0
|Map-Reduce Framework |Merged Map outputs |0 |16 |16
|Map-Reduce Framework |GC time elapsed (ms) |260,716 |364 |261,080
|Map-Reduce Framework |CPU time spent (ms) |1,808,840 |7,850 |1,816,690
|Map-Reduce Framework |Physical memory (bytes) snapshot|3,380,711,424|76,144,640|3,456,856,064
|Map-Reduce Framework |Virtual memory (bytes) snapshot|6,272,446,464|352,477,184|6,624,923,648
|Map-Reduce Framework |Total committed heap usage (bytes)|2,952,212,480|40,366,080|2,992,578,560
|Shuffle Errors |BAD_ID |0 |0 |0
|Shuffle Errors |CONNECTION |0 |0 |0
|Shuffle Errors |IO_ERROR |0 |0 |0
|Shuffle Errors |WRONG_LENGTH |0 |0 |0
|Shuffle Errors |WRONG_MAP |0 |0 |0
|Shuffle Errors |WRONG_REDUCE |0 |0 |0
|File Input Format Counters |Bytes Read |1,035,666,560|0 |1,035,666,560
|File Output Format Counters |Bytes Written |0 |947,869 |947,869

=====================================
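Because result.txt is plain text with '|'-separated counter rows, individual counters can be pulled out with awk. A small sketch over one row pasted from the table above (field 4 is the map-side value):

```shell
# Extract the map-side value (4th '|'-field) of one counter from a
# result.txt-style row; commas and spaces are stripped before printing.
awk -F'|' '/HDFS: Number of bytes read/ {
    gsub(/[, ]/, "", $4)   # "1,035,668,512" -> "1035668512"
    print $4
}' <<'EOF'
|File System Counters |HDFS: Number of bytes read |1,035,668,512|0 |1,035,668,512
EOF
```

In practice you would replace the here-document with `result.txt` as the input file.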

Task Summary
============================
Kind     Total  Successful  Failed  Killed  StartTime             FinishTime

Setup    0      0           0       0
Map      26     16          0       10      22-Feb-2013 10:27:45  22-Feb-2013 10:31:53 (4mins, 8sec)
Reduce   1      1           0       0       22-Feb-2013 10:28:43  22-Feb-2013 10:32:00 (3mins, 16sec)
Cleanup  0      0           0       0
============================
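The Job Counters above report 20 data-local and 6 rack-local tasks out of 26 launched maps. A quick shell calculation of the data-locality percentage, with the numbers copied from the table:

```shell
launched=26
data_local=20
# Integer percentage of map tasks that ran on a node holding their input data
echo $(( data_local * 100 / launched ))    # prints 76
```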


Analysis
=========

Time taken by best performing map task task_1361498589930_0003_m_000015: 55sec
Average time taken by map tasks: 2mins, 58sec
Worse performing map tasks:
TaskId                              Timetaken
task_1361498589930_0003_m_000004 4mins, 8sec
task_1361498589930_0003_m_000007 4mins, 7sec
task_1361498589930_0003_m_000013 4mins, 4sec
task_1361498589930_0003_m_000003 4mins, 3sec
task_1361498589930_0003_m_000006 3mins, 24sec
task_1361498589930_0003_m_000012 3mins, 23sec
task_1361498589930_0003_m_000009 3mins, 23sec
task_1361498589930_0003_m_000008 3mins, 21sec
task_1361498589930_0003_m_000005 3mins, 20sec
task_1361498589930_0003_m_000001 3mins, 20sec
The last map task task_1361498589930_0003_m_000004 finished at (relative to the Job launch time): 22-Feb-2013 10:31:53 (4mins, 11sec)

Time taken by best performing shuffle task task_1361498589930_0003_r_000000: 3mins, 10sec
Average time taken by shuffle tasks: 3mins, 10sec
Worse performing shuffle tasks:
TaskId                              Timetaken
task_1361498589930_0003_r_000000 3mins, 10sec
The last shuffle task task_1361498589930_0003_r_000000 finished at (relative to the Job launch time): 22-Feb-2013 10:31:54 (4mins, 12sec)

Time taken by best performing reduce task task_1361498589930_0003_r_000000: 5sec
Average time taken by reduce tasks: 5sec
Worse performing reduce tasks:
TaskId                              Timetaken
task_1361498589930_0003_r_000000 5sec
The last reduce task task_1361498589930_0003_r_000000 finished at (relative to the Job launch time): 22-Feb-2013 10:32:00 (4mins, 17sec)
=========
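The per-task times in the analysis are printed as "Xmins, Ysec", which is awkward to sort or average. A minimal awk sketch converting one TaskId/time line from the list above to seconds; it assumes the "Xmins, Ysec" form shown (a bare "55sec" entry would need an extra case):

```shell
# Convert "task_...  4mins, 8sec" to "task_... 248" (total seconds).
echo 'task_1361498589930_0003_m_000004 4mins, 8sec' |
awk '{ gsub(/[^0-9]/, "", $2)      # "4mins," -> "4"
       gsub(/[^0-9]/, "", $3)      # "8sec"   -> "8"
       print $1, $2 * 60 + $3 }'   # prints task_1361498589930_0003_m_000004 248
```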

KILLED task attempts by nodes
Hostname  FailedTasks
===============================
node13    task_1361498589930_0003_m_000007, task_1361498589930_0003_m_000012, task_1361498589930_0003_m_000013,
node14    task_1361498589930_0003_m_000003, task_1361498589930_0003_m_000004,
node21    task_1361498589930_0003_m_000000, task_1361498589930_0003_m_000002, task_1361498589930_0003_m_000010, task_1361498589930_0003_m_000011,
node12    task_1361498589930_0003_m_000006,
