sunwinner

YARN and MapReduce2 Memory


First, let me explain the difference between YARN and MapReduce2. YARN is a generic platform on which any form of distributed application can run, while MapReduce2 is one such distributed application: it runs the MapReduce framework on top of YARN.


YARN treats memory in a more fine-grained manner than the slot-based model used in the classic implementation of MapReduce. It allows an application to request an arbitrary amount of memory (within limits) for a task. In the YARN model, node managers allocate memory from a pool, so the number of tasks running on a particular node depends on the sum of their memory requirements, not simply on a fixed number of slots. Also, YARN does not distinguish between map slots and reduce slots, which avoids the memory wastage of one kind of slot sitting idle while the other is in demand.
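The pool-based model can be sketched in a few lines of Python. This is purely illustrative (it is not YARN's actual scheduling logic): the point is that the number of admitted tasks falls out of their memory requests, not a fixed slot count.

```python
def tasks_that_fit(pool_mb, requests_mb):
    """Greedily admit task memory requests until the node's pool is exhausted.

    An illustrative sketch of pool-based allocation, not real YARN code.
    """
    admitted = 0
    free = pool_mb
    for req in requests_mb:
        if req <= free:
            free -= req
            admitted += 1
    return admitted

# An 8192MB pool (the yarn.nodemanager.resource.memory-mb default) holds
# eight 1024MB tasks, or a different number for a mix of request sizes:
print(tasks_that_fit(8192, [1024] * 10))                      # 8
print(tasks_that_fit(8192, [4096, 2048, 1024, 1024, 1024]))   # 4
```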


In YARN, each Hadoop daemon uses 1000MB, so for a DataNode and a NodeManager the total is 2000MB. After setting aside enough for other processes running on the machine, the remainder can be dedicated to the NodeManager's containers by setting yarn.nodemanager.resource.memory-mb to the total allocation in MB (the default is 8192MB).
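As a sketch, the container allocation is set in yarn-site.xml. The 24576MB figure below is a hypothetical value for a node with plenty of RAM left over after the daemons and OS, not a recommendation:

```xml
<!-- yarn-site.xml: total memory the NodeManager may hand out to containers.
     24576MB is an illustrative figure for this node, not a recommendation. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
```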


How do you set memory options for individual jobs? There are two controls: mapred.child.java.opts, which allows you to set the JVM heap size of the map or reduce task; and mapreduce.map.memory.mb, which specifies how much memory you need for map task containers (mapreduce.reduce.memory.mb for reduce task containers). The latter setting is used by the ApplicationMaster when negotiating for resources in the cluster, and also by the NodeManager, which runs and monitors the task containers.
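For example, a job might pair the two controls like this (the 2048MB/-Xmx1536m values are illustrative, chosen only so the heap fits comfortably inside the container):

```xml
<!-- Per-job settings: a 2048MB map container with a 1536MB heap inside it.
     The values are illustrative, not tuned recommendations. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1536m</value>
</property>
```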


Let's make this more specific with an example. Suppose mapred.child.java.opts is set to -Xmx800m and mapreduce.map.memory.mb is left at its default value of 1024MB. When a map task runs, the NodeManager will allocate a 1024MB container (decreasing the size of its pool by that amount for the duration of the task) and will launch the task JVM configured with an 800MB maximum heap size. Note that the JVM process will have a larger memory footprint than the heap size; the overhead depends on such things as the native libraries in use, the size of the PermGen space, and so on. The important thing is that the physical memory used by the JVM process, including any processes that it spawns, such as Streaming or Pipes processes, does not exceed its allocation (1024MB). If a container uses more memory than it has been allocated, it may be terminated by the NodeManager and marked as failed.
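A quick sanity check for this pairing can be expressed in Python. This is a rough rule-of-thumb sketch, not a YARN API; the 200MB minimum headroom is an assumed figure to account for non-heap JVM overhead, not an official number:

```python
def heap_fits_container(heap_mb, container_mb, min_headroom_mb=200):
    """Rough sanity check: the -Xmx heap must leave headroom in the container
    for non-heap JVM memory (native libraries, PermGen, thread stacks, ...).

    min_headroom_mb=200 is an assumed rule of thumb, not an official figure.
    """
    return container_mb - heap_mb >= min_headroom_mb

# The example from the text: -Xmx800m inside a 1024MB container leaves
# 224MB of headroom, so it passes this (illustrative) check.
print(heap_fits_container(800, 1024))   # True
print(heap_fits_container(1024, 1024))  # False: no room for JVM overhead
```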


Schedulers may impose a minimum or maximum on memory allocations. For example, with the Capacity Scheduler the default minimum is 1024MB, set by yarn.scheduler.capacity.minimum-allocation-mb, and the default maximum is 10240MB, set by yarn.scheduler.capacity.maximum-allocation-mb.
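Spelled out as a config fragment with the defaults from the text (note that the property names here follow the Hadoop version this article describes; later releases move these bounds to yarn.scheduler.minimum-allocation-mb and yarn.scheduler.maximum-allocation-mb in yarn-site.xml):

```xml
<!-- Capacity Scheduler allocation bounds (the defaults quoted above). -->
<property>
  <name>yarn.scheduler.capacity.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-allocation-mb</name>
  <value>10240</value>
</property>
```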


There are also virtual memory constraints that a container must meet. If a container's virtual memory usage exceeds a given multiple of its allocated physical memory, the NodeManager may terminate the process. The multiple is expressed by the yarn.nodemanager.vmem-pmem-ratio property, which defaults to 2.1. In the example above, the virtual memory threshold above which the task may be terminated is roughly 2150MB, which is 2.1 x 1024MB.
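The threshold arithmetic is simple enough to check directly:

```python
def vmem_limit_mb(pmem_mb, vmem_pmem_ratio=2.1):
    """Virtual memory (MB) a container may use before the NodeManager
    may terminate it, given its physical allocation and the
    yarn.nodemanager.vmem-pmem-ratio setting (default 2.1)."""
    return vmem_pmem_ratio * pmem_mb

# A 1024MB container with the default 2.1 ratio -> 2150.4MB (~2150MB)
print(vmem_limit_mb(1024))
```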


When configuring memory parameters, it's very useful to monitor a task's actual memory usage during a job run, and this is possible via MapReduce task counters. The counters PHYSICAL_MEMORY_BYTES, VIRTUAL_MEMORY_BYTES, and COMMITTED_HEAP_BYTES provide snapshot values of memory usage and are therefore suitable for observation during the course of a task attempt.
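As an illustrative sketch, comparing a task's PHYSICAL_MEMORY_BYTES counter against its container allocation shows how close it ran to the limit. The counter values below are made up for the example, not taken from a real job:

```python
def container_utilization(physical_memory_bytes, container_mb):
    """Fraction of the container's physical memory allocation a task used."""
    return physical_memory_bytes / (container_mb * 1024 * 1024)

# Hypothetical snapshot of a task attempt's memory counters (made-up values):
counters = {
    "PHYSICAL_MEMORY_BYTES": 900 * 1024 * 1024,
    "VIRTUAL_MEMORY_BYTES": 1800 * 1024 * 1024,
}

# How much of a 1024MB container did this (hypothetical) task actually use?
usage = container_utilization(counters["PHYSICAL_MEMORY_BYTES"], 1024)
print(f"{usage:.0%}")  # 88%
```

A task consistently running near 100% is a candidate for a larger container; one far below suggests the allocation can be trimmed to pack more tasks per node.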
