Configuration Parameters: What can you just ignore?

http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/

Configuring a Hadoop cluster is something akin to voodoo. There are a large number of variables in hadoop-default.xml that you can override in hadoop-site.xml. Some specify file paths on your system, but others adjust levers and knobs deep inside Hadoop’s guts. Unfortunately, there’s little or no documentation on how to set them well. Is there a single optimal configuration? Are there some settings that can just be “set to 11”?

Nigel's guitar goes to 11, but your cluster might not. At Cloudera, we’re working hard to make Hadoop easier to use and to make configuration less painful. Our Hadoop Configuration Tool gives you a web-based guide to help set up your cluster. Once it’s running, though, you might want to look under the hood and tune things a bit.

The rest of this post discusses why it’s a bad idea to just set all the limits as high as they’ll go, and gives you some pointers to get started on finding a happy medium.
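If you haven’t touched hadoop-site.xml before, each override is a small property block. A minimal sketch, using io.file.buffer.size (one of the settings discussed below) as the example:

<property>
  <name>io.file.buffer.size</name>
  <value>65536</value>
</property>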

Why can’t you just set all the limits to 1,000,000?

Increasing most settings has a direct impact on memory consumption. Increasing DataNode and TaskTracker settings therefore has an adverse impact on the RAM available to individual MapReduce tasks. On large hardware, they can be set generously high. In general, though, unless you have several dozen or more nodes working together, dialing settings up very high wastes system resources like RAM that could be better applied to running your mapper and reducer code.

That said, here’s a list of some things that can be cranked up a fair margin higher than the defaults:

File descriptor limits

A busy Hadoop daemon might need to open a lot of files. The open fd ulimit in Linux defaults to 1024, which might be too low. You can set it to something more generous, maybe 16384. Setting it an order of magnitude higher (e.g., 128K) is probably not a good idea. No individual Hadoop daemon is supposed to need hundreds of thousands of fds; if one is consuming that many, then there’s probably an fd leak or other bug that needs fixing, and a sky-high limit would just mask the true problem until errors started showing up somewhere else.
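If you suspect a leak, counting a daemon’s open fds is a quick sanity check; a minimal sketch, where <pid> is a placeholder for the daemon’s actual process id:

# ls /proc/<pid>/fd | wc -l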

You can view your ulimits in bash by running:

$ ulimit -a

To set the fd ulimit for a process, you’ll need to be root. As root, open a shell, and run:

# ulimit -n 16384

You can then run the Hadoop daemon from that shell; the new ulimit will be inherited. For example:

# sudo -u hadoop $HADOOP_HOME/bin/hadoop-daemon.sh start namenode

You can also set the ulimit for the hadoop user in /etc/security/limits.conf; this mechanism will set the value persistently. Make sure pam_limits is enabled for whatever auth mechanism the hadoop daemon is using. The entry will look something like:

hadoop hard nofile 16384
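On kernels that expose /proc/<pid>/limits (2.6.24 and later), you can verify that a running daemon actually picked up the new limit. A sketch, assuming hadoop-daemon.sh’s default pid file location in /tmp (yours may differ):

# grep 'Max open files' /proc/$(cat /tmp/hadoop-hadoop-namenode.pid)/limits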

If you’re running our distribution, we ship a modified version of Hadoop 0.18.3 that includes HADOOP-4346, a fix for the “soft fd leak” that has affected Hadoop since 0.17, so this should be less critical for our users. Users of the official Apache Hadoop releases are affected by the fd leak in all 0.17, 0.18, and 0.19 versions. (The fix is committed for 0.20.) For the curious, we’ve published a list of all differences between our release of Hadoop and the stock 0.18.3 release.

If you’re running Linux 2.6.27, you should also set the epoll limit to something generous; maybe 4096 or 8192.

# echo 4096 > /proc/sys/fs/epoll/max_user_instances

Then, so the setting persists across reboots, put the following text in /etc/sysctl.conf:

fs.epoll.max_user_instances = 4096
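Running sysctl -p afterwards makes the kernel re-read /etc/sysctl.conf, letting you apply and sanity-check the entry without rebooting:

# sysctl -p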

See http://pero.blogs.aprilmayjune.org/2009/01/22/hadoop-and-linux-kernel-2627-epoll-limits/ for more details.

Internal settings

If there is more RAM available than is consumed by task instances, set io.sort.factor to 25 or 32 (up from 10). io.sort.mb should be 10 * io.sort.factor. Don’t forget to multiply io.sort.mb by the number of concurrent tasks when working out how much RAM you’re actually allocating here, to prevent swapping. (So 10 task instances with io.sort.mb = 320 means you’re actually allocating 3.2 GB of RAM for sorting, up from 1.0 GB.) An open ticket in the Hadoop bug tracking database suggests making the default value of io.sort.factor 100; that would likely result in a per-stream cache size lower than 10 MB.

io.file.buffer.size – this is one of the more “magic” parameters. You can set this to 65536 and leave it there. (I’ve profiled this in a bunch of scenarios; this seems to be the sweet spot.)
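In hadoop-site.xml, the sort settings above would look something like this (example values from this section, not universal recommendations):

<property>
  <name>io.sort.factor</name>
  <value>32</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>320</value>
</property>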

If the NameNode and JobTracker are on big hardware, set dfs.namenode.handler.count to 64, and likewise mapred.job.tracker.handler.count. If you’ve got more than 64 GB of RAM in the machine, you can double it again.

dfs.datanode.handler.count defaults to 3 and could be set a bit higher (maybe 8 or 10). More than this takes up memory that could be devoted to running MapReduce tasks, and I don’t know that it gives you any more performance. (An increased number of HDFS clients usually implies an increased number of DataNodes to share the load.)
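The handler-count suggestions as a hadoop-site.xml sketch (again, these are the big-hardware values from above, not defaults to copy blindly):

<property>
  <name>dfs.namenode.handler.count</name>
  <value>64</value>
</property>
<property>
  <name>mapred.job.tracker.handler.count</name>
  <value>64</value>
</property>
<property>
  <name>dfs.datanode.handler.count</name>
  <value>8</value>
</property>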

mapred.child.ulimit should be set to 2–3x the heap size specified in mapred.child.java.opts, and left there, to prevent runaway child task memory consumption.
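A worked (hypothetical) sizing, assuming mapred.child.ulimit is interpreted in kilobytes as in this generation of Hadoop: a 512 MB task heap with a 3x ulimit gives 1536 MB, i.e. 1572864 KB:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <name>mapred.child.ulimit</name>
  <!-- 3 * 512 MB = 1536 MB, expressed in KB -->
  <value>1572864</value>
</property>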

Setting tasktracker.http.threads higher than 40 will deprive individual tasks of RAM, and you won’t see a positive impact on shuffle performance until your cluster approaches 100 nodes or more.

Conclusions

Configuring Hadoop for “optimal performance” is a moving target, and depends heavily on your own applications. There are settings that need to be moved off their defaults, but finding the best value for each is difficult. Our configurator for Hadoop will do a reasonable job of getting you started.

We’d love to hear from you about your own configurations. Did you discover a combination of settings that really made your cluster sing? Please share in the comments.
