http://www.cloudera.com/blog/2009/12/7-tips-for-improving-mapreduce-performance/
7 Tips for Improving MapReduce Performance
by Todd Lipcon, December 17, 2009
One service that Cloudera provides for our customers is help with tuning and optimizing MapReduce jobs. Since MapReduce and HDFS are complex distributed systems that run arbitrary user code, there’s no hard and fast set of rules to achieve optimal performance; instead, I tend to think of tuning a cluster or job much like a doctor would treat a sick human being. There are a number of key symptoms to look for, and each set of symptoms leads to a different diagnosis and course of treatment.
In medicine, there’s no automatic process that can replace the experience of a well seasoned doctor. The same is true with complex distributed systems — experienced users and operators often develop a “sixth sense” for common issues. Having worked with Cloudera customers in a number of different industries, each with a different workload, dataset, and cluster hardware, I’ve accumulated a bit of this experience, and would like to share some with you today.
In this blog post, I’ll highlight a few tips for improving MapReduce performance. The first few tips are cluster-wide, and will be useful for operators and developers alike. The latter tips are for developers writing custom MapReduce jobs in Java. For each tip, I’ll also note a few of the “symptoms” or “diagnostic tests” that indicate a particular remedy might bring you some good improvements.
Please note, also, that these tips contain lots of rules of thumb based on my experience across a variety of situations. They may not apply to your particular workload, dataset, or cluster, and you should always benchmark your jobs before and after any changes. For these tips, I’ll show some comparative numbers for a 40GB wordcount job on a small 4-node cluster. Tuned optimally, each of the map tasks in this job runs in about 33 seconds, and the total job runtime is about 8m30s.
Tip 1) Configure your cluster correctly
Diagnostics/symptoms:
- top shows slave nodes fairly idle even when all map and reduce task slots are filled up running jobs.
- top shows kernel processes like RAID (mdX_raid*) or pdflush taking most of the CPU time.
- Linux load averages are often seen at more than twice the number of CPUs on the system.
- Linux load averages stay at less than half the number of CPUs on the system, even when running jobs.
- Any swap usage on nodes beyond a few MB.
The first step to optimizing your MapReduce performance is to make sure your cluster configuration has been tuned. For starters, check out our earlier blog post on configuration parameters. In addition to those knobs in the Hadoop configuration, here are a few more checklist items you should go through before beginning to tune the performance of an individual job:
- Make sure the mounts you’re using for DFS and MapReduce storage have been mounted with the noatime option. This disables access time tracking and can improve IO performance.
- Avoid RAID and LVM on TaskTracker and DataNode machines – it generally reduces performance.
- Make sure you’ve configured mapred.local.dir and dfs.data.dir to point to one directory on each of your disks to ensure that all of your IO capacity is used. Run iostat -dx 5 from the sysstat package while the cluster is loaded to make sure each disk shows utilization.
- Ensure that you have SMART monitoring for the health status of your disk drives. MapReduce jobs are fault tolerant, but dying disks can cause performance to degrade as tasks must be re-executed. If you find that a particular TaskTracker becomes blacklisted on many job invocations, it may have a failing drive.
- Monitor and graph swap usage and network usage with software like Ganglia. Monitoring Hadoop metrics in Ganglia is also a good idea. If you see swap being used, reduce the amount of RAM allocated to each task in mapred.child.java.opts (see the sketch after this list).
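To make that last point concrete, here is a minimal, hypothetical sketch of shrinking the per-task heap from job-setup code. The class name and the -Xmx512m value are purely illustrative; mapred.child.java.opts is the 0.20-era property used throughout this post, and you should pick a heap size that matches your own slots-per-node and RAM.

```java
import org.apache.hadoop.mapred.JobConf;

public class TaskMemorySetup {
  // Sketch: if monitoring shows the slaves swapping, shrink the per-task
  // JVM heap. The 512m figure below is purely illustrative.
  public static JobConf limitTaskHeap(JobConf conf) {
    conf.set("mapred.child.java.opts", "-Xmx512m");
    return conf;
  }
}
```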
Benchmarks:
Unfortunately I was not able to perform benchmarks for this tip, as it would involve re-imaging the cluster. If you have had relevant experience, feel free to leave a note in the Comments section below.
Tip 2) Use LZO Compression
Diagnostics/symptoms:
- This is almost always a good idea for intermediate data! In the doctor analogy, consider LZO compression your vitamins.
- Output data size of MapReduce job is nontrivial.
- Slave nodes show high iowait utilization in top and iostat when jobs are running.
Almost every Hadoop job that generates a non-negligible amount of map output will benefit from intermediate data compression with LZO. Although LZO adds a little bit of CPU overhead, the reduced amount of disk IO during the shuffle will usually save time overall.
Whenever a job needs to output a significant amount of data, LZO
compression can also increase performance on the output side. Since
writes are replicated 3x by default, each GB of output data you save
will save 3GB of disk writes.
In order to enable LZO compression, check out our recent guest blog from Twitter. Be sure to set mapred.compress.map.output to true.
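For reference, a minimal sketch of the job-side settings is below. It assumes the LzoCodec from the hadoop-lzo package described in that guest post is installed on every node; the codec class name comes from that package, and the wrapper class here is just illustrative.

```java
import org.apache.hadoop.mapred.JobConf;

public class LzoMapOutputSetup {
  // Sketch: compress intermediate (map) output with LZO.
  // Assumes the hadoop-lzo jar and native libraries are deployed cluster-wide.
  public static JobConf enableLzoMapOutput(JobConf conf) {
    conf.setBoolean("mapred.compress.map.output", true);
    conf.set("mapred.map.output.compression.codec",
             "com.hadoop.compression.lzo.LzoCodec");
    return conf;
  }
}
```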
Benchmarks:
Disabling LZO compression on the wordcount example increased the job runtime only slightly on our cluster. The FILE_BYTES_WRITTEN
counter increased from 3.5GB to 9.2GB, showing that the compression
yielded a 62% decrease in disk IO. Since this job was not sharing the
cluster, and each node has a high ratio of number of disks to number of
tasks, IO is not the bottleneck here, and thus the improvement was not
substantial. On clusters where disks are pegged due to a lot of
concurrent activity, a 60% reduction in IO can yield a substantial
improvement in job completion speed.
Tip 3) Tune the number of map and reduce tasks appropriately
Diagnostics/symptoms:
- Each map or reduce task finishes in less than 30-40 seconds.
- A large job does not utilize all available slots in the cluster.
- After most mappers or reducers are scheduled, one or two remain pending and then run all alone.
Tuning the number of map and reduce tasks for a job is important and easy to overlook. Here are some rules of thumb I use to set these parameters:
- If each task takes less than 30-40 seconds, reduce the number of tasks. The task setup and scheduling overhead is a few seconds, so if tasks finish very quickly, you’re wasting time while not doing work. JVM reuse can also be enabled to solve this problem.
- If a job has more than 1TB of input, consider increasing the block size of the input dataset to 256M or even 512M so that the number of tasks will be smaller. You can change the block size of existing files with a command like hadoop distcp -Ddfs.block.size=$[256*1024*1024] /path/to/inputdata /path/to/inputdata-with-largeblocks. After this command completes, you can remove the original data.
- So long as each task runs for at least 30-40 seconds, increase the number of mapper tasks to some multiple of the number of mapper slots in the cluster. If you have 100 map slots in your cluster, try to avoid having a job with 101 mappers – the first 100 will finish at the same time, and then the 101st will have to run alone before the reducers can run. This is more important on small clusters and small jobs.
- Don’t schedule too many reduce tasks – for most jobs, we recommend a number of reduce tasks equal to or a bit less than the number of reduce slots in the cluster.
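Putting that last rule of thumb into code, here is one hedged way to derive the reducer count from the cluster’s reduce slot capacity using the old (0.20) mapred API. The 0.95 factor and the helper class name are illustrative, not prescriptive.

```java
import java.io.IOException;

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ReducerCountSetup {
  // Sketch: ask the JobTracker how many reduce slots exist and schedule a
  // bit fewer reducers than that, so all reducers can run in a single wave.
  public static void setReducers(JobConf conf) throws IOException {
    ClusterStatus cluster = new JobClient(conf).getClusterStatus();
    int reduceSlots = cluster.getMaxReduceTasks();
    conf.setNumReduceTasks((int) (reduceSlots * 0.95));
  }
}
```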
Benchmarks:
To make the wordcount job run with too many tasks, I ran it with the argument -Dmapred.max.split.size=$[16*1024*1024]. This yielded 2640 tasks instead of the 360 that the framework chose by default. When running with this setting, each task took about 9 seconds, and watching the Cluster Summary view on the JobTracker showed the number of running maps fluctuating between 0 and 24 continuously throughout the job. The entire job finished in 17m52s, more than twice as slow as the original job.
Tip 4) Write a Combiner
Diagnostics/symptoms:
- A job performs aggregation of some sort, and the Reduce input groups counter is significantly smaller than the Reduce input records counter.
- The job performs a large shuffle (e.g. map output bytes is multiple GB per node)
- The number of spilled records is many times larger than the number of map output records as seen in the Job counters.
If your algorithm involves computing aggregates of any sort, chances are you can use a Combiner in order to perform some kind of initial aggregation before the data hits the reducer. The MapReduce framework runs combiners intelligently in order to reduce the amount of data that has to be written to disk and transferred over the network in between the Map and Reduce stages of computation.
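For the word-count case, summing is associative and commutative, so the reducer logic itself can double as the combiner. A minimal sketch using the old (0.20) mapred API follows; the class name is hypothetical.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SumCombiner extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  // Partially sums counts on the map side; the framework may invoke this
  // zero, one, or several times per key before the shuffle.
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }

  // Job wiring: the same class serves as both combiner and reducer.
  public static void configure(JobConf conf) {
    conf.setCombinerClass(SumCombiner.class);
    conf.setReducerClass(SumCombiner.class);
  }
}
```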
Benchmarks:
I modified the word count example to remove the call to setCombinerClass, and otherwise left it the same. This changed the average map task run time from 33s to 48s, and increased the amount of shuffled data from 1GB to 1.4GB. The total job runtime increased from 8m30s to 15m42s, nearly a factor of two. Note that this benchmark was run with map output compression enabled – without map output compression, the effect of the combiner would have been even more important.
Tip 5) Use the most appropriate and compact Writable type for your data
Symptoms/diagnostics:
- Text objects are used for working with non-textual or complex data.
- IntWritable or LongWritable objects are used when most output values tend to be significantly smaller than the maximum value.
When users are new to programming in MapReduce, or are switching from Hadoop Streaming to Java MapReduce, they often use the Text writable type unnecessarily. Although Text can be convenient, converting numeric data to and from UTF8 strings is inefficient and can actually make up a significant portion of CPU time. Whenever dealing with non-textual data, consider using the binary Writables like IntWritable, FloatWritable, etc.
In addition to avoiding the text parsing overhead, the binary
Writable types will take up less space as intermediate data. Since disk
IO and network transfer will become a bottleneck in large jobs, reducing
the sheer number of bytes taken up by the intermediate data can provide
a substantial performance gain. When dealing with integers, it can also
sometimes be faster to use VIntWritable
or VLongWritable
— these implement variable-length integer encoding which saves space
when serializing small integers. For example, the value 4 will be
serialized in a single byte, whereas the value 10000 will be serialized
in two. These variable length numbers can be very effective for data
like counts, where you expect that the majority of records will have a
small number that fits in one or two bytes.
If the Writable types that ship with Hadoop don’t fit the bill,
consider writing your own. It’s pretty simple, and will be significantly
faster than parsing text. If you do so, make sure to provide a RawComparator
— see the source code for the built in Writables for an example.
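As a rough sketch of how small a custom Writable can be, the class below serializes a (count, sum) pair with Hadoop’s variable-length encoding. The type name is made up for illustration, and it omits the RawComparator recommended above; see the built-in Writables for how to add one.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableUtils;

public class CountSumWritable implements Writable {
  private long count;
  private long sum;

  public void set(long count, long sum) {
    this.count = count;
    this.sum = sum;
  }

  // Serialize both fields as variable-length longs, so small values
  // occupy only a byte or two of intermediate data.
  public void write(DataOutput out) throws IOException {
    WritableUtils.writeVLong(out, count);
    WritableUtils.writeVLong(out, sum);
  }

  public void readFields(DataInput in) throws IOException {
    count = WritableUtils.readVLong(in);
    sum = WritableUtils.readVLong(in);
  }
}
```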
Along the same vein, if your MapReduce job is part of a multistage workflow, use a binary format like SequenceFile
for the intermediate steps, even if the last stage needs to output
text. This will reduce the amount of data that needs to be materialized
along the way.
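A minimal sketch of that hand-off with the old mapred API: the producing stage writes a SequenceFile and the consuming stage reads it back, so nothing is round-tripped through text. Class and method names are illustrative.

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class MultiStageSetup {
  // Stage N keeps its output binary...
  public static void writeBinaryOutput(JobConf producerConf) {
    producerConf.setOutputFormat(SequenceFileOutputFormat.class);
  }

  // ...and stage N+1 reads it back without any text parsing.
  public static void readBinaryInput(JobConf consumerConf) {
    consumerConf.setInputFormat(SequenceFileInputFormat.class);
  }
}
```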
Benchmarks:
For the example word count job, I modified the intermediate count values to be Text type rather than IntWritable. In the reducer, I used Integer.parseInt(value.toString()) when accumulating the sum. The performance of the suboptimal version of the WordCount was about 10% slower than the original. The full job ran in a bit over 9 minutes, and each map task took 36 seconds instead of the original 33. Since integer parsing is itself rather fast, this did not represent a large improvement; in the general case, I have seen using more efficient Writables make as much as a 2-3x difference in performance.
Tip 6) Reuse Writables
Symptoms/diagnostics:
- Add -verbose:gc -XX:+PrintGCDetails to mapred.child.java.opts. Then inspect the logs for some tasks. If garbage collection is frequent and represents a lot of time, you may be allocating unnecessary objects.
- grep for “new Text” or “new IntWritable” in your code base. If you find this in an inner loop, or inside the map or reduce functions, this tip may help.
- This tip is especially helpful when your tasks are constrained in RAM.
One of the first mistakes that many MapReduce users make is to allocate a new Writable
object for every output from a mapper or reducer. For example, one might implement a word-count mapper like this:
public void map(...) {
  ...
  for (String word : words) {
    output.collect(new Text(word), new IntWritable(1));
  }
}
This implementation causes thousands of very short-lived objects to be allocated. While the Java garbage collector does a reasonable job at dealing with this, it is more efficient to write:
class MyMapper ... {
  Text wordText = new Text();
  IntWritable one = new IntWritable(1);

  public void map(...) {
    ...
    for (String word : words) {
      wordText.set(word);
      output.collect(wordText, one);
    }
  }
}
Benchmarks:
When I modified the word count example as described above, I initially found it made no difference in the run time of the job. This is because this cluster’s default settings include a 1GB heap size for each task, so garbage collection never ran. However, running it with each task allocated only 200mb of heap size showed a drastic slowdown in the version that did not reuse Writables — the total job runtime increased from around 8m30s to over 17 minutes. The original version, which does reuse Writables, stayed the same speed even with the smaller heap. Since reusing Writables is an easy fix, I recommend always doing so – it may not bring you a gain for every job, but if you’re low on memory it can make a huge difference.
Tip 7) Use “Poor Man’s Profiling” to see what your tasks are doing
This is a trick I almost always use when first looking at the performance of a MapReduce job. Profiling purists will disagree and say that this won’t work, but you can’t argue with results!
In order to do what I call “poor man’s profiling”, ssh into one of your slave nodes while some tasks from a slow job are running. Then simply run sudo killall -QUIT java 5-10 times in a row, each a few seconds apart. Don’t worry — this doesn’t cause anything to quit, despite the name. Then, use the JobTracker interface to navigate to the stdout logs for one of the tasks that’s running on this node, or look in /var/log/hadoop/userlogs/ for a stdout file of a task that is currently running. You’ll see stack trace output from each time you sent the SIGQUIT signal to the JVM.
It takes a bit of experience to parse this output, but here’s the method I usually use:
- For each thread in the trace, quickly scan for the name of your Java package (e.g. com.mycompany.mrjobs). If you don’t see any lines in the trace that are part of your code, skip over this thread.
- When you find a stack trace that has some of your code in it, make a quick mental note what it’s doing. For example, “something NumberFormat-related” is all you need at this point. Don’t worry about specific line numbers yet.
- Go down to the next dump you took a few seconds later in the logs. Perform the same process here and make a note.
- After you’ve gone through 4-5 of the traces, you might notice that the same vague thing shows up in every one of them. If that thing is something that you expect to be fast, you probably found your culprit. If you take 10 traces, and 5 of them show NumberFormat in the dump, it means that you’re spending somewhere around 50% of your CPU time formatting numbers, and you might consider doing something differently.
Sure, this method isn’t as scientific as using a real profiler on your tasks, but I’ve found that it’s a surefire way to notice any glaring CPU bottlenecks very quickly and with no setup involved. It’s also a technique that you’ll get better at with practice as you learn what a normal dump looks like and when something jumps out as odd.
Here are a few performance mistakes I often find through this technique:
- NumberFormat is slow – avoid it where possible.
- String.split, as well as encoding or decoding UTF8, are slower than you think – see the above tips about using the appropriate Writables.
- Concatenating Strings rather than using StringBuffer.append.