Deploying MapReduce v2 (YARN) on a Cluster
This section describes configuration tasks for YARN clusters only, and is specifically tailored for administrators who have installed YARN from packages.
Do the following tasks after you have configured and deployed HDFS:
- Configure properties for YARN clusters
- Configure YARN daemons
- Configure the History Server
- Configure the Staging Directory
- Deploy your custom configuration to your entire cluster
- Start HDFS
- Create the HDFS /tmp directory
- Create the History Directory and Set Permissions
- Create Log Directories
- Verify the HDFS File Structure
- Start YARN and the MapReduce JobHistory Server
- Create a home directory for each MapReduce user
- Configure the Hadoop daemons to start at boot time
When starting, stopping, and restarting CDH components, always use the service(8) command rather than running scripts in /etc/init.d directly. This is important because service sets the current working directory to / and removes most environment variables (passing only LANG and TERM), creating a predictable environment in which to administer the service. If you run the scripts in /etc/init.d directly, any environment variables you have set remain in force and could produce unpredictable results. (If you install CDH from packages, service is installed as part of the Linux Standard Base (LSB).)
About MapReduce v2 (YARN)
The default installation in CDH 5 is MapReduce 2.x (MRv2), built on the YARN framework. In this document we usually refer to this new version as YARN. The fundamental idea of MRv2's YARN architecture is to split the two primary responsibilities of the JobTracker (resource management and job scheduling/monitoring) into separate daemons: a global ResourceManager (RM) and per-application ApplicationMasters (AM). With MRv2, the ResourceManager and per-node NodeManagers (NM) form the data-computation framework. The ResourceManager service effectively replaces the functions of the JobTracker, and NodeManagers run on slave nodes instead of TaskTracker daemons. The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks. For details of the new architecture, see Apache Hadoop NextGen MapReduce (YARN).
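Once the cluster is up, you can observe this split from the command line. As a small illustration (assuming the yarn client is installed on a cluster node and configured to talk to your ResourceManager), the following commands ask the ResourceManager for the NodeManagers it manages and for the applications whose ApplicationMasters it is tracking:

$ yarn node -list
$ yarn application -list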
See also Selecting Appropriate JAR files for your MRv1 and YARN Jobs.
Make sure you are not trying to run MRv1 and YARN on the same set of nodes at the same time. This is not supported; it will degrade performance and may result in an unstable cluster deployment.
- If you have installed YARN from packages, follow the instructions below to deploy it. (To deploy MRv1 instead, see Deploying MapReduce v1 (MRv1) on a Cluster.)
- If you have installed CDH 5 from tarballs, the default deployment is YARN. Keep in mind that the instructions on this page are tailored for a deployment following installation from packages.
Step 1: Configure Properties for YARN Clusters
Edit these files in the custom directory you created when you copied the Hadoop configuration. When you have finished, you will push this configuration to all the nodes in the cluster; see Step 5.
| Property | Configuration File | Description |
| --- | --- | --- |
| mapreduce.framework.name | mapred-site.xml | If you plan on running YARN, you must set this property to yarn. |
Sample Configuration:
mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Step 2: Configure YARN daemons
Configure the following services: ResourceManager (on a dedicated host) and NodeManager (on every host where you plan to run MapReduce v2 jobs).
The following table shows the most important properties that you must configure for your cluster in yarn-site.xml:

| Property | Recommended value | Description |
| --- | --- | --- |
| yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for MapReduce applications. |
| yarn.resourcemanager.hostname | resourcemanager.company.com | The following properties are set to their default ports on this host: yarn.resourcemanager.address, yarn.resourcemanager.admin.address, yarn.resourcemanager.scheduler.address, yarn.resourcemanager.resource-tracker.address, yarn.resourcemanager.webapp.address. |
| yarn.application.classpath | $HADOOP_CONF_DIR, $HADOOP_COMMON_HOME/*, $HADOOP_COMMON_HOME/lib/*, $HADOOP_HDFS_HOME/*, $HADOOP_HDFS_HOME/lib/*, $HADOOP_MAPRED_HOME/*, $HADOOP_MAPRED_HOME/lib/*, $HADOOP_YARN_HOME/*, $HADOOP_YARN_HOME/lib/* | Classpath for typical applications. |
| yarn.log.aggregation.enable | true | Enables aggregation of container logs. |
Next, you need to specify, create, and assign the correct permissions to the local directories where you want the YARN daemons to store data.
You specify the directories by configuring the following two properties in the yarn-site.xml file on all cluster nodes:
| Property | Description |
| --- | --- |
| yarn.nodemanager.local-dirs | Specifies the URIs of the directories where the NodeManager stores its localized files. All of the files required for running a particular YARN application are put here for the duration of the application run. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/local through file:///data/N/yarn/local. |
| yarn.nodemanager.log-dirs | Specifies the URIs of the directories where the NodeManager stores container log files. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/logs through file:///data/N/yarn/logs. |
| yarn.nodemanager.remote-app-log-dir | Specifies the URI of the directory where logs are aggregated. Set the value to hdfs://var/log/hadoop-yarn/apps. See also Step 9. |
Here is an example configuration:
yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>resourcemanager.company.com</value>
</property>
<property>
  <description>Classpath for typical applications.</description>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
    $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
    $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
    $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
  </value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
</property>
<property>
  <name>yarn.log.aggregation.enable</name>
  <value>true</value>
</property>
<property>
  <description>Where to aggregate logs</description>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>hdfs://var/log/hadoop-yarn/apps</value>
</property>
After specifying these directories in the yarn-site.xml file, you must create the directories and assign the correct file permissions to them on each node in your cluster.
In the following instructions, local path examples are used to represent Hadoop parameters. Change the path examples to match your configuration.
To configure local storage directories for use by YARN:
- Create the yarn.nodemanager.local-dirs local directories:
$ sudo mkdir -p /data/1/yarn/local /data/2/yarn/local /data/3/yarn/local /data/4/yarn/local
- Create the yarn.nodemanager.log-dirs local directories:
$ sudo mkdir -p /data/1/yarn/logs /data/2/yarn/logs /data/3/yarn/logs /data/4/yarn/logs
- Configure the owner of the yarn.nodemanager.local-dirs directory to be the yarn user:
$ sudo chown -R yarn:yarn /data/1/yarn/local /data/2/yarn/local /data/3/yarn/local /data/4/yarn/local
- Configure the owner of the yarn.nodemanager.log-dirs directory to be the yarn user:
$ sudo chown -R yarn:yarn /data/1/yarn/logs /data/2/yarn/logs /data/3/yarn/logs /data/4/yarn/logs
Here is a summary of the correct owner and permissions of the local directories:
| Directory | Owner | Permissions |
| --- | --- | --- |
| yarn.nodemanager.local-dirs | yarn:yarn | drwxr-xr-x |
| yarn.nodemanager.log-dirs | yarn:yarn | drwxr-xr-x |
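To spot-check a node after running the commands above (a sketch using the example /data/1 mount point), list the directories and confirm they match the table:

$ ls -ld /data/1/yarn/local /data/1/yarn/logs

Both entries should show owner and group yarn:yarn and permissions drwxr-xr-x.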
Step 3: Configure the History Server
Configure the following properties in mapred-site.xml:

| Property | Recommended value | Description |
| --- | --- | --- |
| mapreduce.jobhistory.address | historyserver.company.com:10020 | The address of the JobHistory Server host:port |
| mapreduce.jobhistory.webapp.address | historyserver.company.com:19888 | The address of the JobHistory Server web application host:port |
In addition, make sure proxying is enabled for the mapred user; configure the following properties in core-site.xml:
| Property | Value | Description |
| --- | --- | --- |
| hadoop.proxyuser.mapred.groups | * | Allows the mapred user to move files belonging to users in these groups |
| hadoop.proxyuser.mapred.hosts | * | Allows the mapred user to move files belonging to users on these hosts |
Step 4: Configure the Staging Directory
YARN requires a staging directory for temporary files created by running jobs. By default it creates /tmp/hadoop-yarn/staging with restrictive permissions that may prevent your users from running jobs. To forestall this, you should configure and create the staging directory yourself; in the example that follows we use /user:
- Configure yarn.app.mapreduce.am.staging-dir in mapred-site.xml:
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>
- Once HDFS is up and running, you will create this directory and a history subdirectory under it (see Step 8).
Alternatively, you can do the following:
- Configure mapreduce.jobhistory.intermediate-done-dir and mapreduce.jobhistory.done-dir in mapred-site.xml.
- Create these two directories.
- Set permissions on mapreduce.jobhistory.intermediate-done-dir to 1777.
- Set permissions on mapreduce.jobhistory.done-dir to 750.
If you configure mapreduce.jobhistory.intermediate-done-dir and mapreduce.jobhistory.done-dir as above, you can skip Step 8.
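For reference, the alternative might look like the following sketch. The done_intermediate and done paths under /user/history are illustrative choices only, not values taken from this page; substitute the locations you want to use.

In mapred-site.xml:

<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/user/history/done_intermediate</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/user/history/done</value>
</property>

Then, once HDFS is running, create the directories and set the permissions listed above:

$ sudo -u hdfs hadoop fs -mkdir -p /user/history/done_intermediate /user/history/done
$ sudo -u hdfs hadoop fs -chmod 1777 /user/history/done_intermediate
$ sudo -u hdfs hadoop fs -chmod 750 /user/history/done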
Step 5: If Necessary, Deploy your Custom Configuration to your Entire Cluster
Deploy the configuration if you have not already done so.
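As a reminder of what deploying the configuration involves, here is a sketch that assumes your custom directory from Step 1 is /etc/hadoop/conf.my_cluster, that each node is reachable as myhost.company.com (a placeholder), and that your system uses the alternatives framework; adjust the copy mechanism and priority for your environment:

$ scp -r /etc/hadoop/conf.my_cluster myhost.company.com:/etc/hadoop/conf.my_cluster
$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster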
Step 6: If Necessary, Start HDFS on Every Node in the Cluster
Start HDFS if you have not already done so.
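If HDFS is not already running, a minimal sketch using the CDH package service names is shown below; run each command on the hosts where the corresponding role is installed (add hadoop-hdfs-secondarynamenode if you run one).

On the NameNode system:

$ sudo service hadoop-hdfs-namenode start

On each DataNode system:

$ sudo service hadoop-hdfs-datanode start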
Step 7: If Necessary, Create the HDFS /tmp Directory
Create the /tmp directory if you have not already done so.
If you do not create /tmp properly, with the right permissions as shown below, you may have problems with CDH components later. Specifically, if you don't create /tmp yourself, another process may create it automatically with restrictive permissions that will prevent your other applications from using it.
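If you have not created /tmp yet, the following commands (a sketch consistent with the drwxrwxrwt entry shown in Step 10) create it as the hdfs user and open it up with the sticky bit set:

$ sudo -u hdfs hadoop fs -mkdir /tmp
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp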
Step 8: Create the history Directory and Set Permissions and Owner
This is a subdirectory of the staging directory you configured in Step 4. In this example we're using /user/history. Create it and set permissions as follows:
$ sudo -u hdfs hadoop fs -mkdir -p /user/history
$ sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
$ sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history
Step 9: Create Log Directories
See also Step 2.
Create the /var/log/hadoop-yarn directory and set ownership:
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
You need to create this directory because it is the parent of /var/log/hadoop-yarn/apps which is explicitly configured in yarn-site.xml.
Step 10: Verify the HDFS File Structure
$ sudo -u hdfs hadoop fs -ls -R /
You should see:
drwxrwxrwt   - hdfs supergroup          0 2012-04-19 14:31 /tmp
drwxr-xr-x   - hdfs supergroup          0 2012-05-31 10:26 /user
drwxrwxrwt   - yarn supergroup          0 2012-04-19 14:31 /user/history
drwxr-xr-x   - hdfs supergroup          0 2012-05-31 15:31 /var
drwxr-xr-x   - hdfs supergroup          0 2012-05-31 15:31 /var/log
drwxr-xr-x   - yarn mapred              0 2012-05-31 15:31 /var/log/hadoop-yarn
Step 11: Start YARN and the MapReduce JobHistory Server
To start YARN, start the ResourceManager and NodeManager services:
Make sure you always start ResourceManager before starting NodeManager services.
On the ResourceManager system:
$ sudo service hadoop-yarn-resourcemanager start
On each NodeManager system (typically the same ones where DataNode service runs):
$ sudo service hadoop-yarn-nodemanager start
To start the MapReduce JobHistory Server:
On the MapReduce JobHistory Server system:
$ sudo service hadoop-mapreduce-historyserver start
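To confirm the daemons started, you can open their web interfaces. The ResourceManager web UI listens on port 8088 by default, and the JobHistory Server web application uses the port you configured in Step 3 (19888 in the example); the hostnames below are the examples used earlier on this page:

http://resourcemanager.company.com:8088/ (ResourceManager web UI)
http://historyserver.company.com:19888/ (JobHistory Server web UI)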
Step 12: Create a Home Directory for each MapReduce User
Create a home directory for each MapReduce user. It is best to do this on the NameNode; for example:
$ sudo -u hdfs hadoop fs -mkdir /user/<user>
$ sudo -u hdfs hadoop fs -chown <user> /user/<user>
where <user> is the Linux username of each user.
Alternatively, you can log in as each Linux user (or write a script to do so) and create the home directory as follows:
sudo -u hdfs hadoop fs -mkdir /user/$USER
sudo -u hdfs hadoop fs -chown $USER /user/$USER
Step 13: Configure the Hadoop Daemons to Start at Boot Time
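The details depend on your operating system. As a minimal sketch for a RHEL-compatible system using chkconfig (use update-rc.d or systemctl enable on other distributions), enable each daemon's init script on the host where that daemon runs:

On the ResourceManager system:

$ sudo chkconfig hadoop-yarn-resourcemanager on

On each NodeManager system:

$ sudo chkconfig hadoop-yarn-nodemanager on

On the MapReduce JobHistory Server system:

$ sudo chkconfig hadoop-mapreduce-historyserver on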