Prerequisites
Required Software
- Java 1.6.x, preferably from Sun rather than OpenJDK, must be installed.
- ssh must be installed and sshd must be running in order to use the Hadoop scripts that manage remote Hadoop daemons. Every machine in the cluster needs ssh (on Linux it is usually already present), and the nodes must trust each other so that SSH works without a password prompt; otherwise every remote start, and every scp to another machine, asks for a password. See: http://lvdccyb.iteye.com/blog/1163686
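As a sketch of the passwordless setup (assuming OpenSSH; the `hadoop@slave1` account and host are placeholders):

```shell
# Create ~/.ssh and a passphrase-less RSA key pair on the master, if not present.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -P '' -f "$HOME/.ssh/id_rsa" -q
# Install the public key on each slave so ssh/scp stop prompting for a password.
# 'hadoop@slave1' is a placeholder account and host:
#   ssh-copy-id hadoop@slave1
```

After the public key is on every slave, `ssh slave1` from the master should log in directly, which is what the start/stop scripts rely on.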
Installation
Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. These are the masters. The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves.
The root of the distribution is referred to as HADOOP_HOME. All machines in the cluster usually have the same HADOOP_HOME path.
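For example (the install path below is hypothetical), many setups export HADOOP_HOME in the shell profile of every node:

```shell
# Hypothetical install root; use the actual directory where you unpacked the distribution.
export HADOOP_HOME=/usr/local/hadoop
# Putting bin/ on PATH makes the hadoop command and the start/stop scripts available everywhere.
export PATH="$PATH:$HADOOP_HOME/bin"
```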
Configuration
Configuring the Hadoop Daemons
This section deals with important parameters to be specified in the following:
conf/core-site.xml:
Parameter | Value | Notes
fs.default.name | URI of the NameNode. | hdfs://hostname/
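A minimal conf/core-site.xml using this parameter might look like the following (the hostname and port are placeholders):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000/</value>
  </property>
</configuration>
```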
conf/hdfs-site.xml:
Parameter | Value | Notes
dfs.name.dir | Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. | If this is a comma-delimited list of directories, the name table is replicated in all of the directories, for redundancy.
dfs.data.dir | Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, data will be stored in all named directories, typically on different devices.
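For illustration (all paths below are placeholders), these two parameters go into conf/hdfs-site.xml, with the dfs.data.dir entries typically on different disks:

```xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
  </property>
</configuration>
```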
conf/mapred-site.xml:
Parameter | Value | Notes
mapred.job.tracker | Host or IP and port of the JobTracker. | host:port pair.
mapred.system.dir | Path on HDFS where the MapReduce framework stores system files, e.g. /hadoop/mapred/system/. | This is in the default filesystem (HDFS) and must be accessible from both the server and client machines.
mapred.local.dir | Comma-separated list of paths on the local filesystem where temporary MapReduce data is written. | Multiple paths help spread disk I/O.
mapred.tasktracker.{map|reduce}.tasks.maximum | The maximum number of map/reduce tasks run simultaneously on a given TaskTracker, individually. | Defaults to 2 (2 maps and 2 reduces), but vary it depending on your hardware.
dfs.hosts/dfs.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable DataNodes.
mapred.hosts/mapred.hosts.exclude | List of permitted/excluded TaskTrackers. | If necessary, use these files to control the list of allowable TaskTrackers.
mapred.queue.names | Comma-separated list of queues to which jobs can be submitted. | The MapReduce system always supports at least one queue named default, so this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, like the Capacity Scheduler, support multiple queues; if such a scheduler is used, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue via the property mapred.job.queue.name in the job configuration. The scheduler may manage a separate configuration file for the properties of these queues; refer to the scheduler's documentation for details.
mapred.acls.enabled | Boolean specifying whether queue ACLs and job ACLs are checked when authorizing users for queue and job operations. | If true, queue ACLs are checked when submitting and administering jobs, and job ACLs are checked when authorizing view and modification of jobs. Queue ACLs are specified with configuration parameters of the form mapred.queue.queue-name.acl-name, defined below under mapred-queue-acls.xml. Job ACLs are described at Job Authorization.
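As a sketch (hostname, port, and slot count are placeholders), the JobTracker address and per-TaskTracker map slots would be set in conf/mapred-site.xml roughly like this:

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
</configuration>
```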
conf/mapred-queue-acls.xml:
Parameter | Value | Notes
mapred.queue.queue-name.acl-submit-job | List of users and groups that can submit jobs to the specified queue-name. | Both lists are comma-separated lists of names, and the two lists are separated by a single blank. Example: user1,user2 group1,group2. To define only a list of groups, start the value with a blank.
mapred.queue.queue-name.acl-administer-jobs | List of users and groups that can view job details, change the priority of, or kill jobs submitted to the specified queue-name. | Same format as acl-submit-job. Note that the owner of a job can always change its priority or kill it, irrespective of the ACLs.
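For example (the user and group names are placeholders), ACLs for the built-in default queue could be written as follows; note the leading blank in the second value, which makes it a groups-only list:

```xml
<configuration>
  <property>
    <name>mapred.queue.default.acl-submit-job</name>
    <value>user1,user2 group1</value>
  </property>
  <property>
    <name>mapred.queue.default.acl-administer-jobs</name>
    <value> admins</value>
  </property>
</configuration>
```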
Typically all the above parameters are marked as final to ensure that they cannot be overridden by user applications.
Of the many parameters above, only a few were set here. The final configuration:

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>40</value>
  </property>
</configuration>

conf/hadoop-env.sh: only JAVA_HOME was changed.

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>20</value>
  </property>
</configuration>
Hadoop Startup
To start a Hadoop cluster you will need to start both the HDFS and the Map/Reduce cluster.
Format a new distributed filesystem:
$ bin/hadoop namenode -format
Start the HDFS with the following command, run on the designated NameNode:
$ bin/start-dfs.sh
The bin/start-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all the listed slaves.
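The slaves file itself is just a list of worker hostnames, one per line (the names below are placeholders):

```
slave1
slave2
slave3
```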
Start Map-Reduce with the following command, run on the designated JobTracker:
$ bin/start-mapred.sh
The bin/start-mapred.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on all the listed slaves.