
Hadoop 2.6 Cluster Environment Setup


I. Environment

1. Machines: one physical machine and one virtual machine

2. Linux version: [spark@S1PA11 ~]$ cat /etc/issue
Red Hat Enterprise Linux Server release 5.4 (Tikanga)

3. JDK: [spark@S1PA11 ~]$ java -version
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)

4. Cluster nodes: two — S1PA11 (Master) and S1PA222 (Slave)

II. Preparation

1. Install the Java JDK (covered in an earlier post): http://blog.csdn.net/stark_summer/article/details/42391531

2. Set up passwordless SSH authentication: http://blog.csdn.net/stark_summer/article/details/42393053
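For reference, the passwordless-SSH setup in the linked post generally reduces to the commands below. This is only a sketch; the user and hostnames follow this article's environment and will differ on other clusters.

```shell
# On the master (S1PA11): generate an RSA key pair with an empty passphrase.
# Skip this if ~/.ssh/id_rsa already exists.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key on the slave so the master can log in without a
# password (ssh-copy-id prompts for the slave's password one last time).
ssh-copy-id spark@S1PA222

# Verify: this should print the slave's hostname without a password prompt.
ssh spark@S1PA222 hostname
```

The master needs passwordless access to every node listed in the slaves file, since start-dfs.sh and start-yarn.sh start the remote daemons over SSH.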

3. Download a Hadoop release: http://mirror.bit.edu.cn/apache/hadoop/common/

III. Installing Hadoop

After downloading, you will have the hadoop-2.6.0.tar.gz archive.

1. Extract it: tar -xzvf hadoop-2.6.0.tar.gz
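Before extracting, it can be worth listing the archive contents as a quick integrity check — a corrupt download fails immediately here rather than halfway through extraction:

```shell
# List the first few entries of the archive; exits non-zero if it is corrupt.
tar -tzf hadoop-2.6.0.tar.gz | head

# Extract (note the space between the -xzvf options and the filename).
tar -xzvf hadoop-2.6.0.tar.gz
```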

2. Move it into the target directory: [spark@S1PA11 software]$ mv hadoop-2.6.0 ~/opt/

3. Enter the hadoop directory: [spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin dfs etc include input lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share tmp

Before configuring, create the following directories on the local filesystem: ~/opt/hadoop-2.6.0/tmp, ~/opt/hadoop-2.6.0/dfs/name and ~/opt/hadoop-2.6.0/dfs/data (these match the paths referenced in the configuration files below). Seven configuration files are involved, all under ~/opt/hadoop-2.6.0/etc/hadoop; they can be edited with gedit or any other text editor.

~/opt/hadoop-2.6.0/etc/hadoop/hadoop-env.sh
~/opt/hadoop-2.6.0/etc/hadoop/yarn-env.sh
~/opt/hadoop-2.6.0/etc/hadoop/slaves
~/opt/hadoop-2.6.0/etc/hadoop/core-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/hdfs-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/mapred-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/yarn-site.xml
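The required directories, plus a backup of each of the seven files before editing, can be created in one go (paths assume the ~/opt/hadoop-2.6.0 layout used throughout this article):

```shell
# Create the local directories referenced by core-site.xml and hdfs-site.xml.
mkdir -p ~/opt/hadoop-2.6.0/tmp \
         ~/opt/hadoop-2.6.0/dfs/name \
         ~/opt/hadoop-2.6.0/dfs/data

# Keep a pristine copy of each config file we are about to edit.
cd ~/opt/hadoop-2.6.0/etc/hadoop
for f in hadoop-env.sh yarn-env.sh slaves core-site.xml \
         hdfs-site.xml mapred-site.xml yarn-site.xml; do
  cp "$f" "$f.orig"
done
```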

4. Enter the Hadoop configuration directory

[spark@S1PA11 hadoop-2.6.0]$ cd etc/hadoop/
[spark@S1PA11 hadoop]$ ls
capacity-scheduler.xml hadoop-env.sh httpfs-env.sh kms-env.sh mapred-env.sh ssl-client.xml.example
configuration.xsl hadoop-metrics2.properties httpfs-log4j.properties kms-log4j.properties mapred-queues.xml.template ssl-server.xml.example
container-executor.cfg hadoop-metrics.properties httpfs-signature.secret kms-site.xml mapred-site.xml yarn-env.cmd
core-site.xml hadoop-policy.xml httpfs-site.xml log4j.properties mapred-site.xml.template yarn-env.sh
hadoop-env.cmd hdfs-site.xml kms-acls.xml mapred-env.cmd slaves yarn-site.xml

4.1 Configure the hadoop-env.sh file: update JAVA_HOME

# The java implementation to use.
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37

4.2 Configure the yarn-env.sh file: update JAVA_HOME

# some Java parameters

export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
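Rather than editing both files by hand in gedit, the two JAVA_HOME changes can be scripted. A sketch, run from etc/hadoop — appending works because the last assignment in a sourced shell script wins:

```shell
# Run from ~/opt/hadoop-2.6.0/etc/hadoop.
JDK=/home/spark/opt/java/jdk1.6.0_37
for f in hadoop-env.sh yarn-env.sh; do
  # Append an export so it overrides any earlier JAVA_HOME in the file.
  echo "export JAVA_HOME=$JDK" >> "$f"
done
```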

4.3 Configure the slaves file: add the slave node

S1PA222

4.4 Configure the core-site.xml file: add the Hadoop core settings (HDFS on port 9000; temporary directory file:/home/spark/opt/hadoop-2.6.0/tmp):

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://S1PA11:9000</value>
</property>

<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/spark/opt/hadoop-2.6.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
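After editing, you can quickly confirm a setting took effect by pulling it out of the XML; a grep/sed sketch is below (on a running cluster, `./bin/hdfs getconf -confKey fs.defaultFS` reports the same thing):

```shell
# Print the value that follows <name>fs.defaultFS</name> in core-site.xml.
# Prints hdfs://S1PA11:9000 with the configuration above.
grep -A1 '<name>fs.defaultFS</name>' core-site.xml \
  | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
```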

4.5 Configure the hdfs-site.xml file: add the HDFS settings (NameNode/DataNode addresses and directory locations). Note that dfs.replication is set to 3 here even though this cluster has only one datanode, which is why the dfsadmin report later shows under-replicated blocks.

<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>S1PA11:9001</value>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/spark/opt/hadoop-2.6.0/dfs/name</value>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/spark/opt/hadoop-2.6.0/dfs/data</value>
</property>

<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

</configuration>

4.6 Configure the mapred-site.xml file: add the MapReduce settings (run on the YARN framework; JobHistory server address and its web UI address)

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>S1PA11:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>S1PA11:19888</value>
</property>
</configuration>

4.7 Configure the yarn-site.xml file: add the YARN settings (shuffle service and ResourceManager addresses)

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>S1PA11:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>S1PA11:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>S1PA11:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>S1PA11:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>S1PA11:8088</value>
</property>

</configuration>

5. Copy the configured Hadoop directory to the slave machine

[spark@S1PA11 opt]$ scp -r hadoop-2.6.0/ spark@10.126.34.43:~/opt/
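After the scp, it is worth confirming that both machines end up with identical configuration. One way, sketched below, is to checksum the seven edited files on the master and verify them on the slave with `md5sum -c`; the ssh/scp steps assume the passwordless login set up earlier:

```shell
# On the master: record checksums of the edited config files.
cd ~/opt/hadoop-2.6.0/etc/hadoop
md5sum hadoop-env.sh yarn-env.sh slaves core-site.xml \
       hdfs-site.xml mapred-site.xml yarn-site.xml > /tmp/conf.md5

# On the slave: any file that differs is reported as FAILED.
scp /tmp/conf.md5 spark@S1PA222:/tmp/
ssh spark@S1PA222 'cd ~/opt/hadoop-2.6.0/etc/hadoop && md5sum -c /tmp/conf.md5'
```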

IV. Verification

1. Format the NameNode (formatting is normally required only once, on the master; reformatting an existing cluster wipes its HDFS metadata):

[spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin dfs etc include input lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share tmp
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs namenode -format

[spark@S1PA222 .ssh]$ cd ~/opt/hadoop-2.6.0
[spark@S1PA222 hadoop-2.6.0]$ ./bin/hdfs namenode -format

2. Start HDFS:

[spark@S1PA11 hadoop-2.6.0]$ ./sbin/start-dfs.sh
15/01/05 16:41:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [S1PA11]
S1PA11: starting namenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-namenode-S1PA11.out
S1PA222: starting datanode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-datanode-S1PA222.out
Starting secondary namenodes [S1PA11]
S1PA11: starting secondarynamenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-secondarynamenode-S1PA11.out
15/01/05 16:41:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
22230 Master
30889 Jps
22478 Worker
30498 NameNode
30733 SecondaryNameNode
19781 ResourceManager

3. Stop HDFS:

[spark@S1PA11 hadoop-2.6.0]$./sbin/stop-dfs.sh
15/01/05 16:40:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [S1PA11]
S1PA11: stopping namenode
S1PA222: stopping datanode
Stopping secondary namenodes [S1PA11]
S1PA11: stopping secondarynamenode
15/01/05 16:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
30336 Jps
22230 Master
22478 Worker
19781 ResourceManager

4. Start YARN:

[spark@S1PA11 hadoop-2.6.0]$./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-resourcemanager-S1PA11.out
S1PA222: starting nodemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-nodemanager-S1PA222.out
[spark@S1PA11 hadoop-2.6.0]$ jps
31233 ResourceManager
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
31503 Jps

5. Stop YARN:

[spark@S1PA11 hadoop-2.6.0]$ ./sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
S1PA222: stopping nodemanager
no proxyserver to stop
[spark@S1PA11 hadoop-2.6.0]$ jps
31167 Jps
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode

6. Check the cluster status:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs dfsadmin -report
15/01/05 16:44:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 52101857280 (48.52 GB)
Present Capacity: 45749510144 (42.61 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used: 823296 (804 KB)
DFS Used%: 0.00%
Under replicated blocks: 10
Blocks with corrupt replicas: 0
Missing blocks: 0


-------------------------------------------------
Live datanodes (1):


Name: 10.126.45.56:50010 (S1PA222)
Hostname: S1PA209
Decommission Status : Normal
Configured Capacity: 52101857280 (48.52 GB)
DFS Used: 823296 (804 KB)
Non DFS Used: 6352347136 (5.92 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 05 16:44:50 CST 2015

7. Browse the HDFS web UI: http://10.58.44.47:50070/


8. Browse the ResourceManager web UI: http://10.58.44.47:8088/


9. Run the WordCount example

9.1 Create a local input directory: [spark@S1PA11 hadoop-2.6.0]$ mkdir input

9.2 Create f1 and f2 under input with some sample content:

[spark@S1PA11 hadoop-2.6.0]$ cat input/f1
Hello world bye jj
[spark@S1PA11 hadoop-2.6.0]$ cat input/f2
Hello Hadoop bye Hadoop
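The two files can be created with echo, using the contents shown above:

```shell
# Create the local sample inputs for the WordCount job.
mkdir -p input
echo "Hello world bye jj" > input/f1
echo "Hello Hadoop bye Hadoop" > input/f2
```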

9.3 Create the /tmp/input directory on HDFS:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp
15/01/05 16:53:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp/input
15/01/05 16:54:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

9.4 Copy f1 and f2 into the HDFS /tmp/input directory:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -put input/ /tmp
15/01/05 16:56:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

9.5 Verify that f1 and f2 are on HDFS:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -ls /tmp/input/
15/01/05 16:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r-- 3 spark supergroup 20 2015-01-04 19:09 /tmp/input/f1
-rw-r--r-- 3 spark supergroup 25 2015-01-04 19:09 /tmp/input/f2

9.6 Run the WordCount program:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
15/01/05 17:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/01/05 17:00:09 INFO client.RMProxy: Connecting to ResourceManager at S1PA11/10.58.44.47:8032
15/01/05 17:00:11 INFO input.FileInputFormat: Total input paths to process : 2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: number of splits:2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1420447392452_0001
15/01/05 17:00:12 INFO impl.YarnClientImpl: Submitted application application_1420447392452_0001
15/01/05 17:00:12 INFO mapreduce.Job: The url to track the job: http://S1PA11:8088/proxy/application_1420447392452_0001/
15/01/05 17:00:12 INFO mapreduce.Job: Running job: job_1420447392452_0001

9.7 View the result:

[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -cat /output/part-r-00000
15/01/05 17:06:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
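As a sanity check on what the job should produce, the same word counts can be computed locally from the two input files with a standard shell pipeline (each of Hello, Hadoop and bye appears twice; world and jj once):

```shell
# Split the input into one word per line, then count occurrences.
cat input/f1 input/f2 | tr ' ' '\n' | sort | uniq -c | sort -rn
```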