Hadoop Cluster Installation


Original post: http://blog.csdn.net/stark_summer/article/details/42424279



I. Environment
1. Machines: one physical machine and one virtual machine
2. Linux version: [spark@S1PA11 ~]$ cat /etc/issue
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
3. JDK: [spark@S1PA11 ~]$ java -version
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
4. Cluster nodes: two, S1PA11 (master) and S1PA222 (slave)
II. Preparation
1. Install the Java JDK (covered in an earlier post): http://blog.csdn.net/stark_summer/article/details/42391531
2. Set up passwordless SSH between the nodes (a sketch follows this list): http://blog.csdn.net/stark_summer/article/details/42393053
3. Download a Hadoop release: http://mirror.bit.edu.cn/apache/hadoop/common/
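A minimal sketch of the passwordless SSH setup for item 2, assuming the spark user exists on both hosts (the linked post has the full walkthrough):

[spark@S1PA11 ~]$ ssh-keygen -t rsa                 # accept the defaults; leave the passphrase empty
[spark@S1PA11 ~]$ ssh-copy-id spark@S1PA222         # append the public key to the slave's authorized_keys
[spark@S1PA11 ~]$ ssh S1PA222 hostname              # should print S1PA222 without a password prompt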
III. Install Hadoop
Start from the downloaded hadoop-2.6.0.tar.gz archive (a sample download command is sketched below).
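If you still need to fetch the tarball, Apache mirrors typically place it one level below the URL given in section II (path assumed from the standard mirror layout):

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz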
1. Unpack it: tar -xzvf hadoop-2.6.0.tar.gz
2. Move it to the target directory: [spark@S1PA11 software]$ mv hadoop-2.6.0 ~/opt/
3. Enter the Hadoop directory: [spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
Before configuring anything, create the local directories that the configuration below points at: tmp, dfs/name, and dfs/data under the Hadoop install directory (a mkdir sketch follows the list below). Seven configuration files are involved, all under etc/hadoop in the Hadoop install directory; they can be edited with gedit or any other text editor:
~/opt/hadoop-2.6.0/etc/hadoop/hadoop-env.sh
~/opt/hadoop-2.6.0/etc/hadoop/yarn-env.sh
~/opt/hadoop-2.6.0/etc/hadoop/slaves
~/opt/hadoop-2.6.0/etc/hadoop/core-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/hdfs-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/mapred-site.xml
~/opt/hadoop-2.6.0/etc/hadoop/yarn-site.xml
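A minimal sketch for creating those directories, using the same paths the XML below references:

[spark@S1PA11 ~]$ mkdir -p ~/opt/hadoop-2.6.0/tmp ~/opt/hadoop-2.6.0/dfs/name ~/opt/hadoop-2.6.0/dfs/data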
4. Enter the Hadoop configuration directory
[spark@S1PA11 hadoop-2.6.0]$ cd etc/hadoop/
[spark@S1PA11 hadoop]$ ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-client.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  ssl-server.xml.example
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml             yarn-env.cmd
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      mapred-site.xml.template    yarn-env.sh
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        slaves                      yarn-site.xml
4.1 Configure hadoop-env.sh: set JAVA_HOME. (The JDK path below, jdk1.6.0_37, differs from the 1.6.0_27 build shown in section I; use the path of the JDK actually installed on your machines.)
# The java implementation to use.
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
4.2 Configure yarn-env.sh: set JAVA_HOME
# some Java parameters
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
4.3 Configure the slaves file: add the slave node (a one-line sketch follows)
S1PA222
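One way to write the file, assuming the current directory is etc/hadoop (this overwrites the default localhost entry, which is what we want here):

[spark@S1PA11 hadoop]$ echo S1PA222 > slaves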
4.4 Configure core-site.xml: the core Hadoop settings (the HDFS NameNode address on port 9000, and hadoop.tmp.dir pointing at file:/home/spark/opt/hadoop-2.6.0/tmp)
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://S1PA11:9000</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/spark/opt/hadoop-2.6.0/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>hadoop.proxyuser.spark.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.spark.groups</name>
  <value>*</value>
</property>
</configuration>
4.5 Configure hdfs-site.xml: the HDFS settings (SecondaryNameNode address, NameNode and DataNode directories, replication factor). Note that dfs.replication is set to 3 even though this cluster has only one DataNode, which is why the report in section IV shows under-replicated blocks; 1 would suffice for this setup.
<configuration>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>S1PA11:9001</value>
</property>

  <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/spark/opt/hadoop-2.6.0/dfs/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/spark/opt/hadoop-2.6.0/dfs/data</value>
  </property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

</configuration>
4.6 Configure mapred-site.xml: the MapReduce settings (run on the YARN framework; JobHistory server address and its web UI address)
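Note: a vanilla Hadoop 2.6.0 tarball ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template first:

[spark@S1PA11 hadoop]$ cp mapred-site.xml.template mapred-site.xml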
<configuration>
  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>S1PA11:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>S1PA11:19888</value>
</property>
</configuration>
4.7 Configure yarn-site.xml: the YARN settings (the shuffle auxiliary service and the ResourceManager addresses)
<configuration>
  <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
  </property>
  <property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
   <name>yarn.resourcemanager.address</name>
   <value>S1PA11:8032</value>
  </property>
  <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>S1PA11:8030</value>
  </property>
  <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>S1PA11:8035</value>
  </property>
  <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>S1PA11:8033</value>
  </property>
  <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>S1PA11:8088</value>
  </property>

</configuration>
5. Copy the configured Hadoop directory to the slave machine
[spark@S1PA11 opt]$ scp -r hadoop-2.6.0/ spark@10.126.34.43:~/opt/
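Both machines must also resolve each other's hostnames. If DNS is not available, /etc/hosts entries along these lines are needed on both nodes (IPs taken from the logs later in this post; adjust them to your own network):

10.58.44.47    S1PA11
10.126.45.56   S1PA222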
IV. Verification
1. Format the NameNode (only the master actually needs this; running it on the slave as shown below is harmless but unnecessary):
[spark@S1PA11 opt]$ cd hadoop-2.6.0/
[spark@S1PA11 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs namenode -format
[spark@S1PA222 .ssh]$ cd ~/opt/hadoop-2.6.0
[spark@S1PA222 hadoop-2.6.0]$ ./bin/hdfs  namenode -format
2. Start HDFS (in the jps output below, Master and Worker are Spark daemons that were already running, and ResourceManager is left over from an earlier YARN session; this step itself starts the NameNode and SecondaryNameNode):
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/start-dfs.sh
15/01/05 16:41:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [S1PA11]
S1PA11: starting namenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-namenode-S1PA11.out
S1PA222: starting datanode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-datanode-S1PA222.out
Starting secondary namenodes [S1PA11]
S1PA11: starting secondarynamenode, logging to /home/spark/opt/hadoop-2.6.0/logs/hadoop-spark-secondarynamenode-S1PA11.out
15/01/05 16:41:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
22230 Master
30889 Jps
22478 Worker
30498 NameNode
30733 SecondaryNameNode
19781 ResourceManager
3. Stop HDFS:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/stop-dfs.sh
15/01/05 16:40:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [S1PA11]
S1PA11: stopping namenode
S1PA222: stopping datanode
Stopping secondary namenodes [S1PA11]
S1PA11: stopping secondarynamenode
15/01/05 16:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ jps
30336 Jps
22230 Master
22478 Worker
19781 ResourceManager
4. Start YARN:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-resourcemanager-S1PA11.out
S1PA222: starting nodemanager, logging to /home/spark/opt/hadoop-2.6.0/logs/yarn-spark-nodemanager-S1PA222.out
[spark@S1PA11 hadoop-2.6.0]$ jps
31233 ResourceManager
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
31503 Jps
5. Stop YARN:
[spark@S1PA11 hadoop-2.6.0]$ ./sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
S1PA222: stopping nodemanager
no proxyserver to stop
[spark@S1PA11 hadoop-2.6.0]$ jps
31167 Jps
22230 Master
22478 Worker
30498 NameNode
30733 SecondaryNameNode
6. Check the cluster status:
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hdfs dfsadmin -report
15/01/05 16:44:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 52101857280 (48.52 GB)
Present Capacity: 45749510144 (42.61 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used: 823296 (804 KB)
DFS Used%: 0.00%
Under replicated blocks: 10
Blocks with corrupt replicas: 0
Missing blocks: 0


-------------------------------------------------
Live datanodes (1):


Name: 10.126.45.56:50010 (S1PA222)
Hostname: S1PA209
Decommission Status : Normal
Configured Capacity: 52101857280 (48.52 GB)
DFS Used: 823296 (804 KB)
Non DFS Used: 6352347136 (5.92 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 05 16:44:50 CST 2015
7. View the HDFS web UI: http://10.58.44.47:50070/


8. View the ResourceManager web UI: http://10.58.44.47:8088/


9. Run the wordcount example
9.1 Create a local input directory: [spark@S1PA11 hadoop-2.6.0]$ mkdir input
9.2 Create f1 and f2 under input with some content (a sketch of one way to create them follows the cat output below):
[spark@S1PA11 hadoop-2.6.0]$ cat input/f1
Hello world  bye jj
[spark@S1PA11 hadoop-2.6.0]$ cat input/f2
Hello Hadoop  bye Hadoop
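A sketch of one way to create the two files (any editor works just as well):

[spark@S1PA11 hadoop-2.6.0]$ echo "Hello world  bye jj" > input/f1
[spark@S1PA11 hadoop-2.6.0]$ echo "Hello Hadoop  bye Hadoop" > input/f2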
9.3 Create the /tmp/input directory in HDFS
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs  -mkdir /tmp
15/01/05 16:53:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs  -mkdir /tmp/input
15/01/05 16:54:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.4 Copy f1 and f2 into the HDFS /tmp/input directory
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs  -put input/ /tmp
15/01/05 16:56:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.5 Check that f1 and f2 are present in HDFS
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -ls /tmp/input/
15/01/05 16:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   3 spark supergroup         20 2015-01-04 19:09 /tmp/input/f1
-rw-r--r--   3 spark supergroup         25 2015-01-04 19:09 /tmp/input/f2
9.6 Run the wordcount program
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
15/01/05 17:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/01/05 17:00:09 INFO client.RMProxy: Connecting to ResourceManager at S1PA11/10.58.44.47:8032
15/01/05 17:00:11 INFO input.FileInputFormat: Total input paths to process : 2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: number of splits:2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1420447392452_0001
15/01/05 17:00:12 INFO impl.YarnClientImpl: Submitted application application_1420447392452_0001
15/01/05 17:00:12 INFO mapreduce.Job: The url to track the job: http://S1PA11:8088/proxy/application_1420447392452_0001/
15/01/05 17:00:12 INFO mapreduce.Job: Running job: job_1420447392452_0001
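While the job runs, progress can also be checked from the command line instead of the web UI (a sketch; this is part of the stock YARN CLI):

[spark@S1PA11 hadoop-2.6.0]$ ./bin/yarn application -list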
9.7 View the results (the reducer output file is named part-r-00000):
[spark@S1PA11 hadoop-2.6.0]$ ./bin/hadoop fs -cat /output/part-r-00000
15/01/05 17:06:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
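The original post does not include the captured output. Given the two input files above, wordcount should print (tab-separated, in Text sort order):

Hadoop	2
Hello	2
bye	2
jj	1
world	1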

 
