I. Installing the JDK
$chmod +x jdk-6u38-linux-i586.bin
$./jdk-6u38-linux-i586.bin
After the installation completes, set the Java environment variables. As root, open the system profile:
$ vi /etc/profile
and append the following lines:
export JAVA_HOME=/home/java/jdk1.6.0_38
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Then run source /etc/profile to make the changes take effect.
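On a real system these lines go into /etc/profile; as a sketch of the same mechanism, the snippet below writes them to a temporary file, sources it, and prints the result, so the effect can be checked without touching the real profile. The JDK path is the one used in this guide; substitute your own install location.

```shell
# Write the three export lines to a throwaway file and source it,
# mimicking what `source /etc/profile` does after the edit above.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export JAVA_HOME=/home/java/jdk1.6.0_38
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
. "$profile"
echo "JAVA_HOME=$JAVA_HOME"
rm -f "$profile"
```

If the echo prints an empty path, the export lines were not picked up; re-check the file and re-source it.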
II. Installing and Configuring SSH
Set up passwordless SSH login:
$ sudo apt-get install ssh
$ ssh-keygen
Press Enter at every prompt. When it finishes, a key pair appears in ~/.ssh/: id_rsa and id_rsa.pub (or id_dsa and id_dsa.pub if you pass -t dsa, as in the session below). The two files work as a pair, like a lock and its key. Append the public key to the authorized keys file (authorized_keys does not exist yet; the redirect creates it):
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
After this, you can log in to the local machine over ssh without a password.
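One step the transcript below glosses over: sshd insists that ~/.ssh is mode 700 and authorized_keys mode 600, and wrong permissions are the most common reason passwordless login silently keeps asking for a password. A sketch of the append-and-permissions step, run in a throwaway directory; the key line here is a placeholder, so use your real ~/.ssh/id_rsa.pub.

```shell
# Demonstrate the authorized_keys setup with the permissions sshd requires.
home=$(mktemp -d)
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
# Placeholder public key; on a real machine this file comes from ssh-keygen.
echo "ssh-rsa AAAA...placeholder... hadoop@linux-6fp1" > "$home/.ssh/id_rsa.pub"
cat "$home/.ssh/id_rsa.pub" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"
```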
linux-6fp1:/ # useradd hadoop
linux-6fp1:/home # mkdir hadoop
hadoop@linux-6fp1:~> mkdir .ssh
hadoop@linux-6fp1:~> ls
hadoop@linux-6fp1:~> cd .ssh
hadoop@linux-6fp1:~/.ssh> ssh-keygen -t dsa -P '' -f id_dsa
Generating public/private dsa key pair.
Your identification has been saved in id_dsa.
Your public key has been saved in id_dsa.pub.
The key fingerprint is:
32:32:2a:39:a0:8f:c5:6e:f7:59:9b:b9:1a:ef:71:a6 hadoop@linux-6fp1
The key's randomart image is:
+--[ DSA 1024]----+
| |
| |
| |
| |
|. o o S |
|oo . o o |
|= + . o o |
| B. . = O |
|..o. .+oE. |
+-----------------+
hadoop@linux-6fp1:~/.ssh> ls
id_dsa id_dsa.pub
hadoop@linux-6fp1:~/.ssh> cat id_dsa.pub >>authorized_keys
hadoop@linux-6fp1:~/.ssh> ll
total 12
-rw-r--r-- 1 hadoop users 607 2012-12-25 14:39 authorized_keys
-rw------- 1 hadoop users 668 2012-12-25 14:38 id_dsa
-rw-r--r-- 1 hadoop users 607 2012-12-25 14:38 id_dsa.pub
hadoop@linux-6fp1:~/.ssh> ssh -version
OpenSSH_5.1p1, OpenSSL 0.9.8h 28 May 2008
Bad escape character 'rsion'.
(The correct flag is ssh -V; "-version" is parsed as -v -e rsion, so -v prints the version banner and -e then rejects "rsion" as an escape character.)
hadoop@linux-6fp1:~/.ssh> ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is a4:93:5e:af:9f:52:41:ce:c8:44:8f:46:60:5f:68:95.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Have a lot of fun...
III. Installing and Configuring Hadoop
1. Download Hadoop
http://labs.mop.com/apache-mirror/hadoop/common/hadoop-1.0.0/
2. Unpack Hadoop
tar xzvf hadoop-0.20.2.tar.gz
(Use the filename of the version you actually downloaded; the shell sessions below use 0.20.2.)
3. Add Hadoop's bin directory to the PATH environment variable, then edit conf/hadoop-env.sh so it points at the JDK:
export JAVA_HOME=/home/java/jdk1.6.0_38
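Putting Hadoop's bin directory on the PATH takes two more lines in /etc/profile (or the hadoop user's ~/.profile). A sketch, assuming Hadoop was unpacked to /home/hadoop/hadoop-0.20.2 as in the sessions below; adjust the path to wherever your tarball landed.

```shell
# Prepend Hadoop's bin directory to PATH so the start/stop scripts and the
# `hadoop` command resolve without ./ prefixes.
HADOOP_HOME=/home/hadoop/hadoop-0.20.2
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
echo "$PATH" | cut -d: -f1
```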
vi conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
vi conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
vi conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
vi conf/masters
Change localhost to the IP address 192.168.1.102.
vi conf/slaves
Change localhost to 192.168.1.102.
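The masters and slaves files can also be generated from a single variable instead of being edited by hand, which keeps the IP consistent across both. A sketch that writes into a temporary directory; point CONF at your real conf/ directory to apply it. The IP is the one used throughout this guide.

```shell
# Generate conf/masters and conf/slaves from one variable so the node
# address is defined in exactly one place.
CONF=$(mktemp -d)
NODE_IP=192.168.1.102
echo "$NODE_IP" > "$CONF/masters"
echo "$NODE_IP" > "$CONF/slaves"
cat "$CONF/masters" "$CONF/slaves"
```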
Starting Hadoop
1. Format the file system:
hadoop namenode -format
2. Start Hadoop
Start or stop all services: start-all.sh / stop-all.sh
Start or stop HDFS only: start-dfs.sh / stop-dfs.sh
Start or stop MapReduce only: start-mapred.sh / stop-mapred.sh
3. Check the running processes with jps; make sure NameNode, DataNode, JobTracker and TaskTracker are all present.
hadoop@linux-w99d:~/hadoop-0.20.2/bin> ./hadoop namenode -format
12/12/30 14:26:41 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.1.102
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
12/12/30 14:26:41 INFO namenode.FSNamesystem: fsOwner=hadoop,users,dialout,video
12/12/30 14:26:41 INFO namenode.FSNamesystem: supergroup=supergroup
12/12/30 14:26:41 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/12/30 14:26:41 INFO common.Storage: Image file of size 96 saved in 0 seconds.
12/12/30 14:26:42 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
12/12/30 14:26:42 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.1.102
************************************************************/
hadoop@linux-w99d:~/hadoop-0.20.2/bin> ./start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-linux-w99d.out
192.168.1.102: datanode running as process 5354. Stop it first.
192.168.1.102: secondarynamenode running as process 5463. Stop it first.
jobtracker running as process 5530. Stop it first.
192.168.1.102: tasktracker running as process 5644. Stop it first.
(The "Stop it first" messages mean those four daemons were already running from an earlier launch; only the NameNode was started fresh here.)
hadoop@linux-w99d:~/hadoop-0.20.2/bin> jps
6229 NameNode
5644 TaskTracker
6536 Jps
5463 SecondaryNameNode
5530 JobTracker
5354 DataNode
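The jps check can be done mechanically rather than by eye. A sketch that verifies every expected daemon against a captured listing; the listing below is the one shown above, and on a live node you would replace it with "$(jps)".

```shell
# Verify all five pseudo-distributed daemons appear in a jps listing.
listing='6229 NameNode
5644 TaskTracker
6536 Jps
5463 SecondaryNameNode
5530 JobTracker
5354 DataNode'
missing=0
for daemon in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  # -w matches whole words, so SecondaryNameNode cannot satisfy the
  # NameNode check by substring accident.
  echo "$listing" | grep -qw "$daemon" || { echo "missing: $daemon"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons running"
```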
hadoop@linux-w99d:~/hadoop-0.20.2/bin> ./hadoop dfsadmin -report
Configured Capacity: 138087571456 (128.6 GB)
Present Capacity: 130494750735 (121.53 GB)
DFS Remaining: 130494726144 (121.53 GB)
DFS Used: 24591 (24.01 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 192.168.1.102:50010
Decommission Status : Normal
Configured Capacity: 138087571456 (128.6 GB)
DFS Used: 24591 (24.01 KB)
Non DFS Used: 7592820721 (7.07 GB)
DFS Remaining: 130494726144(121.53 GB)
DFS Used%: 0%
DFS Remaining%: 94.5%
Last contact: Sun Dec 30 14:28:08 CST 2012
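A single figure such as DFS Remaining% can be pulled out of the report with standard text tools, which is handy for a scripted disk-space check. A sketch against the captured report line; on a live cluster, pipe `hadoop dfsadmin -report` in instead.

```shell
# Extract the remaining-capacity percentage from a dfsadmin report line.
# The sample line is copied from the report above.
line='DFS Remaining%: 94.5%'
pct=$(echo "$line" | awk -F'[:%]' '/DFS Remaining%/ {gsub(/ /,"",$3); print $3}')
echo "remaining: ${pct}%"
```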