
【Hadoop Environment Setup: SSH Passwordless Login, Advanced Edition】


Imagine this scenario: we run a Hadoop cluster on 1,000 cheap PCs. Hadoop is billed as highly available and low cost, but nobody can guarantee that cheap machines won't fail; there is no such thing as a computer that never breaks. So today a machine dies, and tomorrow a new node has to be added for capacity. The catch is that the public keys used for SSH passwordless login are not shared between machines: when a node joins, its newly generated id_rsa.pub is not in any existing machine's authorized_keys file, so every machine refuses connections from the newcomer, which has never "reported in". The administrator is now stuck appending the new node's id_rsa.pub to authorized_keys on every single PC. An administrator's nightmare.


Passwordless SSH login uses a key pair: the public key is handed out, the private key stays private. An authorized_keys file can hold the public keys of many hosts, while a private key file holds only the local host's own key. As far as ssh is concerned, both live in the .ssh directory: the public key goes into .ssh/authorized_keys on the remote host you want to log in to, and the private key stays in the local .ssh directory. Because one authorized_keys file can store many public keys but private keys must never be shared, what we share over NFS is the public-key file authorized_keys, and only that file.
Our approach, then, is this: each node keeps its own independent, unshared .ssh directory, but .ssh/authorized_keys on every node is a soft link pointing at a single file in an NFS-shared directory created ahead of time. Whenever any node appends a public key to .ssh/authorized_keys, every other node sees it immediately.
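To make the layout concrete, here is a minimal node-side sketch (my condensed summary of the steps detailed below, assuming the NFS export is already mounted at /home/exp as in this article):

# run as the hadoop user on each node
mkdir -p ~/.ssh && chmod 700 ~/.ssh                      # local, unshared .ssh
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                 # this node's own key pair
ln -s /home/exp/authorized_keys ~/.ssh/authorized_keys   # soft link to the shared file (absolute paths)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys          # the new key is instantly visible to all nodes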

 

 

Pro tip: changing the hostname
[root@bogon ~]# vi /etc/sysconfig/network
network          networking/      network-scripts/
[root@bogon ~]# vi /etc/sysconfig/network

#gaojingsong
NETWORKING=yes
HOSTNAME=node3
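On CentOS 6 this file only takes effect at the next boot; to rename the running system immediately as well, you can additionally run the standard hostname command (my addition, not in the original transcript):

hostname node3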

1. Configure the NFS share
[root@node1 ~]# ls
anaconda-ks.cfg  install.log         portmap-4.0-7.i386.rpm  rlwrap-0.37
Desktop          install.log.syslog  Public                  rlwrap-0.37.tar.gz
Documents        Music               readline-6.2            Templates
Downloads        Pictures            readline-6.2.tar.gz     Videos
[root@node1 ~]# rpm -ivh portmap-4.0-7.i386.rpm
Preparing...                ########################################### [100%]
   1:portmap                ########################################### [100%]
[root@node1 ~]# service portmap start
Starting portmapper: /bin/bash: line 1:  2489 Segmentation fault      (core dumped) portmap
                                                           [FAILED]
[root@node1 ~]#  rpm -q nfs-utils portmap
nfs-utils-1.2.3-39.el6.i686
portmap-4.0-7.i386
This looks like a version problem; rather than dig into it, just install via yum.
[root@node1 ~]# yum install portmap
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.sina.cn
 * updates: centosx4.centos.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package portmap.i386 0:4.0-8 will be obsoleted
---> Package rpcbind.i686 0:0.2.0-11.el6 will be updated
---> Package rpcbind.i686 0:0.2.0-11.el6_7 will be obsoleting
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================
 Package             Arch             Version            Repository   Size
=======================================================================
Installing:
 rpcbind             i686             0.2.0-11.el6_7     updates      51 k
     replacing  portmap.i386 4.0-8

Transaction Summary
=======================================================================
Install       1 Package(s)

Total download size: 51 k
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : rpcbind-0.2.0-11.el6_7.i686                                          1/3
  Erasing    : portmap-4.0-8.i386                                                    2/3
Non-fatal POSTUN scriptlet failure in rpm package portmap
  Cleanup    : rpcbind-0.2.0-11.el6.i686                                              3/3
error reading information on service portmap: No such file or directory
warning: %postun(portmap-4.0-8.i386) scriptlet failed, exit status 1
  Verifying  : rpcbind-0.2.0-11.el6_7.i686                                            1/3
  Verifying  : portmap-4.0-8.i386                                                     2/3
  Verifying  : rpcbind-0.2.0-11.el6.i686                                              3/3

Installed:
  rpcbind.i686 0:0.2.0-11.el6_7                                                                                                        

Replaced:
  portmap.i386 0:4.0-8                                                                                                                 

Complete!
[root@node1 ~]# yum install nfs-utils
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirror.neu.edu.cn
 * extras: mirror.neu.edu.cn
 * updates: centosv4.centos.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.i686 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.i686 1:1.2.3-64.el6 will be an update
--> Processing Dependency: python-argparse for package: 1:nfs-utils-1.2.3-64.el6.i686
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Running Transaction
  Installing : python-argparse-1.2.1-2.1.el6.noarch                     1/3
  Updating   : 1:nfs-utils-1.2.3-64.el6.i686                            2/3
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.i686                            3/3
  Verifying  : 1:nfs-utils-1.2.3-64.el6.i686                            1/3
  Verifying  : python-argparse-1.2.1-2.1.el6.noarch                     2/3
  Verifying  : 1:nfs-utils-1.2.3-39.el6.i686                            3/3

Dependency Installed:
  python-argparse.noarch 0:1.2.1-2.1.el6                                                                                               

Updated:
  nfs-utils.i686 1:1.2.3-64.el6                                                                                                        

Complete!

 

2. Edit the configuration file
[root@node1 ~]#  vim /etc/exports
/home/hadoop/.ssh   192.168.*(rw,wdelay,root_squash,no_subtree_check,fsid=0)
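A brief gloss of the options in that export line (my summary, not part of the original): rw grants read-write access; wdelay lets the server batch nearby writes before committing them; root_squash maps a client's root user to an unprivileged account; no_subtree_check disables subtree checking; fsid=0 marks this export as the NFSv4 pseudo-filesystem root; and the 192.168.* prefix restricts access to hosts on that network.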
[root@node1 ~]# service rpcbind start
[root@node1 ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@node1 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   3510  status
    100021    1   tcp  55646  nlockmgr
    100021    3   tcp  55646  nlockmgr
    100021    4   tcp  55646  nlockmgr


3. Create a user and generate the key pair as that user
[root@node1 ~]# useradd hadoop
[root@node1 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]# su - hadoop
[hadoop@node1 ~]$ ssh-keygen -t rsa
[hadoop@node1 ~]$ cp .ssh/id_rsa.pub .ssh/authorized_keys
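If you would rather script this step than answer ssh-keygen's prompts, the standard OpenSSH flags below produce the same result non-interactively (equivalent to the interactive run above):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # empty passphrase, default key path
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys  # same as the cp above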
[hadoop@node1 ~]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.


4. Check that the NFS server is exporting the shared directory
[root@node1 ~]#  service nfs restart
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Shutting down RPC idmapd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@node1 ~]# exportfs
/home/hadoop/.ssh   192.168.*
If you modify the exports file again later, there is no need to restart NFS; reloading it is enough:
[root@node1 ~]# exportfs -rv
Check the result of the reload:
[root@node1 ~]# exportfs -v
[root@node1 home]# chmod -R  777 hadoop/
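One caution about that chmod (my note, not the author's): with OpenSSH's default StrictModes yes, sshd refuses public-key authentication when the home directory, .ssh, or authorized_keys is writable by group or others, and the client falls back to a password prompt. If passwordless login fails despite a correct authorized_keys, try tightening the modes instead, for example:

chmod 755 /home/hadoop                       # no group/other write anywhere on the path
chmod 755 /home/hadoop/.ssh                  # still world-readable for the NFS clients
chmod 644 /home/hadoop/.ssh/authorized_keys  # private keys keep their default 600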

 

-------------------------------------------------------------------------------------------------------


1. Prepare the client for the mount and the soft link

Before mounting, make sure the software on both ends matches:

[root@node3 ~]#  yum install portmap

[root@node3 ~]#  yum install nfs-utils
[root@node3 ~]# chkconfig rpcbind on
[root@node3 ~]# chkconfig nfs on
[root@node3 ~]# service rpcbind start

[root@node3 ~]# useradd hadoop
[root@node3 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node3 ~]# showmount -e 192.168.1.110
Export list for 192.168.1.110:
/home/hadoop/.ssh 192.168.*
Pro tip: to have the client mount this automatically at boot, add an entry to /etc/fstab, as sketched below.
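A minimal sketch of such an fstab entry, using the server address and mount point from this article:

192.168.1.110:/home/hadoop/.ssh  /home/exp  nfs  defaults  0 0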


2. Create the directory that will serve as the mount point
[root@node3 ~]# mkdir -p /home/exp
Make sure the permissions on /home/exp allow the mount to be used:
[root@node3 /]# ll /home
drwxr-xr-x 2 root root 4096 Aug 11 01:10 exp

3. Mount the NFS server's exported directory onto the local /home/exp
[root@node3 ~]# mount  192.168.1.110:/home/hadoop/.ssh /home/exp
[root@node3 ~]# cd /home/exp/
[root@node3 exp]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@node3 exp]# cat authorized_keys
ssh-rsa AAAABjGPb2zQ== hadoop@node1
[root@node3 ~]# ll /home
total 8
drwxr-xr-x. 2 root   root   4096 Mar 12 04:18 exp
drwx------. 4 hadoop hadoop 4096 Mar 12 04:22 hadoop
[root@node3 ~]# su - hadoop
Now soft-link the authorized_keys in the mounted directory into the hadoop user's .ssh directory.
(Note: because the link and its target are not in the same directory, both must be given as absolute paths;
otherwise you will hit a "Too many levels of symbolic links" error.)
[hadoop@node3 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
[hadoop@node3 ~]$ ssh-keygen -t rsa
[hadoop@node3 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

 --------------------------------------------------------------------------------------------------------------------

Verify the passwordless login

 [hadoop@node3 .ssh]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.110's password:
Last login: Sat Mar 12 04:35:39 2016 from 192.168.1.110
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.
[hadoop@node3 .ssh]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.104's password:
[hadoop@node3 ~]$ exit
logout
Connection to 192.168.1.104 closed.
[hadoop@node3 .ssh]$
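Once every node is enrolled, a quick sweep such as the following checks them all in one go (my sketch; -o BatchMode=yes makes ssh fail instead of prompting, so any node that would still ask for a password is flagged immediately):

for host in 192.168.1.110 192.168.1.104 192.168.1.103; do
  ssh -o BatchMode=yes "$host" hostname && echo "$host OK" || echo "$host FAILED"
done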

Testing from the master node

[hadoop@node1 ~]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.104's password:
Last login: Sat Mar 12 10:16:51 2016 from 192.168.1.104
[hadoop@node3 ~]$ exit
logout
Connection to 192.168.1.104 closed.
[hadoop@node1 ~]$ ssh 192.168.1.110
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.110's password:
Last login: Sat Mar 12 10:27:44 2016 from 192.168.1.104
[hadoop@node1 ~]$ exit

Adding a new node

[root@node2 ~]# yum install nfs-utils
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.i686 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.i686 1:1.2.3-64.el6 will be an update
--> Processing Dependency: python-argparse for package: 1:nfs-utils-1.2.3-64.el6.i686
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Total download size: 380 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): nfs-utils-1.2.3-64.el6.i686.rpm                   | 333 kB     00:00
(2/2): python-argparse-1.2.1-2.1.el6.noarch.rpm          |  48 kB     00:00
-----------------------------------------------------------------------
Total                                           51 kB/s | 380 kB     00:07
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
 Package: centos-release-6-5.el6.centos.11.1.i686 (@anaconda-CentOS-201311271240.i386/6.5)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : python-argparse-1.2.1-2.1.el6.noarch                     1/3
  Updating   : 1:nfs-utils-1.2.3-64.el6.i686                            2/3
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.i686                            3/3
  Verifying  : 1:nfs-utils-1.2.3-64.el6.i686                            1/3
  Verifying  : python-argparse-1.2.1-2.1.el6.noarch                     2/3
  Verifying  : 1:nfs-utils-1.2.3-39.el6.i686                            3/3

Dependency Installed:
  python-argparse.noarch 0:1.2.1-2.1.el6

Updated:
  nfs-utils.i686 1:1.2.3-64.el6

Complete!

[root@node2 ~]# chkconfig rpcbind on
[root@node2 ~]# chkconfig nfs on
[root@node2 ~]# service rpcbind start
[root@node2 ~]# showmount -e 192.168.1.110
Export list for 192.168.1.110:
/home/hadoop/.ssh 192.168.*
[root@node2 ~]# mkdir -p /home/ex
[root@node2 ~]# mkdir -p /home/exp
[root@node2 ~]# rm -rf /home/ex
[root@node2 ~]# mount  192.168.1.110:/home/hadoop/.ssh /home/exp
[root@node2 ~]# cd /home/exp/
[root@node2 exp]# ls
1.txt  authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@node2 exp]# cat authorized_keys
ssh-rsa AAAb2zQ== hadoop@node1
ssh-rsa AAAAB3NFTZaw== hadoop@node3
[root@node2 exp]# adduser hadoop
[root@node2 exp]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node2 exp]# su - hadoop
[hadoop@node2 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
ln: target `/home/hadoop/.ssh/' is not a directory: No such file or directory
[hadoop@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
4a:02:e1:c8:45:17:bc:2e:13:aa:88:c9:3b:ab:bc:eb hadoop@node2
The key's randomart image is:

[hadoop@node2 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
[hadoop@node2 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@node2 ~]$ cat .ssh/authorized_keys
ssh-rsa QAtQnb2zQ== hadoop@node1
ssh-rsa AAAAN5wglnFTZaw== hadoop@node3
ssh-rsa AAAAB3N6iH6BW6p7wFCufQ== hadoop@node2
[hadoop@node2 ~]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.110's password:
Last login: Sat Mar 12 10:29:01 2016 from 192.168.1.110
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.
[hadoop@node2 ~]$ ssh 192.168.1.103
The authenticity of host '192.168.1.103 (192.168.1.103)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.103' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.103] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.103's password:
Last login: Tue Mar 15 04:48:13 2016 from 192.168.1.104
[hadoop@node2 ~]$ exit
logout
Connection to 192.168.1.103 closed.
[hadoop@node2 ~]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
hadoop@192.168.1.104's password:
Last login: Sat Mar 12 10:17:38 2016 from 192.168.1.110
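To wrap up, the node2 session above condenses into a small onboarding script for any future node (my sketch, reusing this article's addresses and paths; run it as root on the new node):

#!/bin/bash
# Sketch: enrol a new node into the NFS-shared authorized_keys scheme.
set -e
NFS_SERVER=192.168.1.110                  # NFS server exporting /home/hadoop/.ssh
yum -y install nfs-utils
chkconfig rpcbind on
chkconfig nfs on
service rpcbind start
mkdir -p /home/exp
mount "$NFS_SERVER:/home/hadoop/.ssh" /home/exp
id hadoop &>/dev/null || useradd hadoop   # set a password separately if needed
su - hadoop -c '
  mkdir -p ~/.ssh && chmod 700 ~/.ssh                      # .ssh must exist before the link
  ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                 # non-interactive key generation
  ln -s /home/exp/authorized_keys ~/.ssh/authorized_keys   # absolute paths!
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys          # instantly visible on every node
'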

 

Note: NFS commands and behavior can differ slightly between versions; for reference, see the companion article 【Linux网络文件系统--Network File System搭建】.

