(Complete, personally tested) Docker Hadoop 2.7.1


 

 

0. Preparation

Pull the centos base image.
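If the image is not present locally yet, pull it first (a standard Docker command; docker.io/centos matches the image name in the listing below):

docker pull docker.io/centos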

[root@bogon soft]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
docker.io/centos    latest              d123f4e55e12        7 days ago          196.6 MB

 

1. Create centos-ssh-root

1.1 Create the centos-ssh-root Dockerfile

Note:

We also install vim here; I prefer vim over vi.

We install which as well, because the hadoop format step later needs it.

 

# Use an existing OS image as the base
FROM docker.io/centos

# Image author
MAINTAINER baoyou curiousby

# Install the openssh-server and sudo packages, and set sshd's UsePAM option to no
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
# Install openssh-clients
RUN yum  install -y openssh-clients

RUN yum install -y vim
RUN yum install -y which

# Set the root password to root and add root to sudoers
RUN echo "root:root" | chpasswd
RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
# These two lines are special: they are required on CentOS 6, otherwise sshd in the resulting container refuses logins
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key

# Start the sshd service and expose port 22
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

1.2 Build

docker build -t baoyou/centos-ssh-root .

 

1.3 Build log

[root@bogon soft]# mkdir centos-ssh-root
[root@bogon soft]# ls
centos-ssh-root
[root@bogon soft]# cd centos-ssh-root/
[root@bogon centos-ssh-root]# ls
[root@bogon centos-ssh-root]# vim Dockerfile
[root@bogon centos-ssh-root]# docker build -t baoyou/centos-ssh-root .
Sending build context to Docker daemon  2.56 kB
Step 1 : FROM docker.io/centos
 ---> d123f4e55e12
Step 2 : MAINTAINER baoyou curiousby
 ---> Running in 4935d9a8417c
 ---> a526aade20a6
Removing intermediate container 4935d9a8417c
Step 3 : RUN yum install -y openssh-server sudo
 ---> Running in f0c0f9d82f34
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: mirrors.btte.net
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package openssh-server.x86_64 0:7.4p1-13.el7_4 will be installed
--> Processing Dependency: openssh = 7.4p1-13.el7_4 for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: fipscheck-lib(x86-64) >= 1.3.0 for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: libwrap.so.0()(64bit) for package: openssh-server-7.4p1-13.el7_4.x86_64
--> Processing Dependency: libfipscheck.so.1()(64bit) for package: openssh-server-7.4p1-13.el7_4.x86_64
---> Package sudo.x86_64 0:1.8.19p2-11.el7_4 will be installed
--> Running transaction check
---> Package fipscheck-lib.x86_64 0:1.4.1-6.el7 will be installed
--> Processing Dependency: /usr/bin/fipscheck for package: fipscheck-lib-1.4.1-6.el7.x86_64
---> Package openssh.x86_64 0:7.4p1-13.el7_4 will be installed
---> Package tcp_wrappers-libs.x86_64 0:7.6-77.el7 will be installed
--> Running transaction check
---> Package fipscheck.x86_64 0:1.4.1-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                Arch        Version                  Repository    Size
================================================================================
Installing:
 openssh-server         x86_64      7.4p1-13.el7_4           updates      458 k
 sudo                   x86_64      1.8.19p2-11.el7_4        updates      1.1 M
Installing for dependencies:
 fipscheck              x86_64      1.4.1-6.el7              base          21 k
 fipscheck-lib          x86_64      1.4.1-6.el7              base          11 k
 openssh                x86_64      7.4p1-13.el7_4           updates      509 k
 tcp_wrappers-libs      x86_64      7.6-77.el7               base          66 k

Transaction Summary
================================================================================
Install  2 Packages (+4 Dependent packages)

Total download size: 2.1 M
Installed size: 6.9 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/fipscheck-1.4.1-6.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for fipscheck-1.4.1-6.el7.x86_64.rpm is not installed
Public key for openssh-7.4p1-13.el7_4.x86_64.rpm is not installed
--------------------------------------------------------------------------------
Total                                              404 kB/s | 2.1 MB  00:05     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-4.1708.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : fipscheck-1.4.1-6.el7.x86_64                                 1/6 
  Installing : fipscheck-lib-1.4.1-6.el7.x86_64                             2/6 
  Installing : openssh-7.4p1-13.el7_4.x86_64                                3/6 
  Installing : tcp_wrappers-libs-7.6-77.el7.x86_64                          4/6 
  Installing : openssh-server-7.4p1-13.el7_4.x86_64                         5/6 
  Installing : sudo-1.8.19p2-11.el7_4.x86_64                                6/6 
  Verifying  : fipscheck-lib-1.4.1-6.el7.x86_64                             1/6 
  Verifying  : tcp_wrappers-libs-7.6-77.el7.x86_64                          2/6 
  Verifying  : fipscheck-1.4.1-6.el7.x86_64                                 3/6 
  Verifying  : openssh-7.4p1-13.el7_4.x86_64                                4/6 
  Verifying  : openssh-server-7.4p1-13.el7_4.x86_64                         5/6 
  Verifying  : sudo-1.8.19p2-11.el7_4.x86_64                                6/6 

Installed:
  openssh-server.x86_64 0:7.4p1-13.el7_4     sudo.x86_64 0:1.8.19p2-11.el7_4    

Dependency Installed:
  fipscheck.x86_64 0:1.4.1-6.el7      fipscheck-lib.x86_64 0:1.4.1-6.el7       
  openssh.x86_64 0:7.4p1-13.el7_4     tcp_wrappers-libs.x86_64 0:7.6-77.el7    

Complete!
 ---> b9b2d9d28e91
Removing intermediate container f0c0f9d82f34
Step 4 : RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
 ---> Running in da4de0cafd82
 ---> 4af5db8b4cef
Removing intermediate container da4de0cafd82
Step 5 : RUN yum  install -y openssh-clients
 ---> Running in 68a2fdd224d1
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Resolving Dependencies
--> Running transaction check
---> Package openssh-clients.x86_64 0:7.4p1-13.el7_4 will be installed
--> Processing Dependency: libedit.so.0()(64bit) for package: openssh-clients-7.4p1-13.el7_4.x86_64
--> Running transaction check
---> Package libedit.x86_64 0:3.0-12.20121213cvs.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package             Arch       Version                       Repository   Size
================================================================================
Installing:
 openssh-clients     x86_64     7.4p1-13.el7_4                updates     654 k
Installing for dependencies:
 libedit             x86_64     3.0-12.20121213cvs.el7        base         92 k

Transaction Summary
================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 746 k
Installed size: 2.8 M
Downloading packages:
--------------------------------------------------------------------------------
Total                                              384 kB/s | 746 kB  00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libedit-3.0-12.20121213cvs.el7.x86_64                        1/2 
  Installing : openssh-clients-7.4p1-13.el7_4.x86_64                        2/2 
  Verifying  : libedit-3.0-12.20121213cvs.el7.x86_64                        1/2 
  Verifying  : openssh-clients-7.4p1-13.el7_4.x86_64                        2/2 

Installed:
  openssh-clients.x86_64 0:7.4p1-13.el7_4                                       

Dependency Installed:
  libedit.x86_64 0:3.0-12.20121213cvs.el7                                       

Complete!
 ---> 5a68ae327b7b
Removing intermediate container 68a2fdd224d1
Step 6 : RUN echo "root:root" | chpasswd
 ---> Running in 2ae8f5835434
 ---> e5b5e9580789
Removing intermediate container 2ae8f5835434
Step 7 : RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
 ---> Running in b415558a8bc6
 ---> ca06f821d868
Removing intermediate container b415558a8bc6
Step 8 : RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
 ---> Running in 7255f91f09b9
Enter passphrase (empty for no passphrase): Enter same passphrase again: Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
SHA256:uAAlx5f2WnMrlIQy3JPw9Zz/9HnD7MVvblLFaIZzKQE root@4935d9a8417c
The key's randomart image is:
+---[DSA 1024]----+
|  .o+o +. E.     |
|   +=.O..o ..    |
|  .  =.+ .+  o + |
|   .   .* ..+ * o|
|    . .+So ..*. .|
|     .... .  oooo|
|      .  .    .*=|
|              o B|
|               *o|
+----[SHA256]-----+
 ---> 36317be611b0
Removing intermediate container 7255f91f09b9
Step 9 : RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
 ---> Running in 1b3495d71562
Enter passphrase (empty for no passphrase): Enter same passphrase again: Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
SHA256:QksGOHmxudCZg1cIDHGJvhnTNhnULXvtdFKbKNoDh9w root@4935d9a8417c
The key's randomart image is:
+---[RSA 2048]----+
|o=+*+oo          |
|..*+oX .   .     |
|. +o% O . o o    |
| + B @ E = +     |
|  * o O S o      |
| o   . + .       |
|        .        |
|                 |
|                 |
+----[SHA256]-----+
 ---> d53cd418ff85
Removing intermediate container 1b3495d71562
Step 10 : RUN mkdir /var/run/sshd
 ---> Running in d3e71c08fd28
 ---> 995e7295beea
Removing intermediate container d3e71c08fd28
Step 11 : EXPOSE 22
 ---> Running in ff7e2cc7c67f
 ---> 3dfc9a6efd6a
Removing intermediate container ff7e2cc7c67f
Step 12 : CMD /usr/sbin/sshd -D
 ---> Running in 81478a7d9251
 ---> 45ef8b6b8254
Removing intermediate container 81478a7d9251
Successfully built 45ef8b6b8254
[root@bogon centos-ssh-root]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED              SIZE
baoyou/centos-ssh-root   latest              45ef8b6b8254        About a minute ago   303.5 MB
docker.io/centos         latest              d123f4e55e12        7 days ago           196.6 MB
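Optionally, sanity-check the image before moving on: start a throwaway container from it and log in over SSH with the root/root credentials set in the Dockerfile. The container name ssh-test and host port 10022 below are arbitrary choices for this check, not part of the original steps.

docker run -d --name ssh-test -p 10022:22 baoyou/centos-ssh-root
ssh -p 10022 root@localhost     # password: root
docker rm -f ssh-test           # remove the test container afterwards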

 

2. Create centos-ssh-root-java

2.1 Create the centos-ssh-root-java Dockerfile

FROM baoyou/centos-ssh-root
ADD jdk-7u79-linux-x64.tar.gz  /usr/local/
RUN mv /usr/local/jdk1.7.0_79 /usr/local/jdk1.7
ENV JAVA_HOME /usr/local/jdk1.7
ENV PATH $JAVA_HOME/bin:$PATH

 

2.2 Build

 

docker build -t baoyou/centos-ssh-root-java .
 

 

2.3 Build log

 

[root@bogon centos-ssh-root-java]# vim Dockerfile
[root@bogon centos-ssh-root-java]# docker build -t baoyou/centos-ssh-root-java .
Sending build context to Docker daemon 153.5 MB
Step 1 : FROM baoyou/centos-ssh-root
 ---> 45ef8b6b8254
Step 2 : ADD jdk-7u79-linux-x64.tar.gz /usr/local/
 ---> 82d01ceb0da3
Removing intermediate container 32af4ac32299
Step 3 : RUN mv /usr/local/jdk1.7.0_79 /usr/local/jdk1.7
 ---> Running in 2209bd55cef1
 ---> b44bad4a8dcb
Removing intermediate container 2209bd55cef1
Step 4 : ENV JAVA_HOME /usr/local/jdk1.7
 ---> Running in 6f938ad9bfda
 ---> 71e298d66485
Removing intermediate container 6f938ad9bfda
Step 5 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in e89392b2b788
 ---> 0213bbd4d724
Removing intermediate container e89392b2b788
Successfully built 0213bbd4d724
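As an optional check (not part of the original steps), run the JDK directly from the new image; the explicit command overrides the inherited sshd CMD, and it should report java version 1.7.0_79:

docker run --rm baoyou/centos-ssh-root-java java -version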
 

 

3. Create centos-ssh-root-java-hadoop

3.1 Create the centos-ssh-root-java-hadoop Dockerfile

 

FROM baoyou/centos-ssh-root-java
ADD hadoop-2.7.1.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.7.1 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH
 

 

3.2 Build

 

docker build -t baoyou/centos-ssh-root-java-hadoop .
 

 

3.3 Build log

 

[root@bogon centos-ssh-root-java-hadoop]# docker build -t baoyou/centos-ssh-root-java-hadoop .
Sending build context to Docker daemon 547.1 MB
Step 1 : FROM baoyou/centos-ssh-root-java
 ---> 652fc71facfd
Step 2 : ADD hadoop-2.7.1.tar.gz /usr/local
 ---> 55951fc3fdc1
Removing intermediate container f0912988a29b
Step 3 : RUN mv /usr/local/hadoop-2.7.1 /usr/local/hadoop
 ---> Running in d8afac1e59d9
 ---> 56d463beea25
Removing intermediate container d8afac1e59d9
Step 4 : ENV HADOOP_HOME /usr/local/hadoop
 ---> Running in 27ed5fad8981
 ---> 526d79c016fc
Removing intermediate container 27ed5fad8981
Step 5 : ENV PATH $HADOOP_HOME/bin:$PATH
 ---> Running in c238304b499c
 ---> 284dcc575add
Removing intermediate container c238304b499c
Successfully built 284dcc575add
 

 

 3.4 docker images 

 

[root@bogon centos-ssh-root-java-hadoop]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
baoyou/centos-ssh-root-java-hadoop   latest              966719de6484        7 seconds ago       1.385 GB
baoyou/centos-ssh-root-java          latest              0213bbd4d724        42 minutes ago      916.1 MB
baoyou/centos-ssh-root               latest              45ef8b6b8254        46 minutes ago      303.5 MB
docker.io/centos                     latest              d123f4e55e12        7 days ago          196.6 MB
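Optionally, confirm that Hadoop is unpacked and on the PATH inside the image; this one-off container should print Hadoop 2.7.1:

docker run --rm baoyou/centos-ssh-root-java-hadoop hadoop version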
 

 

3.5 Start the hadoop containers

 

docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 baoyou/centos-ssh-root-java-hadoop

docker run --name hadoop1 --hostname hadoop1 -d -P  baoyou/centos-ssh-root-java-hadoop

docker run --name hadoop2 --hostname hadoop2 -d -P  baoyou/centos-ssh-root-java-hadoop
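Because -P publishes port 22 to a random host port, a quick way to see which host ports were mapped for each container is:

docker port hadoop0
docker port hadoop1
docker port hadoop2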
 

 

 3.6 docker ps

 

[root@bogon centos-ssh-root-java-hadoop]# docker ps
CONTAINER ID        IMAGE                                COMMAND               CREATED             STATUS              PORTS                                                                     NAMES
8f73f52e8cc1        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   7 seconds ago       Up 6 seconds        0.0.0.0:32770->22/tcp                                                     hadoop2
4d553dbf7fbc        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   15 seconds ago      Up 14 seconds       0.0.0.0:32769->22/tcp                                                     hadoop1
134a18b42c1a        baoyou/centos-ssh-root-java-hadoop   "/usr/sbin/sshd -D"   53 seconds ago      Up 51 seconds       0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:32768->22/tcp   hadoop0
 

 

3.7 Assign fixed IPs to the containers

3.7.1 Download pipework

Download URL: https://github.com/jpetazzo/pipework.git

 

3.7.2 Install pipework

 

unzip pipework-master.zip
mv pipework-master pipework
cp -rp pipework/pipework /usr/local/bin/ 
 

 

3.7.3 Install bridge-utils

 

yum -y install bridge-utils
 

 

 

3.7.4 brctl show (check whether virbr0 exists; create it if not)

 

[root@bogon baoyou]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024292a9ad4a	no		veth4dc65ee
							veth646bc14
							veth8e3aab5
virbr0		8000.16d3ac819517	yes		veth1pl3187
 

 

Check with ifconfig; virbr0 should have the address 192.168.122.1:

 

 

[root@bogon centos-ssh-root-java-hadoop]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:d7:fb:9c:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.206.241  netmask 255.255.255.0  broadcast 192.168.206.255
        inet6 fe80::67a3:3777:46a8:8a2f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:d2:b3:c2  txqueuelen 1000  (Ethernet)
        RX packets 1606  bytes 851375 (831.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 757  bytes 90712 (88.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:88:cb:23  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 

 

virbr0 already exists on my machine; if it does not, create it yourself:

brctl addbr virbr0
ip link set dev virbr0 up
ip addr add 192.168.122.1/24 dev virbr0

 

I have little networking knowledge, so this part was hard for me to follow.

 

3.7.5 Assign IPs

pipework virbr0 hadoop0 192.168.122.10/24
pipework virbr0 hadoop1 192.168.122.11/24
pipework virbr0 hadoop2 192.168.122.12/24
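To confirm that pipework attached the address, you can list the network interfaces inside a container; pipework normally adds it on a second interface (typically eth1), so hadoop0 should show 192.168.122.10. If the ip tool is missing from the base image, install it first with yum install -y iproute.

docker exec hadoop0 ip addr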

 

3.7.6 Edit the host machine's /etc/hosts

[root@bogon centos-ssh-root-java-hadoop]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.122.10   hadoop0
192.168.122.11    hadoop1
192.168.122.12    hadoop2

 

 

3.7.7 Test: ping 192.168.122.10

[root@bogon centos-ssh-root-java-hadoop]# ping hadoop0
PING hadoop0 (192.168.122.10) 56(84) bytes of data.
64 bytes from hadoop0 (192.168.122.10): icmp_seq=1 ttl=64 time=0.098 ms
64 bytes from hadoop0 (192.168.122.10): icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from hadoop0 (192.168.122.10): icmp_seq=3 ttl=64 time=0.091 ms

Output like the above means the IP assignment succeeded.

Test ssh into each container; it should succeed:

ssh hadoop0
ssh hadoop1
ssh hadoop2

 

 

 

3.8 Update /etc/hosts inside the hadoop0, hadoop1 and hadoop2 containers

Create an sshhosts file locally:

[root@bogon centos-ssh-root-java-hadoop]# cat sshhosts 
 
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters

172.17.0.2      hadoop0
172.17.0.2      hadoop0.bridge
172.17.0.3      hadoop1
172.17.0.3      hadoop1.bridge
172.17.0.4      hadoop2
172.17.0.4      hadoop2.bridge

192.168.122.10 hadoop0
192.168.122.11 hadoop1
192.168.122.12 hadoop2

  

Copy it to hadoop0, hadoop1 and hadoop2:

scp sshhosts  root@hadoop0:/etc/hosts
scp sshhosts  root@hadoop1:/etc/hosts
scp sshhosts  root@hadoop2:/etc/hosts
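The same copy can also be written as a small loop; this is just a convenience sketch equivalent to the three commands above:

for h in hadoop0 hadoop1 hadoop2; do scp sshhosts root@$h:/etc/hosts; done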

 

3.9 Passwordless SSH between the containers

3.9.1 Enter hadoop0

docker exec  -it  hadoop0 bash

3.9.2 Set up passwordless SSH

Run the following on hadoop0:
cd ~
mkdir .ssh
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop0
ssh-copy-id -i hadoop1
ssh-copy-id -i hadoop2
Run the following on hadoop1 (ssh hadoop1 first):
cd ~
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop1
Run the following on hadoop2 (ssh hadoop2 first):
cd ~
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop2

3.9.3 Test

From hadoop0, test ssh hadoop0, ssh hadoop1 and ssh hadoop2.
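A minimal way to test all targets in one go from hadoop0 (assuming the keys were copied as in 3.9.2); each command should print a hostname without asking for a password:

for h in localhost hadoop0 hadoop1 hadoop2; do ssh -o BatchMode=yes root@$h hostname; done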

 

3.10 (Key step) Hadoop configuration

3.10.1 Enter the Hadoop configuration directory

cd /usr/local/hadoop/etc/hadoop/

3.10.2 Edit the configuration files

3.10.2.1  vim  hadoop-env.sh

 

export JAVA_HOME=/usr/local/jdk1.7
 

 

3.10.2.2 vim core-site.xml

 

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop0:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
        </property>
        <property>
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>
</configuration>
  

 

 3.10.2.3 vim hdfs-site.xml

 

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
</configuration>
 3.10.2.4 vim yarn-site.xml

 

 

<configuration>

<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.log-aggregation-enable</name>
                <value>true</value>
        </property>
</configuration>
 

 

3.10.2.5 vim mapred-site.xml

 

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>
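Note: a stock Hadoop 2.7.1 tarball usually ships only mapred-site.xml.template, not mapred-site.xml, so create the file from the template before editing (run inside /usr/local/hadoop/etc/hadoop):

cp mapred-site.xml.template mapred-site.xml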
 

 

3.10.2.6 Test in standalone pseudo-distributed mode

3.10.2.6.1 Enter the hadoop directory

 

cd /usr/local/hadoop
3.10.2.6.2 hdfs format 

 

 

 bin/hdfs namenode -format
3.10.2.6.3 Format log

 

 

[root@hadoop0 hadoop]# cd /usr/local/hadoop
[root@hadoop0 hadoop]# bin/hdfs namenode -format
17/11/14 11:20:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/172.17.0.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.4.1
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/loc
al/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/
lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.1.jar:/
usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1604318; compiled by 'jenkins' on 2014-06-21T05:43Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
17/11/14 11:20:21 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/14 11:20:21 INFO namenode.NameNode: createNameNode [-format]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
17/11/14 11:20:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-b04e9bd9-1f09-4d72-a469-87baec5795dc
17/11/14 11:20:23 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/14 11:20:23 INFO namenode.HostFileManager: read includes:
HostSet(
)
17/11/14 11:20:23 INFO namenode.HostFileManager: read excludes:
HostSet(
)
17/11/14 11:20:23 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/14 11:20:23 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/14 11:20:23 INFO util.GSet: Computing capacity for map BlocksMap
17/11/14 11:20:23 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:23 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/14 11:20:23 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/14 11:20:23 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/14 11:20:23 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/14 11:20:23 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/14 11:20:23 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/14 11:20:23 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/14 11:20:23 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/11/14 11:20:23 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/14 11:20:23 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/14 11:20:23 INFO namenode.FSNamesystem: HA Enabled: false
17/11/14 11:20:23 INFO namenode.FSNamesystem: Append Enabled: true
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map INodeMap
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/14 11:20:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/14 11:20:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/14 11:20:24 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/14 11:20:24 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/14 11:20:24 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/14 11:20:24 INFO util.GSet: VM type       = 64-bit
17/11/14 11:20:24 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/14 11:20:24 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/11/14 11:20:24 INFO namenode.AclConfigFlag: ACLs enabled? false
17/11/14 11:20:24 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1660706305-172.17.0.2-1510658424624
17/11/14 11:20:24 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
17/11/14 11:20:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/14 11:20:25 INFO util.ExitUtil: Exiting with status 0
17/11/14 11:20:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/172.17.0.2
************************************************************/
 

 

3.10.2.6.4 Confirm success

Near the end of the log, the line "Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted." indicates that the format succeeded.

 

3.10.2.6.5 Start pseudo-distributed mode

 

sbin/start-all.sh
 

 

3.10.2.6.6 Answer the SSH prompt with yes during startup

 

Are you sure you want to continue connecting (yes/no)? yes   
 

 

3.10.2.6.7 Verify that the startup succeeded

 

[root@hadoop0 hadoop]# jps
3267 SecondaryNameNode
3003 NameNode
3664 Jps
3397 ResourceManager
3090 DataNode
3487 NodeManager
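Since ports 50070 and 8088 were published when hadoop0 was started (see 3.5), the web UIs can also be checked from the Docker host; a successful HTTP response (200 or a redirect) means the NameNode and ResourceManager UIs are up. These curl calls are just a convenience check, not part of the original steps.

curl -I http://localhost:50070     # NameNode web UI
curl -I http://localhost:8088      # YARN ResourceManager web UI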
3.10.2.6.8 Stop pseudo-distributed mode

 

 

sbin/stop-all.sh
 

 

 

 

3.10.2.7 Start distributed mode

3.10.2.7.1 Enter the Hadoop configuration directory

 

cd /usr/local/hadoop/etc/hadoop
 

 

3.10.2.7.2 vi yarn-site.xml and add:

 

        <property>
                <description>The hostname of the RM.</description>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop0</value>
        </property>
 

 

3.10.2.7.3 vim slaves

 

hadoop1
hadoop2
  

 

3.10.2.7.4 Copy the configured hadoop directory to hadoop1 and hadoop2

 

 scp  -rq /usr/local/hadoop   hadoop1:/usr/local
 scp  -rq /usr/local/hadoop   hadoop2:/usr/local
 

 

3.10.2.7.5 Start distributed Hadoop

3.10.2.7.5.1 Enter the directory

 

 cd /usr/local/hadoop
 

 

3.10.2.7.5.2 hdfs format

 

bin/hdfs namenode -format -force
 

 

3.10.2.7.5.3 Format log

 

[root@hadoop0 hadoop]# bin/hdfs namenode -format -force
17/11/16 08:32:26 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop0/172.17.0.2
STARTUP_MSG:   args = [-format, -force]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/h
adoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr
/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2
.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
17/11/16 08:32:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/16 08:32:26 INFO namenode.NameNode: createNameNode [-format, -force]
Formatting using clusterid: CID-d94045f1-cf92-4268-9905-df254f372280
17/11/16 08:32:27 INFO namenode.FSNamesystem: No KeyProvider found.
17/11/16 08:32:27 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/16 08:32:27 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/16 08:32:27 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/16 08:32:27 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/16 08:32:27 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 16 08:32:27
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map BlocksMap
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/16 08:32:27 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/16 08:32:27 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/16 08:32:27 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/16 08:32:27 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/16 08:32:27 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/16 08:32:27 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/11/16 08:32:27 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/16 08:32:27 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/16 08:32:27 INFO namenode.FSNamesystem: HA Enabled: false
17/11/16 08:32:27 INFO namenode.FSNamesystem: Append Enabled: true
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map INodeMap
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/16 08:32:27 INFO namenode.FSDirectory: ACLs enabled? false
17/11/16 08:32:27 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/16 08:32:27 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/11/16 08:32:27 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/16 08:32:27 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/16 08:32:27 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:27 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/16 08:32:27 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/16 08:32:28 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/16 08:32:28 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/16 08:32:28 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/16 08:32:28 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/16 08:32:28 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/16 08:32:28 INFO util.GSet: VM type       = 64-bit
17/11/16 08:32:28 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/16 08:32:28 INFO util.GSet: capacity      = 2^15 = 32768 entries
Data exists in Storage Directory /usr/local/hadoop/tmp/dfs/name. Formatting anyway.
17/11/16 08:32:28 INFO namenode.FSImage: Allocated new BlockPoolId: BP-455730873-172.17.0.2-1510821148263
17/11/16 08:32:28 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
17/11/16 08:32:28 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/16 08:32:28 INFO util.ExitUtil: Exiting with status 0
17/11/16 08:32:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop0/172.17.0.2
************************************************************/
 

 

3.10.2.7.5.4 Start

 

sbin/start-all.sh
3.10.2.7.5.5 Startup log

 

 

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop0]
hadoop0: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop0.out
hadoop2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop2.out
hadoop1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is SHA256:pVcVMP+s49lnUdVpo99cecqZhYCrfPNSQY6XHFD/3II.
RSA key fingerprint is MD5:15:ec:c3:86:fe:b6:65:3a:dd:be:79:a0:e4:d2:f7:2e.
Are you sure you want to continue connecting (yes/no)? yes   
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop0.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-hadoop0.out
hadoop1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop1.out
hadoop2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop2.out
 

 

You can see that the namenode started on hadoop0, datanodes started on hadoop1 and hadoop2, and nodemanagers started on hadoop1 and hadoop2.

 

 

3.10.2.7.5.6 Verify hadoop0

 

[root@hadoop0 hadoop]# jps
700 SecondaryNameNode
511 NameNode
853 ResourceManager
933 Jps
 

 

3.10.2.7.5.7 Verify hadoop1

 

[root@hadoop1 /]# jps
158 NodeManager
58 DataNode
210 Jps
 

 

3.10.2.7.5.8 Verify hadoop2

 

[root@hadoop2 /]# jps
158 NodeManager
58 DataNode
210 Jps
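To confirm from hadoop0 that both datanodes registered with the namenode, the HDFS admin report is handy; it should list two live datanodes (hadoop1 and hadoop2):

hdfs dfsadmin -report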
 

 

3.10.2.7.5.9 Enter hadoop0 and verify HDFS

vim a.txt

 

baoyou 
baoyou
bao
you
hello world
hello bao you
 

 

 

 

[root@hadoop0 /]# hdfs dfs -put a.txt /
17/11/16 09:28:19 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 42798ms (threshold=30000ms)
[root@hadoop0 /]# hdfs dfs -ls /
Found 1 items
-rw-r--r--   1 root supergroup         73 2017-11-16 09:28 /a.txt
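You can also read the file back from HDFS to confirm the upload:

hdfs dfs -cat /a.txt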
 

 

3.10.2.7.5.10 Verify wordcount

[root@hadoop0 /]# cd /usr/local/hadoop/share/hadoop/mapreduce
[root@hadoop0 mapreduce]# ls

hadoop-mapreduce-client-app-2.7.1.jar	  hadoop-mapreduce-client-hs-2.7.1.jar		     hadoop-mapreduce-client-jobclient-2.7.1.jar  lib
hadoop-mapreduce-client-common-2.7.1.jar  hadoop-mapreduce-client-hs-plugins-2.7.1.jar	     hadoop-mapreduce-client-shuffle-2.7.1.jar	  lib-examples
hadoop-mapreduce-client-core-2.7.1.jar	  hadoop-mapreduce-client-jobclient-2.7.1-tests.jar  hadoop-mapreduce-examples-2.7.1.jar	  sources 
[root@hadoop0 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.1.jar  wordcount /a.txt /out

 

 


 

3.10.2.7.5.11 Wordcount MapReduce log

[root@hadoop0 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.1.jar  wordcount /a.txt /out
17/11/16 10:26:19 INFO client.RMProxy: Connecting to ResourceManager at hadoop0/172.17.0.2:8032
17/11/16 10:26:25 INFO input.FileInputFormat: Total input paths to process : 1
17/11/16 10:26:30 INFO mapreduce.JobSubmitter: number of splits:1
17/11/16 10:26:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1510823892890_0002
17/11/16 10:26:33 INFO impl.YarnClientImpl: Submitted application application_1510823892890_0002
17/11/16 10:26:33 INFO mapreduce.Job: The url to track the job: http://hadoop0:8088/proxy/application_1510823892890_0002/
17/11/16 10:26:33 INFO mapreduce.Job: Running job: job_1510823892890_0002
17/11/16 10:38:58 INFO mapreduce.Job: Job job_1510823892890_0002 running in uber mode : false
17/11/16 10:39:12 INFO mapreduce.Job:  map 0% reduce 0%
17/11/16 11:01:31 INFO mapreduce.Job:  map 100% reduce 0%
17/11/16 11:01:40 INFO mapreduce.Job:  map 0% reduce 0%
17/11/16 11:01:40 INFO mapreduce.Job: Task Id : attempt_1510823892890_0002_m_000000_1000, Status : FAILED
AttemptID:attempt_1510823892890_0002_m_000000_1000 Timed out after 600 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

17/11/16 11:03:00 INFO mapreduce.Job:  map 100% reduce 0%
17/11/16 11:03:29 INFO mapreduce.Job:  map 100% reduce 100%
17/11/16 11:03:33 INFO mapreduce.Job: Job job_1510823892890_0002 completed successfully
17/11/16 11:03:34 INFO mapreduce.Job: Counters: 51
	File System Counters
		FILE: Number of bytes read=63
		FILE: Number of bytes written=230833
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=163
		HDFS: Number of bytes written=37
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Failed map tasks=1
		Launched map tasks=2
		Launched reduce tasks=1
		Other local map tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=1076284
		Total time spent by all reduces in occupied slots (ms)=25207
		Total time spent by all map tasks (ms)=1076284
		Total time spent by all reduce tasks (ms)=25207
		Total vcore-seconds taken by all map tasks=1076284
		Total vcore-seconds taken by all reduce tasks=25207
		Total megabyte-seconds taken by all map tasks=1102114816
		Total megabyte-seconds taken by all reduce tasks=25811968
	Map-Reduce Framework
		Map input records=9
		Map output records=13
		Map output bytes=125
		Map output materialized bytes=63
		Input split bytes=90
		Combine input records=13
		Combine output records=5
		Reduce input groups=5
		Reduce shuffle bytes=63
		Reduce input records=5
		Reduce output records=5
		Spilled Records=10
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=784
		CPU time spent (ms)=3450
		Physical memory (bytes) snapshot=330055680
		Virtual memory (bytes) snapshot=1464528896
		Total committed heap usage (bytes)=200278016
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=73
	File Output Format Counters 
		Bytes Written=37

 

 

3.10.2.7.5.12 Wordcount MapReduce result

[root@hadoop0 mapreduce]# hdfs dfs -text /out/part-r-00000 
bao	2
baoyou	3
hello	4
world	2
you	2

 

3.10.2.7.5.13 Shut down the Hadoop cluster

[root@hadoop0 hadoop]# sbin/stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hadoop0]
hadoop0: stopping namenode
hadoop2: stopping datanode
hadoop1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
hadoop1: stopping nodemanager
hadoop2: stopping nodemanager
no proxyserver to stop

 

 

 

 

 

 

 

 

 

 

 

 

 

