1. Installing the Cluster Environment
1.1 Tool and software versions (download from the official sites where possible):
VMware Workstation: 10.0.0 build-1295980
Ubuntu: Ubuntu 15.10
JDK: jdk-8u60-linux-x64.tar.gz
Hadoop: hadoop-2.6.0.tar.gz
SecureCRT: Version 6.2.3 (build 313)
WinSCP: 5.1.6 (build 3394)
First install Ubuntu in the VM yourself; step-by-step guides are easy to find online.
Logging into Ubuntu 15.10 as root: http://kevin12.iteye.com/blog/2271687
Full-screen display for Ubuntu installed in VMware: http://kevin12.iteye.com/blog/2271690
1.2 General VM configuration
1.2.1 Change the hostname: run vim /etc/hostname, replace the existing name with master1, save and exit, then reboot the VM. Verify with the hostname command; a non-interactive sketch follows.
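The same step without an interactive editor, a minimal sketch assuming a root shell (which this post uses throughout):

echo master1 > /etc/hostname   # equivalent to editing the file in vim
reboot
# after the reboot:
hostname                       # should print: master1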
1.2.2 Configure a static IP; see: http://kevin12.iteye.com/blog/2273491
1.2.3 Map the hostname to the IP
Run vim /etc/hosts, add the mapping, then save and exit. (The original post shows a screenshot here; a sample entry follows.)
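The screenshot is not reproduced here; an /etc/hosts entry along these lines, using the static IP configured in 1.2.2, is what it shows:

127.0.0.1       localhost
192.168.112.130 master1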
1.2.4 Disable the firewall
Ubuntu's firewall front end is ufw.
Check the firewall status: sudo ufw status
Disable the firewall: sudo ufw disable
1.2.5 Set up passwordless SSH login
Ubuntu installs openssh-client by default but not the server, so running ssh localhost fails with: ssh: connect to host localhost port 22: Connection refused
Install it with sudo apt-get install ssh; the session looks like this:
root@master1:~# sudo apt-get install ssh
(Chinese-locale apt output omitted: apt resolves dependencies; installs libck-connector0, ncurses-term, openssh-client, openssh-server, openssh-sftp-server, ssh and ssh-import-id; and generates the host's SSH2 RSA, DSA, ECDSA and ED25519 keys during setup.)
root@master1:~# /etc/init.d/ssh start
[ ok ] Starting ssh (via systemctl): ssh.service.
root@master1:~# ps -e | grep ssh
 3146 ?        00:00:00 sshd

In the home directory, run ssh-keygen -t rsa and press Enter at every prompt; then run cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

root@master1:~# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CCBcvdZn1zlL033Gv6rdQqC1zY0fM73emg+ra+ELDt0 root@master1
The key's randomart image is:
+---[RSA 2048]----+
|o o..            |
| o . .           |
|  . o .       o..|
|   + o o   + * .=|
|  . .   S + * *.+|
|         o o.* =o|
|        . o.E.o *|
|         o .+..B.|
|          .o=*B=o|
+----[SHA256]-----+
root@master1:~# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:ctbDQwKOmzr0Ixl43vm1akZAP360sbX4rHWk5FZ2gpo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation:  https://help.ubuntu.com/
root@master1:~# ssh localhost    -- no password is needed the second time
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation:  https://help.ubuntu.com/
Last login: Tue Jan 19 22:17:00 2016 from 127.0.0.1
root@master1:~#
2. Install the JDK
2.1 Copy jdk-8u60-linux-x64.tar.gz into the VM, extract it with tar -zxvf jdk-8u60-linux-x64.tar.gz, and move the extracted directory under /usr/local/jdk (a sketch follows).
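A sketch of this step, assuming the tarball was copied to /usr/local; the resulting layout matches the JAVA_HOME configured below:

cd /usr/local
mkdir -p jdk
tar -zxvf jdk-8u60-linux-x64.tar.gz -C jdk
ls jdk/jdk1.8.0_60    # bin  jre  lib ...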
2.2 Edit the .bashrc file to configure the Java environment variables: add the lines below, save and exit, then run source ~/.bashrc to apply them. The configuration:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export JRE_HOME=${JAVA_HOME}/jre
export CLASS_PATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=.:${JAVA_HOME}/bin:$PATH
2.3 Verify with java -version; output like the following means the JDK is installed correctly:
root@jylu:~# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
3. Clone master1 into four nodes, named worker1, worker2, worker3 and worker4
3.1 Change worker1's hostname: vim /etc/hostname
3.2 Configure worker1's static IP
3.3 Add worker1's hostname/IP mapping: 192.168.112.131 worker1
3.4 Disable the firewall, then reboot
3.5 Reset SSH: delete the contents of the .ssh directory and regenerate the keys
3.6 Append the contents of worker1's ~/.ssh/id_rsa.pub to authorized_keys on master1 (see the sketch after this list)
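A sketch of steps 3.5 and 3.6, run on worker1 (the first ssh to master1 still prompts for a password):

rm -rf ~/.ssh/*      # 3.5: discard the keys cloned from master1
ssh-keygen -t rsa    # press Enter at every prompt
# 3.6: append worker1's public key to authorized_keys on master1
cat ~/.ssh/id_rsa.pub | ssh root@master1 'cat >> ~/.ssh/authorized_keys'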
Repeat the same steps on worker2, worker3 and worker4, then copy master1's accumulated authorized_keys back into the ~/.ssh directory on each worker.
Edit the /etc/hosts file on master1 and add the IP mappings for the workers:
192.168.112.130 master1
192.168.112.131 worker1
192.168.112.132 worker2
192.168.112.133 worker3
192.168.112.134 worker4
Use scp to copy master1's authorized_keys to the other four nodes:
scp ~/.ssh/authorized_keys root@worker1:~/.ssh/
scp ~/.ssh/authorized_keys root@worker2:~/.ssh/
scp ~/.ssh/authorized_keys root@worker3:~/.ssh/
scp ~/.ssh/authorized_keys root@worker4:~/.ssh/
The first ssh worker1 asks for a password; after that it does not. The other nodes behave the same way.
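A quick check from master1 that every hop is now passwordless, using the hostnames defined in /etc/hosts above:

for h in master1 worker1 worker2 worker3 worker4; do
  ssh root@${h} hostname    # should print each node's name with no password prompt
done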
4. Install Hadoop on master1
4.1 Download hadoop-2.6.0.tar.gz from the official site, copy it to /usr/local/ on the Ubuntu VM, and extract it so that it lands under /usr/local/hadoop/hadoop-2.6.0 (the tar command in the original post is garbled; a sketch follows).
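A sketch of the extraction, assuming the tarball sits in /usr/local; the resulting layout matches the HADOOP_HOME configured below:

cd /usr/local
mkdir -p hadoop
tar -zxvf hadoop-2.6.0.tar.gz -C hadoop
ls hadoop/hadoop-2.6.0    # bin  etc  include  lib  sbin  share ...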
Configure the relevant files.
Hadoop environment variables in ~/.bashrc:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export JRE_HOME=${JAVA_HOME}/jre
export CLASS_PATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export SCALA_HOME=/usr/local/scala/scala-2.10.4
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
Notes:
HADOOP_CONF_DIR is set to better support running on YARN, including running Spark on YARN.
The following two entries are optional:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-2.6.0/tmp</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
    <description>Should native hadoop libraries, if present, be used.</description>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master1:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hadoop-2.6.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hadoop-2.6.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///usr/local/hadoop/hadoop-2.6.0/dfs/namesecondary</value>
  </property>
</configuration>
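The two configs above point at local directories; Hadoop normally creates them on first start, but creating them up front avoids surprises. A sketch, using the exact paths from the configs:

mkdir -p /usr/local/hadoop/hadoop-2.6.0/tmp
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/name
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/data
mkdir -p /usr/local/hadoop/hadoop-2.6.0/dfs/namesecondary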
yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
yarn-env.sh:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
slaves (one hostname per line):
worker1
worker2
worker3
worker4
4.2 After configuring master1, use scp to copy the configured .bashrc and hadoop directory to the four worker nodes.
Copy .bashrc:
scp ~/.bashrc root@worker1:~/
scp ~/.bashrc root@worker2:~/
scp ~/.bashrc root@worker3:~/
scp ~/.bashrc root@worker4:~/
Then run source ~/.bashrc on each worker node to apply it.
Copy the configured hadoop directory:
scp -r /usr/local/hadoop/ root@worker1:/usr/local/
scp -r /usr/local/hadoop/ root@worker2:/usr/local/
scp -r /usr/local/hadoop/ root@worker3:/usr/local/
scp -r /usr/local/hadoop/ root@worker4:/usr/local/
Check /usr/local/ on each worker to confirm the copy arrived. (A loop that automates the distribution is sketched below.)
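The same distribution as a loop, a sketch assuming the worker1..worker4 hostnames defined in /etc/hosts:

for w in worker1 worker2 worker3 worker4; do
  scp ~/.bashrc root@${w}:~/               # takes effect on the next login shell
  scp -r /usr/local/hadoop/ root@${w}:/usr/local/
  ssh root@${w} ls /usr/local/hadoop       # verify the copy landed
done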
4.3 Format the NameNode on master1
root@master1:/usr/local/hadoop/hadoop-2.6.0/bin# hdfs namenode -format
16/01/23 21:57:57 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master1/192.168.112.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/hadoop-2.6.0/etc/hadoop:... (long listing of bundled jars omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/
16/01/23 21:57:57 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/01/23 21:57:57 INFO namenode.NameNode: createNameNode [-format]
16/01/23 21:57:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/23 21:57:58 WARN common.Util: Path /usr/local/hadoop/hadoop-2.6.0/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-9bd3d990-8e12-4581-b131-5021e35cfe78
16/01/23 21:57:58 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/01/23 21:57:58 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/01/23 21:57:58 INFO namenode.FSNamesystem: supergroup          = supergroup
16/01/23 21:57:58 INFO namenode.FSNamesystem: HA Enabled: false
(further FSNamesystem, BlockManager and GSet initialization messages omitted)
16/01/23 21:57:59 INFO namenode.FSImage: Allocated new BlockPoolId: BP-773392318-192.168.112.130-1453557479065
16/01/23 21:57:59 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.6.0/dfs/name has been successfully formatted.
16/01/23 21:57:59 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/01/23 21:57:59 INFO util.ExitUtil: Exiting with status 0
16/01/23 21:57:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master1/192.168.112.130
************************************************************/
root@master1:/usr/local/hadoop/hadoop-2.6.0/bin#
Start HDFS with ./sbin/start-dfs.sh; it prints the following:
root@master1:/usr/local/hadoop/hadoop-2.6.0/sbin# start-dfs.sh
16/01/23 22:01:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1]
The authenticity of host 'master1 (192.168.112.130)' can't be established.
ECDSA key fingerprint is SHA256:ctbDQwKOmzr0Ixl43vm1akZAP360sbX4rHWk5FZ2gpo.
Are you sure you want to continue connecting (yes/no)? yes
master1: Warning: Permanently added 'master1,192.168.112.130' (ECDSA) to the list of known hosts.
master1: starting namenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-namenode-master1.out
worker4: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker4.out
worker1: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
worker3: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
worker2: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
Starting secondary namenodes [master1]
master1: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master1.out
16/01/23 22:01:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Verify the processes on master1 with jps: the NameNode and SecondaryNameNode daemons are running.
root@master1:/usr/local/hadoop/hadoop-2.6.0/sbin# jps
4498 Jps
4379 SecondaryNameNode
4175 NameNode
Verify on worker1 with jps: the DataNode daemon is running.
root@worker1:/usr/local/hadoop/hadoop-2.6.0/etc/hadoop# jps
2563 DataNode
2646 Jps
Start YARN on master1 with ./sbin/start-yarn.sh; it prints the following:
root@master1:/usr/local/hadoop/hadoop-2.6.0/sbin# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-resourcemanager-master1.out
worker1: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
worker3: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker2: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker4: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker4.out
Verify on master1 with jps: the ResourceManager daemon is now running as well.
root@master1:/usr/local/hadoop/hadoop-2.6.0/sbin# jps
4821 Jps
4551 ResourceManager
4379 SecondaryNameNode
4175 NameNode
Verify on worker1 with jps: the NodeManager daemon is now running as well.
root@worker1:/usr/local/hadoop/hadoop-2.6.0/etc/hadoop# jps
2563 DataNode
2713 NodeManager
2829 Jps
Stop HDFS with ./sbin/stop-dfs.sh
Stop YARN with ./sbin/stop-yarn.sh
Check the cluster status with ./bin/hdfs dfsadmin -report
Inspect the file/block composition with ./bin/hdfs fsck / -files -blocks:
root@master1:/usr/local/hadoop/hadoop-2.6.0/bin# hdfs fsck / -files -blocks
16/01/23 22:13:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://master1:50070
FSCK started by root (auth:SIMPLE) from /192.168.112.130 for path / at Sat Jan 23 22:13:47 CST 2016
/ <dir>
Status: HEALTHY
Total size: 0 B
Total dirs: 1
Total files: 0
Total symlinks: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 4
Number of racks: 1
FSCK ended at Sat Jan 23 22:13:47 CST 2016 in 2 milliseconds
The filesystem under path '/' is HEALTHY
4.4 Access the cluster from a browser
On master1, open http://192.168.112.130:50070 (or master1:50070) for the HDFS NameNode web UI.
Open http://192.168.112.130:8088/ for the ResourceManager monitoring page.
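As a final smoke test, one can run the bundled wordcount example end to end. A sketch: the /input and /output paths are illustrative, and the examples jar ships under share/hadoop/mapreduce in this Hadoop distribution:

hdfs dfs -mkdir -p /input
hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/etc/hadoop/core-site.xml /input
hadoop jar /usr/local/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000    # word counts from core-site.xml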
At this point, the Hadoop cluster is installed and working!