Plan:
cloud01, cloud02: NameNode HA
cloud03: YARN (ResourceManager)
cloud04/05/06: ZooKeeper nodes, DataNodes
Host     IP address     Software and processes
cloud01  192.168.1.201  jdk, hadoop: NameNode, DFSZKFailoverController
cloud02  192.168.1.202  jdk, hadoop: NameNode, DFSZKFailoverController
cloud03  192.168.1.203  jdk, hadoop: ResourceManager
cloud04  192.168.1.204  jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
cloud05  192.168.1.205  jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
cloud06  192.168.1.206  jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
Notes:
--- YARN (Yet Another Resource Negotiator) is the new MapReduce framework introduced in Hadoop 0.23.0.
--- JournalNodes replicate the NameNode edit log in real time; YARN manages the NodeManagers; QuorumPeerMain is the ZooKeeper server process.
In Hadoop 2.0 the NameNode supports HA, which requires ZooKeeper (for master election); ZK needs an odd number of nodes.
Hadoop 2.0 HA consists of two NameNodes: one active, one standby.
Installation steps:
1. First install the JDK and ZooKeeper on the DataNode machines.
1.1 Download the JDK and extract it under /usr/local: /usr/local/jdk1.8.0_05
1.2 Set environment variables in /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_05
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
export PATH
Verify with jps:
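A quick sanity check after sourcing the profile:
source /etc/profile
java -version   (should report 1.8.0_05)
jps             (confirms the JDK tools are on the PATH)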
1.3 Create the /mycloud directory and extract zookeeper-3.4.6.tar.gz into it: /mycloud/zookeeper-3.4.6 (Hadoop will also go under this directory).
Create the directory /mycloud/zookeeper-3.4.6/data, and in it create a file named myid containing this node's server id (the N from server.N below).
Create zoo.cfg in the conf directory: cp zoo_sample.cfg zoo.cfg
In zoo.cfg set dataDir=/mycloud/zookeeper-3.4.6/data
Append at the end:
server.1=cloud04:2888:3888
server.2=cloud05:2888:3888
server.3=cloud06:2888:3888
(2888 is the quorum communication port, 3888 the leader election port)
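Each node's myid must match its server.N entry above; run the matching line on each node:
echo 1 > /mycloud/zookeeper-3.4.6/data/myid   (on cloud04)
echo 2 > /mycloud/zookeeper-3.4.6/data/myid   (on cloud05)
echo 3 > /mycloud/zookeeper-3.4.6/data/myid   (on cloud06)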
1.4 Start ZK (on all three ZK nodes)
[root@cloud04 bin]# ./zkServer.sh start
Verify with jps:
If the QuorumPeerMain process shows up, startup succeeded.
Check the ZK service status:
[root@hnode03 bin]# ./zkServer.sh status
JMX enabled by default
Using config: /mycloud/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower / Mode: leader (one of these two modes)
Note: the leader is the master node; the followers are the remaining quorum members.
2. Install Hadoop
Download hadoop-2.2.0.tar.gz
tar -xzvf hadoop-2.2.0.tar.gz -C /mycloud/
Five files to configure: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml
(In production, the two HA NameNodes should each get a dedicated machine.)
hadoop-env.sh: set JAVA_HOME (line 27 in this release).
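For example (the JDK path follows step 1.2; the line number can differ between releases):
export JAVA_HOME=/usr/local/jdk1.8.0_05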
core-site.xml, edit as follows:
<configuration>
<!-- set the HDFS nameservice to ns -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns</value>
</property>
<!-- Hadoop temp directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/mycloud/hadoop-2.2.0/temp</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>cloud04:2181,cloud05:2181,cloud06:2181</value>
</property>
</configuration>
hdfs-site.xml, edit as follows:
<!-- Put site-specific property overrides in this file. -->
<!-- the HDFS nameservice ns is the unified client-facing name for the HA pair -->
<configuration>
<property>
<name>dfs.nameservices</name>
<value>ns</value>
</property>
<!-- ns has two NameNodes: cloud01 and cloud02 -->
<property>
<name>dfs.ha.namenodes.ns</name>
<value>cloud01,cloud02</value>
</property>
<!-- RPC address of cloud01 -->
<property>
<name>dfs.namenode.rpc-address.ns.cloud01</name>
<value>cloud01:8020</value>
</property>
<!-- HTTP address of cloud01 -->
<property>
<name>dfs.namenode.http-address.ns.cloud01</name>
<value>cloud01:50070</value>
</property>
<!-- RPC address of cloud02 -->
<property>
<name>dfs.namenode.rpc-address.ns.cloud02</name>
<value>cloud02:8020</value>
</property>
<!-- HTTP address of cloud02 -->
<property>
<name>dfs.namenode.http-address.ns.cloud02</name>
<value>cloud02:50070</value>
</property>
<!-- where the NameNodes' shared edit log is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://cloud04:8485;cloud05:8485;cloud06:8485/ns</value>
</property>
<!-- local directory where each JournalNode stores its data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/mycloud/hadoop-2.2.0/journal</value>
</property>
<!-- enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- failover proxy provider used by clients to find the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.ns</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- fencing method -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- sshfence needs passwordless SSH between the NameNodes -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
</configuration>
vi mapred-site.xml
<configuration>
<!-- run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
vi yarn-site.xml
<configuration>
<!-- ResourceManager host -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>cloud03</value>
</property>
<!-- NodeManager auxiliary service: mapreduce_shuffle -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
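Given the XML mistakes recorded in the error section below, it is worth validating the four site files before distributing them; a sketch assuming xmllint (from libxml2) is available:
cd /mycloud/hadoop-2.2.0/etc/hadoop
xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml
(prints nothing on success; malformed comments or tags are reported with line numbers)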
3. Configure passwordless SSH login
#ssh-keygen -t rsa
#ls
id_rsa id_rsa.pub
#ssh-copy-id -i cloud02   (copies the public key to the target machine for passwordless login; you are prompted for the target machine's password once)
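To distribute the key to every node in one pass, a loop such as the following works (each ssh-copy-id prompts for that node's password once):
for h in cloud02 cloud03 cloud04 cloud05 cloud06; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub $h
done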
Then copy the prepared /mycloud directory from cloud01 to cloud02/03/04/05/06:
#scp -r /mycloud/ cloud02:/
#scp -r /mycloud/ cloud03:/
#scp -r /mycloud/ cloud04:/
#scp -r /mycloud/ cloud05:/
#scp -r /mycloud/ cloud06:/
4. Configure environment variables
Add a HADOOP_HOME environment variable on the first machine.
In /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_05
export HADOOP_HOME=/mycloud/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
source /etc/profile
Copy the profile to the other nodes to keep them in sync:
scp /etc/profile cloud02:/etc
scp /etc/profile cloud03:/etc
scp /etc/profile cloud04:/etc
scp /etc/profile cloud05:/etc
scp /etc/profile cloud06:/etc
5. Configure the slaves file. These are the worker nodes that run DataNode and NodeManager (in this plan they also host JournalNode and ZooKeeper); their NodeManagers are managed by the ResourceManager, i.e. YARN.
vi /mycloud/hadoop-2.2.0/etc/hadoop/slaves
cloud04
cloud05
cloud06
6. Once everything is configured, copy hadoop-2.2.0 to the other machines:
scp -r /mycloud/hadoop-2.2.0 cloud02:/mycloud/
scp -r /mycloud/hadoop-2.2.0 cloud03:/mycloud/
scp -r /mycloud/hadoop-2.2.0 cloud04:/mycloud/
scp -r /mycloud/hadoop-2.2.0 cloud05:/mycloud/
scp -r /mycloud/hadoop-2.2.0 cloud06:/mycloud/
7. Startup
7.1 Start ZK first (on every ZK machine):
cd /mycloud/zookeeper-3.4.6/bin/
./zkServer.sh start
./zkServer.sh status
(expect one leader, two followers)
7.2 Then start the JournalNodes. This needs to be run on only one machine: hadoop-daemon.sh starts a daemon on the local machine only, while hadoop-daemons.sh starts it on all slaves over SSH (here the JournalNode machines are the same ones that run ZooKeeper):
sbin/hadoop-daemons.sh start journalnode
Verify with jps that the JournalNode process is running on each node.
[root@cloud01 hadoop-2.2.0]# sbin/hadoop-daemons.sh start journalnode
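A quick remote check from cloud01 (relies on the passwordless SSH set up in step 3; if jps is not found over ssh, use its full path, e.g. /usr/local/jdk1.8.0_05/bin/jps):
for h in cloud04 cloud05 cloud06; do
  echo "== $h =="; ssh $h jps | grep -E 'JournalNode|QuorumPeerMain'
done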
8. Format the HDFS filesystem:
Run on cloud01 (one of the NameNode hosts):
hadoop namenode -format   (deprecated form; hdfs namenode -format is preferred)
Formatting writes the NameNode metadata under the hadoop.tmp.dir configured in core-site.xml.
With two NameNodes, format only one of them; the other gets a copy of the resulting metadata instead of a second format.
(To verify HA later, hdfs haadmin -transitionToActive takes a NameNode ID such as cloud01, not the nameservice.)
9. Copy the HDFS metadata generated by the format to the second NameNode node:
scp -r temp/ cloud02:/mycloud/hadoop-2.2.0/   (temp is the hadoop.tmp.dir created by the format command)
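Hadoop 2.x also provides a bootstrap command that does this copy for you; a sketch, run on cloud02 while the freshly formatted NameNode on cloud01 is running:
hdfs namenode -bootstrapStandby   (pulls the formatted namespace from the other NameNode)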
10. Format the ZK state (run on cloud01 only):
hdfs zkfc -formatZK
cd /mycloud/zookeeper-3.4.6/bin/
./zkCli.sh
ls /
[hadoop-ha,zookeeper]   (the extra hadoop-ha znode means the formatting succeeded)
11. Start HDFS (run on cloud01 only):
sbin/start-dfs.sh
12. Start YARN (run on cloud03):
sbin/start-yarn.sh
jps to check the processes
An extra ResourceManager process appears; the worker processes it manages are the NodeManagers.
Testing and usage:
For NameNode failover, the two NameNode servers must have passwordless SSH login to each other.
After force-killing the NameNode process on cloud01, cloud02 automatically becomes Active (so cloud02 must also have passwordless SSH to cloud01).
When cloud01's NameNode is restarted, it comes back as Standby:
sbin/hadoop-daemon.sh start namenode
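The state of each NameNode can be checked from the command line at any point in this test; a short sketch using the NameNode IDs from hdfs-site.xml:
hdfs haadmin -getServiceState cloud01   (prints active or standby)
hdfs haadmin -getServiceState cloud02
(to simulate the crash: run jps on the active node, then kill -9 the NameNode pid)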
hadoop fs -put /root/install.log /            (upload a file)
hadoop fs -ls /                               (list files)
hadoop fs -get /install.log /home/111.log     (download a file)
hadoop jar /mycloud/hadoop-2.2.0/.../hadoop-mapreduce-examples-2.2.0.jar wordcount /file.txt /resultout
hadoop fs -cat /resultout/part-r-00000        (view the MapReduce result; wordcount's second argument is an output directory)
Errors encountered:
sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /mycloud/hadoop-2.2.0/logs/hadoop-root-journalnode-cloud01.out
[Fatal Error] core-site.xml:20:4: Element content must consist of well-formed character data or markup.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/mycloud/hadoop-2.2.0/etc/hadoop/core-site.xml; lineNumber: 20; columnNumber: 4; Element content must consist of well-formed character data or markup.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2152)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:817)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1114)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:431)
Cause: property was misspelled as porperty:
<property>
<name>fs.defaultFs</name>
<value>hdfs://nnode</value>
</property>
The S must be uppercase:
<name>fs.defaultFs</name>  ->  <name>fs.defaultFS</name>
<--set the HDFS nameservice to ns1--> is the wrong comment syntax (the exclamation mark is missing); it must be <!--set the HDFS nameservice to ns1-->
sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /mycloud/hadoop-2.2.0/logs/hadoop-root-journalnode-cloud01.out
[Fatal Error] hdfs-site.xml:20:33: The string "--" is not permitted within comments.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/mycloud/hadoop-2.2.0/etc/hadoop/hdfs-site.xml; lineNumber: 20; columnNumber: 33; The string "--" is not permitted within comments.
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2152)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2001)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1918)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:817)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1114)
at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:319)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:431)
<!--set the HDFS nameservice to nnode--the unified external name after HA-->
Cause: the comment text itself contains "--".
First format attempt: HA Enabled did not come up true (see the first format log below); the cause was in hdfs-site.xml:
<property>
<name>dfs.client.failover.proxy.provider.nnode</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfigureFailoverProxyProvider</value>
</property>   This was misconfigured: the correct class name is org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
[root@cloud01 hadoop-2.2.0]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/06/15 22:47:28 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = cloud01/192.168.214.171
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /mycloud/hadoop-2.2.0/etc/hadoop:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:
/mycloud/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/mycloud/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanag
er-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/mycloud/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/mycloud/hadoop-2.2.0/contrib/capacity-scheduler/*.jar:/mycloud/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.8.0_05
************************************************************/
14/06/15 22:47:28 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /mycloud/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/15 22:47:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-64625d97-d90d-490c-9d8d-f09a0735676d
14/06/15 22:47:29 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/06/15 22:47:29 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/06/15 22:47:29 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/06/15 22:47:29 INFO util.GSet: Computing capacity for map BlocksMap
14/06/15 22:47:29 INFO util.GSet: VM type = 64-bit
14/06/15 22:47:29 INFO util.GSet: 2.0% max memory = 966.7 MB
14/06/15 22:47:29 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/06/15 22:47:29 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/06/15 22:47:29 INFO blockmanagement.BlockManager: defaultReplication = 3
14/06/15 22:47:29 INFO blockmanagement.BlockManager: maxReplication = 512
14/06/15 22:47:29 INFO blockmanagement.BlockManager: minReplication = 1
14/06/15 22:47:29 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/06/15 22:47:29 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/06/15 22:47:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/06/15 22:47:29 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/06/15 22:47:29 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
14/06/15 22:47:29 INFO namenode.FSNamesystem: supergroup = supergroup
14/06/15 22:47:29 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/06/15 22:47:29 INFO namenode.FSNamesystem: Determined nameservice ID: nnode
14/06/15 22:47:29 INFO namenode.FSNamesystem: HA Enabled: false
14/06/15 22:47:29 INFO namenode.FSNamesystem: Append Enabled: true
14/06/15 22:47:29 INFO util.GSet: Computing capacity for map INodeMap
14/06/15 22:47:29 INFO util.GSet: VM type = 64-bit
14/06/15 22:47:29 INFO util.GSet: 1.0% max memory = 966.7 MB
14/06/15 22:47:29 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/06/15 22:47:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/06/15 22:47:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/06/15 22:47:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/06/15 22:47:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/06/15 22:47:29 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/06/15 22:47:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/06/15 22:47:29 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/06/15 22:47:29 INFO util.GSet: VM type = 64-bit
14/06/15 22:47:29 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
14/06/15 22:47:29 INFO util.GSet: capacity = 2^15 = 32768 entries
14/06/15 22:47:29 INFO common.Storage: Storage directory /mycloud/hadoop-2.2.0/temp/dfs/name has been successfully formatted.
14/06/15 22:47:29 INFO namenode.FSImage: Saving image file /mycloud/hadoop-2.2.0/temp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/06/15 22:47:29 INFO namenode.FSImage: Image file /mycloud/hadoop-2.2.0/temp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/06/15 22:47:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/06/15 22:47:29 INFO util.ExitUtil: Exiting with status 0
14/06/15 22:47:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cloud01/192.168.214.171
************************************************************/
[root@cloud01 hadoop-2.2.0]#
Finally, it worked. Second format run (startup banner and classpath omitted):
************************************************************/
14/06/17 23:40:23 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /mycloud/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/06/17 23:40:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-47fe8dae-be4f-4f25-a542-1b43a720752b
14/06/17 23:40:24 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/06/17 23:40:24 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/06/17 23:40:24 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/06/17 23:40:24 INFO util.GSet: Computing capacity for map BlocksMap
14/06/17 23:40:24 INFO util.GSet: VM type = 64-bit
14/06/17 23:40:24 INFO util.GSet: 2.0% max memory = 966.7 MB
14/06/17 23:40:24 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/06/17 23:40:24 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/06/17 23:40:24 INFO blockmanagement.BlockManager: defaultReplication = 3
14/06/17 23:40:24 INFO blockmanagement.BlockManager: maxReplication = 512
14/06/17 23:40:24 INFO blockmanagement.BlockManager: minReplication = 1
14/06/17 23:40:24 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/06/17 23:40:24 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/06/17 23:40:24 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/06/17 23:40:24 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/06/17 23:40:24 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
14/06/17 23:40:24 INFO namenode.FSNamesystem: supergroup = supergroup
14/06/17 23:40:24 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/06/17 23:40:24 INFO namenode.FSNamesystem: Determined nameservice ID: ns
14/06/17 23:40:24 INFO namenode.FSNamesystem: HA Enabled: true
14/06/17 23:40:24 INFO namenode.FSNamesystem: Append Enabled: true
14/06/17 23:40:24 INFO util.GSet: Computing capacity for map INodeMap
14/06/17 23:40:24 INFO util.GSet: VM type = 64-bit
14/06/17 23:40:24 INFO util.GSet: 1.0% max memory = 966.7 MB
14/06/17 23:40:24 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/06/17 23:40:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/06/17 23:40:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/06/17 23:40:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/06/17 23:40:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/06/17 23:40:24 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/06/17 23:40:24 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/06/17 23:40:24 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/06/17 23:40:24 INFO util.GSet: VM type = 64-bit
14/06/17 23:40:24 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
14/06/17 23:40:24 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /mycloud/hadoop-2.2.0/temp/dfs/name ? (Y or N) y
14/06/17 23:40:29 INFO common.Storage: Storage directory /mycloud/hadoop-2.2.0/temp/dfs/name has been successfully formatted.
14/06/17 23:40:29 INFO namenode.FSImage: Saving image file /mycloud/hadoop-2.2.0/temp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/06/17 23:40:29 INFO namenode.FSImage: Image file /mycloud/hadoop-2.2.0/temp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/06/17 23:40:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/06/17 23:40:29 INFO util.ExitUtil: Exiting with status 0
14/06/17 23:40:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cloud01/192.168.214.171
************************************************************/