Writing this up took real effort; please credit the source when reposting (http://shihlei.iteye.com/blog/2084711)!
Note
This article builds a distributed Hadoop CDH 5.0.1 cluster with both NameNode and ResourceManager HA; the Web Application Proxy and the Job History Server are not set up.
Word version: see the attachment!
I. Overview
(I) HDFS
1) Basic architecture
(1) NameNode (Master)
- Namespace management: supports file-system-style operations on the directories, files, and blocks in HDFS, such as create, modify, delete, and listing files and directories.
- Block storage management.
(2) DataNode (Slave)
Stores and retrieves blocks as instructed by the NameNode and clients, and periodically reports back to the NameNode which blocks of which files it holds.
2) HA architecture
Two nodes, an Active NameNode and a Standby NameNode, remove the single point of failure: they share state through JournalNodes, while ZKFC elects the active node, monitors its health, and fails over automatically.
(1) Active NameNode:
Accepts and serves client RPC requests, writes both its own edit log and the edit log on shared storage, and receives block reports, block location updates, and heartbeats from the DataNodes.
(2) Standby NameNode:
Also receives block reports, block location updates, and heartbeats from the DataNodes, and replays the edit-log entries it reads from shared storage, so its metadata (namespace information plus the block-locations map) stays in sync with the Active NameNode's. That makes the Standby a hot standby: the moment it is switched to Active, it can serve NameNode traffic.
(3) JournalNode:
Carries the shared edit log between the Active and Standby NameNodes. It is a group of JournalNode processes, odd in number, running a Paxos-style protocol for high availability. The quorum journal is the only sharing mechanism CDH5 supports (CDH4 also offered NFS-based sharing).
(4) ZKFC:
Watches the NameNode process and performs automatic failover.
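Once the cluster is running, each NameNode's HA role can be checked from the shell. A small sketch, using the nn1/nn2 IDs that hdfs-site.xml defines later in this article:

    # Prints "active" or "standby" for the given NameNode ID
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2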
(II) YARN
1) Basic architecture
(1) ResourceManager (RM)
Accepts client job submissions, receives and tracks resource reports from the NodeManagers (NM), allocates and schedules resources, and launches and monitors the ApplicationMasters (AM).
(2) NodeManager
Manages the resources of a single node: launches containers to run tasks, reports resource and container status to the RM, and reports task progress to the AM.
(3) ApplicationMaster
Manages and schedules the tasks of a single application (job): requests resources from the RM, instructs NMs to launch containers, and receives task status from the NMs.
(4) Web Application Proxy
Shields YARN from web-based attacks. It is part of the ResourceManager by default but can be configured to run as a separate process. Access to the ResourceManager web UI is based on trusted users; when an Application Master runs as an untrusted user, the links it supplies to the ResourceManager may be untrusted, and the Web Application Proxy blocks such connections from reaching the RM.
(5) Job History Server
When a NodeManager starts, it initializes the LogAggregationService, which collects each container's local logs when the container finishes and stores them under a configured HDFS directory. The ApplicationMaster writes job-history information to a temporary job-history directory in HDFS and moves it to the final directory on completion, which also enables job recovery. The History Server exposes web and RPC services, so users can query job information through either.
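Pulling those aggregated logs back is a one-liner, assuming log aggregation has been switched on (yarn.log-aggregation-enable, which this walkthrough does not configure) and substituting a real application ID for the placeholder below:

    # Fetch the aggregated container logs of a finished application from HDFS
    yarn logs -applicationId application_1403500000000_0001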
2) HA architecture
ResourceManager HA consists of an Active node and a Standby node, persisting internal state and key application data and markers through an RMStateStore. The available RMStateStore implementations are the in-memory MemoryRMStateStore, the filesystem-backed FileSystemRMStateStore, and the ZooKeeper-backed ZKRMStateStore.
The ResourceManager HA architecture largely mirrors NameNode HA: state is shared through the RMStateStore, and the failover controller runs as a service inside the ResourceManager process rather than as a standalone ZKFC daemon.
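The role of each ResourceManager can be queried the same way as for the NameNodes; rm1/rm2 are the IDs configured in yarn-site.xml below (the rmadmin HA commands should be available in this CDH5 release, since it ships RM HA):

    # Prints "active" or "standby" for the given ResourceManager ID
    yarn rmadmin -getServiceState rm1
    yarn rmadmin -getServiceState rm2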
II. Planning
(I) Versions

| Component | Version | Notes |
| --- | --- | --- |
| JRE | java version "1.7.0_60"; Java(TM) SE Runtime Environment (build 1.7.0_60-b19); Java HotSpot(TM) Client VM (build 24.60-b09, mixed mode) | |
| Hadoop | hadoop-2.3.0-cdh5.0.1.tar.gz | Main distribution tarball |
| Zookeeper | zookeeper-3.4.5-cdh5.0.1.tar.gz | Coordination service for hot failover and for YARN state storage |
(II) Host plan

| IP | Host | Deployed modules | Processes |
| --- | --- | --- | --- |
| 8.8.8.11 | Hadoop-NN-01 | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
| 8.8.8.12 | Hadoop-NN-02 | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
| 8.8.8.13 | Hadoop-DN-01 / Zookeeper-01 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| 8.8.8.14 | Hadoop-DN-02 / Zookeeper-02 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| 8.8.8.15 | Hadoop-DN-03 / Zookeeper-03 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
What each process does:
- NameNode
- ResourceManager
- DFSZKFC: DFS ZooKeeper Failover Controller; promotes the Standby NameNode to Active
- DataNode
- NodeManager
- JournalNode: node service that shares the NameNode edit log (with NFS-based sharing, this process and all of its startup configuration could be omitted)
- QuorumPeerMain: the main ZooKeeper process
(III) Directory plan

| Name | Path |
| --- | --- |
| $HADOOP_HOME | /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1 |
| Data | $HADOOP_HOME/data |
| Log | $HADOOP_HOME/logs |
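The tarball does not ship these data and log directories; the scripts and daemons create most of them on demand, but pre-creating them on every node makes the planned layout explicit (assumes HADOOP_HOME is exported as above):

    # Pre-create the planned data and log directories on each node
    mkdir -p "$HADOOP_HOME/data" "$HADOOP_HOME/logs"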
III. Environment Preparation
1) Disable the firewall
As root:

    [root@CentOS-Cluster-01 hadoop-2.3.0-cdh5.0.1]# service iptables stop
    iptables: Flushing firewall rules:                 [  OK  ]
    iptables: Setting chains to policy ACCEPT: filter  [  OK  ]
    iptables: Unloading modules:                       [  OK  ]
Verify:

    [root@CentOS-Cluster-01 hadoop-2.3.0-cdh5.0.1]# service iptables status
    iptables: Firewall is not running.
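Stopping the service does not survive a reboot on CentOS 6; to keep the firewall from coming back, also disable it at boot:

    # Keep iptables from starting again on the next boot
    [root@CentOS-Cluster-01 ~]# chkconfig iptables off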
2) Install the JRE: omitted.
3) Install Zookeeper: see "Zookeeper-3.4.5-cdh5.0.1 standalone mode, replicated ...".
4) Set up SSH trust:
(1) Generate a key pair on Hadoop-NN-01:
    [zero@CentOS-Cluster-01 ~]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/zero/.ssh/id_rsa):
    Created directory '/home/zero/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/zero/.ssh/id_rsa.
    Your public key has been saved in /home/zero/.ssh/id_rsa.pub.
    The key fingerprint is:
    28:0a:29:1d:98:56:55:db:ec:83:93:56:8a:0f:6c:ea zero@CentOS-Cluster-01
    The key's randomart image is:
    +--[ RSA 2048]----+
    |      .....      |
    |       o. +      |
    |o..   . +        |
    |.o ..  ..*       |
    |+ . .=.*So       |
    |.. .o.+ . .      |
    |  ..  .          |
    |      .          |
    |       E         |
    +-----------------+
(2) Distribute the public key:

    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-NN-01
    The authenticity of host 'hadoop-nn-01 (8.8.8.11)' can't be established.
    RSA key fingerprint is a6:11:09:49:8c:fe:b2:fb:49:d5:01:fa:13:1b:32:24.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-nn-01,8.8.8.11' (RSA) to the list of known hosts.
    puppet@hadoop-nn-01's password:
    Permission denied, please try again.
    puppet@hadoop-nn-01's password:
    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-NN-01
    zero@hadoop-nn-01's password:
    Now try logging into the machine, with "ssh 'zero@Hadoop-NN-01'", and check in:

      .ssh/authorized_keys

    to make sure we haven't added extra keys that you weren't expecting.
    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-NN-02
    (... omitted ...)
    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-DN-01
    (... omitted ...)
    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-DN-02
    (... omitted ...)
    [zero@CentOS-Cluster-01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub zero@Hadoop-DN-03
    (... omitted ...)
(3) Verify:

    [zero@CentOS-Cluster-01 ~]$ ssh Hadoop-NN-01
    Last login: Sun Jun 22 19:56:23 2014 from 8.8.8.1
    [zero@CentOS-Cluster-01 ~]$ exit
    logout
    Connection to Hadoop-NN-01 closed.
    [zero@CentOS-Cluster-01 ~]$ ssh Hadoop-NN-02
    Last login: Sun Jun 22 20:03:31 2014 from 8.8.8.1
    [zero@CentOS-Cluster-03 ~]$ exit
    logout
    Connection to Hadoop-NN-02 closed.
    [zero@CentOS-Cluster-01 ~]$ ssh Hadoop-DN-01
    Last login: Mon Jun 23 02:00:07 2014 from centos_cluster_01
    [zero@CentOS-Cluster-03 ~]$ exit
    logout
    Connection to Hadoop-DN-01 closed.
    [zero@CentOS-Cluster-01 ~]$ ssh Hadoop-DN-02
    Last login: Sun Jun 22 20:07:03 2014 from 8.8.8.1
    [zero@CentOS-Cluster-04 ~]$ exit
    logout
    Connection to Hadoop-DN-02 closed.
    [zero@CentOS-Cluster-01 ~]$ ssh Hadoop-DN-03
    Last login: Sun Jun 22 20:07:05 2014 from 8.8.8.1
    [zero@CentOS-Cluster-05 ~]$ exit
    logout
    Connection to Hadoop-DN-03 closed.
5) Configure /etc/hosts and distribute it:

    [root@CentOS-Cluster-01 zero]# vi /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    8.8.8.10 CentOS-StandAlone
    8.8.8.11 CentOS-Cluster-01 Hadoop-NN-01
    8.8.8.12 CentOS-Cluster-02 Hadoop-NN-02
    8.8.8.13 CentOS-Cluster-03 Hadoop-DN-01 Zookeeper-01
    8.8.8.14 CentOS-Cluster-04 Hadoop-DN-02 Zookeeper-02
    8.8.8.15 CentOS-Cluster-05 Hadoop-DN-03 Zookeeper-03
6) Configure environment variables: edit ~/.bashrc, then run source ~/.bashrc (a sketch of the needed entries follows).
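The original post does not show the entries themselves; a minimal sketch, using the paths from this article:

    # ~/.bashrc: Java and Hadoop locations used throughout this article
    export JAVA_HOME=/usr/runtime/java/jdk1.7.0_60
    export HADOOP_HOME=/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1
    # Put the hadoop/hdfs/yarn commands and the daemon scripts on PATH
    export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH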
IV. Installation
1) Unpack

    [zero@CentOS-Cluster-01 hadoop]$ tar -xvf hadoop-2.3.0-cdh5.0.1.tar.gz
2) Edit the configuration files
The files involved:
| File | Type | Purpose |
| --- | --- | --- |
| hadoop-env.sh | Bash script | Hadoop runtime environment variables |
| core-site.xml | XML | Hadoop core settings, such as I/O |
| hdfs-site.xml | XML | HDFS daemon settings: NN, JN, DN |
| yarn-env.sh | Bash script | YARN runtime environment variables |
| yarn-site.xml | XML | YARN framework settings |
| mapred-site.xml | XML | MapReduce properties |
| capacity-scheduler.xml | XML | YARN scheduler properties |
| container-executor.cfg | Cfg | YARN container-executor settings |
| mapred-queues.xml | XML | MapReduce queue settings |
| hadoop-metrics.properties | Java properties | Hadoop metrics configuration |
| hadoop-metrics2.properties | Java properties | Hadoop metrics2 configuration |
| slaves | Plain text | DataNode (slave) host list |
| exclude | Plain text | Hosts to decommission from the cluster |
| log4j.properties | Java properties | Logging configuration |
| configuration.xsl | XSL | Stylesheet for rendering the XML config files |
(1) Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
    #--------------------Java Env------------------------------
    export JAVA_HOME="/usr/runtime/java/jdk1.7.0_60"

    #--------------------Hadoop Env----------------------------
    #export HADOOP_PID_DIR=
    export HADOOP_PREFIX="/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1"

    #--------------------Hadoop Daemon Options-----------------
    #export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC ${HADOOP_NAMENODE_OPTS}"
    #export HADOOP_DATANODE_OPTS=

    #--------------------Hadoop Logs---------------------------
    #export HADOOP_LOG_DIR=
(2) Edit $HADOOP_HOME/etc/hadoop/core-site.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <!-- YARN needs fs.defaultFS to locate the NameNode URI -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
      </property>
      <!-- HDFS superuser group -->
      <property>
        <name>dfs.permissions.superusergroup</name>
        <value>zero</value>
      </property>
      <!-- ============================== Trash ============================== -->
      <property>
        <!-- How often (minutes) the checkpointer on the NameNode rolls the Current
             trash folder into a checkpoint; default 0 means "use fs.trash.interval" -->
        <name>fs.trash.checkpoint.interval</name>
        <value>0</value>
      </property>
      <property>
        <!-- How many minutes a .Trash checkpoint survives before deletion; the
             server-side setting takes precedence over the client's. Default 0 disables trash. -->
        <name>fs.trash.interval</name>
        <value>1440</value>
      </property>
    </configuration>
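With fs.trash.interval set to 1440, a shell delete stays recoverable for a day rather than taking effect immediately. A quick demonstration once the cluster is up (the file name is just an example):

    # Moves the file to /user/zero/.Trash/Current instead of deleting it outright
    hadoop fs -rm /tmp/example.txt
    # Force a trash checkpoint and purge checkpoints older than fs.trash.interval
    hadoop fs -expunge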
(3) Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <!-- Enable WebHDFS -->
      <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1/data/dfs/name</value>
        <description>Local directory where the NameNode stores the name table (fsimage); adjust to your environment</description>
      </property>
      <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
        <description>Local directory where the NameNode stores the transaction files (edits); adjust to your environment</description>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1/data/dfs/data</value>
        <description>Local directory where DataNodes store blocks; adjust to your environment</description>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <!-- Block size (the default) -->
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>
      <!-- ============================== HDFS HA ============================== -->
      <!-- Logical name of the nameservice -->
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>
      <property>
        <!-- NameNode IDs; this version supports at most two NameNodes -->
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>
      <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID] - RPC addresses -->
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>Hadoop-NN-01:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>Hadoop-NN-02:8020</value>
      </property>
      <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID] - HTTP addresses -->
      <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>Hadoop-NN-01:50070</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>Hadoop-NN-02:50070</value>
      </property>
      <!-- ================= NameNode edit-log sharing (for recovery) ================= -->
      <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
      </property>
      <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
      </property>
      <property>
        <!-- JournalNode quorum the QuorumJournalManager uses to store the edit log.
             Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>
             The port matches dfs.journalnode.rpc-address. -->
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://Hadoop-DN-01:8485;Hadoop-DN-02:8485;Hadoop-DN-03:8485/mycluster</value>
      </property>
      <property>
        <!-- Where each JournalNode stores its data -->
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1/data/dfs/jn</value>
      </property>
      <!-- ================= Client failover ================= -->
      <property>
        <!-- How DataNodes and clients identify and select the active NameNode -->
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      <!-- ================= NameNode fencing ================= -->
      <!-- Keeps a failed-over NameNode from coming back up and serving alongside the new active -->
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/zero/.ssh/id_rsa</value>
      </property>
      <property>
        <!-- Milliseconds before fencing is considered failed -->
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
      </property>
      <!-- ========== Automatic NameNode failover via ZKFC and ZooKeeper ========== -->
      <!-- Enables automatic failover backed by ZooKeeper and the ZKFC processes watching the NameNodes -->
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>
      </property>
      <property>
        <!-- ZooKeeper session timeout, in milliseconds -->
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
      </property>
    </configuration>
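Before starting anything, it is worth confirming the HA keys were picked up; getconf reads the effective configuration:

    # Should print "mycluster" and "nn1,nn2" respectively
    hdfs getconf -confKey dfs.nameservices
    hdfs getconf -confKey dfs.ha.namenodes.mycluster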
(4) Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh
    # Yarn Daemon Options
    #export YARN_RESOURCEMANAGER_OPTS
    #export YARN_NODEMANAGER_OPTS
    #export YARN_PROXYSERVER_OPTS
    #export HADOOP_JOB_HISTORYSERVER_OPTS

    # Yarn Logs
    export YARN_LOG_DIR="/home/zero/hadoop/hadoop-2.3.0-cdh5.0.1/logs"
(5) Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml
    <configuration>
      <!-- MapReduce applications -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <!-- ============================ JobHistory Server ============================ -->
      <!-- MapReduce JobHistory Server address; default port 10020 -->
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>0.0.0.0:10020</value>
      </property>
      <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
      </property>
    </configuration>
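This walkthrough leaves the Job History Server out, but the two addresses above are where it would listen; if you later decide to run it, the stock script starts it:

    # Start the MapReduce Job History Server (ports 10020/19888 as configured)
    mr-jobhistory-daemon.sh start historyserver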
(6) Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml
    <configuration>
      <!-- ================= NodeManager ================= -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <property>
        <description>Address where the localizer IPC is.</description>
        <name>yarn.nodemanager.localizer.address</name>
        <value>0.0.0.0:23344</value>
      </property>
      <property>
        <description>NM Webapp address.</description>
        <name>yarn.nodemanager.webapp.address</name>
        <value>0.0.0.0:23999</value>
      </property>
      <!-- ================= ResourceManager HA ================= -->
      <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>
      <!-- Run the failover controller embedded in the RM; works with the ZKRMStateStore to handle fencing -->
      <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
      </property>
      <!-- Cluster name, so the HA election is scoped to this cluster -->
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>
      <!-- The RM's own ID can be set explicitly per host (optional):
      <property>
        <name>yarn.resourcemanager.ha.id</name>
        <value>rm2</value>
      </property>
      -->
      <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
      </property>
      <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
      </property>
      <!-- ZKRMStateStore -->
      <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
      </property>
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>
      </property>
      <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>
      </property>
      <!-- Client RPC address (applications manager interface) -->
      <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>Hadoop-NN-01:23140</value>
      </property>
      <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>Hadoop-NN-02:23140</value>
      </property>
      <!-- AM RPC address (scheduler interface) -->
      <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>Hadoop-NN-01:23130</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>Hadoop-NN-02:23130</value>
      </property>
      <!-- RM admin interface -->
      <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>Hadoop-NN-01:23141</value>
      </property>
      <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>Hadoop-NN-02:23141</value>
      </property>
      <!-- NM RPC port for reaching the RM -->
      <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>Hadoop-NN-01:23125</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>Hadoop-NN-02:23125</value>
      </property>
      <!-- RM web application addresses -->
      <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>Hadoop-NN-01:23188</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>Hadoop-NN-02:23188</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>Hadoop-NN-01:23189</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>Hadoop-NN-02:23189</value>
      </property>
    </configuration>
(7) Edit slaves

    [zero@CentOS-Cluster-01 hadoop]$ vi slaves
    Hadoop-DN-01
    Hadoop-DN-02
    Hadoop-DN-03
3) Distribute the installation

    [zero@CentOS-Cluster-01 ~]$ scp -r /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1 zero@Hadoop-NN-02:/home/zero/hadoop/
    ...
    [zero@CentOS-Cluster-01 ~]$ scp -r /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1 zero@Hadoop-DN-01:/home/zero/hadoop/
    ...
    [zero@CentOS-Cluster-01 ~]$ scp -r /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1 zero@Hadoop-DN-02:/home/zero/hadoop/
    ...
    [zero@CentOS-Cluster-01 ~]$ scp -r /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1 zero@Hadoop-DN-03:/home/zero/hadoop/
    ...
4) Start HDFS
(1) Start the JournalNodes
Before formatting, the JournalNode process must be running on each JournalNode host:
    [zero@CentOS-Cluster-03 hadoop-2.3.0-cdh5.0.1]$ hadoop-daemon.sh start journalnode
    starting journalnode, logging to /home/zero/hadoop/hadoop-2.3.0-cdh5.0.1/logs/hadoop-zero-journalnode-CentOS-Cluster-03.out
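A quick sanity check on each JournalNode host before moving on:

    # The JournalNode (and ZooKeeper's QuorumPeerMain) should appear in jps output
    jps | grep -E 'JournalNode|QuorumPeerMain'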