
Compiling and Installing Hadoop 2.6.0-cdh5.4.1 from Source

The steps here apply broadly and are essentially the same as for compiling Apache Hadoop, since CDH's Hadoop is derived from the community version. This article therefore also fits any commercial Hadoop distribution based on Apache Hadoop, such as Cloudera's CDH and Hortonworks' HDP. Let's get started.

1. Environment preparation (CentOS 6.x; other distributions are largely the same)

(1) Install the build dependencies via yum: sudo yum install -y snappy snappy-devel autoconf automake libtool git gcc gcc-c++ make cmake openssl-devel ncurses-devel bzip2-devel
(2) Install JDK 1.7+
(3) Install Maven 3.0+
(4) Install Ant 1.8+
(5) Install protobuf-2.5.0.tar.gz
  Installation example:
  cd /home/search
  tar -zxvf protobuf-2.5.0.tar.gz
  cd /home/search/protobuf-2.5.0
  ./configure --prefix=/home/search/protobuf (an install directory of your choice; the default is /usr/local)
  make && make install
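
To verify the installation before moving on (a quick sanity check, assuming the /home/search/protobuf prefix chosen above):
  export PATH=/home/search/protobuf/bin:$PATH
  protoc --version (should print libprotoc 2.5.0)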

(6) Install snappy-1.1.0.tar.gz (optional; only needed if the compiled Hadoop should support Snappy compression)
  Installation example:
  cd /home/search
  tar -zxvf snappy-1.1.0.tar.gz
  cd /home/search/snappy-1.1.0
  ./configure --prefix=/home/search/snappy (an install directory of your choice; the default is /usr/local)
make && make install
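To confirm the library landed where expected (assuming the /home/search/snappy prefix above):
  ls /home/search/snappy/lib (should list libsnappy.so and related files)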
(7) Install hadoop-snappy
Git repository:
git clone https://github.com/electrum/hadoop-snappy.git
Installation example:
After the download completes:
cd hadoop-snappy
Run the Maven package command (requires step 6):
mvn package -Dsnappy.prefix=/home/search/snappy
After a successful build, the compiled Snappy native libraries live under hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib, along with hadoop-snappy-0.0.1-SNAPSHOT.jar. Once Hadoop itself has been compiled, copy this jar into $HADOOP_HOME/lib.
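
For reference, once Hadoop has been built (section 3 below), that copy might look like this, assuming $HADOOP_HOME points at the installed Hadoop directory:
  cp hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib/hadoop-snappy-0.0.1-SNAPSHOT.jar $HADOOP_HOME/lib/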

The packages used above can be downloaded from Baidu Pan: http://pan.baidu.com/s/1mBjZ4

2. Download and compile Hadoop 2.6.0
Download the CDH Hadoop 2.6.0 source:
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.4.1-src.tar.gz
Extract it:
tar -zxvf hadoop-2.6.0-cdh5.4.1-src.tar.gz
Enter the extracted root directory and run the build command below; it bundles the Snappy library into Hadoop's native libraries, so the result can run on any machine:

mvn clean package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=(the lib directory produced by the hadoop-snappy build) -Dbundle.snappy

Some warnings and exceptions may appear along the way and can usually be ignored. If the build aborts with an error, just re-run the command until it succeeds. Build time depends largely on your network speed; expect roughly 40 minutes.
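
If the build dies with an out-of-memory error, raising Maven's heap before retrying usually helps (this is the setting suggested in Hadoop's own BUILDING.txt):
  export MAVEN_OPTS="-Xms256m -Xmx512m"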




3. Set up the Hadoop cluster
(1) Copy the compiled tarball, located at hadoop-2.6.0-cdh5.4.1/hadoop-dist/target/hadoop-2.6.0-cdh5.4.1.tar.gz, to the installation directory
(2) Extract it and rename the directory: mv hadoop-2.6.0-cdh5.4.1 hadoop
(3) Enter the hadoop directory and run bin/hadoop checknative -a to check native library support



(4) Configure the Hadoop-related environment variables
#hadoop
export HADOOP_HOME=/home/search/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export CLASSPATH=.:$CLASSPATH:$HADOOP_COMMON_HOME:$HADOOP_COMMON_HOME/lib:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME
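
These exports only apply to the current shell. To make them permanent, append them to a profile file and reload it, for example:
  vi /etc/profile (or ~/.bashrc; paste the exports above at the end)
  source /etc/profile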

(5) Choose a data root directory (the config files below use /ROOT/tmp/data) and create three directories under it:
hadooptmp (Hadoop temporary data)
nd (NameNode data)
dd (DataNode data)
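
Creating all three in one command (assuming the /ROOT/tmp/data root used by the configs):
  mkdir -p /ROOT/tmp/data/hadooptmp /ROOT/tmp/data/nd /ROOT/tmp/data/dd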
(6) Enter the hadoop/etc/hadoop directory and edit the following files in turn.
slaves should contain:

hadoop1
hadoop2
hadoop3



core-site.xml should contain:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:8020</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/ROOT/tmp/data/hadooptmp</value>
  </property>

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
    <description>Number of minutes between trash checkpoints.
    If zero, the trash feature is disabled.</description>
  </property>
</configuration>


hdfs-site.xml should contain:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///ROOT/tmp/data/nd</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///ROOT/tmp/data/dd</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>20</value>
  </property>

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>65535</value>
  </property>
</configuration>


mapred-site.xml should contain:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hadoop1:8021</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
  </property>
  <property>
    <name>mapred.max.maps.per.node</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.max.reduces.per.node</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1408</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1126M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2816</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2252M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
</configuration>

yarn-site.xml should contain:
<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop1:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR
    ,$HADOOP_COMMON_HOME/share/hadoop/common/*
    ,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*
    ,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*
    ,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*
    ,$YARN_HOME/share/hadoop/yarn/*</value>
  </property>

  <!-- Configurations for NodeManager -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>5632</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1408</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>5632</value>
  </property>
</configuration>
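
A note on how the memory figures above fit together: yarn.nodemanager.resource.memory-mb (5632 MB) is exactly 4 × yarn.scheduler.minimum-allocation-mb (1408 MB), so each NodeManager can host up to four minimum-sized containers. mapreduce.map.memory.mb (1408) and mapreduce.reduce.memory.mb (2816 = 2 × 1408) are whole multiples of that minimum, and each JVM heap is set to roughly 80% of its container (1126 ≈ 0.8 × 1408, 2252 ≈ 0.8 × 2816), leaving headroom for non-heap memory. If you resize for your own hardware, keeping these ratios is a sensible starting point.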

(7) Distribute the entire hadoop directory and the data directory to every node with scp, for example:
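(a sketch, assuming the hostnames from the slaves file and identical paths on every node)
ssh hadoop2 "mkdir -p /ROOT/tmp" && scp -r /ROOT/tmp/data hadoop2:/ROOT/tmp/
ssh hadoop3 "mkdir -p /ROOT/tmp" && scp -r /ROOT/tmp/data hadoop3:/ROOT/tmp/
scp -r /home/search/hadoop hadoop2:/home/search/
scp -r /home/search/hadoop hadoop3:/home/search/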
(8) Format HDFS
Run bin/hadoop namenode -format (or, equivalently, bin/hdfs namenode -format)
(9) Start the cluster
sbin/start-dfs.sh (starts HDFS)
sbin/start-yarn.sh (starts YARN)
sbin/mr-jobhistory-daemon.sh start historyserver (starts the job history server)
(10) Verify the cluster status
Check with jps; you should see daemons such as NameNode, SecondaryNameNode, ResourceManager and JobHistoryServer on the master, plus DataNode and NodeManager on each worker.
Check the web UIs:
http://hadoop1:50070
http://hadoop1:8088
(11) Run a benchmark
Test the map phase:
bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.1.jar randomwriter rand
Test the reduce phase:
bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.1.jar sort rand sort-rand
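
To confirm both jobs actually produced data, list their output directories (the paths are relative to the running user's HDFS home):
  bin/hadoop fs -ls rand
  bin/hadoop fs -ls sort-rand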

Hadoop official documentation: http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html



Finally, you are welcome to scan the QR code and follow my WeChat public account 我是攻城师 (woshigcs), so we can learn, improve, and exchange ideas together!
The account shares content on search, big data, and Internet technology, and is a friendly little home for technical discussion. Feel free to leave a message anytime; all are welcome!








Comments
#1 cj7749910 2015-12-23
Hello, I followed your compilation and installation steps, but I hit the problem below when testing. What causes it, and how can I fix it?
15/12/23 16:01:40 INFO client.RMProxy: Connecting to ResourceManager at hadpmaster/192.168.193.62:8032
Running 30 maps.
Job started: Wed Dec 23 16:01:42 CST 2015
15/12/23 16:01:42 INFO client.RMProxy: Connecting to ResourceManager at hadpmaster/192.168.193.62:8032
15/12/23 16:01:42 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.net.ConnectException: Call From hadpmaster/192.168.193.62 to hadpmaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
java.net.ConnectException: Call From hadpmaster/192.168.193.62 to hadpmaster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
        at org.apache.hadoop.ipc.Client.call(Client.java:1476)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2095)
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1214)
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1409)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:269)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
        at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:283)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:294)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:708)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1525)
        at org.apache.hadoop.ipc.Client.call(Client.java:1442)
        ... 43 more

