
Compiling 32-bit Hadoop from source for 64-bit operating system compatibility

If you have never set up a cluster you may not have noticed that there is no official 64-bit Hadoop release. Before installing Hadoop on a 64-bit machine we need to compile the Hadoop source package ourselves, otherwise the native libraries under lib/native cannot be used. (Some people may say Hadoop does not care about the bitness of the operating system; the Java jars don't, but the native libraries do. Below I share the mishaps from my first attempt at compiling.)
What happens if you skip the compile? Here is what I ran into.

Problem encountered:
[root@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


14/07/17 17:07:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `./in': No such file or directory
Finding the cause:
Check the local native library:
[root@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 32-bit LSB shared object, 
 Intel 80386, version 1 (SYSV), dynamically linked, not stripped
This is 32-bit Hadoop installed on a 64-bit Linux system. The native library was built for a different architecture, so it cannot be loaded.
Sadly, the freshly installed cluster is unusable.
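A quick way to confirm the mismatch is to compare the kernel architecture with the library's ELF class; if uname -m reports x86_64 while file reports a 32-bit ELF object, the native library cannot be loaded:
[root@db96 hadoop]# uname -m          # x86_64 on a 64-bit kernel, i686/i386 on a 32-bit one
[root@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0    # shows the ELF class of the shipped library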

[Preparing the build environment]
First re-point the yum repositories: I downloaded version-groups.conf from the web into /etc/yum/ and removed the system's original repo files.
1. Install the required packages:
[root@db99 data]# yum install autoconf automake libtool cmake ncurses-devel openssl-devel gcc* --nogpgcheck
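Before moving on, it is worth a quick check that the toolchain actually landed on the PATH; these are just the packages installed above:
[root@db99 data]# gcc --version       # C/C++ compiler used for the native libraries
[root@db99 data]# cmake --version     # drives the native (JNI) build
[root@db99 data]# which autoconf automake libtool    # autotools used by parts of the native build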


2. Install Maven: download and extract it. (Do not use a version newer than this one; the tarball is available at the link below.)
http://maven.apache.org/download.cgi  // download the corresponding tarball
apache-maven-3.2.1-bin.tar
[root@db99 ~]# tar -xvf apache-maven-3.2.1-bin.tar
[root@db99 ~]# ln -s /usr/local/apache-maven-3.2.1/ /usr/local/maven
[root@db99 local]# vim /etc/profile  // add the environment variables
export MAVEN_HOME=/usr/local/maven
export PATH=$MAVEN_HOME/bin:$PATH 
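After editing /etc/profile, reload it and confirm Maven resolves from the new symlink:
[root@db99 local]# source /etc/profile
[root@db99 local]# mvn -version       # should report Apache Maven 3.2.1 with Maven home /usr/local/maven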


3. Install protobuf (do not change the version; the tarball is available at the link below):
https://code.google.com/p/protobuf/downloads/detail?name=protobuf-2.5.0.tar.gz
Download protobuf-2.5.0.tar.gz and extract it.
[root@db99 protobuf-2.5.0]# pwd
/root/protobuf-2.5.0
[root@db99 protobuf-2.5.0]# ./configure --prefix=/usr/local/protoc/
[root@db99 protobuf-2.5.0]# make
[root@db99 protobuf-2.5.0]# make check
[root@db99 protobuf-2.5.0]# make install
Run protoc --version from the bin directory:
libprotoc 2.5.0
The installation succeeded.
Add the environment variables:
vi /etc/profile
export MAVEN_HOME=/usr/local/maven
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:/usr/local/protoc/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
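Reload the profile and make sure protoc is reachable from any directory; the Hadoop build aborts early if it cannot find protoc 2.5.0 on the PATH:
[root@db99 ~]# source /etc/profile
[root@db99 ~]# which protoc           # should point to /usr/local/protoc/bin/protoc
[root@db99 ~]# protoc --version       # should print libprotoc 2.5.0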

4. Compile Hadoop (extracting the Hadoop source tarball gives the following layout):
[root@db99 release-2.2.0]# pwd
/data/release-2.2.0
[root@db99 release-2.2.0]# ls
BUILDING.txt       hadoop-common-project     hadoop-maven-plugins  hadoop-tools
dev-support        hadoop-dist               hadoop-minicluster    hadoop-yarn-project
hadoop-assemblies  hadoop-hdfs-project       hadoop-project        pom.xml
hadoop-client      hadoop-mapreduce-project  hadoop-project-dist
[root@db99 release-2.2.0]# mvn package -Pdist,native -DskipTests -Dtar    (read on before starting the build; this is the Maven build command)
The build takes quite a while, roughly one hour.
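On machines short on memory the build itself can also fail with OutOfMemoryError. A common workaround (my own addition, not part of the error described below) is to give Maven a larger heap before launching the build; the sizes here are only a guess and should be adjusted to your machine:
[root@db99 release-2.2.0]# export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=512m"   # hypothetical sizes; MaxPermSize applies to JDK 6/7
[root@db99 release-2.2.0]# mvn package -Pdist,native -DskipTests -Dtar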
If the following error appears:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hadoop-auth: Compilation failure: Compilation failure:
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[88,11] error: cannot access AbstractLifeCycle
[ERROR] class file for org.mortbay.component.AbstractLifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[96,29] error: cannot access LifeCycle
[ERROR] class file for org.mortbay.component.LifeCycle not found
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[98,10] error: cannot find symbol
[ERROR] symbol:   method start()
[ERROR] location: variable server of type Server
[ERROR] /home/hduser/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[104,12] error: cannot find symbol
[ERROR] -> [Help 1]


You need to modify hadoop-common-project/hadoop-auth/pom.xml under the source tree:
[root@db99 release-2.2.0]# vim /data/release-2.2.0/hadoop-common-project/hadoop-auth/pom.xml 
Add the following after line 55:
 56     <dependency>
 57         <groupId>org.mortbay.jetty</groupId>
 58         <artifactId>jetty-util</artifactId>
 59         <scope>test</scope>                                                                          
 60     </dependency>
Save and exit, then rebuild.
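After saving the pom.xml change, the build does not have to start over from scratch. One option (an assumption on my part, not from the original run) is to resume from the failed module with Maven's --resume-from flag; rerunning the full command from step 4 works just as well:
[root@db99 release-2.2.0]# mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-auth   # resume the reactor from the hadoop-auth module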
The build finally succeeds:
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Building jar: /data/release-2.2.0/hadoop-minicluster/target/hadoop-minicluster-2.2.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [  1.386 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [  1.350 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [  2.732 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.358 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [  2.048 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [  3.450 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 16.114 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 13.317 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [05:22 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 16.925 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.044 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:51 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 28.601 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 27.589 s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  3.966 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.044 s]
[INFO] hadoop-yarn ....................................... SUCCESS [ 52.846 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 41.700 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [ 25.945 s]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.105 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [  8.436 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 15.659 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  3.647 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 12.495 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [  0.684 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [  5.266 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.102 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  2.666 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.093 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 20.092 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  2.783 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.225 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 36.636 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 16.645 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  3.058 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [  9.441 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  5.482 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [  7.615 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  2.473 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  6.183 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [  6.454 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [  4.802 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 27.635 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  2.850 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  6.092 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  4.742 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  3.155 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  3.317 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [  9.791 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  2.680 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.036 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [ 20.765 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [  6.476 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.215 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:32 min
[INFO] Finished at: 2014-07-18T01:18:24+08:00
[INFO] Final Memory: 117M/314M
[INFO] ------------------------------------------------------------------------


The compiled distribution now sits under hadoop-dist/target/hadoop-2.2.0/ in the source tree.
Copy hadoop-2.2.0 to the installation directory /usr/local/, update its configuration files, reformat the namenode, and start the cluster.
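A minimal sketch of that deployment, assuming the source tree is at /data/release-2.2.0 and HADOOP_HOME is /usr/local/hadoop (copy your existing *-site.xml files into etc/hadoop before formatting, and note that formatting wipes existing HDFS metadata):
[root@db96 ~]# cp -r /data/release-2.2.0/hadoop-dist/target/hadoop-2.2.0 /usr/local/
[root@db96 ~]# ln -sfn /usr/local/hadoop-2.2.0 /usr/local/hadoop     # keep HADOOP_HOME pointing at the new build
[root@db96 ~]# hdfs namenode -format                                 # reformat; destroys any existing HDFS data
[root@db96 ~]# /usr/local/hadoop/sbin/start-dfs.sh
[root@db96 ~]# /usr/local/hadoop/sbin/start-yarn.sh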
At this point the native-library warning no longer appears and the cluster is usable:
[root@db96 hadoop]# hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


put: `.': No such file or directory
[root@db96 hadoop]# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 
/usr/local/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped


Test: upload a file, download a file, and view the uploaded file's contents:


[root@db96 ~]# cat wwn.txt 
# This is a text txt
# by coco
# 2014-07-18
[root@db96 ~]# hdfs dfs -mkdir /test
[root@db96 ~]# hdfs dfs -put wwn.txt /test
[root@db96 ~]# hdfs dfs -cat /test/wwn.txt
[root@db96 ~]# hdfs dfs -get /test/wwn.txt /tmp
[root@db96 hadoop]# hdfs dfs -rm /test/wwn.txt
[root@db96 tmp]# ll
总用量 6924
-rw-r--r-- 1 root root      70 7月  18 11:50 wwn.txt
[root@db96 ~]# hadoop dfs -ls /test           
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


Found 2 items
-rw-r--r--   2 root supergroup    6970105 2014-07-18 11:44 /test/gc_comweight.txt
-rw-r--r--   2 root supergroup         59 2014-07-18 14:56 /test/hello.txt
At this point our HDFS filesystem works normally.
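As an extra end-to-end check (not part of the original test, and assuming the examples jar bundled with the 2.2.0 distribution), a word count job can be run over the uploaded directory:
[root@db96 ~]# hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /test /test-out
[root@db96 ~]# hdfs dfs -cat /test-out/part-r-00000    # per-word counts for the files under /test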









Following the steps above for compiling Hadoop 2.2.0 on 64-bit CentOS, I then compiled Hadoop 2.2 on RHEL 6.2:
cd hadoop-2.2.0-src
mvn package -DskipTests -Pdist,native -Dtar
After roughly 10 minutes the build failed with:
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-project: Execution ……………………

Recompile with the following command, skipping javadoc generation:
mvn package -DskipTests -Pdist,native -Dtar -Dmaven.javadoc.skip=true

After about 40 minutes the build completed:
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-minicluster ---
[INFO] No sources in project. Archive not created.
[INFO]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-minicluster ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
[INFO] Skipping javadoc generation
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [  2.978 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [  8.844 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [01:58 min]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.616 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 43.968 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 46.198 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [16:14 min]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 11.632 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [04:25 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 54.758 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.055 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [02:11 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 11.402 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [01:19 min]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  3.266 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.032 s]
[INFO] hadoop-yarn ....................................... SUCCESS [01:21 min]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 43.147 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [01:00 min]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.084 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [  4.851 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 42.176 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  0.837 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [  5.910 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [  1.097 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [  1.100 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.083 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  1.529 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.110 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [  7.229 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  0.806 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.310 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 18.240 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [  3.875 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  0.911 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [  2.599 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  1.372 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [  4.471 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  0.676 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  1.054 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [  7.665 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [  1.322 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 21.522 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  0.677 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  1.394 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  2.027 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  0.727 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  1.062 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [  9.102 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  4.286 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.024 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [  8.841 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 16.353 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.212 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35:15 min
[INFO] Finished at: 2014-11-25T22:43:39+08:00
[INFO] Final Memory: 95M/368M
[INFO] ------------------------------------------------------------------------




