
Hadoop 2.2.0 errors on a 64-bit operating system


The problem

[hadoop@hadoop01 input]$ hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /app/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
13/10/24 04:08:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `in': No such file or directory

 

 

 

Check the local library file

[hadoop@hadoop01 input]$ file /app/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0
/app/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
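To confirm the mismatch from the platform side, it also helps to check the OS and JVM (illustrative commands, not from the original post; output varies by machine):

uname -m        # prints x86_64 on a 64-bit OS
java -version   # the HotSpot banner ends with "64-Bit Server VM" on a 64-bit JDK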

 

 

 

It looks like a 32-bit vs. 64-bit mismatch:

 

http://mail-archives.apache.org/mod_mbox/hadoop-user/201208.mbox/%3C19AD42E3F64F0F468A305399D0DF39D92EA4521578@winops07.win.compete.com%3E

 

http://www.mail-archive.com/common-issues@hadoop.apache.org/msg52576.html

 

The operating system is 64-bit, but the software is 32-bit. A disaster... the freshly installed cluster cannot be used.

 

Solution: recompile Hadoop

The fix is to rebuild the Hadoop software from source:

 

Download the source code

 

The build machine must have internet access. If it does not, you can download the source on a networked machine, but the build itself also downloads dependencies, so the best approach is to find a networked machine of the same platform (a VM is fine), do the work below there, and copy the result back.

 

# svn checkout 'http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.2.0'

 

Everything is checked out here:

 

[hadoop@hadoop01 hadoop]$ ls
BUILDING.txt       hadoop-common-project     hadoop-maven-plugins  hadoop-tools
dev-support        hadoop-dist               hadoop-minicluster    hadoop-yarn-project
hadoop-assemblies  hadoop-hdfs-project       hadoop-project        pom.xml
hadoop-client      hadoop-mapreduce-project  hadoop-project-dist

 

Set up the build environment

 

1. Required packages

 

[root@hadoop01 /]# yum install svn
[root@hadoop01 ~]# yum install autoconf automake libtool cmake
[root@hadoop01 ~]# yum install ncurses-devel
[root@hadoop01 ~]# yum install openssl-devel
[root@hadoop01 ~]# yum install gcc*

 

2. Install Maven

 

Download and unpack:

 

http://maven.apache.org/download.cgi

 

 

 

[root@hadoop01 stable]# mv apache-maven-3.1.1 /usr/local/

 

Add /usr/local/apache-maven-3.1.1/bin to the PATH environment variable.
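For example, appended to /etc/profile or ~/.bashrc (a minimal sketch; the M2_HOME variable name is my own choice):

export M2_HOME=/usr/local/apache-maven-3.1.1
export PATH=$PATH:$M2_HOME/bin

Then reload the profile and verify with mvn -version.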

 

3. Install protobuf

 

Without protobuf installed, the build cannot finish; it fails like this:

 

[INFO] --- hadoop-maven-plugins:2.2.0:protoc (compile-protoc) @ hadoop-common ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": error=2, No such file or directory
[ERROR] stdout: []

 

……………………
[INFO] Apache Hadoop Main................................ SUCCESS [5.672s]
[INFO] Apache Hadoop Project POM......................... SUCCESS [3.682s]
[INFO] Apache Hadoop Annotations......................... SUCCESS [8.921s]
[INFO] Apache Hadoop Assemblies.......................... SUCCESS [0.676s]
[INFO] Apache Hadoop Project Dist POM.................... SUCCESS [4.590s]
[INFO] Apache Hadoop Maven Plugins....................... SUCCESS [9.172s]
[INFO] Apache Hadoop Auth................................ SUCCESS [10.123s]
[INFO] Apache Hadoop Auth Examples....................... SUCCESS [5.170s]
[INFO] Apache Hadoop Common.............................. FAILURE [1.224s]
[INFO] Apache Hadoop NFS................................. SKIPPED
[INFO] Apache Hadoop Common Project...................... SKIPPED
[INFO] Apache Hadoop HDFS................................ SKIPPED
[INFO] Apache Hadoop HttpFS.............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS............................ SKIPPED
[INFO] Apache Hadoop HDFS Project........................ SKIPPED

 

Installing protobuf

 

Download: https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz

(from https://code.google.com/p/protobuf/downloads/list)

 

[root@hadoop01 protobuf-2.5.0]# pwd
/soft/protobuf-2.5.0

 

Run the following commands in order:

 

./configure
make
make check
make install

[root@hadoop01 protobuf-2.5.0]# protoc --version
libprotoc 2.5.0
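One gotcha worth noting (an aside of my own, not from the original steps): with the default /usr/local prefix, protoc can fail at runtime complaining about a missing libprotoc shared library until the dynamic linker cache is refreshed:

[root@hadoop01 protobuf-2.5.0]# echo /usr/local/lib > /etc/ld.so.conf.d/protobuf.conf
[root@hadoop01 protobuf-2.5.0]# ldconfig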

 

4. CMake

 

CMake error during the build:

 

main:
   [mkdir] Created dir: /soft/hadoop/hadoop-tools/hadoop-pipes/target/native
    [exec] -- The C compiler identification is GNU
    [exec] -- The CXX compiler identification is GNU
    [exec] -- Check for working C compiler: /usr/bin/gcc
    [exec] -- Check for working C compiler: /usr/bin/gcc -- works
    [exec] -- Detecting C compiler ABI info
    [exec] -- Detecting C compiler ABI info - done
    [exec] -- Check for working CXX compiler: /usr/bin/c++
    [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
    [exec] -- Detecting CXX compiler ABI info
    [exec] -- Detecting CXX compiler ABI info - done
    [exec] CMake Error at /usr/share/cmake/Modules/FindOpenSSL.cmake:66 (MESSAGE):
    [exec]   Could NOT find OpenSSL
    [exec] Call Stack (most recent call first):
    [exec]   CMakeLists.txt:20 (find_package)
    [exec]
    [exec] -- Configuring incomplete, errors occurred!

 

[INFO] Apache Hadoop Gridmix............................. SUCCESS [12.062s]
[INFO] Apache Hadoop Data Join........................... SUCCESS [8.694s]
[INFO] Apache Hadoop Extras.............................. SUCCESS [6.877s]
[INFO] Apache Hadoop Pipes............................... FAILURE [5.295s]
[INFO] Apache Hadoop Tools Dist.......................... SKIPPED
[INFO] Apache Hadoop Tools............................... SKIPPED
[INFO] Apache Hadoop Distribution........................ SKIPPED
[INFO] Apache Hadoop Client.............................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster........................ SKIPPED

 

 

 

The fix is to install:

[root@hadoop01 ~]# yum install ncurses-devel
[root@hadoop01 ~]# yum install openssl-devel

 

 

 

Compile Hadoop

 

 

 

[hadoop@hadoop01 hadoop]$ pwd
/soft/hadoop
[hadoop@hadoop01 hadoop]$ ls
BUILDING.txt       hadoop-client          hadoop-hdfs-project       hadoop-minicluster   hadoop-tools
dev-support        hadoop-common-project  hadoop-mapreduce-project  hadoop-project       hadoop-yarn-project
hadoop-assemblies  hadoop-dist            hadoop-maven-plugins      hadoop-project-dist  pom.xml
[hadoop@hadoop01 hadoop]$ mvn package -Pdist,native -DskipTests -Dtar

 

The 2.2.0 source code as unpacked from the release tarball has a bug that needs patching before it will compile; otherwise building hadoop-auth fails with a missing-dependency error (this appears to be the known jetty-util test-dependency issue in hadoop-auth, fixed in later releases).

The fix is to edit the following pom file in the source tree:

hadoop-common-project/hadoop-auth/pom.xml

Open the pom and, around line 54, add the following dependencies (if they are not already there):

     <dependency>
       <groupId>org.mortbay.jetty</groupId>
       <artifactId>jetty-util</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>org.mortbay.jetty</groupId>
       <artifactId>jetty</artifactId>
       <scope>test</scope>
     </dependency>

 

Compilation is a very time-consuming job...

 

Here is the output of a successful build:

 

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main................................ SUCCESS [6.600s]
[INFO] Apache Hadoop Project POM......................... SUCCESS [3.974s]
[INFO] Apache Hadoop Annotations......................... SUCCESS [9.878s]
[INFO] Apache Hadoop Assemblies.......................... SUCCESS [0.856s]
[INFO] Apache Hadoop Project Dist POM.................... SUCCESS [4.750s]
[INFO] Apache Hadoop Maven Plugins....................... SUCCESS [8.720s]
[INFO] Apache Hadoop Auth................................ SUCCESS [10.107s]
[INFO] Apache Hadoop Auth Examples....................... SUCCESS [5.734s]
[INFO] Apache Hadoop Common.............................. SUCCESS [4:32.636s]
[INFO] Apache Hadoop NFS................................. SUCCESS [29.700s]
[INFO] Apache Hadoop Common Project...................... SUCCESS [0.090s]
[INFO] Apache Hadoop HDFS................................ SUCCESS [6:15.394s]
[INFO] Apache Hadoop HttpFS.............................. SUCCESS [1:09.238s]
[INFO] Apache Hadoop HDFS BookKeeper Journal............. SUCCESS [27.676s]
[INFO] Apache Hadoop HDFS-NFS............................ SUCCESS [13.954s]
[INFO] Apache Hadoop HDFS Project........................ SUCCESS [0.212s]
[INFO] hadoop-yarn....................................... SUCCESS [0.962s]
[INFO] hadoop-yarn-api................................... SUCCESS [1:48.066s]
[INFO] hadoop-yarn-common................................ SUCCESS [1:37.543s]
[INFO] hadoop-yarn-server................................ SUCCESS [4.301s]
[INFO] hadoop-yarn-server-common......................... SUCCESS [29.502s]
[INFO] hadoop-yarn-server-nodemanager.................... SUCCESS [36.593s]
[INFO] hadoop-yarn-server-web-proxy...................... SUCCESS [13.273s]
[INFO] hadoop-yarn-server-resourcemanager................ SUCCESS [30.612s]
[INFO] hadoop-yarn-server-tests.......................... SUCCESS [4.374s]
[INFO] hadoop-yarn-client................................ SUCCESS [14.115s]
[INFO] hadoop-yarn-applications.......................... SUCCESS [0.218s]
[INFO] hadoop-yarn-applications-distributedshell......... SUCCESS [9.871s]
[INFO] hadoop-mapreduce-client........................... SUCCESS [1.095s]
[INFO] hadoop-mapreduce-client-core...................... SUCCESS [1:30.650s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher.... SUCCESS [15.089s]
[INFO] hadoop-yarn-site.................................. SUCCESS [0.637s]
[INFO] hadoop-yarn-project............................... SUCCESS [25.809s]
[INFO] hadoop-mapreduce-client-common.................... SUCCESS [45.919s]
[INFO] hadoop-mapreduce-client-shuffle................... SUCCESS [14.693s]
[INFO] hadoop-mapreduce-client-app....................... SUCCESS [39.562s]
[INFO] hadoop-mapreduce-client-hs........................ SUCCESS [19.299s]
[INFO] hadoop-mapreduce-client-jobclient................. SUCCESS [18.549s]
[INFO] hadoop-mapreduce-client-hs-plugins................ SUCCESS [5.134s]
[INFO] Apache Hadoop MapReduce Examples.................. SUCCESS [17.823s]
[INFO] hadoop-mapreduce.................................. SUCCESS [12.726s]
[INFO] Apache Hadoop MapReduce Streaming................. SUCCESS [19.760s]
[INFO] Apache Hadoop Distributed Copy.................... SUCCESS [33.332s]
[INFO] Apache Hadoop Archives............................ SUCCESS [9.522s]
[INFO] Apache Hadoop Rumen............................... SUCCESS [15.141s]
[INFO] Apache Hadoop Gridmix............................. SUCCESS [15.052s]
[INFO] Apache Hadoop Data Join........................... SUCCESS [8.621s]
[INFO] Apache Hadoop Extras.............................. SUCCESS [8.744s]
[INFO] Apache Hadoop Pipes............................... SUCCESS [28.645s]
[INFO] Apache Hadoop Tools Dist.......................... SUCCESS [6.238s]
[INFO] Apache Hadoop Tools............................... SUCCESS [0.126s]
[INFO] Apache Hadoop Distribution........................ SUCCESS [1:20.132s]
[INFO] Apache Hadoop Client.............................. SUCCESS [18.820s]
[INFO] Apache Hadoop Mini-Cluster........................ SUCCESS [2.151s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:07.811s
[INFO] Finished at: Thu Oct 24 09:43:18 CST 2013
[INFO] Final Memory: 78M/239M
[INFO] ------------------------------------------------------------------------
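With -Pdist and -Dtar, the rebuilt distribution should land under hadoop-dist/target (paths per the standard Hadoop 2.x build layout; adjust to your own install):

ls hadoop-dist/target/hadoop-2.2.0.tar.gz        # full distribution tarball
ls hadoop-dist/target/hadoop-2.2.0/lib/native/   # the freshly built 64-bit native libraries
# copy the 64-bit native libraries over the 32-bit ones in the installed cluster:
cp hadoop-dist/target/hadoop-2.2.0/lib/native/* /app/hadoop/hadoop-2.2.0/lib/native/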

 

Run the command again with the recompiled software

 

[hadoop@hadoop01 input]$ hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

13/10/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `in': No such file or directory

 

 

Added environment variables:

export HADOOP_HOME=/usr/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native

 

 

 

Hadoop is 2.2.0; the operating system is 64-bit Oracle Linux 6.3.

[hadoop@hadoop01 input]$ hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

13/10/24 15:12:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
put: `in': No such file or directory

 

Ignore the last line, "put: `in': No such file or directory", for now; that is just a problem with the command syntax.

First, solve the warning "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable".

 

 

 

Note: my Hadoop environment was compiled by myself, because the OS is 64-bit and the official hadoop-2.2.0 release seems to ship 32-bit binaries only.

 

The troubleshooting process

 

1. Enable debug logging

 

[hadoop@hadoop01 input]$ export HADOOP_ROOT_LOGGER=DEBUG,console
[hadoop@hadoop01 input]$ hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

13/10/24 16:11:31 DEBUG util.Shell: setsid exited with exit code 0
13/10/24 16:11:31 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
13/10/24 16:11:31 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
13/10/24 16:11:31 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
13/10/24 16:11:32 DEBUG security.Groups: Creating new Groups object
13/10/24 16:11:32 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
13/10/24 16:11:32 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
13/10/24 16:11:32 DEBUG util.NativeCodeLoader: java.library.path=/usr/java/jdk1.7.0_45/lib:/app/hadoop/hadoop-2.2.0/lib/native:/app/hadoop/hadoop-2.2.0/lib/native
13/10/24 16:11:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/10/24 16:11:32 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
13/10/24 16:11:32 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
13/10/24 16:11:32 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
13/10/24 16:11:32 DEBUG security.UserGroupInformation: hadoop login
13/10/24 16:11:32 DEBUG security.UserGroupInformation: hadoop login commit
13/10/24 16:11:32 DEBUG security.UserGroupInformation: using local user: UnixPrincipal: hadoop
13/10/24 16:11:32 DEBUG security.UserGroupInformation: UGI loginUser: hadoop (auth:SIMPLE)
13/10/24 16:11:33 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
13/10/24 16:11:33 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
13/10/24 16:11:33 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
13/10/24 16:11:33 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
13/10/24 16:11:33 DEBUG impl.MetricsSystemImpl: StartupProgress, NameNode startup progress
13/10/24 16:11:33 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
13/10/24 16:11:33 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@2e41d9a2
13/10/24 16:11:34 DEBUG hdfs.BlockReaderLocal: Both short-circuit local reads and UNIX domain socket are disabled.
13/10/24 16:11:34 DEBUG ipc.Client: The ping interval is 60000 ms.
13/10/24 16:11:34 DEBUG ipc.Client: Connecting to localhost/127.0.0.1:8020
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop: starting, having connections 1
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop sending #0
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop got value #0
13/10/24 16:11:34 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 82ms
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop sending #1
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop got value #1
13/10/24 16:11:34 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 4ms
put: `.': No such file or directory
13/10/24 16:11:34 DEBUG ipc.Client: Stopping client
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop: closed
13/10/24 16:11:34 DEBUG ipc.Client: IPC Client (2141757401) connection to localhost/127.0.0.1:8020 from hadoop: stopped, remaining connections 0

 

 

 

The error in the debug output above:

Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path

 

I tried many things to fix this error, most of them changes to environment variables, and got nowhere.

 

Environment variables I added:

export HADOOP_HOME=/usr/hadoop/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native

 

 

2. The fix

 

Finally I read the official Native Libraries documentation carefully.

 

http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/NativeLibraries.html

 

 

 

Either download a hadoop release, which will include a pre-built version of the native hadoop library, or build your own version of the native hadoop library. Whether you download or build, the name for the library is the same: libhadoop.so

 

 

 

So the name Hadoop expects is libhadoop.so. Checking my directory, I only had libhadoop.so.1.0.0. In the officially compiled release the libhadoop.so file does exist, but it is just a symlink, so I did the same:

 

 

 

[hadoop@hadoop01 native]$ ln -s libhadoop.so.1.0.0 libhadoop.so
[hadoop@hadoop01 native]$ ln -s libhdfs.so.0.0.0 libhdfs.so
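To verify the links took effect, re-run a command with debug logging on (a quick check of my own, assuming the same paths as above):

[hadoop@hadoop01 native]$ ls -l libhadoop.so libhdfs.so
[hadoop@hadoop01 native]$ export HADOOP_ROOT_LOGGER=DEBUG,console
[hadoop@hadoop01 native]$ hadoop fs -ls /
# the DEBUG output should now report the native-hadoop library as loaded, with no UnsatisfiedLinkError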

 

 

 

That solved the problem.

 

[hadoop@hadoop01 hadoop]$ hadoop dfs -put ./in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

put: `.': No such file or directory
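As for the remaining "put: `.': No such file or directory": in Hadoop 2.x a relative HDFS path resolves against /user/<username>, which is not created automatically. This is an aside beyond the original fix, but creating the home directory first should make the put succeed:

[hadoop@hadoop01 hadoop]$ hdfs dfs -mkdir -p /user/hadoop
[hadoop@hadoop01 hadoop]$ hdfs dfs -put ./in /user/hadoop/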

 

 

 

