The Hadoop 2.x tarballs available for direct download from the Apache site do not include a usable 64-bit native build. To run Hadoop on a 64-bit system we therefore need to recompile it from source; otherwise, the 32-bit native libraries shipped in the release will fail to load on a 64-bit system, the most direct symptom being the familiar warning "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable".
Before diving in, here is a table summarizing my build environment:
| # | Item | Notes |
|---|------|-------|
| 1 | CentOS 6.5, 64-bit | Linux environment |
| 2 | Apache Ant 1.9 | Ant build tool |
| 3 | Apache Maven 3.2.1 | Maven packaging and deployment |
| 4 | gcc, gcc-c++, make | build toolchain |
| 5 | protobuf-2.5.0 | serialization library |
| 6 | JDK 1.7 | Java environment |
| 7 | Hadoop 2.2.0 source package | downloaded from the official site |
| 8 | one hard-working engineer | the protagonist |
Now to the main task. My environment is CentOS, so most of the build dependencies can be installed conveniently with yum.
1. Install gcc and the other build dependencies by running the following yum commands:
- yum -y install gcc
- yum install -y bzip2-devel
- yum -y install gcc-c++
- yum install make
- yum install autoconf automake libtool cmake ncurses-devel openssl-devel gcc*
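These package names are the stock CentOS 6 ones. Once yum finishes, a quick loop (nothing Hadoop-specific, just a sanity check) confirms the whole toolchain actually landed on the PATH:

```shell
# report any build tool that is still missing after the yum installs
for tool in gcc g++ make cmake autoconf automake libtool; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```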
2. Install the JDK, set its environment variables, and then verify the installation. Note that output like the following comes from GNU gij, the system's bundled GCJ runtime — it means the installed JDK is not yet first on the PATH. Once the environment variables take effect, java -version should report the Oracle JDK (1.7.0_25 here, as the mvn -v output in the next step confirms).
- [root@ganglia ~]# java -version
- java version "1.5.0"
- gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-4)
- Copyright (C) 2007 Free Software Foundation, Inc.
- This is free software; see the source for copying conditions. There is NO
- warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- [root@ganglia ~]#
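The gij output above is what you see when the system's default Java is still being picked up. A typical fix is to append entries like the following to /etc/profile and then run source /etc/profile — the JDK path here matches what the mvn -v output in the next step reports; adjust it to wherever you unpacked your JDK:

```shell
# JDK environment variables (the install path is this machine's; adjust to yours)
export JAVA_HOME=/usr/local/jdk1.7.0_25
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```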
3. Install Maven and, once done, verify the installation:
- [root@ganglia ~]# mvn -v
- Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9; 2014-02-15T01:37:52+08:00)
- Maven home: /usr/local/maven
- Java version: 1.7.0_25, vendor: Oracle Corporation
- Java home: /usr/local/jdk1.7.0_25/jre
- Default locale: zh_CN, platform encoding: UTF-8
- OS name: "linux", version: "2.6.32-431.el6.x86_64", arch: "amd64", family: "unix"
- [root@ganglia ~]#
4. Install Ant and, again, verify the installation:
- [root@ganglia ~]# ant -version
- Apache Ant(TM) version 1.9.4 compiled on April 29 2014
- [root@ganglia ~]#
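Maven and Ant likewise only need their bin directories on the PATH. The Maven home /usr/local/maven matches the mvn -v output above; /usr/local/ant is an assumed location for Ant — substitute wherever you unpacked it:

```shell
# Maven and Ant on the PATH (/usr/local/ant is an assumed install location)
export MAVEN_HOME=/usr/local/maven
export ANT_HOME=/usr/local/ant
export PATH=$MAVEN_HOME/bin:$ANT_HOME/bin:$PATH
```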
5. Install protobuf. Download the protobuf-2.5.0 tarball from the official site, upload it to the Linux machine, extract it, then cd into the extracted top-level directory and run the following commands:
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.bz2
- ./configure
- make
- make check
- make install
Then run the following command to verify the installation ("Missing input file." is expected here — it means protoc runs fine and was simply given no .proto file to compile):
- [root@ganglia protobuf-2.5.0]# protoc
- Missing input file.
- [root@ganglia protobuf-2.5.0]#
[root@ganglia protobuf-2.5.0]# protoc Missing input file. [root@ganglia protobuf-2.5.0]#
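One pitfall worth knowing: make install puts the protobuf libraries under /usr/local/lib by default, and a stock CentOS dynamic linker does not search there, so protoc can fail to start with an error about libprotoc.so not being found. If that happens, register the path with ldconfig (requires root; a sketch):

```shell
# make /usr/local/lib visible to the dynamic linker so protoc can find libprotoc
echo "/usr/local/lib" > /etc/ld.so.conf.d/protobuf.conf
ldconfig
```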
6. Download the hadoop-2.2.0 source (src) package from the Hadoop site, extract it, and look at the directory layout:
- [root@ganglia ~]# cd hadoop-2.2.0-src
- [root@ganglia hadoop-2.2.0-src]# ll
- 总用量 108
- -rw-r--r--. 1 67974 users 9968 10月 7 2013 BUILDING.txt
- drwxr-xr-x. 2 67974 users 4096 10月 7 2013 dev-support
- drwxr-xr-x. 4 67974 users 4096 6月 9 17:05 hadoop-assemblies
- drwxr-xr-x. 3 67974 users 4096 6月 9 17:27 hadoop-client
- drwxr-xr-x. 9 67974 users 4096 6月 9 17:14 hadoop-common-project
- drwxr-xr-x. 3 67974 users 4096 6月 9 17:26 hadoop-dist
- drwxr-xr-x. 7 67974 users 4096 6月 9 17:20 hadoop-hdfs-project
- drwxr-xr-x. 11 67974 users 4096 6月 9 17:25 hadoop-mapreduce-project
- drwxr-xr-x. 4 67974 users 4096 6月 9 17:06 hadoop-maven-plugins
- drwxr-xr-x. 3 67974 users 4096 6月 9 17:27 hadoop-minicluster
- drwxr-xr-x. 4 67974 users 4096 6月 9 17:03 hadoop-project
- drwxr-xr-x. 3 67974 users 4096 6月 9 17:05 hadoop-project-dist
- drwxr-xr-x. 12 67974 users 4096 6月 9 17:26 hadoop-tools
- drwxr-xr-x. 4 67974 users 4096 6月 9 17:24 hadoop-yarn-project
- -rw-r--r--. 1 67974 users 15164 10月 7 2013 LICENSE.txt
- -rw-r--r--. 1 67974 users 101 10月 7 2013 NOTICE.txt
- -rw-r--r--. 1 67974 users 16569 10月 7 2013 pom.xml
- -rw-r--r--. 1 67974 users 1366 10月 7 2013 README.txt
- [root@ganglia hadoop-2.2.0-src]#
7. Edit /root/hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/pom.xml and add the patch content below. This works around a known build bug in Hadoop 2.2.0 — the hadoop-auth module is missing a jetty-util test dependency. Other 2.x versions may not need it, so apply it as your version requires. The content is as follows:
- <dependency>
- <groupId>org.mockito</groupId>
- <artifactId>mockito-all</artifactId>
- <scope>test</scope>
- </dependency>
- <!-- begin added content -->
- <dependency>
- <groupId>org.mortbay.jetty</groupId>
- <artifactId>jetty-util</artifactId>
- <scope>test</scope>
- </dependency>
- <!-- end added content -->
- <dependency>
- <groupId>org.mortbay.jetty</groupId>
- <artifactId>jetty</artifactId>
- <scope>test</scope>
- </dependency>
8. With the edit in place, return to the top-level hadoop-2.2.0-src directory and run the clean-and-package commands:
- mvn clean
- mvn package -Pdist,native -DskipTests -Dtar
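The -Pdist,native profile is what triggers the native C/C++ builds, which is why the toolchain steps above matter. If the build dies partway — often just a network hiccup while Maven downloads dependencies — it does not have to restart from scratch: Maven can resume the reactor from the failed module with -rf. The module name below is only an example; use the one named in your error message:

```shell
# resume the reactor build from a failed module instead of rebuilding everything
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-common
```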
Then wait for the build — about half an hour in my case, possibly less on a faster network, since Maven downloads a large number of dependencies. When it finishes, the packaging output ends like this:
- [INFO]
- [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-minicluster ---
- [INFO] Using default encoding to copy filtered resources.
- [INFO]
- [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-minicluster ---
- [INFO] No sources to compile
- [INFO]
- [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-minicluster ---
- [INFO] Using default encoding to copy filtered resources.
- [INFO]
- [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-minicluster ---
- [INFO] No sources to compile
- [INFO]
- [INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-minicluster ---
- [INFO] Tests are skipped.
- [INFO]
- [INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ hadoop-minicluster ---
- [WARNING] JAR will be empty - no content was marked for inclusion!
- [INFO] Building jar: /root/hadoop-2.2.0-src/hadoop-minicluster/target/hadoop-minicluster-2.2.0.jar
- [INFO]
- [INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-minicluster ---
- [INFO] No sources in project. Archive not created.
- [INFO]
- [INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-minicluster ---
- [INFO] No sources in project. Archive not created.
- [INFO]
- [INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-minicluster ---
- [INFO]
- [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minicluster ---
- [INFO] Building jar: /root/hadoop-2.2.0-src/hadoop-minicluster/target/hadoop-minicluster-2.2.0-javadoc.jar
- [INFO] ------------------------------------------------------------------------
- [INFO] Reactor Summary:
- [INFO]
- [INFO] Apache Hadoop Main ................................ SUCCESS [01:43 min]
- [INFO] Apache Hadoop Project POM ......................... SUCCESS [01:21 min]
- [INFO] Apache Hadoop Annotations ......................... SUCCESS [ 42.256 s]
- [INFO] Apache Hadoop Assemblies .......................... SUCCESS [ 0.291 s]
- [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 41.053 s]
- [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 44.283 s]
- [INFO] Apache Hadoop Auth ................................ SUCCESS [01:49 min]
- [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 18.950 s]
- [INFO] Apache Hadoop Common .............................. SUCCESS [05:31 min]
- [INFO] Apache Hadoop NFS ................................. SUCCESS [ 40.498 s]
- [INFO] Apache Hadoop Common Project ...................... SUCCESS [ 0.050 s]
- [INFO] Apache Hadoop HDFS ................................ SUCCESS [03:43 min]
- [INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 26.962 s]
- [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 47.056 s]
- [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [ 4.237 s]
- [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.029 s]
- [INFO] hadoop-yarn ....................................... SUCCESS [01:25 min]
- [INFO] hadoop-yarn-api ................................... SUCCESS [ 40.841 s]
- [INFO] hadoop-yarn-common ................................ SUCCESS [ 31.228 s]
- [INFO] hadoop-yarn-server ................................ SUCCESS [ 0.161 s]
- [INFO] hadoop-yarn-server-common ......................... SUCCESS [ 12.289 s]
- [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 19.271 s]
- [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [ 3.586 s]
- [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 14.674 s]
- [INFO] hadoop-yarn-server-tests .......................... SUCCESS [ 1.153 s]
- [INFO] hadoop-yarn-client ................................ SUCCESS [ 7.861 s]
- [INFO] hadoop-yarn-applications .......................... SUCCESS [ 0.106 s]
- [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [ 4.540 s]
- [INFO] hadoop-mapreduce-client ........................... SUCCESS [ 0.168 s]
- [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 29.360 s]
- [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [ 3.353 s]
- [INFO] hadoop-yarn-site .................................. SUCCESS [ 0.128 s]
- [INFO] hadoop-yarn-project ............................... SUCCESS [ 29.610 s]
- [INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 19.908 s]
- [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [ 3.357 s]
- [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 12.116 s]
- [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [ 5.807 s]
- [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [ 6.713 s]
- [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [ 2.001 s]
- [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [ 7.684 s]
- [INFO] hadoop-mapreduce .................................. SUCCESS [ 3.664 s]
- [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [ 5.645 s]
- [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 29.953 s]
- [INFO] Apache Hadoop Archives ............................ SUCCESS [ 2.277 s]
- [INFO] Apache Hadoop Rumen ............................... SUCCESS [ 7.743 s]
- [INFO] Apache Hadoop Gridmix ............................. SUCCESS [ 5.608 s]
- [INFO] Apache Hadoop Data Join ........................... SUCCESS [ 3.385 s]
- [INFO] Apache Hadoop Extras .............................. SUCCESS [ 3.509 s]
- [INFO] Apache Hadoop Pipes ............................... SUCCESS [ 8.266 s]
- [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [ 2.073 s]
- [INFO] Apache Hadoop Tools ............................... SUCCESS [ 0.025 s]
- [INFO] Apache Hadoop Distribution ........................ SUCCESS [ 23.928 s]
- [INFO] Apache Hadoop Client .............................. SUCCESS [ 6.876 s]
- [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [ 0.514 s]
- [INFO] ------------------------------------------------------------------------
- [INFO] BUILD SUCCESS
- [INFO] ------------------------------------------------------------------------
- [INFO] Total time: 26:04 min
- [INFO] Finished at: 2014-06-09T17:27:26+08:00
- [INFO] Final Memory: 96M/239M
- [INFO] ------------------------------------------------------------------------
The compiled Hadoop package ends up at the following path:
- [root@ganglia target]# pwd
- /root/hadoop-2.2.0-src/hadoop-dist/target
- [root@ganglia target]# ll
- 总用量 282348
- drwxr-xr-x. 2 root root 4096 6月 9 17:26 antrun
- -rw-r--r--. 1 root root 1618 6月 9 17:26 dist-layout-stitching.sh
- -rw-r--r--. 1 root root 635 6月 9 17:26 dist-tar-stitching.sh
- drwxr-xr-x. 9 root root 4096 6月 9 17:26 hadoop-2.2.0
- -rw-r--r--. 1 root root 96183833 6月 9 17:27 hadoop-2.2.0.tar.gz
- -rw-r--r--. 1 root root 2745 6月 9 17:26 hadoop-dist-2.2.0.jar
- -rw-r--r--. 1 root root 192903472 6月 9 17:27 hadoop-dist-2.2.0-javadoc.jar
- drwxr-xr-x. 2 root root 4096 6月 9 17:27 javadoc-bundle-options
- drwxr-xr-x. 2 root root 4096 6月 9 17:26 maven-archiver
- drwxr-xr-x. 2 root root 4096 6月 9 17:26 test-dir
- [root@ganglia target]#
The native libraries built along with it live in the location below; use file to confirm they are 64-bit:
- [root@ganglia native]# pwd
- /root/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0/lib/native
- [root@ganglia native]# ll
- 总用量 3596
- -rw-r--r--. 1 root root 733114 6月 9 17:26 libhadoop.a
- -rw-r--r--. 1 root root 1487236 6月 9 17:26 libhadooppipes.a
- lrwxrwxrwx. 1 root root 18 6月 9 17:26 libhadoop.so -> libhadoop.so.1.0.0
- -rwxr-xr-x. 1 root root 411870 6月 9 17:26 libhadoop.so.1.0.0
- -rw-r--r--. 1 root root 581944 6月 9 17:26 libhadooputils.a
- -rw-r--r--. 1 root root 273330 6月 9 17:26 libhdfs.a
- lrwxrwxrwx. 1 root root 16 6月 9 17:26 libhdfs.so -> libhdfs.so.0.0.0
- -rwxr-xr-x. 1 root root 181042 6月 9 17:26 libhdfs.so.0.0.0
- [root@ganglia native]# file libhadoop.so
- libhadoop.so: symbolic link to `libhadoop.so.1.0.0'
- [root@ganglia native]# file libhadoop.so.1.0.0
- libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
- [root@ganglia native]#
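The file check above is the authoritative one, but the same information can be read straight off the ELF header even on a box without the file utility: the byte at offset 4 of the file is 1 for a 32-bit binary and 2 for a 64-bit one. A minimal POSIX-shell sketch:

```shell
# print the ELF class (32-bit / 64-bit) of a binary by reading header offset 4
elf_class() {
  b=$(od -An -tu1 -j4 -N1 "$1" | tr -d ' ')
  case "$b" in
    1) echo "32-bit" ;;
    2) echo "64-bit" ;;
    *) echo "not an ELF file?" ;;
  esac
}
# usage, from inside lib/native of the build output:
#   elf_class libhadoop.so.1.0.0
```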
With that, the build has completed successfully. The newly generated hadoop-2.2.0.tar.gz under the target directory can now be used to deploy our Hadoop cluster.
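To actually put the new build to use, unpack the generated tarball and point the usual environment variables at it. The /usr/local prefix below is just a common choice, not something the build mandates:

```shell
# deploy the freshly built 64-bit distribution (illustrative install prefix)
tar -xzf /root/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0.tar.gz -C /usr/local/
export HADOOP_HOME=/usr/local/hadoop-2.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```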