May 3, 2012
[Repost] Compiling the hadoop cdh3u3 eclipse plugin
1. Build environment
OS: debian6 amd64, with the two Java build tools ant and maven2 installed.
hadoop: hadoop-0.20.2-cdh3u3.tar.gz
eclipse: eclipse-java-indigo-SR2-win32.zip
2. Building hadoop
Unpack hadoop-0.20.2-cdh3u3.tar.gz, enter the source tree, and run ant; it downloads the dependencies and builds automatically.
3. Building the eclipse plugin
Unpack eclipse.
In the hadoop source tree, change into src/contrib/eclipse-plugin and run:
ant -Declipse.home=/path/to/unpacked/eclipse/ -Dversion=0.20.2-cdh3u3 jar
4. Testing
The build leaves hadoop-eclipse-plugin-0.20.2-cdh3u3.jar in build/contrib/eclipse-plugin under the hadoop source tree. Copy it into eclipse's plugins directory and start eclipse. On startup it fails with:
An internal error occurred during: "Connecting to DFS localhost".
The eclipse error log shows:
java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/guava/common/collect/LinkedListMultimap
and a second error:
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
In other words, eclipse cannot find the guava and jackson jars.
5. Fixing the bug
First, copy guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar out of the lib directory of the hadoop source tree.
5.1 Method one
Unpack the bytecode (the org directory) from guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar into the classes directory inside hadoop-eclipse-plugin-0.20.2-cdh3u3.jar.
5.2 Method two
Put guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar into the lib directory inside hadoop-eclipse-plugin-0.20.2-cdh3u3.jar.
Then edit MANIFEST.MF under the jar's META-INF directory and change the classpath to the following:
Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/guava-r09-jarjar.jar,lib/jackson-mapper-asl-1.5.2.jar
Method two should work in theory, but it did not succeed in my testing.
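Method two amounts to editing one line of the plugin jar's manifest. A minimal sketch, again against a fabricated MANIFEST.MF in a temp directory so it is runnable; a real jar would be unpacked and repacked the same way as in method one:

```shell
set -e
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p META-INF lib

# Fabricate the manifest roughly as it ships in the plugin jar.
printf 'Manifest-Version: 1.0\nBundle-ClassPath: classes/,lib/hadoop-core.jar\n' \
  > META-INF/MANIFEST.MF

# The edit: extend Bundle-ClassPath with the two extra jars (GNU sed).
sed -i 's|^Bundle-ClassPath: .*|Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/guava-r09-jarjar.jar,lib/jackson-mapper-asl-1.5.2.jar|' \
  META-INF/MANIFEST.MF

cat META-INF/MANIFEST.MF
```

One caveat worth noting: the JAR manifest format wraps long lines at 72 bytes, with continuation lines starting with a space. A naive single-line edit like the sed above ignores that rule, which is one possible reason a hand-edited manifest fails to load.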
posted @ 2012-05-03 18:39 riverphoenix views(505) comments(0)
Problems
An internal error occurred during: "Map/Reduce location status updater".
org/codehaus/jackson/map/JsonMappingException
An internal error occurred during: "Connecting to DFS hadoop".
org/apache/commons/configuration/Configuration
posted @ 2012-05-03 17:06 riverphoenix views(213) comments(1)
[Repost] Formatting HDFS fails with java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
Exception description
When formatting HDFS with the hadoop namenode -format command, an unknown-hostname error occurs. The exception looks like this:
[shirdrn@localhost bin]$ hadoop namenode -format
11/06/22 07:33:31 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr 9 05:18:40 UTC 2009
************************************************************/
Re-format filesystem in /tmp/hadoop/hadoop-shirdrn/dfs/name ? (Y or N) Y
11/06/22 07:33:36 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn
11/06/22 07:33:36 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/22 07:33:36 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/22 07:33:36 INFO metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:73)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:68)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:370)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:853)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:947)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)
11/06/22 07:33:36 INFO common.Storage: Image file of size 97 saved in 0 seconds.
11/06/22 07:33:36 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted.
11/06/22 07:33:36 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
************************************************************/
Running the hostname command shows:
[shirdrn@localhost bin]# hostname
localhost.localdomain
That is, when Hadoop formats HDFS, the hostname it obtains via the hostname command is localhost.localdomain, but looking that name up in /etc/hosts finds no mapping. Here is my /etc/hosts:
[root@localhost bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost localhost
192.168.1.103 localhost localhost
In other words, localhost.localdomain cannot be mapped to any IP address, hence the error.
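The failed lookup can be reproduced outside Hadoop. A quick sanity check, assuming a glibc system where getent is available, that the machine's hostname actually resolves:

```shell
# Print the mapping for the machine's hostname, if any.
HN=$(hostname)
if getent hosts "$HN" > /dev/null; then
    echo "resolves: $HN"
else
    echo "NOT resolvable: $HN"
fi

# localhost itself should always map to a loopback address.
getent hosts localhost
```

If the first lookup prints "NOT resolvable", Hadoop's InetAddress.getLocalHost() will fail the same way.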
Now look at the /etc/sysconfig/network file:
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost.localdomain
As you can see, hostname returns the HOSTNAME value configured here.
Solution
Change HOSTNAME in /etc/sysconfig/network to localhost (or a hostname of your own choosing), make sure that name maps to the correct IP address in /etc/hosts, and restart the network service:
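The intended end state of the two files, assuming you keep the hostname localhost, would look roughly like this:

```
# /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost

# /etc/hosts -- the chosen hostname must map to a real address
127.0.0.1   localhost localhost.localdomain
```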
[root@localhost bin]# /etc/rc.d/init.d/network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0:
Determining IP information for eth0... done.
[ OK ]
After that, formatting HDFS and starting the HDFS cluster work normally.
Formatting:
[shirdrn@localhost bin]$ hadoop namenode -format
11/06/22 08:02:37 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr 9 05:18:40 UTC 2009
************************************************************/
11/06/22 08:02:37 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn
11/06/22 08:02:37 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/22 08:02:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/22 08:02:37 INFO common.Storage: Image file of size 97 saved in 0 seconds.
11/06/22 08:02:37 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted.
11/06/22 08:02:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
Starting:
[shirdrn@localhost bin]$ start-all.sh
starting namenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-namenode-localhost.out
localhost: starting datanode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-datanode-localhost.out
localhost: starting secondarynamenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-secondarynamenode-localhost.out
starting jobtracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-jobtracker-localhost.out
localhost: starting tasktracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-tasktracker-localhost.out
Checking:
[shirdrn@localhost bin]$ jps
8192 TaskTracker
7905 DataNode
7806 NameNode
8065 JobTracker
8002 SecondaryNameNode
8234 Jps
http://blog.csdn.net/shirdrn/article/details/6562292