
Hadoop Learning 31: XXX.jar is not a valid DFS filename when integrating HBase with MapReduce on Win7


1. Code

  1. HBase in Action and HBase: The Definitive Guide both contain plenty of introductory-level code; check out whatever interests you. The repositories are at https://github.com/HBaseinaction and https://github.com/larsgeorge/hbase-book respectively.
  2. Running the code from the HBase-and-MapReduce integration chapter under Win7 produced an error. Take for example this code: https://github.com/larsgeorge/hbase-book/blob/master/ch07/src/main/java/mapreduce/ParseJson.java (a condensed sketch of its job setup follows below).
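
For orientation, here is a condensed, self-contained sketch of how that example wires up its job. This is my own reconstruction, not the verbatim book code: the ParseMapper body is stubbed out, and the "testtable" table name comes from the results described at the end of this post.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class ParseJsonSketch {

  // Stand-in for the book's ParseMapper (the real one splits the data:json
  // column into one Put per JSON field).
  static class ParseMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result columns, Context context)
        throws IOException, InterruptedException {
      // ... parse columns and context.write(row, put) ...
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "ParseJson sketch");
    Scan scan = new Scan();
    // Both helpers default addDependencyJars to true, which is what
    // triggers the error described below when submitting from Windows.
    TableMapReduceUtil.initTableMapperJob("testtable", scan, ParseMapper.class,
        ImmutableBytesWritable.class, Put.class, job);
    TableMapReduceUtil.initTableReducerJob("testtable",
        IdentityTableReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}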

 

2. The error

Exception in thread "main" java.lang.IllegalArgumentException: Pathname /D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-client-0.96.1.1-hadoop2.jar from hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-client-0.96.1.1-hadoop2.jar is not a valid DFS filename.
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
	at com.jyz.study.hadoop.hbase.mapreduce.AnalyzeData.main(AnalyzeData.java:249)

  

 

3. Tracing the code

    In org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil:

  public static void addHBaseDependencyJars(Configuration conf) throws IOException {
    addDependencyJars(conf,
      // explicitly pull a class from each module
      org.apache.hadoop.hbase.HConstants.class,                      // hbase-common
      org.apache.hadoop.hbase.protobuf.generated.ClientProtos.class, // hbase-protocol
      org.apache.hadoop.hbase.client.Put.class,                      // hbase-client
      org.apache.hadoop.hbase.CompatibilityFactory.class,            // hbase-hadoop-compat
      org.apache.hadoop.hbase.mapreduce.TableMapper.class,           // hbase-server
      // pull necessary dependencies
      org.apache.zookeeper.ZooKeeper.class,
      org.jboss.netty.channel.ChannelFactory.class,
      com.google.protobuf.Message.class,
      com.google.common.collect.Lists.class,
      org.cloudera.htrace.Trace.class);
  }


      addDependencyJars then resolves each class to a local jar and records the result in the tmpjars property (abridged):

  public static void addDependencyJars(Configuration conf,
      Class<?>... classes) throws IOException {
    FileSystem localFs = FileSystem.getLocal(conf);
    Set<String> jars = new HashSet<String>();
    for (Class<?> clazz : classes) { // abridged: caching and null/warning handling omitted
      Path path = findOrCreateJar(clazz, localFs, packagedClasses);
      jars.add(path.toString());
    }
    conf.set("tmpjars", StringUtils.arrayToString(jars.toArray(new String[jars.size()])));
  }

      At this point tmpjars contains, for example:

file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-client-0.96.1.1-hadoop2.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-server-0.96.1.1-hadoop2.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/htrace-core-2.01.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-common-0.96.1.1-hadoop2.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/guava-12.0.1.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hadoop-common-2.2.0.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-protocol-0.96.1.1-hadoop2.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-hadoop-compat-0.96.1.1-hadoop2.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/netty-3.6.6.Final.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/protobuf-java-2.5.0.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hadoop-mapreduce-client-core-2.2.0.jar,file:/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/zookeeper-3.4.5.jar

 

      Next, look at JobSubmitter's copyAndConfigureFiles method:

String libjars = conf.get("tmpjars");
if (libjars != null) {
  FileSystem.mkdirs(jtFs, libjarsDir, mapredSysPerms);
  String[] libjarsArr = libjars.split(",");
  for (String tmpjars : libjarsArr) {
    Path tmp = new Path(tmpjars);
    Path newPath = copyRemoteFiles(libjarsDir, tmp, conf, replication);
    // note: toUri().getPath() drops the scheme, leaving e.g. "/D:/..."
    DistributedCache.addFileToClassPath(
        new Path(newPath.toUri().getPath()), conf);
  }
}

     copyRemoteFiles copies these jars to the jobtracker filesystem and returns the path each was copied to.
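
     A sketch of that method's logic, abridged from the Hadoop 2.2 JobSubmitter (treat it as an approximation of the source rather than a verbatim quote): if the file already lives on the jobtracker filesystem it is returned unchanged, otherwise it is copied into the staging directory.

private Path copyRemoteFiles(Path parentDir, Path originalPath,
    Configuration conf, short replication) throws IOException {
  FileSystem remoteFs = originalPath.getFileSystem(conf);
  if (compareFs(remoteFs, jtFs)) {
    // the file is already on the jobtracker filesystem: return it unchanged
    return originalPath;
  }
  // otherwise copy it into the staging directory and return the new location
  Path newPath = new Path(parentDir, originalPath.getName());
  FileUtil.copy(remoteFs, originalPath, jtFs, newPath, false, conf);
  jtFs.setReplication(newPath, replication);
  return newPath;
}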

      When the job runs in a cluster environment, this returns:

[hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hbase-client-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hbase-server-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/htrace-core-2.01.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hbase-common-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/guava-12.0.1.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hadoop-common-2.2.0.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hbase-protocol-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hbase-hadoop-compat-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/netty-3.6.6.Final.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/protobuf-java-2.5.0.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/hadoop-mapreduce-client-core-2.2.0.jar, hdfs://192.168.1.200:9000/tmp/hadoop-yarn/staging/root/.staging/job_1396339976222_0035/libjars/zookeeper-3.4.5.jar]

      When running locally, it returns instead:

[hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-client-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-server-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/htrace-core-2.01.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-common-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/guava-12.0.1.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hadoop-common-2.2.0.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-protocol-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hbase-hadoop-compat-0.96.1.1-hadoop2.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/netty-3.6.6.Final.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/protobuf-java-2.5.0.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/hadoop-mapreduce-client-core-2.2.0.jar, hdfs://192.168.1.200:9000/D:/GoogleCode/platform-components/trunk/SourceCode/study-hadoop/lib/zookeeper-3.4.5.jar]

      Both batches of URLs are later checked through the Hadoop file system, and that is exactly where the problem lies: the check does not distinguish between the local Windows file system and the cluster's Hadoop file system, even though it should. That is why the job runs fine when submitted to the cluster but fails locally with the error above. I'll find time to create an issue on the Hadoop JIRA.
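
      To see the failure mode in isolation, here is a minimal sketch of my own (not code from the post or from Hadoop; it assumes a Hadoop 2.2 client on the classpath and fs.defaultFS pointing at the cluster, and relies on the fact that DistributedFileSystem validates path names on the client side):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InvalidDfsNameDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://192.168.1.200:9000");

    // A tmpjars entry as produced by addDependencyJars on Windows
    // (hypothetical local path, for illustration only).
    Path jar = new Path("file:/D:/lib/hbase-client-0.96.1.1-hadoop2.jar");

    // JobSubmitter keeps only the path component, dropping the file: scheme...
    Path stripped = new Path(jar.toUri().getPath()); // "/D:/lib/..."

    // ...so qualifying it against the default filesystem produces an HDFS URL
    // with a Windows drive letter in it.
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.makeQualified(stripped)); // hdfs://192.168.1.200:9000/D:/lib/...

    // DistributedFileSystem rejects the colon in "D:" before any RPC:
    // IllegalArgumentException: Pathname ... is not a valid DFS filename.
    fs.getFileStatus(stripped);
  }
}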

 

4. A workaround to get the code running

     In TableMapReduceUtil, initTableMapperJob and initTableReducerJob both have a large number of overloads, some of which let you specify the parameter

   * @param addDependencyJars upload HBase jars and jars for any of the configured
   *           job classes via the distributed cache (tmpjars).
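
      The two overloads used in the fix below expose that flag explicitly. Their 0.96 signatures look like this (reconstructed to match the calls shown below; bodies omitted):

  // Signatures only (abridged); both live in TableMapReduceUtil.
  public static void initTableMapperJob(String table, Scan scan,
      Class<? extends TableMapper> mapper, Class<?> outputKeyClass,
      Class<?> outputValueClass, Job job, boolean addDependencyJars)
      throws IOException;

  public static void initTableReducerJob(String table,
      Class<? extends TableReducer> reducer, Job job, Class partitioner,
      String quorumAddress, String serverClass, String serverImpl,
      boolean addDependencyJars) throws IOException;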

      It is precisely because addDependencyJars defaults to true that the error above gets triggered:

if (addDependencyJars) {
  addDependencyJars(job);
}

      So we can set it to false. Change the code in https://github.com/larsgeorge/hbase-book/blob/master/ch07/src/main/java/mapreduce/ParseJson.java to:

TableMapReduceUtil.initTableMapperJob(input, scan, ParseMapper.class, // co ParseJson-3-SetMap Setup map phase details using the utility method.
    ImmutableBytesWritable.class, Put.class, job, false);
TableMapReduceUtil.initTableReducerJob(output, // co ParseJson-4-SetReduce Configure an identity reducer to store the parsed data.
    IdentityTableReducer.class, job, null, null, null, null, false);

 The job now runs correctly. Checking the results, the data in testtable's data:json has been split into data:column1, data:column2, ... in testtable, which matches expectations.
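
 One caveat worth adding (my note, not part of the original workaround): with addDependencyJars set to false, nothing ships the HBase jars for you anymore, so when the same job is submitted to a real cluster the task JVMs must already have HBase on their classpath (for example through the cluster's Hadoop/HBase classpath configuration, or by bundling the dependencies into the job jar); otherwise the tasks fail with ClassNotFoundException.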

 

Comments
#1 houseDaine 2014-05-14
I made the change the way you described, but I still get "is not a valid DFS filename".

Can I add you on QQ?

