Author: 侯上校 · Shanghai
Compiling the Spark 1.6 Source Against Hadoop 2.6

 
......
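The post elides the command that produced the log below; it stays elided here. For reference, a typical invocation for building Spark 1.6 against Hadoop 2.6, following the profiles described in Spark's "Building Spark" guide, would look like the following (the exact profiles and `MAVEN_OPTS` used for this particular build are an assumption, since the command itself is not shown):

```shell
# Assumed build command per Spark 1.6's "Building Spark" docs; the
# original post does not show the actual invocation. Run from the
# Spark source root (/usr/local/src/spark-1.6.0 in the log below).
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -DskipTests clean package
```

The `-Phadoop-2.6` profile plus `-Dhadoop.version=2.6.0` pins the Hadoop client libraries, which matches the `hadoop-yarn-*-2.6.0.jar` artifacts that appear in the shade warnings below.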
[INFO] Including org.spark-project.spark:unused:jar:1.0.0 in the shaded jar.
[WARNING] hadoop-yarn-common-2.6.0.jar, hadoop-yarn-api-2.6.0.jar define 3 overlapping classes: 
[WARNING]   - org.apache.hadoop.yarn.factories.package-info
[WARNING]   - org.apache.hadoop.yarn.util.package-info
[WARNING]   - org.apache.hadoop.yarn.factory.providers.package-info
[WARNING] unused-1.0.0.jar, spark-streaming-kafka_2.10-1.6.0.jar define 1 overlapping classes: 
[WARNING]   - org.apache.spark.unused.UnusedStubClass
[WARNING] hadoop-yarn-common-2.6.0.jar, hadoop-yarn-client-2.6.0.jar define 2 overlapping classes: 
[WARNING]   - org.apache.hadoop.yarn.client.api.impl.package-info
[WARNING]   - org.apache.hadoop.yarn.client.api.package-info
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://docs.codehaus.org/display/MAVENUSER/Shade+Plugin
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /usr/local/src/spark-1.6.0/external/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.6.0.jar with /usr/local/src/spark-1.6.0/external/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.6.0-shaded.jar
[INFO] Dependency-reduced POM written at: /usr/local/src/spark-1.6.0/external/kafka-assembly/dependency-reduced-pom.xml
[INFO] Dependency-reduced POM written at: /usr/local/src/spark-1.6.0/external/kafka-assembly/dependency-reduced-pom.xml
[INFO] 
[INFO] --- maven-source-plugin:2.4:jar-no-fork (create-source-jar) @ spark-streaming-kafka-assembly_2.10 ---
[INFO] Building jar: /usr/local/src/spark-1.6.0/external/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.6.0-sources.jar
[INFO] 
[INFO] --- maven-source-plugin:2.4:test-jar-no-fork (create-source-jar) @ spark-streaming-kafka-assembly_2.10 ---
[INFO] Building jar: /usr/local/src/spark-1.6.0/external/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.6.0-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [ 29.783 s]
[INFO] Spark Project Test Tags ............................ SUCCESS [  7.380 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  7.855 s]
[INFO] Spark Project Networking ........................... SUCCESS [  8.191 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  4.095 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  3.700 s]
[INFO] Spark Project Core ................................. SUCCESS [02:05 min]
[INFO] Spark Project Bagel ................................ SUCCESS [ 13.732 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 30.345 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 49.660 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:19 min]
[INFO] Spark Project SQL .................................. SUCCESS [01:13 min]
[INFO] Spark Project ML Library ........................... SUCCESS [01:27 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 13.390 s]
[INFO] Spark Project Hive ................................. SUCCESS [ 53.630 s]
[INFO] Spark Project Docker Integration Tests ............. SUCCESS [ 10.886 s]
[INFO] Spark Project REPL ................................. SUCCESS [ 32.466 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  9.397 s]
[INFO] Spark Project YARN ................................. SUCCESS [ 25.443 s]
[INFO] Spark Project Hive Thrift Server ................... SUCCESS [ 19.905 s]
[INFO] Spark Project Assembly ............................. SUCCESS [02:55 min]
[INFO] Spark Project External Twitter ..................... SUCCESS [ 12.837 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 16.321 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 16.459 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [  4.650 s]
[INFO] Spark Project External MQTT ........................ SUCCESS [ 24.364 s]
[INFO] Spark Project External MQTT Assembly ............... SUCCESS [  8.288 s]
[INFO] Spark Project External ZeroMQ ...................... SUCCESS [  9.547 s]
[INFO] Spark Project External Kafka ....................... SUCCESS [ 19.217 s]
[INFO] Spark Project Examples ............................. SUCCESS [02:53 min]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [  9.699 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19:18 min
[INFO] Finished at: 2016-01-12T14:45:36+08:00
[INFO] Final Memory: 424M/2042M
[INFO] ------------------------------------------------------------------------
[root@Colonel-Hou spark-1.6.0]#        
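The overlapping-class warnings in the log are benign here: the duplicated classes are `package-info` stubs and Spark's own `UnusedStubClass`, so the single copy kept in the uber jar is fine. If such a warning ever did need to be resolved, the shade plugin's own advice applies: inspect `mvn dependency:tree -Ddetail=true`, then exclude the offending artifact in the `maven-shade-plugin` configuration. A sketch (the excluded coordinates below are illustrative, not something this build requires):

```xml
<!-- Sketch only: the artifactSet/excludes element is standard
     maven-shade-plugin configuration; the coordinates here are
     illustrative, chosen from the overlapping jars in the log. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <exclude>org.apache.hadoop:hadoop-yarn-api</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
```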

 

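After `BUILD SUCCESS`, a quick sanity check is to confirm the assembly jar was produced and launch a local shell from the built tree. The paths below are assumed from the `/usr/local/src/spark-1.6.0` source root shown in the log:

```shell
# Confirm the assembly jar exists (path assumed from the log above).
ls /usr/local/src/spark-1.6.0/assembly/target/scala-2.10/

# Launch a local spark-shell from the freshly built tree to verify
# the build actually runs.
cd /usr/local/src/spark-1.6.0 && ./bin/spark-shell --master "local[2]"
```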