Spark 1.5.0 is the sixth release on the 1.x line. This release comprises 1400+ patches from 230+ contributors and 80+ organizations. Many of the changes in Spark 1.5 center on improving Spark's performance, usability, and operational stability. Spark 1.5.0 focuses on Project Tungsten, which improves Spark's performance by optimizing its low-level components. Spark 1.5 also adds operational features for Streaming, such as backpressure support. Other notable updates include new machine learning algorithms and tools, and an expanded SparkR API. The main changes are as follows:
APIs: RDD, DataFrame, and SQL
1. Consistent resolution of column names (see Behavior Changes section)
2. SPARK-3947: New experimental user-defined aggregate function (UDAF) interface
3. SPARK-8300: DataFrame hint for broadcast joins
4. SPARK-8668: expr function for turning a SQL expression into a DataFrame column
5. SPARK-9076: Improved support for NaN values
5.1 NaN functions: isnan, nanvl
5.2 dropna/fillna also fill/drop NaN values in addition to NULL values
5.3 Equality test on NaN = NaN returns true
5.4 NaN is greater than all other values
5.5 In aggregation, NaN values go into one group
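The NaN semantics in items 5.3-5.5 above deliberately differ from plain IEEE-754 comparisons (where NaN never equals anything). A minimal pure-Python sketch of the documented equality, ordering, and grouping rules — an illustration of the semantics, not Spark's implementation:

```python
import math

def spark_sql_equal(a, b):
    """Equality test following the documented semantics: NaN = NaN is true."""
    if isinstance(a, float) and isinstance(b, float):
        if math.isnan(a) and math.isnan(b):
            return True
    return a == b

def spark_sql_sort_key(x):
    """Ordering key: NaN sorts greater than every other value."""
    return (1, 0.0) if (isinstance(x, float) and math.isnan(x)) else (0, x)

nan = float("nan")
values = [3.0, nan, 1.0, 2.0]
print(sorted(values, key=spark_sql_sort_key))  # NaN lands last
print(spark_sql_equal(nan, nan))               # True

# In aggregation, all NaN values fall into one group:
groups = {}
for v in values:
    key = "NaN" if (isinstance(v, float) and math.isnan(v)) else v
    groups.setdefault(key, []).append(v)
print(len(groups))  # 4 distinct groups: 1.0, 2.0, 3.0, NaN
```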
6. SPARK-8828: Sum function returns null when all input values are nulls
7. Data types
7.1 SPARK-8943: CalendarIntervalType for time intervals
7.2 SPARK-7937: Support ordering on StructType
7.3 SPARK-8866: TimestampType's precision is reduced to 1 microsecond (1us)
8. SPARK-8159: Added ~100 functions, including date/time, string, math.
9. SPARK-8947: Improved type coercion and error reporting in plan analysis phase (i.e. most errors should be reported in analysis time, rather than execution time)
10. SPARK-1855: Memory and local disk only checkpointing support
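Item 6 above (SPARK-8828) changes the sum aggregate to return null when every input is null. A small pure-Python model of that rule, for illustration only:

```python
def spark_sum(values):
    """Sum that ignores nulls (None) but returns None when every input
    is null, matching the SPARK-8828 behavior described above."""
    non_null = [v for v in values if v is not None]
    return sum(non_null) if non_null else None

print(spark_sum([1, None, 2]))  # 3 (nulls are skipped)
print(spark_sum([None, None]))  # None (was 0 in 1.4, null before 1.4)
```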
Backend Execution: DataFrame and SQL
- Code generation is on by default for most DataFrame/SQL functions.
- Improved aggregation execution in DataFrame/SQL.
2.1 Cache friendly in-memory hash map layout
2.2 Fallback to external-sort-based aggregation when memory is exhausted
2.3 Code generation on by default for aggregations
- Improved join execution in DataFrame/SQL
3.1 Prefer (external) sort-merge join over hash join in shuffle joins (for left/right outer and inner joins), i.e. join data size is now bounded by disk rather than memory
3.2 Support using (external) sort-merge join method for left/right outer joins
3.3 Support for broadcast outer join
- Improved sort engine in DataFrame/SQL
4.1 Cache-friendly in-memory layout for sorting
4.2 Fallback to external sorting when data exceeds memory size
4.3 Code-generated comparator for fast comparisons
- Native memory management & representation
5.1 Compact binary in-memory data representation, leading to lower memory usage
5.2 Execution memory is explicitly accounted for, without relying on JVM GC, leading to less GC and more robust memory management
- SPARK-8638: Improved performance & memory usage in window functions
- Metrics instrumentation, reporting, and visualization
7.1 SPARK-8856: Plan visualization for DataFrame/SQL
7.2 SPARK-8735: Expose metrics for runtime memory usage in web UI
7.3 SPARK-4598: Pagination for jobs with large number of tasks in web UI
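Several items above (2.2, 4.2) fall back to external, disk-backed sorting when data exceeds available memory. A toy external merge sort in pure Python sketches the idea — note the memory "budget" here is a row count, whereas Spark tracks bytes:

```python
import heapq
import os
import tempfile

def external_sort(values, memory_budget=4):
    """Sort values, spilling sorted runs to disk whenever the in-memory
    buffer exceeds memory_budget items, then merging the runs."""
    runs, buf = [], []

    def spill():
        # Write one sorted run to a temporary file, then rewind for merging.
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        for v in sorted(buf):
            f.write(f"{v}\n")
        f.seek(0)
        runs.append(f)
        buf.clear()

    for v in values:
        buf.append(v)
        if len(buf) >= memory_budget:
            spill()
    if buf:
        spill()

    # k-way merge of the sorted runs without loading them all into memory.
    iters = [(float(line) for line in f) for f in runs]
    merged = list(heapq.merge(*iters))
    for f in runs:
        f.close()
        os.unlink(f.name)
    return merged

print(external_sort([5.0, 3.0, 9.0, 1.0, 7.0, 2.0, 8.0]))
```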
Integrations: Data Sources, Hive, Hadoop, Mesos and Cluster Management
1. Mesos
1.1 SPARK-6284: Support framework authentication and Mesos roles
1.2 SPARK-6287: Dynamic allocation in Mesos coarse-grained mode
1.3 SPARK-6707: User specified constraints on Mesos slave attributes
2. YARN
2.1 SPARK-4352: Dynamic allocation in YARN works with preferred locations
3. Standalone Cluster Manager
3.1 SPARK-4751: Dynamic resource allocation support
4. SPARK-6906: Improved Hive and metastore support
4.1 SPARK-8131: Improved Hive database support
4.2 Upgraded Hive dependency to Hive 1.2
4.3 Support connecting to Hive 0.13, 0.14, 1.0/0.14.1, 1.1, 1.2 metastore
4.4 Support partition pruning pushdown into the metastore (off by default; config flag spark.sql.hive.metastorePartitionPruning)
4.5 Support persisting data in Hive compatible format in metastore
5. SPARK-9381: Support data partitioning for JSON data sources
6. SPARK-5463: Parquet improvements
6.1 Upgraded Parquet to 1.7
6.2 Speedup metadata discovery and schema merging
6.3 Predicate pushdown on by default
6.4 SPARK-6774: Support for reading non-standard legacy Parquet files generated by various libraries/systems by fully implementing all backwards-compatibility rules defined in parquet-format spec
6.5 SPARK-4176: Support for writing decimal values with precision greater than 18
7. ORC improvements (various bug fixes)
8. SPARK-8890: Faster and more robust dynamic partition insert
9. SPARK-9486: DataSourceRegister interface for external data sources to specify short names
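Item 4.4 above pushes partition-pruning predicates down into the metastore so that non-matching partitions are never even listed. A simplified pure-Python illustration of the idea — the partition layout and predicate here are hypothetical, and this is not the Hive metastore API:

```python
def prune_partitions(partitions, predicate):
    """Return only the partitions whose key values satisfy the predicate,
    so that files in the other partitions are never read."""
    return [p for p in partitions if predicate(p)]

# Hypothetical layout for a table partitioned by `dt`:
partitions = [
    {"dt": "2015-09-01", "path": "/warehouse/t/dt=2015-09-01"},
    {"dt": "2015-09-02", "path": "/warehouse/t/dt=2015-09-02"},
    {"dt": "2015-09-03", "path": "/warehouse/t/dt=2015-09-03"},
]
survivors = prune_partitions(partitions, lambda p: p["dt"] >= "2015-09-02")
print([p["dt"] for p in survivors])  # ['2015-09-02', '2015-09-03']
```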
R Language
- SPARK-6797: Support for YARN cluster mode in R
- SPARK-6805: GLMs with R formula, binomial/Gaussian families, and elastic-net regularization
- SPARK-8742: Improved error messages for R
- SPARK-9315: Aliases to make DataFrame functions more R-like
Machine Learning and Advanced Analytics
- SPARK-8521: New Feature transformers: CountVectorizer, Discrete Cosine transformation, MinMaxScaler, NGram, PCA, RFormula, StopWordsRemover, and VectorSlicer.
- New Estimators in Pipeline API: SPARK-8600 naive Bayes, SPARK-7879 k-means, and SPARK-8671 isotonic regression.
- New Algorithms: SPARK-9471 multilayer perceptron classifier, SPARK-6487 PrefixSpan for sequential pattern mining, SPARK-8559 association rule generation, SPARK-8598 1-sample Kolmogorov-Smirnov test, etc.
- Improvements to existing algorithms
4.1 LDA: online LDA performance, asymmetric doc concentration, perplexity, log-likelihood, top topics/documents, save/load, etc.
4.2 Trees and ensembles: class probabilities, feature importance for random forests, thresholds for classification, checkpointing for GBTs, etc.
4.3 Pregel-API: more efficient Pregel API implementation for GraphX.
4.4 GMM: distribute matrix inversions.
- Model summary for linear and logistic regression.
- Python API: distributed matrices, streaming k-means and linear models, LDA, power iteration clustering, etc.
- Tuning and evaluation: train-validation split and multiclass classification evaluator.
- Documentation: document the release version of public API methods
Spark Streaming
- SPARK-7398: Backpressure: automatic and dynamic rate control in Spark Streaming for handling bursty input streams. This allows a streaming pipeline to dynamically adapt to changes in ingestion rate and computation load. It works with receivers as well as the Direct Kafka approach.
- Python API for streaming sources
2.1 SPARK-8389: Kafka offsets of Direct Kafka streams available through Python API
2.2 SPARK-8564: Kinesis Python API
2.3 SPARK-8378: Flume Python API
2.4 SPARK-5155: MQTT Python API
- SPARK-3258: Python API for streaming machine learning algorithms: K-Means, linear regression, and logistic regression
- SPARK-9215: Improved reliability of Kinesis streams: no need to enable write ahead logs for saving and recovering received data across driver failures
- Direct Kafka API graduated: Not experimental any more.
- SPARK-8701: Input metadata in UI: Kafka offsets, and input files are visible in the batch details UI
- SPARK-8882: Better load balancing and scheduling of receivers across cluster
- SPARK-4072: Include streaming storage in web UI
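The backpressure feature above (SPARK-7398) adapts the ingestion rate to how fast batches are actually processed. Spark's estimator is PID-based; the following pure-Python sketch uses only a simplified proportional term to show the core feedback loop:

```python
def next_rate(current_rate, batch_interval_s, processing_time_s, gain=0.8):
    """One step of a simplified proportional rate controller: if the last
    batch took longer to process than the batch interval, back the
    ingestion rate off toward what the pipeline actually sustained."""
    # Rate the pipeline sustained on the last batch (records/second):
    sustained = current_rate * batch_interval_s / processing_time_s
    error = current_rate - sustained
    return max(1.0, current_rate - gain * error)

rate = 1000.0  # records/second
# Processing is too slow (3s of work per 2s batch) -> rate backs off:
for _ in range(5):
    rate = next_rate(rate, batch_interval_s=2.0, processing_time_s=3.0)
print(rate < 1000.0)  # True
```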
Deprecations, Removals, Configs, and Behavior Changes
Spark Core
1. The DAGScheduler's local task execution mode has been removed
2. Default driver and executor memory increased from 512m to 1g
3. Default setting of the JVM's MaxPermSize increased from 128m to 256m
4. Default logging level of spark-shell changed from INFO to WARN
5. The NIO-based ConnectionManager is deprecated and will be removed in 1.6
Spark SQL & DataFrames
- Optimized execution using manually managed memory (Tungsten) is now enabled by default, along with code generation for expression evaluation. These features can both be disabled by setting spark.sql.tungsten.enabled to false.
- Parquet schema merging is no longer enabled by default. It can be re-enabled by setting spark.sql.parquet.mergeSchema to true.
- Resolution of strings to columns in Python now supports using dots (.) to qualify the column or access nested values, for example df['table.column.nestedField']. However, this means that if your column name contains any dots you must now escape them using backticks (e.g., table.`column.with.dots`.nested).
- In-memory columnar storage partition pruning is on by default. It can be disabled by setting spark.sql.inMemoryColumnarStorage.partitionPruning to false.
- Unlimited precision decimal columns are no longer supported, instead Spark SQL enforces a maximum precision of 38. When inferring schema from BigDecimal objects, a precision of (38, 18) is now used. When no precision is specified in DDL then the default remains Decimal(10, 0).
- Timestamps are now processed at a precision of 1us, rather than 100ns.
- Sum function returns null when all input values are nulls (null before 1.4, 0 in 1.4).
- In the sql dialect, floating point numbers are now parsed as decimal. HiveQL parsing remains unchanged.
- The canonical names of SQL/DataFrame functions are now lower case (e.g. sum vs SUM).
- Using the DirectOutputCommitter with speculation enabled has been determined to be unsafe, so Parquet will not use this output committer when speculation is on, regardless of configuration.
- JSON data source will not automatically load new files that are created by other applications (i.e. files that are not inserted to the dataset through Spark SQL). For a JSON persistent table (i.e. the metadata of the table is stored in Hive Metastore), users can use REFRESH TABLE SQL command or HiveContext’s refreshTable method to include those new files to the table. For a DataFrame representing a JSON dataset, users need to recreate the DataFrame and the new DataFrame will include new files.
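The backtick-escaping rule for dotted column names above can be modeled with a small splitter that treats backquoted segments as atomic. This is an illustration of the documented behavior, not Spark's actual parser:

```python
def split_column_path(name):
    """Split a column reference on dots, honoring backtick-quoted
    segments so that `column.with.dots` stays a single name part."""
    parts, buf, quoted = [], [], False
    for ch in name:
        if ch == "`":
            quoted = not quoted          # enter/leave a quoted segment
        elif ch == "." and not quoted:
            parts.append("".join(buf))   # dot outside quotes splits
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf))
    return parts

print(split_column_path("table.`column.with.dots`.nested"))
# ['table', 'column.with.dots', 'nested']
```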
Spark Streaming
- The new experimental backpressure feature can be enabled by setting spark.streaming.backpressure.enabled to true.
- Write ahead logs no longer need to be enabled for Kinesis streams. The updated Kinesis receiver keeps track of the Kinesis sequence numbers received in each batch and uses that information to re-read the necessary data while recovering from failures.
- The number of times receivers are relaunched on failure is no longer limited by the maximum number of Spark task attempts. The system will keep trying to relaunch receivers after failures until the StreamingContext is stopped.
- Improved load balancing of receivers across the executors, even after relaunching.
- Enabling checkpointing when using queueStream now throws an exception, as queueStream cannot be checkpointed. However, this was found to break certain existing applications, so the change will be reverted in Spark 1.5.1.
MLlib
- In the spark.mllib package, there are no breaking API changes but some behavior changes:
1.1 SPARK-9005: RegressionMetrics.explainedVariance returns the average regression sum of squares.
1.2 SPARK-8600: NaiveBayesModel.labels become sorted.
1.3 SPARK-3382: GradientDescent has a default convergence tolerance of 1e-3, so iterations might end earlier than in 1.4.
- In the experimental spark.ml package, there is one breaking API change and one behavior change:
2.1 SPARK-9268: Java’s varargs support is removed from Params.setDefault due to a Scala compiler bug.
2.2 SPARK-10097: Evaluator.isLargerBetter is added to indicate metric ordering. Metrics like RMSE no longer flip signs as in 1.4.
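The SPARK-3382 change above stops GradientDescent early once successive solutions move less than a convergence tolerance. A one-dimensional pure-Python sketch of the idea (Spark's version works on weight vectors; this is only an illustration):

```python
def gradient_descent(grad, x0, step=0.1, conv_tol=1e-3, max_iter=100):
    """Minimize via fixed-step gradient descent, stopping early when the
    update shrinks below conv_tol (as GradientDescent now does by default)."""
    x = x0
    for i in range(max_iter):
        x_new = x - step * grad(x)
        if abs(x_new - x) < conv_tol:
            return x_new, i + 1  # converged before hitting max_iter
        x = x_new
    return x, max_iter

# Minimize f(x) = (x - 2)^2, whose gradient is 2*(x - 2):
x, iters = gradient_descent(lambda x: 2 * (x - 2), x0=0.0)
print(round(x, 2), iters < 100)  # converges near 2 before max_iter
```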
Respect the original author; please do not repost: http://blog.csdn.net/stark_summer/article/details/48319233