Spark Official Documentation (4): Configuration

 

Spark can be configured in three ways:

  • Spark properties control application parameters and can be set through a SparkConf object or through Java system properties
  • Environment variables configure per-machine settings through the conf/spark-env.sh script on each node
  • Logging is configured through log4j.properties

Spark Properties

Spark properties are configured separately for each application. They can be set directly on a SparkConf object, using its set method for individual properties.
The following code configures an application to run locally with two threads:

val conf = new SparkConf()
             .setMaster("local[2]")
             .setAppName("CountingSheep")
val sc = new SparkContext(conf)

Parameters that express a time or a size must specify a unit, for example:

  • Time units: ms, s, m (or min), h, d, and y stand for milliseconds, seconds, minutes, hours, days, and years respectively.
  • Size units:
1b (bytes)
1k or 1kb (kibibytes = 1024 bytes)
1m or 1mb (mebibytes = 1024 kibibytes)
1g or 1gb (gibibytes = 1024 mebibytes)
1t or 1tb (tebibytes = 1024 gibibytes)
1p or 1pb (pebibytes = 1024 tebibytes)
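
As an illustration of these suffixes, here is a minimal sketch of setting a few sized and timed values on a SparkConf (the property names are taken from the tables later in this document; the values are only examples):

import org.apache.spark.SparkConf

// Size and time values accept the suffixes listed above.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g")       // 4 gibibytes
  .set("spark.network.timeout", "120s")     // 120 seconds
  .set("spark.shuffle.file.buffer", "64k")  // 64 kibibytes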

Dynamically Loading Spark Configuration

Sometimes, to avoid hard-coding parameters, you can create an empty SparkConf and supply the relevant parameters when invoking the launch script:

./bin/spark-submit --name "My app" --master local[4] --conf spark.eventLog.enabled=false
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar

The spark-shell and spark-submit tools provide two ways to load configuration dynamically:

  • Command-line options such as --conf and --master
  • A configuration file: spark-submit reads conf/spark-defaults.conf by default, where each line represents one property
spark.master            spark://5.6.7.8:7077
spark.executor.memory   4g
spark.eventLog.enabled  true
spark.serializer        org.apache.spark.serializer.KryoSerializer

Settings from these sources are merged at run time. By default, values set directly in code take the highest priority, followed by flags passed on the command line, and finally values from the defaults file.
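
As a hypothetical illustration of this precedence (the memory values below are invented): if spark-defaults.conf sets spark.executor.memory to 2g and spark-submit passes --conf spark.executor.memory=6g, a value set in application code still wins:

import org.apache.spark.{SparkConf, SparkContext}

// spark-defaults.conf:   spark.executor.memory  2g
// command line:          --conf spark.executor.memory=6g
// application code (highest priority):
val conf = new SparkConf().set("spark.executor.memory", "8g")
val sc = new SparkContext(conf)
sc.getConf.get("spark.executor.memory")  // returns "8g"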

Viewing Spark Configuration

The Environment tab of the application web UI at http://<driver>:4040 lists the Spark configuration (only values from spark-defaults.conf, SparkConf, and the command line are shown). This page is a convenient way to confirm that a property has actually taken effect.
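
Besides the web UI, the effective configuration can also be read programmatically from a running SparkContext; a minimal sketch, assuming sc is the active context:

// Print every property the driver is using, mirroring the Environment tab.
sc.getConf.getAll.sortBy(_._1).foreach { case (key, value) =>
  println(s"$key = $value")
}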

Available Properties

Most configuration parameters have sensible defaults; the most commonly used ones are listed below.

Application Properties

Property | Default | Description
spark.app.name | (none) | The name of your application; it appears in the logs and in the web UI.
spark.driver.cores | 1 | Number of CPU cores used by the driver process; only effective in cluster mode.
spark.driver.maxResultSize | 1g | Limit on the total size of serialized results of each Spark action; the minimum is 1M, and 0 means unlimited. A job whose results exceed the limit is aborted, while setting the limit too high may cause out-of-memory errors on the driver.
spark.driver.memory | 1g | Amount of memory available to the driver process. Note: it cannot be set in application code, because the driver has already started by then; use the --driver-memory command-line option or the defaults file instead.
spark.executor.memory | 1g | Amount of memory available to each executor (e.g. 2g, 8g).
spark.extraListeners | (none) | A comma-separated list of classes implementing SparkListener; the Spark listener bus creates an instance of each (see the sketch after this table).
spark.local.dir | /tmp | Directory used for map output files and cached RDD data. It is often placed on fast storage such as SSDs, and multiple directories can be given separated by commas. Note: since Spark 1.0 this is overridden by the SPARK_LOCAL_DIRS (Standalone, Mesos) or LOCAL_DIRS (YARN) environment variables.
spark.logConf | false | Log the effective SparkConf at INFO level when the SparkContext starts.
spark.master | (none) | The cluster master URL.
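
As an illustration of spark.extraListeners from the table above, here is a minimal sketch of a listener that could be registered by its fully qualified class name; the class and package names are hypothetical:

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}

// Register with: --conf spark.extraListeners=com.example.JobLogger
// A no-argument constructor is expected.
class JobLogger extends SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
    println(s"Job ${jobEnd.jobId} finished with result ${jobEnd.jobResult}")
  }
}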

Runtime Environment

Property | Default | Description
spark.driver.userClassPathFirst | false | Give user-supplied jars precedence over Spark's own libraries when loading classes in the driver; used to work around version conflicts between the application and the environment.
spark.executor.logs.rolling.maxRetainedFiles | (none) | Maximum number of rolled log files to retain; older logs are deleted once the limit is exceeded. Disabled by default.
spark.executor.logs.rolling.time.interval | daily | Interval at which executor logs are rolled over; rolling is disabled by default.
spark.executor.userClassPathFirst | false | Same as the driver setting, but for executors: user-supplied jars take precedence over Spark's libraries, to resolve version conflicts.
spark.python.worker.memory | 512m | Memory limit per Python worker process during aggregation; data beyond the limit is spilled to disk.

Shuffle Behavior

Property | Default | Description
spark.reducer.maxSizeInFlight | 48m | Maximum size of map output that each reduce task fetches simultaneously. Since each reducer must allocate a buffer of this size, do not increase it unless memory is plentiful.
spark.shuffle.compress | true | Whether to compress map output files, using the codec given by spark.io.compression.codec.
spark.shuffle.file.buffer | 32k | Size of the in-memory buffer for each shuffle file output stream; larger buffers reduce the number of system I/O calls.
spark.shuffle.manager | sort | Implementation used for shuffling data, either sort or hash. Sort uses memory more efficiently and has been the default since version 1.2.
spark.shuffle.service.enabled | false | Enables the external shuffle service, which preserves the files written by executors so that executors can be removed safely. spark.dynamicAllocation.enabled must be set to true, and the external shuffle service must be set up.
spark.shuffle.service.port | 7337 | Port on which the external shuffle service runs.
spark.shuffle.sort.bypassMergeThreshold | 200 | When the number of reducer partitions is below this threshold, sort-based shuffle does not merge-sort the data but writes each partition to its own file, much like hash-based shuffle; the difference is that the files are eventually merged into a single file, with an index file marking the offset of each partition. To the reducer, the data and index files look identical whether or not a merge sort took place. This makes sort-based shuffle a reasonable compromise for small shuffles, but like hash-based shuffle it opens many files at once and raises memory usage, so lower this value if GC or memory pressure is severe.
spark.shuffle.spill.compress | true | If true, intermediate results are compressed when spilled to local disk and decompressed again when read back for merging. Setting it to true is appropriate when disk I/O is the bottleneck; if the local disks are SSDs, false may be the better choice.
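
A minimal sketch of tuning a few of the shuffle properties above on a SparkConf (the specific values are illustrative, not recommendations):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.shuffle.compress", "true")        // compress map output files
  .set("spark.shuffle.file.buffer", "64k")      // larger write buffer, fewer I/O calls
  .set("spark.reducer.maxSizeInFlight", "96m")  // more memory for fetching map output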

Spark UI

Property | Default | Description
spark.eventLog.compress | false | Whether to compress event logs; effective only when spark.eventLog.enabled is true.
spark.eventLog.dir | file:///tmp/spark-events | Directory where event logs are written; it can also be an HDFS path.
spark.eventLog.enabled | false | Whether to log Spark events, used to rebuild the web UI after the application has finished (see the sketch after this table).
spark.ui.killEnabled | true | Whether stages and their corresponding jobs can be killed from the web UI.
spark.ui.port | 4040 | Port for the application's web UI.
spark.ui.retainedJobs | 1000 | Number of jobs the Spark UI and status APIs remember before garbage collecting.
spark.ui.retainedStages | 1000 | Number of stages the Spark UI and status APIs remember before garbage collecting.
spark.worker.ui.retainedExecutors | 1000 | Number of executors the worker UI and status APIs remember before garbage collecting.
spark.worker.ui.retainedDrivers | 1000 | Same as above, for drivers.
spark.sql.ui.retainedExecutions | 1000 | Same as above, for finished SQL executions.
spark.streaming.ui.retainedBatches | 1000 | Same as above, for completed streaming batches.
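
A minimal sketch of turning on event logging so that a finished application's UI can be rebuilt; the HDFS path below is hypothetical:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs://namenode:8021/spark-events")  // hypothetical directory
  .set("spark.eventLog.compress", "true")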

Compression and Serialization

Property | Default | Description
spark.broadcast.compress | true | Whether broadcast variables are compressed before being sent.
spark.closure.serializer | org.apache.spark.serializer.JavaSerializer | Serializer used for closures; currently only Java serialization is supported.
spark.io.compression.codec | snappy | Codec used to compress internal data such as RDD partitions, broadcast variables, and shuffle output. Three codecs are supported: lz4, lzf, and snappy.
spark.io.compression.lz4.blockSize | 32k | Block size used by the lz4 codec.
spark.io.compression.snappy.blockSize | 32k | Block size used by the snappy codec.
spark.kryo.classesToRegister | (none) | When Kryo serialization is used, the custom classes to register with Kryo (see the sketch after this table).
spark.kryo.referenceTracking | true (false when using the Spark SQL Thrift Server) | Whether to track references to the same object during serialization. This improves performance when the object graph contains multiple copies of the same object; disabling it can improve performance when that is not the case.
spark.kryo.registrationRequired | false | Whether Kryo registration is required. If true, Kryo throws an exception when serializing an unregistered class; if false, Kryo writes the class name alongside every object of an unregistered class, which hurts performance.
spark.kryo.registrator | (none) | A custom Kryo registrator class to use.
spark.kryoserializer.buffer.max | 64m | Maximum size of the Kryo serialization buffer; it must be larger than any object you attempt to serialize.
spark.kryoserializer.buffer | 64k | Initial size of the serialization buffer; it grows as needed up to spark.kryoserializer.buffer.max.
spark.rdd.compress | false | Whether to compress serialized RDD partitions, saving substantial space at the cost of extra CPU time.
spark.serializer | org.apache.spark.serializer.JavaSerializer | Class used to serialize objects; org.apache.spark.serializer.KryoSerializer is recommended.
spark.serializer.objectStreamReset | 100 | When serializing with the Java serializer, the stream caches objects to reduce I/O, which prevents them from being garbage collected; the stream is reset every this many objects to flush that cache.
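
A minimal sketch of switching to Kryo and registering application classes, as mentioned in the rows above; the case classes are hypothetical:

import org.apache.spark.SparkConf

// Hypothetical application classes to register with Kryo.
case class Sheep(id: Long, name: String)
case class Flock(sheep: Seq[Sheep])

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Sheep], classOf[Flock]))  // same effect as spark.kryo.classesToRegister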

Memory Management

Property | Default | Description
spark.memory.fraction | 0.75 | Fraction of heap space used for execution and storage. The smaller the value, the less memory is available for computation and the more likely cached data is to be evicted. The remainder is reserved for Spark's internal metadata and user data structures; the default value is recommended.
spark.memory.storageFraction | 0.5 | Fraction of the unified memory region reserved for cached (storage) data; the larger it is, the less memory is available for execution.
spark.memory.offHeap.enabled | false | If true, Spark attempts to use off-heap memory; spark.memory.offHeap.size must then be positive (see the sketch after this table).
spark.memory.offHeap.size | 0 | Number of bytes of off-heap memory available.
spark.memory.useLegacyMode | false | Whether to use the legacy memory management mode. Only when this is true do the following (deprecated) parameters take effect: spark.shuffle.memoryFraction, spark.storage.memoryFraction, spark.storage.unrollFraction.
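
A minimal sketch of enabling off-heap memory as described above; the size is illustrative:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "2g")  // must be positive when off-heap is enabled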

Execution Behavior

Property | Default | Description
spark.broadcast.blockSize | 4m | Size of each block in TorrentBroadcastFactory. Too large a value reduces broadcast parallelism; too small a value may make the BlockManager a bottleneck.
spark.broadcast.factory | org.apache.spark.broadcast.TorrentBroadcastFactory | Broadcast implementation to use.
spark.cleaner.ttl | (infinite) | How long Spark remembers metadata; older entries are cleaned up when the duration is exceeded. Useful for long-running applications such as Spark Streaming. Note that cached RDDs older than this are cleaned up as well.
spark.executor.cores | 1 on YARN; all available cores in standalone mode | Number of CPU cores available to each executor; in standalone mode, setting this allows one worker to run more than one executor.
spark.default.parallelism | For reduceByKey and join, the largest number of partitions in a parent RDD; for parallelize, it depends on the cluster manager: local mode uses the number of CPU cores, Mesos fine-grained mode uses 8, and other modes use the larger of 2 and the total number of cores on all executors | Default number of partitions (parallelism).
spark.executor.heartbeatInterval | 10s | Interval between heartbeats from each executor to the driver.
spark.files.fetchTimeout | 60s | Timeout used when fetching files added through SparkContext.addFile.
spark.files.useFetchCache | true | If true, executors of the same application share a local cache of fetched files; if false, each executor fetches its own copy.
spark.files.overwrite | false | Whether to overwrite the target file if it already exists.
spark.hadoop.cloneConf | false | If true, a copy of the Hadoop configuration object is made for every task.
spark.hadoop.validateOutputSpecs | true | If true, saveAsHadoopFile validates that the output directory does not already exist. Although setting it to false suppresses the exception for existing output, it is better to delete the output directory manually with the Hadoop FileSystem API. This setting is ignored when recovering through Spark Streaming's StreamingContext, because checkpoint recovery must rewrite files that already exist.
spark.storage.memoryMapThreshold | 2m | Minimum block size above which Spark memory-maps files when reading blocks from disk; memory mapping has high overhead for blocks close to or smaller than the operating system page size.
spark.externalBlockStore.blockManager | org.apache.spark.storage.TachyonBlockManager | External block manager used to store RDDs; the file system URL is set with spark.externalBlockStore.url.
spark.externalBlockStore.baseDir | System.getProperty("java.io.tmpdir") | Directory in the external block store where RDDs are kept; the file system URL is set with spark.externalBlockStore.url.
spark.externalBlockStore.url | tachyon://localhost:19998 for Tachyon | URL of the external block store file system.

Networking

Property | Default | Description
spark.akka.frameSize | 128 | Maximum message size in MB. Mainly limits the size of messages between executors and the driver; increase it if the job runs thousands of map and reduce tasks.
spark.akka.heartbeat.interval | 1000s | Can be set very high to effectively disable Akka's internal failure detector. A larger interval reduces network load; a smaller one makes Akka failure detection more responsive.
spark.akka.heartbeat.pauses | 6000s | Similar to spark.akka.heartbeat.interval.
spark.akka.threads | 4 | Number of actor threads used for communication; can be increased when the driver has many CPU cores.
spark.akka.timeout | 100s | Communication timeout between Spark nodes.
spark.blockManager.port | random | Port the block managers listen on.
spark.broadcast.port | random | Port the driver's HTTP broadcast server listens on.
spark.driver.host | (local hostname) | Hostname or IP address the driver listens on, used for communication with the master and the executors.
spark.driver.port | random | Port the driver listens on.
spark.executor.port | random | Port the executor listens on.
spark.fileserver.port | random | Port the driver's file server listens on.
spark.network.timeout | 120s | Default timeout for all network interactions.
spark.port.maxRetries | 16 | Maximum number of retries when binding to a port.
spark.replClassServer.port | random | Port the driver's class server listens on.
spark.rpc.numRetries | 3 | Number of times an RPC task is retried.
spark.rpc.retry.wait | 3s | How long an RPC ask operation waits between retries.
spark.rpc.askTimeout | 120s | Timeout for RPC ask operations.
spark.rpc.lookupTimeout | 120s | Timeout for RPC remote endpoint lookup operations.

Scheduling

Property | Default | Description
spark.cores.max | (not set) | Maximum number of CPU cores available to the application. If not set, standalone clusters use spark.deploy.defaultCores and Mesos may use all available cores.
spark.locality.wait | 3s | How long to wait before giving up on data locality and launching a task on a less-local node. If tasks are long and their data is not local, it is usually better to increase this value.
spark.locality.wait.node | spark.locality.wait | Customize the locality wait for node locality. For example, you can set this to 0 to skip node locality and search immediately for rack locality (if your cluster has rack information).
spark.locality.wait.process | spark.locality.wait | Customize the locality wait for process locality. This affects tasks that attempt to access cached data in a particular executor process.
spark.locality.wait.rack | spark.locality.wait | Customize the locality wait for rack locality.
spark.scheduler.maxRegisteredResourcesWaitingTime | 30s | Maximum amount of time to wait for resources to register before scheduling begins.
spark.scheduler.mode | FIFO | Job scheduling mode; can be set to FAIR (fair scheduling) or FIFO (first in, first out).
spark.scheduler.revive.interval | 1s | The interval length for the scheduler to revive the worker resource offers to run tasks.
spark.speculation | false | If true, slow tasks may be launched again speculatively; whichever copy finishes first is used (see the sketch after this table).
spark.speculation.interval | 100ms | How often Spark checks for tasks to speculate.
spark.speculation.multiplier | 1.5 | How many times slower than the median task duration a task must be before it is considered for speculation.
spark.speculation.quantile | 0.75 | Fraction of tasks that must complete before speculation is enabled.
spark.task.cpus | 1 | Number of CPU cores allocated to each task.
spark.task.maxFailures | 4 | Maximum number of failures for a single task before giving up on the job; this is one more than the number of allowed retries.
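
A minimal sketch of enabling speculative execution with the properties above (the thresholds are illustrative):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.speculation", "true")
  .set("spark.speculation.quantile", "0.9")   // wait until 90% of tasks have finished
  .set("spark.speculation.multiplier", "2")   // speculate tasks 2x slower than the median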

Dynamic Allocation

Property | Default | Description
spark.dynamicAllocation.enabled | false | Whether to enable dynamic resource allocation (see the sketch after this table).
spark.dynamicAllocation.executorIdleTimeout | 60s | If dynamic allocation is enabled and an executor has been idle for longer than this, the executor is removed.
spark.dynamicAllocation.cachedExecutorIdleTimeout | infinity | If dynamic allocation is enabled and an executor holding cached data has been idle for longer than this, the executor is removed.
spark.dynamicAllocation.initialExecutors | spark.dynamicAllocation.minExecutors | Initial number of executors when dynamic allocation is enabled.
spark.dynamicAllocation.maxExecutors | infinity | Upper bound on the number of executors.
spark.dynamicAllocation.minExecutors | 0 | Lower bound on the number of executors.
spark.dynamicAllocation.schedulerBacklogTimeout | 1s | If dynamic allocation is enabled and there have been pending tasks backlogged for more than this duration, new executors will be requested.
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout | schedulerBacklogTimeout | Same as spark.dynamicAllocation.schedulerBacklogTimeout, but used only for subsequent executor requests.
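
A minimal configuration sketch for enabling dynamic allocation together with the external shuffle service it requires (the executor counts are illustrative):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")   // required by dynamic allocation
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")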

Security

Property | Default | Description
spark.acls.enable | false | Whether Spark acls should be enabled. If enabled, this checks whether the user has permission to view or modify the job. Note this requires the user to be known, so if the user comes across as null no checks are done. Filters can be used with the UI to authenticate and set the user.
spark.admin.acls | Empty | Comma-separated list of users/administrators that have view and modify access to all Spark jobs. This can be used on a shared cluster where a set of administrators or developers help debug when things do not work. Putting a "*" in the list means any user has admin privileges.
spark.authenticate | false | Whether Spark authenticates its internal connections. See spark.authenticate.secret if not running on YARN.
spark.authenticate.secret | None | The secret key used for Spark to authenticate between components. This needs to be set if not running on YARN and authentication is enabled.
spark.authenticate.enableSaslEncryption | false | Enable encrypted communication when authentication is enabled. This option is currently only supported by the block transfer service.
spark.network.sasl.serverAlwaysEncrypt | false | Disable unencrypted connections for services that support SASL authentication. This is currently supported by the external shuffle service.
spark.core.connection.ack.wait.timeout | 60s | How long a connection waits for an ack before timing out and giving up. To avoid unwanted timeouts caused by long pauses such as GC, set a larger value.
spark.core.connection.auth.wait.timeout | 30s | How long a connection waits for authentication to occur before timing out and giving up.
spark.modify.acls | Empty | Comma-separated list of users that have modify access to the Spark job. By default only the user that started the Spark job can modify it (kill it, for example). Putting a "*" in the list means any user can modify it.
spark.ui.filters | None | Comma-separated list of filter class names to apply to the Spark web UI. The filter should be a standard javax servlet Filter. Parameters to each filter can be specified by setting a Java system property of the form spark.<class name of filter>.params='param1=value1,param2=value2'. For example: -Dspark.ui.filters=com.test.filter1 -Dspark.com.test.filter1.params='param1=foo,param2=testing'
spark.ui.view.acls | Empty | Comma-separated list of users that have view access to the Spark web UI. By default only the user that started the Spark job has view access. Putting a "*" in the list means any user can view this Spark job.

Encryption

Property | Default | Description
spark.ssl.enabled | false | Whether to enable SSL connections on all supported protocols.
spark.ssl.enabledAlgorithms | Empty | A comma-separated list of ciphers; the specified ciphers must be supported by the JVM.
spark.ssl.keyPassword | None | Password for the private key.
spark.ssl.keyStore | None | Path to the key-store file; it can be absolute or relative to the directory in which the component is started.
spark.ssl.keyStorePassword | None | Password for the key-store.
spark.ssl.protocol | None | Protocol name; the protocol must be supported by the JVM.
spark.ssl.trustStore | None | Path to the trust-store file; it can be absolute or relative to the directory in which the component is started.
spark.ssl.trustStorePassword | None | Password for the trust-store.

Spark Streaming

Property | Default | Description
spark.streaming.backpressure.enabled | false | Enables or disables Spark Streaming's internal backpressure mechanism (since 1.5). This lets Spark Streaming control the receiving rate based on the current batch scheduling delays and processing times, so that the system receives only as fast as it can process. Internally, this dynamically sets the maximum receiving rate of receivers, which is still bounded by spark.streaming.receiver.maxRate and spark.streaming.kafka.maxRatePerPartition if they are set (see the sketch after this table).
spark.streaming.blockInterval | 200ms | Interval at which data received by Spark Streaming receivers is chunked into blocks before being stored in Spark. The recommended minimum is 50 ms. See the performance tuning section of the Spark Streaming programming guide for details.
spark.streaming.receiver.maxRate | not set | Maximum rate (records per second) at which each receiver will receive data; effectively, each stream consumes at most this many records per second. Setting it to 0 or a negative number removes the limit. See the deployment guide in the Spark Streaming programming guide for details.
spark.streaming.receiver.writeAheadLog.enable | false | Enable write-ahead logs for receivers. All input data received through receivers is saved to write-ahead logs so that it can be recovered after driver failures. See the deployment guide in the Spark Streaming programming guide for details.
spark.streaming.unpersist | true | Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from Spark's memory; the raw input data received by Spark Streaming is also automatically cleared. Setting this to false keeps the raw data and persisted RDDs accessible outside the streaming application, at the cost of higher memory usage.
spark.streaming.stopGracefullyOnShutdown | false | If true, Spark shuts down the StreamingContext gracefully on JVM shutdown rather than immediately.
spark.streaming.kafka.maxRatePerPartition | not set | Maximum rate (records per second) at which data will be read from each Kafka partition when using the direct stream API. See the Kafka integration guide for details.
spark.streaming.kafka.maxRetries | 1 | Maximum number of consecutive retries the driver will make to find the latest offsets on the leader of each partition (a default of 1 means the driver makes at most 2 attempts). Only applies to the direct stream API.
spark.streaming.ui.retainedBatches | 1000 | How many batches the Spark Streaming UI and status APIs remember before garbage collecting.
spark.streaming.driver.writeAheadLog.closeFileAfterWrite | false | Whether to close the file after writing a write-ahead log record on the driver. Set this to true when using S3 (or any file system that does not support flushing) for the metadata WAL on the driver.
spark.streaming.receiver.writeAheadLog.closeFileAfterWrite | false | Whether to close the file after writing a write-ahead log record on the receivers. Set this to true when using S3 (or any file system that does not support flushing) for the data WAL on the receivers.
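
A minimal sketch of a receiver-side Spark Streaming configuration using the properties above (the rate limit is illustrative):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.streaming.backpressure.enabled", "true")            // adapt the receiving rate automatically
  .set("spark.streaming.receiver.maxRate", "10000")               // upper bound, records per second
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")   // recoverable after driver failure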

SparkR

Property | Default | Description
spark.r.numRBackendThreads | 2 | Number of RPC handler threads maintained by the RBackend.
spark.r.command | Rscript | Executable for executing R scripts in cluster modes, for both driver and workers.
spark.r.driver.command | spark.r.command | Executable for executing R scripts in client mode for the driver; ignored in cluster modes.

For the remaining parameters, see https://spark.apache.org/docs/latest/configuration.html

Environment Variables

Certain Spark settings can be configured through environment variables, set in conf/spark-env.sh. In standalone and Mesos modes this file can hold machine-specific information, such as hostnames. Since spark-env.sh does not exist in a fresh installation, copy spark-env.sh.template and make sure the copy is executable.
The commonly used variables in spark-env.sh are:

Environment Variable | Description
JAVA_HOME | Location of the Java installation.
PYSPARK_PYTHON | Python executable used to run PySpark; the default is python2.7.
SPARKR_DRIVER_R | R executable for the SparkR shell; the default is R.
SPARK_LOCAL_IP | IP address the machine binds to.
SPARK_PUBLIC_DNS | Hostname the Spark program advertises to other machines.

In addition, there are options for setting up a Spark standalone cluster, such as the maximum amount of memory and the number of CPU cores to use on each machine.

Configuring Logging

Spark uses log4j for logging. It can be configured through a conf/log4j.properties file.

Overriding the Configuration Directory

By setting the SPARK_CONF_DIR environment variable, you can override the default SPARK_HOME/conf directory containing files such as spark-defaults.conf, spark-env.sh, and log4j.properties.

Inheriting Hadoop Cluster Configuration

To read and write HDFS from Spark, copy the following two configuration files onto Spark's classpath:
+ hdfs-site.xml: provides default behaviors for the HDFS client
+ core-site.xml: sets the default file system name

Although the location of these files varies across Hadoop distributions, they are usually found in /etc/hadoop/conf. To make them visible to Spark, set the HADOOP_CONF_DIR variable in spark-env.sh.

 

http://blog.csdn.net/vfgbv/article/details/52035259

