
Spark Source Code Analysis 7: Metrics

 

Spark uses the metrics-core jar (the Codahale/Dropwizard Metrics library) to manage metrics for each of its components.
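To make the relationship concrete, here is a minimal sketch of what a metrics source looks like on top of metrics-core. The Source trait below mirrors org.apache.spark.metrics.source.Source, while HeapUsedSource is a made-up illustrative source (the real ones, such as MasterSource, read gauge values from component state in the same way); it assumes metrics-core 3.x (com.codahale.metrics) is on the classpath:

import com.codahale.metrics.{Gauge, MetricRegistry}

// Shape of Spark's source abstraction: a name plus a metrics-core
// MetricRegistry that holds the source's gauges, counters, timers, etc.
trait Source {
  def sourceName: String
  def metricRegistry: MetricRegistry
}

// Illustrative source: registers a gauge whose value is recomputed from
// live JVM state each time a sink polls the registry.
class HeapUsedSource extends Source {
  override val metricRegistry = new MetricRegistry()
  override val sourceName = "heapUsed" // hypothetical name, not a real Spark source

  metricRegistry.register(MetricRegistry.name("heap", "usedBytes"), new Gauge[Long] {
    override def getValue: Long = {
      val rt = Runtime.getRuntime
      rt.totalMemory() - rt.freeMemory()
    }
  })
}

Spark's MetricsSystem registers each source's MetricRegistry and, on the configured polling period, reports the current values to every configured sink.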

The metrics.properties.template file configures the metrics system. The configuration has two parts, sources and sinks, concepts much like Flume's sources and sinks. A source collects information from components such as the worker, master, driver, and executor; the built-in sources are ApplicationSource, BlockManagerSource, DAGSchedulerSource, ExecutorSource, JvmSource, MasterSource, and WorkerSource. We can ship Spark's status metrics to Graphite and use seyren to raise alerts when Spark misbehaves (a sample Graphite configuration is sketched after the template). Below is the annotated metrics.properties.template:

 

#  syntax: [instance].sink|source.[name].[options]=[value]

#  This file configures Spark's internal metrics system. The metrics system is
#  divided into instances which correspond to internal components.
#  Each instance can be configured to report its metrics to one or more sinks.
#  Accepted values for [instance] are "master", "worker", "executor", "driver",
#  and "applications". A wild card "*" can be used as an instance name, in
#  which case all instances will inherit the supplied property.
#
#  Within an instance, a "source" specifies a particular set of grouped metrics.
#  There are two kinds of sources:
#    1. Spark internal sources, like MasterSource, WorkerSource, etc, which will
#    collect a Spark component's internal state. Each instance is paired with a
#    Spark source that is added automatically.
#    2. Common sources, like JvmSource, which will collect low level state.
#    These can be added through configuration options and are then loaded
#    using reflection.
#
#  A "sink" specifies where metrics are delivered to. Each instance can be
#  assigned one or more sinks.
#
#  The sink|source field specifies whether the property relates to a sink or
#  source.
#
#  The [name] field specifies the name of source or sink.
#
#  The [options] field is the specific property of this source or sink. The
#  source or sink is responsible for parsing this property.
#
#  Notes:
#    1. To add a new sink, set the "class" option to a fully qualified class
#    name (see examples below).
#    2. Some sinks involve a polling period. The minimum allowed polling period
#    is 1 second.
#    3. Wild card properties can be overridden by more specific properties.
#    For example, master.sink.console.period takes precedence over
#    *.sink.console.period.
#    4. A metrics-specific configuration,
#    "spark.metrics.conf=${SPARK_HOME}/conf/metrics.properties", should be
#    passed as a Java property (-Dspark.metrics.conf=xxx) if you want to
#    customize the metrics system. You can also put the file in ${SPARK_HOME}/conf
#    and it will be loaded automatically.
#    5. MetricsServlet is added by default as a sink in master, worker and client
#    driver; you can send an http request to "/metrics/json" to get a snapshot of
#    all the registered metrics in json format. For the master, requests to
#    "/metrics/master/json" and "/metrics/applications/json" can be sent separately
#    to get metrics snapshots of the master and applications instances.
#    MetricsServlet itself needs no explicit configuration.
#

## List of available sinks and their properties.

# org.apache.spark.metrics.sink.ConsoleSink
#   Name:   Default:   Description:
#   period  10         Poll period
#   unit    seconds    Units of poll period

# org.apache.spark.metrics.sink.CsvSink
#   Name:     Default:   Description:
#   period    10         Poll period
#   unit      seconds    Units of poll period
#   directory /tmp       Where to store CSV files

# org.apache.spark.metrics.sink.GangliaSink
#   Name:     Default:   Description:
#   host      NONE       Hostname or multicast group of Ganglia server
#   port      NONE       Port of Ganglia server(s)
#   period    10         Poll period
#   unit      seconds    Units of poll period
#   ttl       1          TTL of messages sent by Ganglia
#   mode      multicast  Ganglia network mode ('unicast' or 'multicast')

# org.apache.spark.metrics.sink.JmxSink

# org.apache.spark.metrics.sink.MetricsServlet
#   Name:     Default:   Description:
#   path      VARIES*    Path prefix from the web server root
#   sample    false      Whether to show entire set of samples for histograms ('false' or 'true')
#
# * Default path is /metrics/json for all instances except the master. The master has two paths:
#     /metrics/applications/json # App information
#     /metrics/master/json      # Master information

# org.apache.spark.metrics.sink.GraphiteSink
#   Name:     Default:      Description:
#   host      NONE          Hostname of Graphite server
#   port      NONE          Port of Graphite server
#   period    10            Poll period
#   unit      seconds       Units of poll period
#   prefix    EMPTY STRING  Prefix to prepend to metric name

## Examples
# Enable JmxSink for all instances by class name
#*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

# Enable ConsoleSink for all instances by class name
#*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink

# Polling period for ConsoleSink
#*.sink.console.period=10

#*.sink.console.unit=seconds

# Master instance overrides the wildcard polling period
#master.sink.console.period=15

#master.sink.console.unit=seconds

# Enable CsvSink for all instances
#*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink

# Polling period for CsvSink
#*.sink.csv.period=1

#*.sink.csv.unit=minutes

# Output directory for CsvSink
#*.sink.csv.directory=/tmp/

# Worker instance overrides the wildcard polling period
#worker.sink.csv.period=10

#worker.sink.csv.unit=minutes

# Enable jvm source for instance master, worker, driver and executor
#master.source.jvm.class=org.apache.spark.metrics.source.JvmSource

#worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource

#driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource

#executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
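
Tying this back to the Graphite/seyren setup mentioned at the top, a minimal metrics.properties sketch that ships every instance's metrics to Graphite could look like the following; the host, port, and prefix values are placeholder assumptions to adapt to your environment:

# Hypothetical example: send all instances' metrics to Graphite
# (graphite.example.com and the prefix "spark" are placeholders;
# 2003 is Graphite's default plaintext port)
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
*.sink.graphite.prefix=spark

Once the series arrive in Graphite, seyren checks can alert on missing or abnormal values. Since MetricsServlet is enabled by default, a quick request to /metrics/json on an instance's web UI is also an easy way to verify what is being reported before wiring up Graphite.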

 
