https://spark.apache.org/docs/latest/cluster-overview.html
This document gives a short overview of how Spark runs on clusters, to make it easier to understand the components involved. Read through the application submission guide to learn about submitting applications to a cluster.
Components
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program). Specifically, to run on a cluster, the SparkContext can connect to several types of cluster managers (either Spark’s own standalone cluster manager or Mesos/YARN), which allocate resources across applications. Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application. Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors. Finally, SparkContext sends tasks for the executors to run.
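To make the driver/executor split concrete, here is a minimal sketch of a driver program. The application name, master URL, and computation are assumptions for illustration, not values from this page; the point is that creating the SparkContext is what connects to the cluster manager, and running an action is what ships tasks to the executors.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal driver program. Creating the SparkContext connects to the
// cluster manager and acquires executors; actions on RDDs send tasks to them.
object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("SimpleApp")               // name shown in the cluster UI
      .setMaster("spark://master-host:7077") // assumed standalone master URL
    val sc = new SparkContext(conf)

    // This computation is split into tasks that run on the executors.
    val sum = sc.parallelize(1 to 1000).map(_ * 2).reduce(_ + _)
    println(s"sum = $sum")

    sc.stop() // release the executors when the application is done
  }
}
```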
There are several useful things to note about this architecture:
- Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This has the benefit of isolating applications from each other, on both the scheduling side (each driver schedules its own tasks) and executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system.
- Spark is agnostic to the underlying cluster manager. As long as it can acquire executor processes, and these communicate with each other, it is relatively easy to run it even on a cluster manager that also supports other applications (e.g. Mesos/YARN).
- The driver program must listen for and accept incoming connections from its executors throughout its lifetime (e.g., see spark.driver.port and spark.fileserver.port in the network config section). As such, the driver program must be network addressable from the worker nodes; a minimal configuration sketch follows this list.
- Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, preferably on the same local area network. If you’d like to send requests to the cluster remotely, it’s better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes.
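As an illustration of the addressability point above, the driver can be pinned to a known host and port so the worker nodes can reach it. A minimal sketch for a driver's main method or the spark-shell; the host name and port number are assumptions:

```scala
import org.apache.spark.SparkConf

// Assumed values: pin the driver to a fixed host/port so executors
// on the worker nodes can open connections back to it.
val conf = new SparkConf()
  .set("spark.driver.host", "driver-host") // address resolvable from the workers
  .set("spark.driver.port", "51000")       // fixed port instead of a random one
```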
Cluster Manager Types
The system currently supports three cluster managers:
- Standalone – a simple cluster manager included with Spark that makes it easy to set up a cluster.
- Apache Mesos – a general cluster manager that can also run Hadoop MapReduce and service applications.
- Hadoop YARN – the resource manager in Hadoop 2.
In addition, Spark’s EC2 launch scripts make it easy to launch a standalone cluster on Amazon EC2.
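For illustration, the cluster manager is selected by the master URL handed to SparkContext (or to spark-submit). A sketch of the common URL forms for this generation of Spark, with hypothetical host names:

```scala
import org.apache.spark.SparkConf

object MasterUrls {
  // Hypothetical host names; the URL scheme picks the cluster manager.
  val local      = new SparkConf().setMaster("local[4]")                 // no cluster: run locally with 4 threads
  val standalone = new SparkConf().setMaster("spark://master-host:7077") // Spark standalone cluster
  val mesos      = new SparkConf().setMaster("mesos://master-host:5050") // Apache Mesos cluster
  val yarn       = new SparkConf().setMaster("yarn-client")              // Hadoop YARN, client deploy mode
}
```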
Submitting Applications
Applications can be submitted to a cluster of any type using the spark-submit script. The application submission guide describes how to do this.
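As a sketch, a typical spark-submit invocation looks like the following; the main class, master address, memory setting, and jar path are all assumptions for illustration:

```
# Assumed values throughout: main class, master URL, and jar path.
./bin/spark-submit \
  --class com.example.SimpleApp \
  --master spark://master-host:7077 \
  --deploy-mode client \
  --executor-memory 2G \
  /path/to/simple-app.jar
```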
Monitoring
Each driver program has a web UI, typically on port 4040, that displays information about running tasks, executors, and storage usage. Simply go to http://<driver-node>:4040 in a web browser to access this UI. The monitoring guide also describes other monitoring options.
Job Scheduling
Spark gives control over resource allocation both across applications (at the level of the cluster manager) and within applications (if multiple computations are happening on the same SparkContext). The job scheduling overview describes this in more detail.
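Within one application, for instance, the scheduling mode is a plain configuration setting. A minimal sketch, assuming you want to switch the default FIFO behavior to fair sharing between jobs submitted concurrently on the same SparkContext:

```scala
import org.apache.spark.SparkConf

// Switch the in-application scheduler from FIFO (the default) to FAIR so
// that jobs submitted concurrently on one SparkContext share the executors.
val conf = new SparkConf()
  .setAppName("MultiJobApp")                // assumed application name
  .set("spark.scheduler.mode", "FAIR")      // fair sharing within this application
```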
Glossary
The following table summarizes terms you’ll see used to refer to cluster concepts:
Term | Meaning
--- | ---
Application | User program built on Spark. Consists of a driver program and executors on the cluster.
Application jar | A jar containing the user's Spark application. In some cases users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries; these will be added at runtime.
Driver program | The process running the main() function of the application and creating the SparkContext.
Cluster manager | An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN).
Deploy mode | Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside of the cluster. In "client" mode, the submitter launches the driver outside of the cluster.
Worker node | Any node that can run application code in the cluster.
Executor | A process launched for an application on a worker node, that runs tasks and keeps data in memory or disk storage across them. Each application has its own executors.
Task | A unit of work that will be sent to one executor.
Job | A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs.
Stage | Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.