Spark Source Code Analysis: the deploy Module


Overall Architecture of the Deploy Module

The deploy module consists of three submodules: master, worker, and client. Each of them extends Actor, and they communicate with one another via actor messages.

  • Master: receives worker registrations and manages all workers; accepts applications submitted by clients, schedules waiting applications (FIFO), and dispatches them to workers.
  • Worker: registers itself with the master, sets up the process environment according to the application configuration sent by the master, and launches StandaloneExecutorBackend.
  • Client: registers the application with the master and monitors it. When the user creates a SparkContext, a SparkDeploySchedulerBackend is instantiated, which in turn starts the client. Given the launch parameters and application information, the client asks the master to register the application and to launch StandaloneExecutorBackend on the slave nodes.

The class diagram of the deploy module is shown below:

(Figure: deploy module class diagram)

Deploy Module Communication Messages

The deploy module is not complex and its code base is small; most of the logic lies in the message passing and handling among the submodules, so the main messages exchanged between them are listed here:

  • client to master

    1. RegisterApplication (registers an application with the master)
  • master to client

    1. RegisteredApplication (reply to the client acknowledging application registration)
    2. ExecutorAdded (tells the client that a worker has launched an executor environment; sent right after LaunchExecutor is sent to the worker)
    3. ExecutorUpdated (tells the client that an executor's state has changed, e.g. it finished or exited abnormally; sent after the worker reports ExecutorStateChanged to the master)
  • master to worker

    1. LaunchExecutor (asks the worker to launch an executor environment)
    2. RegisteredWorker (reply to a worker's successful registration)
    3. RegisterWorkerFailed (reply to a worker's failed registration)
    4. KillExecutor (asks the worker to stop an executor environment)
  • worker to master

    1. RegisterWorker (registers the worker with the master)
    2. Heartbeat (periodic heartbeat sent to the master)
    3. ExecutorStateChanged (reports an executor state change to the master)
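The message list above lends itself to a small sealed hierarchy of case classes, which is how actor protocols are usually modeled in Scala. This is only a sketch based on the list: the field names and types are assumptions for illustration, while the real definitions live in Spark's DeployMessage source.

```scala
// A minimal sketch of the deploy protocol, modeled as case classes.
// Field lists are illustrative assumptions, not Spark's actual signatures.
sealed trait DeployMessage

// client -> master
case class RegisterApplication(appName: String, coresRequested: Int) extends DeployMessage

// master -> client
case class RegisteredApplication(appId: String) extends DeployMessage
case class ExecutorAdded(execId: Int, workerId: String, host: String, cores: Int) extends DeployMessage
case class ExecutorUpdated(execId: Int, state: String, message: Option[String]) extends DeployMessage

// master -> worker
case class LaunchExecutor(appId: String, execId: Int, cores: Int, memory: Int) extends DeployMessage
case class RegisteredWorker(masterUrl: String) extends DeployMessage
case class RegisterWorkerFailed(reason: String) extends DeployMessage
case class KillExecutor(appId: String, execId: Int) extends DeployMessage

// worker -> master
case class RegisterWorker(workerId: String, host: String, cores: Int, memory: Int) extends DeployMessage
case class Heartbeat(workerId: String) extends DeployMessage
case class ExecutorStateChanged(appId: String, execId: Int, state: String,
                                message: Option[String], exitCode: Option[Int]) extends DeployMessage

// Pattern matching gives type-safe dispatch, as in the actors' receive blocks:
def describe(msg: DeployMessage): String = msg match {
  case RegisterApplication(name, cores) => s"register app $name asking for $cores cores"
  case Heartbeat(id)                    => s"heartbeat from $id"
  case _                                => "other"
}
```

Modeling each direction of the protocol as immutable case classes is what lets the master, worker, and client handle messages with exhaustive pattern matches.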

A Closer Look at the Deploy Module Code

The deploy module is much simpler than the scheduler module, so rather than analyzing it line by line, we will only walk through how an application is submitted and how it ends.

Client submits an application

The client is created and started by SparkDeploySchedulerBackend, so a client is embedded in each application and serves only that application. When the client starts, it first registers the application with the master:

    def start() {
      // Just launch an actor; it will call back into the listener.
      actor = actorSystem.actorOf(Props(new ClientActor))
    }

    override def preStart() {
      logInfo("Connecting to master " + masterUrl)
      try {
        master = context.actorFor(Master.toAkkaUrl(masterUrl))
        masterAddress = master.path.address
        master ! RegisterApplication(appDescription)  // register the application with the master
        context.system.eventStream.subscribe(self, classOf[RemoteClientLifeCycleEvent])
        context.watch(master)  // Doesn't work with remote actors, but useful for testing
      } catch {
        case e: Exception =>
          logError("Failed to connect to master", e)
          markDisconnected()
          context.stop(self)
      }
    }

Upon receiving RegisterApplication, the master adds the application to the waiting queue, where it awaits scheduling:

    case RegisterApplication(description) => {
      logInfo("Registering app " + description.name)
      val app = addApplication(description, sender)
      logInfo("Registered app " + description.name + " with ID " + app.id)
      waitingApps += app
      context.watch(sender)  // This doesn't work with remote actors but helps for testing
      sender ! RegisteredApplication(app.id)
      schedule()
    }

The master calls schedule() after every such operation, so that waiting applications are scheduled promptly.

We said earlier that the deploy module is a resource management module. So what resource does Spark's deploy module manage, and in what unit is it scheduled? In the current version of Spark, CPU cores are the unit of resource management: every submitted application declares how many cores it needs, and the master handles all application requests in FIFO order. An application is scheduled as soon as enough cores are available to satisfy it; otherwise it keeps waiting. If the master can only grant part of the requested resources, it will still start the application with what is available. This is exactly what schedule() implements.

    def schedule() {
      if (spreadOutApps) {
        for (app <- waitingApps if app.coresLeft > 0) {
          val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
            .filter(canUse(app, _)).sortBy(_.coresFree).reverse
          val numUsable = usableWorkers.length
          val assigned = new Array[Int](numUsable)  // Number of cores to give on each node
          var toAssign = math.min(app.coresLeft, usableWorkers.map(_.coresFree).sum)
          var pos = 0
          while (toAssign > 0) {
            if (usableWorkers(pos).coresFree - assigned(pos) > 0) {
              toAssign -= 1
              assigned(pos) += 1
            }
            pos = (pos + 1) % numUsable
          }
          // Now that we've decided how many cores to give on each node, let's actually give them
          for (pos <- 0 until numUsable) {
            if (assigned(pos) > 0) {
              val exec = app.addExecutor(usableWorkers(pos), assigned(pos))
              launchExecutor(usableWorkers(pos), exec, app.desc.sparkHome)
              app.state = ApplicationState.RUNNING
            }
          }
        }
      } else {
        // Pack each app into as few nodes as possible until we've assigned all its cores
        for (worker <- workers if worker.coresFree > 0 && worker.state == WorkerState.ALIVE) {
          for (app <- waitingApps if app.coresLeft > 0) {
            if (canUse(app, worker)) {
              val coresToUse = math.min(worker.coresFree, app.coresLeft)
              if (coresToUse > 0) {
                val exec = app.addExecutor(worker, coresToUse)
                launchExecutor(worker, exec, app.desc.sparkHome)
                app.state = ApplicationState.RUNNING
              }
            }
          }
        }
      }
    }
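The spreadOutApps branch above hands out an application's cores round-robin across the usable workers. Its core loop can be lifted into a standalone sketch, where each worker is reduced to its free-core count (the numbers in the usage example are made up):

```scala
// Minimal re-implementation of the round-robin core assignment from the
// spreadOutApps branch of schedule(). `coresFree(i)` is the number of free
// cores on worker i; the return value is how many cores each worker gets.
def assignSpreadOut(coresWanted: Int, coresFree: Array[Int]): Array[Int] = {
  val assigned = new Array[Int](coresFree.length) // cores given on each worker
  var toAssign = math.min(coresWanted, coresFree.sum)
  var pos = 0
  while (toAssign > 0) {
    if (coresFree(pos) - assigned(pos) > 0) { // this worker still has a spare core
      toAssign -= 1
      assigned(pos) += 1
    }
    pos = (pos + 1) % coresFree.length
  }
  assigned
}
```

For example, an application asking for 5 cores against workers with 4, 2, and 1 free cores gets `Array(2, 2, 1)`: the load is spread across nodes instead of packed onto the first worker, which is the point of the spread-out policy.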

Once an application is scheduled, launchExecutor() is called to send the request to the worker, while the status is also reported to the client:

    def launchExecutor(worker: WorkerInfo, exec: ExecutorInfo, sparkHome: String) {
      worker.addExecutor(exec)
      worker.actor ! LaunchExecutor(
        exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory, sparkHome)
      exec.application.driver ! ExecutorAdded(exec.id, worker.id, worker.host, exec.cores, exec.memory)
    }

At this point the interaction between client and master has handed off to the interaction between master and worker, and the worker now has to set up the application's launch environment:

    case LaunchExecutor(appId, execId, appDesc, cores_, memory_, execSparkHome_) =>
      val manager = new ExecutorRunner(
        appId, execId, appDesc, cores_, memory_, self, workerId, ip, new File(execSparkHome_), workDir)
      executors(appId + "/" + execId) = manager
      manager.start()
      coresUsed += cores_
      memoryUsed += memory_
      master ! ExecutorStateChanged(appId, execId, ExecutorState.RUNNING, None, None)

On receiving LaunchExecutor, the worker creates an ExecutorRunner instance and reports to the master that the executor environment has been started.

During startup, ExecutorRunner creates a thread, configures the environment, and launches a new process:

    def start() {
      workerThread = new Thread("ExecutorRunner for " + fullId) {
        override def run() { fetchAndRunExecutor() }
      }
      workerThread.start()
      // Shutdown hook that kills actors on shutdown.
      ...
    }

    def fetchAndRunExecutor() {
      try {
        // Create the executor's working directory
        val executorDir = new File(workDir, appId + "/" + execId)
        if (!executorDir.mkdirs()) {
          throw new IOException("Failed to create directory " + executorDir)
        }

        // Launch the process
        val command = buildCommandSeq()
        val builder = new ProcessBuilder(command: _*).directory(executorDir)
        val env = builder.environment()
        for ((key, value) <- appDesc.command.environment) {
          env.put(key, value)
        }
        env.put("SPARK_MEM", memory.toString + "m")
        // In case we are running this from within the Spark Shell, avoid creating a "scala"
        // parent process for the executor command
        env.put("SPARK_LAUNCH_WITH_SCALA", "0")
        process = builder.start()

        // Redirect its stdout and stderr to files
        redirectStream(process.getInputStream, new File(executorDir, "stdout"))
        redirectStream(process.getErrorStream, new File(executorDir, "stderr"))

        // Wait for it to exit; this is actually a bad thing if it happens, because we expect to run
        // long-lived processes only. However, in the future, we might restart the executor a few
        // times on the same machine.
        val exitCode = process.waitFor()
        val message = "Command exited with code " + exitCode
        worker ! ExecutorStateChanged(appId, execId, ExecutorState.FAILED, Some(message),
          Some(exitCode))
      } catch {
        case interrupted: InterruptedException =>
          logInfo("Runner thread for executor " + fullId + " interrupted")
        case e: Exception => {
          logError("Error running executor", e)
          if (process != null) {
            process.destroy()
          }
          val message = e.getClass + ": " + e.getMessage
          worker ! ExecutorStateChanged(appId, execId, ExecutorState.FAILED, Some(message), None)
        }
      }
    }
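The essence of fetchAndRunExecutor(), building a ProcessBuilder, injecting environment variables, redirecting the child's output to a file, and blocking until the child exits, can be reduced to a short runnable sketch. The command and environment variable in the usage example are placeholders, not Spark's actual values, and redirectOutput stands in for Spark's manual redirectStream copy thread:

```scala
import java.io.File

// Minimal sketch of the launch pattern used by fetchAndRunExecutor():
// configure the child environment, start the process, capture its stdout in a
// file, and wait for the exit code.
def runAndCapture(command: Seq[String], env: Map[String, String], outFile: File): Int = {
  val builder = new ProcessBuilder(command: _*)
  val builderEnv = builder.environment()
  for ((key, value) <- env) builderEnv.put(key, value) // like env.put("SPARK_MEM", ...)
  builder.redirectOutput(outFile) // stand-in for the source's redirectStream(...)
  val process = builder.start()
  process.waitFor() // the exit code that Spark reports via ExecutorStateChanged
}
```

For example, `runAndCapture(Seq("sh", "-c", "echo hello"), Map("DEMO_VAR" -> "1"), someFile)` returns the child's exit code and leaves its output in `someFile`, mirroring the stdout/stderr files ExecutorRunner writes into the executor's working directory.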

After ExecutorRunner starts, the worker reports ExecutorStateChanged to the master, and the master repacks the message as ExecutorUpdated and sends it to the client.

This concludes the application submission process. It is not complicated; it mostly comes down to message passing.

How an Application Ends

An application can end for various reasons, including normal completion and abnormal exit. Let's now walk through the whole shutdown flow.

The end of an application usually brings down its client, and the master detects the client's termination through the Actor system; it then calls removeApplication():

    def removeApplication(app: ApplicationInfo) {
      if (apps.contains(app)) {
        logInfo("Removing app " + app.id)
        apps -= app
        idToApp -= app.id
        actorToApp -= app.driver
        addressToWorker -= app.driver.path.address
        completedApps += app  // Remember it in our history
        waitingApps -= app
        for (exec <- app.executors.values) {
          exec.worker.removeExecutor(exec)
          exec.worker.actor ! KillExecutor(exec.application.id, exec.id)
        }
        app.markFinished(ApplicationState.FINISHED)  // TODO: Mark it as FAILED if it failed
        schedule()
      }
    }

removeApplication() first deletes the application from the data structures the master maintains, then sends KillExecutor to every worker involved. On receiving KillExecutor, the worker calls the ExecutorRunner's kill() function:

    case KillExecutor(appId, execId) =>
      val fullId = appId + "/" + execId
      executors.get(fullId) match {
        case Some(executor) =>
          logInfo("Asked to kill executor " + fullId)
          executor.kill()
        case None =>
          logInfo("Asked to kill unknown executor " + fullId)
      }

Inside ExecutorRunner, kill() stops the monitoring thread together with the process that thread launched, and reports ExecutorStateChanged to the worker:

    def kill() {
      if (workerThread != null) {
        workerThread.interrupt()
        workerThread = null
        if (process != null) {
          logInfo("Killing process!")
          process.destroy()
          process.waitFor()
        }
        worker ! ExecutorStateChanged(appId, execId, ExecutorState.KILLED, None, None)
        Runtime.getRuntime.removeShutdownHook(shutdownHook)
      }
    }
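The interrupt-then-destroy pattern above, where a watcher thread blocks in waitFor() and kill() both interrupts the thread and destroys the child process, can be exercised in isolation. This is a minimal sketch under simplifying assumptions (no logging, no state report, and the `sleep` command in the usage example stands in for a long-lived executor process):

```scala
// Minimal sketch of ExecutorRunner's kill() pattern: a watcher thread blocks
// in waitFor(); kill() interrupts the watcher, destroys the child process, and
// reaps it with a final waitFor().
class RunnerSketch(command: Seq[String]) {
  @volatile var process: Process = null
  private var workerThread: Thread = null

  def start(): Unit = {
    workerThread = new Thread("runner-sketch") {
      override def run(): Unit = {
        process = new ProcessBuilder(command: _*).start()
        try process.waitFor()
        catch { case _: InterruptedException => () } // interrupted by kill()
      }
    }
    workerThread.start()
  }

  def kill(): Unit = {
    if (workerThread != null) {
      workerThread.interrupt()
      workerThread = null
      if (process != null) {
        process.destroy()
        process.waitFor() // reap the child before reporting KILLED
      }
    }
  }
}
```

Reaping the child with waitFor() after destroy() matters: it ensures the process has really exited before the KILLED state is reported upstream.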

When an application ends, all information about it on the master and the workers is cleaned up. That completes the walkthrough of application termination; we skipped many error-handling branches, but they do not affect the main flow.

http://jerryshao.me/architecture/2013/04/30/Spark%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90%E4%B9%8B-deploy%E6%A8%A1%E5%9D%97/
