
16. Spark Streaming Source Code Walkthrough: The Data Cleanup Mechanism


Original article. If you repost, please credit: reposted from the Tingfeng Jushi blog (http://zhou-yuefei.iteye.com/).

In this installment:

1. An overview of Spark Streaming data cleanup

2. The Spark Streaming data cleanup process in detail

3. The trigger mechanism for Spark Streaming data cleanup

 

    Spark Streaming differs from an ordinary Spark application. In an ordinary Spark program, intermediate data is destroyed when the SparkContext is closed after the program finishes. A Spark Streaming application, by contrast, runs continuously, computing batch after batch and producing large amounts of intermediate data as it goes, so objects and metadata must be cleaned up periodically: every batch duration triggers new jobs, after which the corresponding RDDs and metadata have to be cleared. Below we walk through Spark Streaming's data cleanup mechanism in detail, following the source code.
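For context, here is a minimal streaming app (an illustrative sketch with a hypothetical host and port, not code from the Spark source). Because it has an output operation, a job runs every batch, and every batch materializes RDDs that the cleanup machinery described below must eventually reclaim:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TinyStreamingApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("TinyStreamingApp")
    // One batch per second: each batch generates RDDs and metadata that must later be cleared
    val ssc = new StreamingContext(conf, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999) // hypothetical source
    lines.count().print() // an output operation, so a job is generated every batch
    ssc.start()
    ssc.awaitTermination()
  }
}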

 

1. Data Cleanup Overview

    As a Spark Streaming application runs, jobs are generated continuously over time. When a job finishes, the corresponding data (RDDs, metadata, checkpoint data) needs to be cleaned up. Jobs are generated periodically by JobGenerator, and JobGenerator is also responsible for the data cleanup.

    The code through which JobGenerator drives data cleanup lives in a message loop, eventLoop:

 

eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
  override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

  override protected def onError(e: Throwable): Unit = {
    jobScheduler.reportError("Error in job generator", e)
  }
}
eventLoop.start()
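Spark's EventLoop is essentially a dedicated thread draining a queue of posted events. A minimal sketch of that pattern follows; it is a simplified illustration, not Spark's actual EventLoop implementation:

import java.util.concurrent.LinkedBlockingQueue

// Simplified event loop: post() enqueues, a single thread dequeues and dispatches.
abstract class SimpleEventLoop[E](name: String) {
  private val queue = new LinkedBlockingQueue[E]()
  @volatile private var stopped = false

  private val thread = new Thread(name) {
    override def run(): Unit = {
      try {
        while (!stopped) {
          val event = queue.take() // blocks until an event is posted
          try onReceive(event)
          catch { case e: Throwable => onError(e) }
        }
      } catch { case _: InterruptedException => /* shutting down */ }
    }
  }

  protected def onReceive(event: E): Unit
  protected def onError(e: Throwable): Unit

  def start(): Unit = thread.start()
  def post(event: E): Unit = queue.put(event)
  def stop(): Unit = { stopped = true; thread.interrupt() }
}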
 
The core logic lives in the processEvent(event) function:
 
/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    case ClearMetadata(time) => clearMetadata(time)
    case DoCheckpoint(time, clearCheckpointDataLater) =>
      doCheckpoint(time, clearCheckpointDataLater)
    case ClearCheckpointData(time) => clearCheckpointData(time)
  }
}
 
As you can see, when JobGenerator receives ClearMetadata(time) or ClearCheckpointData(time), it performs the corresponding cleanup: clearMetadata(time) clears RDD data and related metadata, while clearCheckpointData(time) clears checkpoint data.
 
2. The Data Cleanup Process in Detail
    2.1 The ClearMetadata Process in Detail
First, let's look at the processing logic of the clearMetadata function:
/** Clear DStream metadata for the given `time`. */
private def clearMetadata(time: Time) {
  ssc.graph.clearMetadata(time)

  // If checkpointing is enabled, then checkpoint,
  // else mark batch to be fully processed
  if (shouldCheckpoint) {
    eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
  } else {
    // If checkpointing is not enabled, then delete metadata information about
    // received blocks (block data not saved in any case). Otherwise, wait for
    // checkpointing of this batch to complete.
    val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
    jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
    jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
    markBatchFullyProcessed(time)
  }
}
 
It first calls DStreamGraph's clearMetadata method:
 
def clearMetadata(time: Time) {
  logDebug("Clearing metadata for time " + time)
  this.synchronized {
    outputStreams.foreach(_.clearMetadata(time))
  }
  logDebug("Cleared old metadata for time " + time)
}
 
This in turn calls the clearMetadata method of every output DStream (for the classification of DStreams, see http://blog.csdn.net/zhouzx2010/article/details/51460790):
 
private[streaming] def clearMetadata(time: Time) {
  val unpersistData = ssc.conf.getBoolean("spark.streaming.unpersist", true)
  // Collect the old RDDs that need to be cleaned up
  val oldRDDs = generatedRDDs.filter(_._1 <= (time - rememberDuration))
  logDebug("Clearing references to old RDDs: [" +
    oldRDDs.map(x => s"${x._1} -> ${x._2.id}").mkString(", ") + "]")

  // Drop the RDDs to be cleared from generatedRDDs
  generatedRDDs --= oldRDDs.keys
  if (unpersistData) {
    logDebug(s"Unpersisting old RDDs: ${oldRDDs.values.map(_.id).mkString(", ")}")
    oldRDDs.values.foreach { rdd =>
      // Remove the RDD from the persistence list
      rdd.unpersist(false)
      // Explicitly remove blocks of BlockRDD
      rdd match {
        case b: BlockRDD[_] =>
          logInfo(s"Removing blocks of RDD $b of time $time")
          // Remove the RDD's block data
          b.removeBlocks()
        case _ =>
      }
    }
  }
  logDebug(s"Cleared ${oldRDDs.size} RDDs that were older than " +
    s"${time - rememberDuration}: ${oldRDDs.keys.mkString(", ")}")
  // Recursively clean up the DStreams this DStream depends on
  dependencies.foreach(_.clearMetadata(time))
}
 
The key cleanup logic is annotated in the code above: the DStream first determines which of its generated RDDs have expired, removes their entries from generatedRDDs, then unpersists the RDD data itself (explicitly removing the blocks of a BlockRDD), and finally cleans up the DStreams it depends on recursively. For example, if rememberDuration is 60 seconds, clearing batch time 10:02:00 removes every generated RDD whose batch time is at or before 10:01:00.
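Two knobs govern this path: spark.streaming.unpersist (whether old RDDs are unpersisted automatically, true by default) and the remember duration (how long a DStream keeps its generated RDDs). A hedged sketch of setting them in application code:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("CleanupConfigSketch")
  // Default is true; set to false only if you want to manage unpersisting yourself
  .set("spark.streaming.unpersist", "true")
val ssc = new StreamingContext(conf, Seconds(1))
// Keep generated RDDs around longer before they become eligible for cleanup
ssc.remember(Minutes(2))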
 
Back in JobGenerator's clearMetadata function:
 
if (shouldCheckpoint) {
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
} else {
  // If checkpointing is not enabled, then delete metadata information about
  // received blocks (block data not saved in any case). Otherwise, wait for
  // checkpointing of this batch to complete.
  val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
  jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
  jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
  markBatchFullyProcessed(time)
}
 
In the non-checkpointing branch it calls ReceiverTracker's cleanupOldBlocksAndBatches method, which ultimately calls cleanupOldBatches:
 
def cleanupOldBatches(cleanupThreshTime: Time, waitForCompletion: Boolean): Unit = synchronized {
  require(cleanupThreshTime.milliseconds < clock.getTimeMillis())
  val timesToCleanup = timeToAllocatedBlocks.keys.filter { _ < cleanupThreshTime }.toSeq
  logInfo(s"Deleting batches: ${timesToCleanup.mkString(" ")}")
  if (writeToLog(BatchCleanupEvent(timesToCleanup))) {
    // Drop the block allocations of the batches being deleted
    timeToAllocatedBlocks --= timesToCleanup
    // Clean up the write-ahead log (WAL)
    writeAheadLogOption.foreach(_.clean(cleanupThreshTime.milliseconds, waitForCompletion))
  } else {
    logWarning("Failed to acknowledge batch clean up in the Write Ahead Log.")
  }
}
 
As you can see, ReceiverTracker's cleanupOldBatches method cleans up the receiver-side data: the per-batch block allocations and the WAL data.
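Note that writeAheadLogOption is only present when the receiver write-ahead log has been enabled; otherwise the foreach above is a no-op. A hedged sketch of the configuration that turns it on (a checkpoint directory is also needed, since the WAL files are stored under it):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("WalSketch")
  // Enable the receiver write-ahead log so received blocks survive driver failure
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(1))
ssc.checkpoint("/tmp/streaming-checkpoint") // WAL files live under this directory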
Finally, the InputInfoTracker information is cleaned up:
def cleanup(batchThreshTime: Time): Unit = synchronized {
  val timesToCleanup = batchTimeToInputInfos.keys.filter(_ < batchThreshTime)
  logInfo(s"remove old batch metadata: ${timesToCleanup.mkString(" ")}")
  batchTimeToInputInfos --= timesToCleanup
}
This simply removes the old input-info entries from batchTimeToInputInfos.
 
2.2 The ClearCheckpointData Process in Detail
Now look at the processing logic of clearCheckpointData:
/** Clear DStream checkpoint data for the given `time`. */
private def clearCheckpointData(time: Time) {
  ssc.graph.clearCheckpointData(time)

  // All the checkpoint information about which batches have been processed, etc have
  // been saved to checkpoints, so its safe to delete block metadata and data WAL files
  val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
  jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
  jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
  markBatchFullyProcessed(time)
}
 
The ReceiverTracker and InputInfoTracker cleanup at the end is the same as in ClearMetadata, so here we analyze DStreamGraph's clearCheckpointData method:
 
def clearCheckpointData(time: Time) {
  logInfo("Clearing checkpoint data for time " + time)
  this.synchronized {
    outputStreams.foreach(_.clearCheckpointData(time))
  }
  logInfo("Cleared checkpoint data for time " + time)
}
 
Likewise, this calls the clearCheckpointData method of every output DStream in the DStreamGraph:
 
private[streaming] def clearCheckpointData(time: Time) {
  logDebug("Clearing checkpoint data")
  checkpointData.cleanup(time)
  dependencies.foreach(_.clearCheckpointData(time))
  logDebug("Cleared checkpoint data")
}
 
The core logic here is the checkpointData.cleanup(time) call. checkpointData is a DStreamCheckpointData object, whose cleanup method is as follows:
def cleanup(time: Time) {
  // Look up the oldest checkpoint file time recorded for this batch
  timeToOldestCheckpointFileTime.remove(time) match {
    case Some(lastCheckpointFileTime) =>
      // Collect the checkpoint files to delete
      val filesToDelete = timeToCheckpointFile.filter(_._1 < lastCheckpointFileTime)
      logDebug("Files to delete:\n" + filesToDelete.mkString(","))
      filesToDelete.foreach {
        case (time, file) =>
          try {
            val path = new Path(file)
            if (fileSystem == null) {
              fileSystem = path.getFileSystem(dstream.ssc.sparkContext.hadoopConfiguration)
            }
            // Delete the checkpoint file
            fileSystem.delete(path, true)
            timeToCheckpointFile -= time
            logInfo("Deleted checkpoint file '" + file + "' for time " + time)
          } catch {
            case e: Exception =>
              logWarning("Error deleting old checkpoint file '" + file + "' for time " + time, e)
              fileSystem = null
          }
      }
    case None =>
      logDebug("Nothing to delete")
  }
}
 
As you can see, checkpoint cleanup amounts to deleting the checkpoint files older than the recorded threshold time.
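For this path to run at all, checkpointing has to be enabled by giving the StreamingContext a checkpoint directory; shouldCheckpoint then holds and the DoCheckpoint/ClearCheckpointData messages flow. A minimal, hedged sketch:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("CheckpointSketch")
val ssc = new StreamingContext(conf, Seconds(1))
// Enabling checkpointing makes JobGenerator take the DoCheckpoint branch;
// checkpoint files accumulate here until cleanup deletes the old ones
ssc.checkpoint("/tmp/streaming-checkpoint") // any Hadoop-compatible path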
 
3. Triggering the Data Cleanup
    3.1 Triggering the ClearMetadata Process
After JobGenerator generates a job, it hands the job to a JobHandler to execute. In JobHandler's run method, once the job has finished, a JobCompleted message is posted to JobScheduler:
_eventLoop = eventLoop
if (_eventLoop != null) {
  _eventLoop.post(JobCompleted(job, clock.getTimeMillis()))
}
When JobScheduler receives the JobCompleted message, it calls the handleJobCompletion method; the dispatch code is:
 
private def processEvent(event: JobSchedulerEvent) {
  try {
    event match {
      case JobStarted(job, startTime) => handleJobStart(job, startTime)
      case JobCompleted(job, completedTime) => handleJobCompletion(job, completedTime)
      case ErrorReported(m, e) => handleError(m, e)
    }
  } catch {
    case e: Throwable =>
      reportError("Error in job scheduler", e)
  }
}
 
JobScheduler's handleJobCompletion method calls JobGenerator's onBatchCompletion method, whose source is:
 
def onBatchCompletion(time: Time) {
  eventLoop.post(ClearMetadata(time))
}
 
So JobGenerator's onBatchCompletion method posts a ClearMetadata message to its own event loop, and that is what triggers the ClearMetadata operation.
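Put together, the trigger chain is: JobHandler (job finishes) -> JobScheduler.handleJobCompletion -> JobGenerator.onBatchCompletion -> ClearMetadata posted to JobGenerator's event loop. A toy model of the chain, with hypothetical simplified names (the real classes communicate through event loops rather than direct calls), just to make the hand-offs concrete:

object ClearMetadataChainSketch {
  def clearMetadata(time: Long): Unit =
    println(s"clearMetadata($time): dropping old RDDs and metadata")

  // JobGenerator.onBatchCompletion posts ClearMetadata to its own loop
  def onBatchCompletion(time: Long): Unit = clearMetadata(time)

  // JobScheduler.handleJobCompletion runs when a JobCompleted event arrives
  def handleJobCompletion(batchTime: Long): Unit = onBatchCompletion(batchTime)

  def main(args: Array[String]): Unit =
    handleJobCompletion(batchTime = 5000L) // the job for batch time 5000 ms just finished
}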
 
3.2 Triggering the ClearCheckpointData Process
    Checkpoint data cleanup happens after a checkpoint completes. First look at the run method of CheckpointWriteHandler:
 
// All done, print success
val finishTime = System.currentTimeMillis()
logInfo("Checkpoint for time " + checkpointTime + " saved to file '" + checkpointFile +
  "', took " + bytes.length + " bytes and " + (finishTime - startTime) + " ms")
// Call back into JobGenerator to clean up checkpoint data
jobGenerator.onCheckpointCompletion(checkpointTime, clearCheckpointDataLater)
return
 
After the checkpoint completes, JobGenerator's onCheckpointCompletion method is called to clean up the checkpoint data. Let's look at its source:
 
def onCheckpointCompletion(time: Time, clearCheckpointDataLater: Boolean) {
  if (clearCheckpointDataLater) {
    eventLoop.post(ClearCheckpointData(time))
  }
}
 
onCheckpointCompletion first checks the clearCheckpointDataLater parameter it was passed; if it is true, a ClearCheckpointData message is posted to JobGenerator's eventLoop, which triggers the clearCheckpointData method and cleans up the checkpoint data.
So when is this parameter true?
Let's return to JobGenerator's clearMetadata method:
 
private def clearMetadata(time: Time) {
  ssc.graph.clearMetadata(time)

  if (shouldCheckpoint) {
    // Post a DoCheckpoint message, asking for checkpoint data cleanup afterwards
    eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
  } else {
    val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
    jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
    jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
    markBatchFullyProcessed(time)
  }
}
 
In clearMetadata, a DoCheckpoint message is posted with clearCheckpointDataLater set to true. When JobGenerator's eventLoop receives this message, it calls the doCheckpoint method:
 
private def doCheckpoint(time: Time, clearCheckpointDataLater: Boolean) {
  if (shouldCheckpoint && (time - graph.zeroTime).isMultipleOf(ssc.checkpointDuration)) {
    logInfo("Checkpointing graph for time " + time)
    ssc.graph.updateCheckpointData(time)
    checkpointWriter.write(new Checkpoint(ssc, time), clearCheckpointDataLater)
  }
}
 
The key step here: it calls CheckpointWriter's write method, still with clearCheckpointDataLater = true. Note the isMultipleOf guard: with, say, a 1-second batch duration and a 10-second checkpointDuration, only every tenth batch actually writes a checkpoint. Let's step into write:
 
def write(checkpoint: Checkpoint, clearCheckpointDataLater: Boolean) {
  try {
    val bytes = Checkpoint.serialize(checkpoint, conf)
    // Pass clearCheckpointDataLater along to CheckpointWriteHandler
    executor.execute(new CheckpointWriteHandler(
      checkpoint.checkpointTime, bytes, clearCheckpointDataLater))
    logInfo("Submitted checkpoint of time " + checkpoint.checkpointTime + " writer queue")
  } catch {
    case rej: RejectedExecutionException =>
      logError("Could not submit checkpoint task to the thread pool executor", rej)
  }
}
 
The clearCheckpointDataLater parameter is thus carried into CheckpointWriteHandler, so once the checkpoint write completes, a ClearCheckpointData message goes back to JobGenerator and the checkpoint data is cleaned up.
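This flag implements a common pattern: defer cleanup until an asynchronous write has finished, so data is never deleted before the state that makes its deletion safe is durable. A standalone sketch of the pattern, with hypothetical names (not the actual Spark classes):

import java.util.concurrent.Executors

// Hypothetical sketch: the cleanup flag rides along with the async write,
// and only after the write finishes is the cleanup message posted back.
object DeferredCleanupSketch {
  sealed trait Event
  case class DoCheckpoint(time: Long, clearLater: Boolean) extends Event
  case class ClearCheckpointData(time: Long) extends Event

  private val writer = Executors.newSingleThreadExecutor()

  def post(event: Event): Unit = event match {
    case DoCheckpoint(time, clearLater) =>
      writer.execute(new Runnable {
        def run(): Unit = {
          println(s"writing checkpoint for time $time") // stand-in for the real write
          if (clearLater) post(ClearCheckpointData(time)) // mirrors onCheckpointCompletion
        }
      })
    case ClearCheckpointData(time) =>
      println(s"cleaning checkpoint data for time $time")
  }

  def main(args: Array[String]): Unit = {
    post(DoCheckpoint(1000L, clearLater = true))
    writer.shutdown() // queued write still runs to completion
  }
}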
 
Original article. If you repost, please credit: reposted from the Tingfeng Jushi blog (http://zhou-yuefei.iteye.com/).
