object OnlineTheTop3ItemForEachCategory2DB {
  def main(args: Array[String]) {
    val conf = new SparkConf() // Create the SparkConf object
    // Set the application name; it is shown in the monitoring UI while the program runs
    conf.setAppName("OnlineTheTop3ItemForEachCategory2DB")
    conf.setMaster("spark://Master:7077") // Run the program on the Spark cluster
    // batchDuration controls how often jobs are generated; this call also creates the
    // entry point for Spark Streaming execution
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/root/Documents/SparkApps/checkpoint")
    val socketDStream = ssc.socketTextStream("Master", 9999)
    // Business logic goes here ...
    ssc.start()
    ssc.awaitTermination()
  }
}
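The business logic itself is elided above. Purely as an illustration, the sketch below shows what such logic might look like in place of that placeholder, assuming each input line is a space-separated "category item" pair; the field format, the window and slide durations, and the variable names are all hypothetical and not taken from the original program.

  // Hypothetical sketch of the elided business logic (not from the original program).
  // Assumes each line is "<category> <item>"; the 60s window and 20s slide are arbitrary
  // multiples of the 5s batch duration.
  val pairs = socketDStream
    .map(_.split(" "))
    .filter(_.length == 2)
    .map(fields => ((fields(0), fields(1)), 1))

  // Click counts per (category, item) over the last 60 seconds, sliding every 20 seconds
  val windowedCounts = pairs.reduceByKeyAndWindow(_ + _, Seconds(60), Seconds(20))

  // Top 3 items per category, computed with an RDD-level transform
  val top3PerCategory = windowedCounts.transform { rdd =>
    rdd.map { case ((category, item), count) => (category, (item, count)) }
       .groupByKey()
       .mapValues(_.toList.sortBy(-_._2).take(3))
  }
  top3PerCategory.print()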
2.1 Creating the StreamingContext
1) The example first constructs a StreamingContext from a SparkConf and a batch duration. The relevant constructor is:
/**
 * Create a StreamingContext by providing the configuration necessary for a new SparkContext.
 * @param conf a org.apache.spark.SparkConf object specifying Spark parameters
 * @param batchDuration the time interval at which streaming data will be divided into batches
 */
def this(conf: SparkConf, batchDuration: Duration) = {
  this(StreamingContext.createNewSparkContext(conf), null, batchDuration)
}

private[streaming] def createNewSparkContext(conf: SparkConf): SparkContext = {
  new SparkContext(conf)
}
2) At the very beginning, the example must create an InputDStream from the data stream:
val socketDStream = ssc.socketTextStream("Master", 9999)

/**
 * Create an input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the received bytes are interpreted as UTF8 encoded `\n` delimited
 * lines.
 * @param hostname Hostname to connect to for receiving data
 * @param port Port to connect to for receiving data
 * @param storageLevel Storage level to use for storing the received objects
 * (default: StorageLevel.MEMORY_AND_DISK_SER_2)
 */
def socketTextStream(
    hostname: String,
    port: Int,
    storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
  ): ReceiverInputDStream[String] = withNamedScope("socket text stream") {
  socketStream[String](hostname, port, SocketReceiver.bytesToLines, storageLevel)
}
/**
 * Create an input stream from TCP source hostname:port. Data is received using
 * a TCP socket and the received bytes are interpreted as objects using the given
 * converter.
 * @param hostname Hostname to connect to for receiving data
 * @param port Port to connect to for receiving data
 * @param converter Function to convert the byte stream to objects
 * @param storageLevel Storage level to use for storing the received objects
 * @tparam T Type of the objects received (after converting bytes to objects)
 */
def socketStream[T: ClassTag](
    hostname: String,
    port: Int,
    converter: (InputStream) => Iterator[T],
    storageLevel: StorageLevel
  ): ReceiverInputDStream[T] = {
  new SocketInputDStream[T](this, hostname, port, converter, storageLevel)
}
private[streaming]
class SocketInputDStream[T: ClassTag](
    ssc_ : StreamingContext,
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends ReceiverInputDStream[T](ssc_) {

  def getReceiver(): Receiver[T] = {
    new SocketReceiver(host, port, bytesToObjects, storageLevel)
  }
}
To summarize the inheritance hierarchy of SocketInputDStream:
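A simplified sketch of the chain (type parameters and members elided):

  SocketInputDStream[T]               // knows how to build a SocketReceiver
    extends ReceiverInputDStream[T]   // an input stream backed by a Receiver running on an Executor
      extends InputDStream[T]         // an input stream registered with the DStreamGraph on the Driver
        extends DStream[T]            // the base abstraction: a sequence of RDDs over time

DStream, at the root of this hierarchy, keeps the RDDs it has generated in generatedRDDs and materializes them on demand in getOrCompute: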
// RDDs generated, marked as private[streaming] so that testsuites can access it
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()

/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {
      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)
        }
      }
      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}
2.2 Starting the StreamingContext
The ssc.start() call in the code example starts the StreamingContext; the main logic lives in this start method:
When start() is called on the StreamingContext, it internally calls JobScheduler.start(), which begins the message loop. Inside JobScheduler.start(), a JobGenerator and a ReceiverTracker are constructed and their start() methods are called:

1. Once started, the JobGenerator keeps generating Jobs according to batchDuration. The "Job" here is not a job in the Spark Core sense; it is merely the RDD DAG derived from the DStreamGraph. From a Java point of view it is comparable to an instance of the Runnable interface. To actually run, a Job must be submitted to the JobScheduler, which uses a thread pool to submit each Job to the cluster on its own thread (the real work is triggered inside that thread by an RDD action).

2. Once started, the ReceiverTracker first launches the Receivers in the Spark cluster (strictly speaking, it first launches a ReceiverSupervisor on each Executor). When a Receiver receives data, the ReceiverSupervisor stores it on the Executor and sends the data's metadata to the ReceiverTracker on the Driver, which manages the received metadata through a ReceivedBlockTracker.
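To make the Runnable analogy in point 1 concrete, a streaming Job can be pictured roughly as a batch time paired with a closure. The sketch below is a conceptual model only, with made-up names; the real Job class appears in full near the end of this walkthrough.

  // Conceptual sketch only -- not Spark source. A streaming "Job" is essentially a batch
  // time plus a () => Unit closure; JobScheduler wraps it in a Runnable (JobHandler) and
  // hands it to a thread pool, where invoking the closure fires the RDD action that does
  // the real work on the cluster.
  class SketchStreamingJob(val batchTimeMs: Long, body: () => Unit) {
    def run(): Unit = body()
  }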
/**
 * Start the execution of the streams.
 *
 * @throws IllegalStateException if the StreamingContext is already stopped.
 */
def start(): Unit = synchronized {
  state match {
    case INITIALIZED =>
      startSite.set(DStream.getCreationSite())
      StreamingContext.ACTIVATION_LOCK.synchronized {
        StreamingContext.assertNoOtherContextIsActive()
        try {
          validate()
          // Start the streaming scheduler in a new thread, so that thread local properties
          // like call sites and job groups can be reset without affecting those of the
          // current thread.
          // Thread-local storage: each thread has its own private properties, so setting
          // them here does not affect other threads.
          ThreadUtils.runInNewThread("streaming-start") {
            sparkContext.setCallSite(startSite.get)
            sparkContext.clearJobGroup()
            sparkContext.setLocalProperty(SparkContext.SPARK_JOB_INTERRUPT_ON_CANCEL, "false")
            // Start the JobScheduler
            scheduler.start()
          }
          state = StreamingContextState.ACTIVE
        } catch {
          case NonFatal(e) =>
            logError("Error starting the context, marking it as stopped", e)
            scheduler.stop(false)
            state = StreamingContextState.STOPPED
            throw e
        }
        StreamingContext.setActiveContext(this)
      }
      shutdownHookRef = ShutdownHookManager.addShutdownHook(
        StreamingContext.SHUTDOWN_HOOK_PRIORITY)(stopOnShutdown)
      // Registering Streaming Metrics at the start of the StreamingContext
      assert(env.metricsSystem != null)
      env.metricsSystem.registerSource(streamingSource)
      uiTab.foreach(_.attach())
      logInfo("StreamingContext started")
    case ACTIVE =>
      logWarning("StreamingContext has already been started")
    case STOPPED =>
      throw new IllegalStateException("StreamingContext has already been stopped")
  }
}
The scheduler.start() call above is JobScheduler.start():

def start(): Unit = synchronized {
  if (eventLoop != null) return // scheduler has already been started

  logDebug("Starting JobScheduler")
  eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
    override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)
    override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
  }
  // Start the message-loop thread that processes the JobScheduler's events
  eventLoop.start()

  // attach rate controllers of input streams to receive batch completion updates
  for {
    inputDStream <- ssc.graph.getInputStreams
    // the rateController can throttle the input rate
    rateController <- inputDStream.rateController
  } ssc.addStreamingListener(rateController)

  // Start the listener bus, which updates the Streaming tab in the Spark UI
  listenerBus.start(ssc.sparkContext)
  receiverTracker = new ReceiverTracker(ssc)
  // Create the InputInfoTracker, which tracks all input streams and their input statistics;
  // the information is exposed through StreamingListener
  inputInfoTracker = new InputInfoTracker(ssc)
  // Start the ReceiverTracker, which handles data reception, data caching and block generation
  receiverTracker.start()
  // Start the JobGenerator, which initializes the DStreamGraph, converts DStreams to RDDs,
  // generates Jobs and submits them for execution
  jobGenerator.start()
  logInfo("Started JobScheduler")
}
private def processEvent(event: JobSchedulerEvent) {
  try {
    event match {
      case JobStarted(job, startTime) => handleJobStart(job, startTime)
      case JobCompleted(job, completedTime) => handleJobCompletion(job, completedTime)
      case ErrorReported(m, e) => handleError(m, e)
    }
  } catch {
    case e: Throwable =>
      reportError("Error in job scheduler", e)
  }
}
The EventLoop used here runs a daemon thread that keeps draining a blocking queue and dispatching each event to onReceive:

private val eventQueue: BlockingQueue[E] = new LinkedBlockingDeque[E]()

private val eventThread = new Thread(name) {
  setDaemon(true)

  override def run(): Unit = {
    try {
      while (!stopped.get) {
        val event = eventQueue.take()
        try {
          onReceive(event)
        } catch {
          case NonFatal(e) => {
            try {
              onError(e)
            } catch {
              case NonFatal(e) => logError("Unexpected error in " + name, e)
            }
          }
        }
      }
    } catch {
      case ie: InterruptedException => // exit even if eventQueue is not empty
      case NonFatal(e) => logError("Unexpected error in " + name, e)
    }
  }
}
The receiverTracker.start() call in JobScheduler.start() runs ReceiverTracker.start():

def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}
/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })
  runDummySparkJob()
  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
/**
 * Run the dummy Spark job to ensure that all slaves have registered. This avoids all the
 * receivers to be scheduled on the same node.
 *
 * TODO Should poll the executor number and wait for executors according to
 * "spark.scheduler.minRegisteredResourcesRatio" and
 * "spark.scheduler.maxRegisteredResourcesWaitingTime" rather than running a dummy job.
 */
private def runDummySparkJob(): Unit = {
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }
  assert(getExecutors.nonEmpty)
}
ReceiverTracker.launchReceivers() also calls endpoint.send(StartAllReceivers(receivers)), sending the StartAllReceivers message through the RPC endpoint.
When the ReceiverTrackerEndpoint receives this message, it first uses the scheduling policy to work out which Executors each Receiver should run on, and then calls startReceiver(receiver, executors) to start the Receivers.
override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
  // ... other messages omitted ...
}
/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // ... irrelevant code omitted ...
  }

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        // Instantiate the receiver supervisor
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  // startReceiverFunc is passed in when the job is submitted because it executes on the Executor
  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // Keep restarting the receiver job until the ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
supervisor.start(), now running inside the receiver task on the Executor, is ReceiverSupervisor.start():

/** Start the supervisor */
def start() {
  onStart()
  startReceiver()
}
override protected def onStart() {
  registeredBlockGenerators.foreach { _.start() }
}
/** Start receiver */
def startReceiver(): Unit = synchronized {
  try {
    if (onReceiverStart()) {
      logInfo("Starting receiver")
      receiverState = Started
      receiver.onStart()
      logInfo("Called receiver onStart")
    } else {
      // The driver refused us
      stop("Registered unsuccessfully because Driver refused to start receiver " + streamId, None)
    }
  } catch {
    case NonFatal(t) =>
      stop("Error starting receiver " + streamId, Some(t))
  }
}
override protected def onReceiverStart(): Boolean = {
  val msg = RegisterReceiver(
    streamId, receiver.getClass.getSimpleName, host, executorId, endpoint)
  trackerEndpoint.askWithRetry[Boolean](msg)
}
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  // Remote messages
  case RegisterReceiver(streamId, typ, host, executorId, receiverEndpoint) =>
    val successful = registerReceiver(
      streamId, typ, host, executorId, receiverEndpoint, context.senderAddress)
    context.reply(successful)
  // ... other messages omitted ...
}
/** Register a receiver */
private def registerReceiver(
    streamId: Int,
    typ: String,
    host: String,
    executorId: String,
    receiverEndpoint: RpcEndpointRef,
    senderAddress: RpcAddress
  ): Boolean = {
  if (!receiverInputStreamIds.contains(streamId)) {
    throw new SparkException("Register received for unexpected id " + streamId)
  }
  // ... irrelevant code omitted ...
  if (!isAcceptable) {
    // Refuse it since it's scheduled to a wrong executor
    false
  } else {
    val name = s"${typ}-${streamId}"
    val receiverTrackingInfo = ReceiverTrackingInfo(
      streamId,
      ReceiverState.ACTIVE,
      scheduledLocations = None,
      runningExecutor = Some(ExecutorCacheTaskLocation(host, executorId)),
      name = Some(name),
      endpoint = Some(receiverEndpoint))
    receiverTrackingInfos.put(streamId, receiverTrackingInfo)
    listenerBus.post(StreamingListenerReceiverStarted(receiverTrackingInfo.toReceiverInfo))
    logInfo("Registered receiver for stream " + streamId + " from " + senderAddress)
    true
  }
}
Back on the Executor, receiver.onStart() reaches the concrete Receiver; in this example it is SocketReceiver, which connects to the socket on a daemon thread and stores whatever it reads:

private[streaming]
class SocketReceiver[T: ClassTag](
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends Receiver[T](storageLevel) with Logging {

  def onStart() {
    // Start the thread that receives data over a connection
    new Thread("Socket Receiver") {
      setDaemon(true)
      override def run() { receive() }
    }.start()
  }

  /** Create a socket connection and receive data until receiver is stopped */
  def receive() {
    var socket: Socket = null
    try {
      logInfo("Connecting to " + host + ":" + port)
      socket = new Socket(host, port)
      logInfo("Connected to " + host + ":" + port)
      val iterator = bytesToObjects(socket.getInputStream())
      while (!isStopped && iterator.hasNext) {
        store(iterator.next)
      }
      if (!isStopped()) {
        restart("Socket data stream had no more data")
      } else {
        logInfo("Stopped receiving")
      }
    } catch {
      // ... error handling omitted ...
    }
  }
}
Back in JobScheduler.start(), jobGenerator.start() starts the JobGenerator, which drives job generation with a RecurringTimer:

// Based on the batchInterval passed in when the StreamingContext was created,
// periodically post GenerateJobs messages
private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")

JobGenerator's start() method:

/** Start generation of jobs */
def start(): Unit = synchronized {
  if (eventLoop != null) return // generator has already been started

  // Call checkpointWriter here to initialize it before eventLoop uses it to avoid a deadlock.
  // See SPARK-10125
  checkpointWriter

  eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
    override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)
    override protected def onError(e: Throwable): Unit = {
      jobScheduler.reportError("Error in job generator", e)
    }
  }
  // Start the message-loop thread
  eventLoop.start()

  if (ssc.isCheckpointPresent) {
    restart()
  } else {
    // Start the timer that periodically generates jobs
    startFirstTime()
  }
}
/** Starts the generator for the first time */
private def startFirstTime() {
  val startTime = new Time(timer.getStartTime())
  graph.start(startTime - graph.batchDuration)
  timer.start(startTime.milliseconds)
  logInfo("Started JobGenerator at " + startTime)
}
/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    case ClearMetadata(time) => clearMetadata(time)
    case DoCheckpoint(time, clearCheckpointDataLater) =>
      doCheckpoint(time, clearCheckpointDataLater)
    case ClearCheckpointData(time) => clearCheckpointData(time)
  }
}
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    // Ask the ReceiverTracker for the data received for this batch time
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    // Call DStreamGraph.generateJobs to generate the Jobs
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
// Output streams: the DStreams on which output (action) operations were registered
private val outputStreams = new ArrayBuffer[DStream[_]]()

def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
/**
 * Generate a SparkStreaming job for the given time. This is an internal method that
 * should not be called directly. This default implementation creates a job
 * that materializes the corresponding RDD. Subclasses of DStream may override this
 * to generate their own jobs.
 */
private[streaming] def generateJob(time: Time): Option[Job] = {
  getOrCompute(time) match {
    case Some(rdd) => {
      val jobFunc = () => {
        val emptyFunc = { (iterator: Iterator[T]) => {} }
        context.sparkContext.runJob(rdd, emptyFunc)
      }
      Some(new Job(time, jobFunc))
    }
    case None => None
  }
}
def submitJobSet(jobSet: JobSet) {
  if (jobSet.jobs.isEmpty) {
    logInfo("No jobs added for time " + jobSet.time)
  } else {
    listenerBus.post(StreamingListenerBatchSubmitted(jobSet.toBatchInfo))
    jobSets.put(jobSet.time, jobSet)
    jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
    logInfo("Added jobs for time " + jobSet.time)
  }
}
private class JobHandler(job: Job) extends Runnable with Logging {
  import JobScheduler._

  def run() {
    try {
      // ... irrelevant code omitted ...
      // We need to assign `eventLoop` to a temp variable. Otherwise, because
      // `JobScheduler.stop(false)` may set `eventLoop` to null when this method is running, then
      // it's possible that when `post` is called, `eventLoop` happens to null.
      var _eventLoop = eventLoop
      if (_eventLoop != null) {
        _eventLoop.post(JobStarted(job, clock.getTimeMillis()))
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          job.run()
        }
        _eventLoop = eventLoop
        if (_eventLoop != null) {
          _eventLoop.post(JobCompleted(job, clock.getTimeMillis()))
        }
      } else {
        // JobScheduler has been stopped.
      }
    } finally {
      ssc.sc.setLocalProperty(JobScheduler.BATCH_TIME_PROPERTY_KEY, null)
      ssc.sc.setLocalProperty(JobScheduler.OUTPUT_OP_ID_PROPERTY_KEY, null)
    }
  }
}
private[streaming]
class Job(val time: Time, func: () => _) {
  private var _id: String = _
  private var _outputOpId: Int = _
  private var isSet = false
  private var _result: Try[_] = null
  private var _callSite: CallSite = null
  private var _startTime: Option[Long] = None
  private var _endTime: Option[Long] = None

  def run() {
    _result = Try(func())
  }
  // ... accessors omitted ...
}