
[spark-src-core] 3.2 run spark in standalone (client) mode

 

1. Startup command

./bin/spark-submit --class org.apache.spark.examples.JavaWordCount --deploy-mode client --master spark://gzsw-02:7077 lib/spark-examples-1.4.1-hadoop2.4.0.jar hdfs://hd02:/user/hadoop/input.txt

   Notes:

   1) The master URL is the cluster manager address shown on the Spark master UI page, i.e.

URL: spark://gzsw-02:7077

      and not the 'REST URL xxxx' listed on the same page.

   2) --deploy-mode is optional here: client is the default deploy mode, so omitting the flag has the same effect as specifying 'client' as above.
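
   For context, the JavaWordCount example driven by this command is essentially the following. This is a minimal sketch reconstructed from the Spark 1.x examples distribution, so it may not match the bundled source byte-for-byte, but the run log below points into exactly these calls (textFile at JavaWordCount.java:45, mapToPair at :54, reduceByKey at :61, collect at :68). Anonymous inner classes are used because the cluster runs JDK 1.6:

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public final class JavaWordCount {
  private static final Pattern SPACE = Pattern.compile(" ");

  public static void main(String[] args) throws Exception {
    // The master URL and deploy mode come from spark-submit, so they are not hard-coded here.
    SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);

    // args[0] is the HDFS input path passed on the command line.
    JavaRDD<String> lines = ctx.textFile(args[0], 1);

    // Split each line into words.
    JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
      @Override
      public Iterable<String> call(String s) {
        return Arrays.asList(SPACE.split(s));
      }
    });

    // Map each word to a (word, 1) pair -- the mapToPair seen in ShuffleMapStage 0 below.
    JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
      @Override
      public Tuple2<String, Integer> call(String s) {
        return new Tuple2<String, Integer>(s, 1);
      }
    });

    // Sum the counts per word -- reduceByKey introduces the shuffle boundary between the two stages.
    JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
      @Override
      public Integer call(Integer i1, Integer i2) {
        return i1 + i2;
      }
    });

    // collect() pulls the results back to the driver, which prints them to stdout.
    List<Tuple2<String, Integer>> output = counts.collect();
    for (Tuple2<?, ?> tuple : output) {
      System.out.println(tuple._1() + ": " + tuple._2());
    }
    ctx.stop();
  }
}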

2. Run logs

Spark Command: /usr/local/jdk/jdk1.6.0_31/bin/java -cp /home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/conf/:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/spark-assembly-1.4.1-hadoop2.4.0.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/usr/local/hadoop/hadoop-2.5.2/etc/hadoop/ -Xms6g -Xmx6g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://gzsw-02:7077 --deploy-mode client --class org.apache.spark.examples.JavaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar hdfs://hd02:/user/hadoop/input.txt
========================================
-executed cmd returned by Main.java:/usr/local/jdk/jdk1.6.0_31/bin/java -cp /home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/conf/:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/spark-assembly-1.4.1-hadoop2.4.0.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar:/usr/local/hadoop/hadoop-2.5.2/etc/hadoop/ -Xms6g -Xmx6g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://gzsw-02:7077 --deploy-mode client --class org.apache.spark.examples.JavaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar hdfs://hd02:/user/hadoop/input.txt
16/09/19 11:28:17 INFO spark.SparkContext: Running Spark version 1.4.1
16/09/19 11:28:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/19 11:28:18 INFO spark.SecurityManager: Changing view acls to: hadoop
16/09/19 11:28:18 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/09/19 11:28:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/09/19 11:28:19 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/09/19 11:28:19 INFO Remoting: Starting remoting
16/09/19 11:28:19 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.100.4:55817]
16/09/19 11:28:19 INFO util.Utils: Successfully started service 'sparkDriver' on port 55817.
16/09/19 11:28:19 INFO spark.SparkEnv: Registering MapOutputTracker
16/09/19 11:28:19 INFO spark.SparkEnv: Registering BlockManagerMaster
16/09/19 11:28:19 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-e19f684c-686b-4841-a863-2e143face3c3/blockmgr-b6ea03ec-ee30-4133-992f-ccf27ef35c93
16/09/19 11:28:19 INFO storage.MemoryStore: MemoryStore started with capacity 2.6 GB
16/09/19 11:28:19 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-e19f684c-686b-4841-a863-2e143face3c3/httpd-1c0a5a85-eef0-4e19-a244-ca7a9c852204
16/09/19 11:28:19 INFO spark.HttpServer: Starting HTTP Server
16/09/19 11:28:19 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/09/19 11:28:19 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:48267
16/09/19 11:28:19 INFO util.Utils: Successfully started service 'HTTP file server' on port 48267.
16/09/19 11:28:19 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/09/19 11:28:19 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/09/19 11:28:19 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:7106
16/09/19 11:28:19 INFO util.Utils: Successfully started service 'SparkUI' on port 7106.
16/09/19 11:28:19 INFO ui.SparkUI: Started SparkUI at http://192.168.100.4:7106
16/09/19 11:28:19 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/spark-1.4.1-bin-hadoop2.4/lib/spark-examples-1.4.1-hadoop2.4.0.jar at http://192.168.100.4:48267/jars/spark-examples-1.4.1-hadoop2.4.0.jar with timestamp 1474255699921
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@gzsw-02:7077/user/Master...
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160919112820-0002
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/0 on worker-20160914175458-192.168.100.15-36198 (192.168.100.15:36198) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/0 on hostPort 192.168.100.15:36198 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/1 on worker-20160914175458-192.168.100.15-36198 (192.168.100.15:36198) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/1 on hostPort 192.168.100.15:36198 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/2 on worker-20160914175457-192.168.100.11-41800 (192.168.100.11:41800) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/2 on hostPort 192.168.100.11:41800 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/3 on worker-20160914175457-192.168.100.11-41800 (192.168.100.11:41800) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/3 on hostPort 192.168.100.11:41800 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/4 on worker-20160914175457-192.168.100.10-46154 (192.168.100.10:46154) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/4 on hostPort 192.168.100.10:46154 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/5 on worker-20160914175457-192.168.100.10-46154 (192.168.100.10:46154) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/5 on hostPort 192.168.100.10:46154 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/6 on worker-20160914175457-192.168.100.6-51383 (192.168.100.6:51383) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/6 on hostPort 192.168.100.6:51383 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/7 on worker-20160914175457-192.168.100.6-51383 (192.168.100.6:51383) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/7 on hostPort 192.168.100.6:51383 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/8 on worker-20160914175457-192.168.100.9-50567 (192.168.100.9:50567) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/8 on hostPort 192.168.100.9:50567 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/9 on worker-20160914175457-192.168.100.9-50567 (192.168.100.9:50567) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/9 on hostPort 192.168.100.9:50567 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/10 on worker-20160914175456-192.168.100.7-36541 (192.168.100.7:36541) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/10 on hostPort 192.168.100.7:36541 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/11 on worker-20160914175456-192.168.100.7-36541 (192.168.100.7:36541) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/11 on hostPort 192.168.100.7:36541 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/12 on worker-20160914175457-192.168.100.8-38650 (192.168.100.8:38650) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/12 on hostPort 192.168.100.8:38650 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/13 on worker-20160914175457-192.168.100.8-38650 (192.168.100.8:38650) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/13 on hostPort 192.168.100.8:38650 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/14 on worker-20160914175457-192.168.100.13-43911 (192.168.100.13:43911) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/14 on hostPort 192.168.100.13:43911 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/15 on worker-20160914175457-192.168.100.13-43911 (192.168.100.13:43911) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/15 on hostPort 192.168.100.13:43911 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/16 on worker-20160914175457-192.168.100.12-44199 (192.168.100.12:44199) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/16 on hostPort 192.168.100.12:44199 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/17 on worker-20160914175457-192.168.100.12-44199 (192.168.100.12:44199) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/17 on hostPort 192.168.100.12:44199 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/18 on worker-20160914175456-192.168.100.14-36693 (192.168.100.14:36693) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/18 on hostPort 192.168.100.14:36693 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor added: app-20160919112820-0002/19 on worker-20160914175456-192.168.100.14-36693 (192.168.100.14:36693) with 2 cores
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160919112820-0002/19 on hostPort 192.168.100.14:36693 with 2 cores, 2.0 GB RAM
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/0 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/2 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/4 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/1 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/3 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/6 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/5 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/7 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/10 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/8 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/9 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/12 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/11 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/13 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/14 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/16 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/15 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/17 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/18 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/0 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/19 is now LOADING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/1 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/2 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/3 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/4 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/5 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/6 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/7 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/8 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/9 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/10 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/11 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/12 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/13 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/14 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/15 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/16 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/17 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/18 is now RUNNING
16/09/19 11:28:20 INFO client.AppClient$ClientActor: Executor updated: app-20160919112820-0002/19 is now RUNNING
16/09/19 11:28:20 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38299.
16/09/19 11:28:20 INFO netty.NettyBlockTransferService: Server created on 38299
16/09/19 11:28:20 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/09/19 11:28:20 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.4:38299 with 2.6 GB RAM, BlockManagerId(driver, 192.168.100.4, 38299)
16/09/19 11:28:20 INFO storage.BlockManagerMaster: Registered BlockManager
16/09/19 11:28:20 INFO scheduler.EventLoggingListener: Logging events to file:/home/hadoop/spark/spark-eventlog/app-20160919112820-0002
16/09/19 11:28:20 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/09/19 11:28:21 INFO storage.MemoryStore: ensureFreeSpace(228680) called with curMem=0, maxMem=2778306969
16/09/19 11:28:21 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 223.3 KB, free 2.6 GB)
16/09/19 11:28:21 INFO storage.MemoryStore: ensureFreeSpace(18130) called with curMem=228680, maxMem=2778306969
16/09/19 11:28:21 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 17.7 KB, free 2.6 GB)
16/09/19 11:28:21 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.100.4:38299 (size: 17.7 KB, free: 2.6 GB)
16/09/19 11:28:21 INFO spark.SparkContext: Created broadcast 0 from textFile at JavaWordCount.java:45
16/09/19 11:28:21 INFO mapred.FileInputFormat: Total input paths to process : 1
16/09/19 11:28:21 INFO spark.SparkContext: Starting job: collect at JavaWordCount.java:68
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Registering RDD 3 (mapToPair at JavaWordCount.java:54)
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Got job 0 (collect at JavaWordCount.java:68) with 1 output partitions (allowLocal=false)
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Final stage: ResultStage 1(collect at JavaWordCount.java:68)
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at JavaWordCount.java:54), which has no missing parents
16/09/19 11:28:21 INFO storage.MemoryStore: ensureFreeSpace(4760) called with curMem=246810, maxMem=2778306969
16/09/19 11:28:21 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.6 KB, free 2.6 GB)
16/09/19 11:28:21 INFO storage.MemoryStore: ensureFreeSpace(2666) called with curMem=251570, maxMem=2778306969
16/09/19 11:28:21 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.6 KB, free 2.6 GB)
16/09/19 11:28:21 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.100.4:38299 (size: 2.6 KB, free: 2.6 GB)
16/09/19 11:28:21 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
16/09/19 11:28:21 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at JavaWordCount.java:54)
16/09/19 11:28:21 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.14:32872/user/Executor#-695323121]) with ID 19
16/09/19 11:28:23 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.100.14, ANY, 1474 bytes)
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.10:34254/user/Executor#680848952]) with ID 4
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.12:45597/user/Executor#61882158]) with ID 17
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.11:55846/user/Executor#-109266911]) with ID 2
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.11:39056/user/Executor#-730178427]) with ID 3
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.6:41904/user/Executor#172197607]) with ID 7
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.15:48188/user/Executor#1126474595]) with ID 1
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.15:53231/user/Executor#643650421]) with ID 0
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.7:44498/user/Executor#1819346495]) with ID 10
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.10:33380/user/Executor#1519517929]) with ID 5
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.13:51226/user/Executor#107130314]) with ID 14
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.14:51287/user/Executor#-786570003]) with ID 18
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.8:42736/user/Executor#-1654106840]) with ID 13
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.8:46813/user/Executor#-1544202525]) with ID 12
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.13:59695/user/Executor#758748544]) with ID 15
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.9:32978/user/Executor#1666271797]) with ID 9
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.9:43395/user/Executor#1432829817]) with ID 8
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.6:55244/user/Executor#-583314465]) with ID 6
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.7:36554/user/Executor#464137168]) with ID 11
16/09/19 11:28:23 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.100.12:51715/user/Executor#1409392060]) with ID 16
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.14:41531 with 906.2 MB RAM, BlockManagerId(19, 192.168.100.14, 41531)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.10:55997 with 906.2 MB RAM, BlockManagerId(4, 192.168.100.10, 55997)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.12:39237 with 906.2 MB RAM, BlockManagerId(17, 192.168.100.12, 39237)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.11:55993 with 906.2 MB RAM, BlockManagerId(2, 192.168.100.11, 55993)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.10:50592 with 906.2 MB RAM, BlockManagerId(5, 192.168.100.10, 50592)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.6:53362 with 906.2 MB RAM, BlockManagerId(7, 192.168.100.6, 53362)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.11:38102 with 906.2 MB RAM, BlockManagerId(3, 192.168.100.11, 38102)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.7:36601 with 906.2 MB RAM, BlockManagerId(10, 192.168.100.7, 36601)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.14:40729 with 906.2 MB RAM, BlockManagerId(18, 192.168.100.14, 40729)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.8:53863 with 906.2 MB RAM, BlockManagerId(13, 192.168.100.8, 53863)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.13:43366 with 906.2 MB RAM, BlockManagerId(14, 192.168.100.13, 43366)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.9:54016 with 906.2 MB RAM, BlockManagerId(9, 192.168.100.9, 54016)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.6:60357 with 906.2 MB RAM, BlockManagerId(6, 192.168.100.6, 60357)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.9:47909 with 906.2 MB RAM, BlockManagerId(8, 192.168.100.9, 47909)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.15:46824 with 906.2 MB RAM, BlockManagerId(1, 192.168.100.15, 46824)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.15:48343 with 906.2 MB RAM, BlockManagerId(0, 192.168.100.15, 48343)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.8:32827 with 906.2 MB RAM, BlockManagerId(12, 192.168.100.8, 32827)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.13:52959 with 906.2 MB RAM, BlockManagerId(15, 192.168.100.13, 52959)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.7:33396 with 906.2 MB RAM, BlockManagerId(11, 192.168.100.7, 33396)
16/09/19 11:28:23 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.12:59404 with 906.2 MB RAM, BlockManagerId(16, 192.168.100.12, 59404)
16/09/19 11:28:25 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.100.14:41531 (size: 2.6 KB, free: 906.2 MB)
16/09/19 11:28:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.100.14:41531 (size: 17.7 KB, free: 906.2 MB)
16/09/19 11:28:26 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3398 ms on 192.168.100.14 (1/1)
16/09/19 11:28:26 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/09/19 11:28:26 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (mapToPair at JavaWordCount.java:54) finished in 4.501 s
16/09/19 11:28:26 INFO scheduler.DAGScheduler: looking for newly runnable stages
16/09/19 11:28:26 INFO scheduler.DAGScheduler: running: Set()
16/09/19 11:28:26 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)
16/09/19 11:28:26 INFO scheduler.DAGScheduler: failed: Set()
16/09/19 11:28:26 INFO scheduler.DAGScheduler: Missing parents for ResultStage 1: List()
16/09/19 11:28:26 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaWordCount.java:61), which is now runnable
16/09/19 11:28:26 INFO storage.MemoryStore: ensureFreeSpace(2408) called with curMem=254236, maxMem=2778306969
16/09/19 11:28:26 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.4 KB, free 2.6 GB)
16/09/19 11:28:26 INFO storage.MemoryStore: ensureFreeSpace(1458) called with curMem=256644, maxMem=2778306969
16/09/19 11:28:26 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1458.0 B, free 2.6 GB)
16/09/19 11:28:26 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.100.4:38299 (size: 1458.0 B, free: 2.6 GB)
16/09/19 11:28:26 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
16/09/19 11:28:26 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaWordCount.java:61)
16/09/19 11:28:26 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
16/09/19 11:28:26 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 192.168.100.15, PROCESS_LOCAL, 1243 bytes)
16/09/19 11:28:28 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.100.4:38299 in memory (size: 2.6 KB, free: 2.6 GB)
16/09/19 11:28:28 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.100.14:41531 in memory (size: 2.6 KB, free: 906.2 MB)
16/09/19 11:28:28 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.100.15:46824 (size: 1458.0 B, free: 906.2 MB)
16/09/19 11:28:28 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 192.168.100.15:48188
16/09/19 11:28:28 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 144 bytes
16/09/19 11:28:28 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 2322 ms on 192.168.100.15 (1/1)
16/09/19 11:28:28 INFO scheduler.DAGScheduler: ResultStage 1 (collect at JavaWordCount.java:68) finished in 2.322 s
16/09/19 11:28:28 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/09/19 11:28:28 INFO scheduler.DAGScheduler: Job 0 finished: collect at JavaWordCount.java:68, took 6.953234 s
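(The unprefixed lines that follow are not Spark log output: they are the (word, count) pairs returned by collect() and printed to stdout by the driver, after which the shutdown logs resume.)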
are: 1
back: 1
is: 3
ERROR: 1
a: 2
on: 1
content: 2
bad: 2
with: 1
some: 1
INFO: 4
to: 1
: 2
This: 3
more: 1
message: 1
More: 1
thing: 1
warning: 1
WARN: 2
normal: 1
Something: 1
happened: 1
other: 1
messages: 2
details: 1
the: 1
Here: 1
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/09/19 11:28:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/09/19 11:28:28 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.100.4:7106
16/09/19 11:28:28 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/09/19 11:28:28 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
16/09/19 11:28:28 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down
16/09/19 11:28:29 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/19 11:28:29 INFO util.Utils: path = /tmp/spark-e19f684c-686b-4841-a863-2e143face3c3/blockmgr-b6ea03ec-ee30-4133-992f-ccf27ef35c93, already present as root for deletion.
16/09/19 11:28:29 INFO storage.MemoryStore: MemoryStore cleared
16/09/19 11:28:29 INFO storage.BlockManager: BlockManager stopped
16/09/19 11:28:29 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/09/19 11:28:29 INFO spark.SparkContext: Successfully stopped SparkContext
16/09/19 11:28:29 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/09/19 11:28:29 INFO util.Utils: Shutdown hook called
16/09/19 11:28:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/09/19 11:28:29 INFO util.Utils: Deleting directory /tmp/spark-e19f684c-686b-4841-a863-2e143face3c3
16/09/19 11:28:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/09/19 11:28:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.

   So the driver runs on the local host (192.168.100.4:55817), i.e. on the machine where spark-submit was invoked. This is exactly what client deploy mode means: the driver lives inside the submitting JVM, while the executors run on the workers.
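
   As a side note, you can confirm the driver location from inside the application instead of from the logs: Spark records the driver's bind address in the SparkConf at startup. A minimal sketch, assuming the JavaSparkContext ctx from the word-count sketch above:

    // spark.driver.host and spark.driver.port are filled in by Spark when the
    // driver starts, so in client mode they identify the submitting machine.
    String driverHost = ctx.getConf().get("spark.driver.host");
    String driverPort = ctx.getConf().get("spark.driver.port");
    System.out.println("driver listening at " + driverHost + ":" + driverPort);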

 

