1. Prepare the data file
wget http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/spam.data
2. Load the file
scala> val inFile = sc.textFile("/home/scipio/spam.data")
Output:
14/06/28 12:15:34 INFO MemoryStore: ensureFreeSpace(32880) called with curMem=65736, maxMem=311387750
14/06/28 12:15:34 INFO MemoryStore: Block broadcast_2 stored as values to memory (estimated size 32.1 KB, free 296.9 MB)
inFile: org.apache.spark.rdd.RDD[String] = MappedRDD[7] at textFile at <console>:12
3. Display the first line
scala> inFile.first()
Output:
14/06/28 12:15:39 INFO FileInputFormat: Total input paths to process : 1
14/06/28 12:15:39 INFO SparkContext: Starting job: first at <console>:15
14/06/28 12:15:39 INFO DAGScheduler: Got job 0 (first at <console>:15) with 1 output partitions (allowLocal=true)
14/06/28 12:15:39 INFO DAGScheduler: Final stage: Stage 0 (first at <console>:15)
14/06/28 12:15:39 INFO DAGScheduler: Parents of final stage: List()
14/06/28 12:15:39 INFO DAGScheduler: Missing parents: List()
14/06/28 12:15:39 INFO DAGScheduler: Computing the requested partition locally
14/06/28 12:15:39 INFO HadoopRDD: Input split: file:/home/scipio/spam.data:0+349170
14/06/28 12:15:39 INFO SparkContext: Job finished: first at <console>:15, took 0.532360118 s
res2: String = 0 0.64 0.64 0 0.32 0 0 0 0 0 0 0.64 0 0 0 0.32 0 1.29 1.93 0 0.96 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.778 0 0 3.756 61 278 1
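first() returns only the first element of the RDD. As a small optional sketch, take can be used the same way to peek at a few more lines without reading the whole file (output omitted here):

scala> inFile.take(2)   // returns an Array[String] with the first two lines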
4. Applying functions
(1)map
scala> val nums = inFile.map(x => x.split(' ').map(_.toDouble))
nums: org.apache.spark.rdd.RDD[Array[Double]] = MappedRDD[8] at map at <console>:14

scala> nums.first()
14/06/28 12:19:07 INFO SparkContext: Starting job: first at <console>:17
14/06/28 12:19:07 INFO DAGScheduler: Got job 1 (first at <console>:17) with 1 output partitions (allowLocal=true)
14/06/28 12:19:07 INFO DAGScheduler: Final stage: Stage 1 (first at <console>:17)
14/06/28 12:19:07 INFO DAGScheduler: Parents of final stage: List()
14/06/28 12:19:07 INFO DAGScheduler: Missing parents: List()
14/06/28 12:19:07 INFO DAGScheduler: Computing the requested partition locally
14/06/28 12:19:07 INFO HadoopRDD: Input split: file:/home/scipio/spam.data:0+349170
14/06/28 12:19:07 INFO SparkContext: Job finished: first at <console>:17, took 0.011412903 s
res3: Array[Double] = Array(0.0, 0.64, 0.64, 0.0, 0.32, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.64, 0.0, 0.0, 0.0, 0.32, 0.0, 1.29, 1.93, 0.0, 0.96, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.778, 0.0, 0.0, 3.756, 61.0, 278.0, 1.0)
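As a small follow-up sketch, nums can feed directly into later operations. Assuming (as in the ElemStatLearn description of spam.data) that the last value of each row is the 0/1 spam label, the labeled rows can be tallied like this:

// assumption: row.last is the 0/1 spam label
val spamCount = nums.filter(row => row.last == 1.0).count()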
(2) collect
scala> val rdd = sc.parallelize(List(1, 2, 3, 4, 5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:12

scala> val mapRdd = rdd.map(2*_)
mapRdd: org.apache.spark.rdd.RDD[Int] = MappedRDD[10] at map at <console>:14

scala> mapRdd.collect
14/06/28 12:24:45 INFO SparkContext: Job finished: collect at <console>:17, took 1.789249751 s
res4: Array[Int] = Array(2, 4, 6, 8, 10)
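collect materializes the entire RDD on the driver, so it is best kept for small results. A hedged sketch of two lighter actions on the same mapRdd:

mapRdd.take(3)    // only the first 3 elements: Array(2, 4, 6)
mapRdd.count()    // stays distributed, returns 5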
(3)filter
scala> val filterRdd = sc.parallelize(List(1, 2, 3, 4, 5)).map(_*2).filter(_>5)
filterRdd: org.apache.spark.rdd.RDD[Int] = FilteredRDD[13] at filter at <console>:12

scala> filterRdd.collect
14/06/28 12:27:45 INFO SparkContext: Job finished: collect at <console>:15, took 0.056086178 s
res5: Array[Int] = Array(6, 8, 10)
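The predicate passed to filter can be any Boolean expression. A minimal illustrative sketch (the names here are made up for the example):

// keep even numbers greater than 4
val evens = sc.parallelize(1 to 10).filter(x => x % 2 == 0 && x > 4)
evens.collect()    // Array(6, 8, 10)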
(4)flatMap
scala> val rdd = sc.textFile("/home/scipio/README.md")
14/06/28 12:31:55 INFO MemoryStore: ensureFreeSpace(32880) called with curMem=98616, maxMem=311387750
14/06/28 12:31:55 INFO MemoryStore: Block broadcast_3 stored as values to memory (estimated size 32.1 KB, free 296.8 MB)
rdd: org.apache.spark.rdd.RDD[String] = MappedRDD[15] at textFile at <console>:12

scala> rdd.count
14/06/28 12:32:50 INFO SparkContext: Job finished: count at <console>:15, took 0.341167662 s
res6: Long = 127

scala> rdd.cache
res7: rdd.type = MappedRDD[15] at textFile at <console>:12

scala> rdd.count
14/06/28 12:33:00 INFO SparkContext: Job finished: count at <console>:15, took 0.32015745 s
res8: Long = 127

scala> val wordCount = rdd.flatMap(_.split(' ')).map(x => (x, 1)).reduceByKey(_+_)
wordCount: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[20] at reduceByKey at <console>:14
scala> wordCount.collect
res9: Array[(String, Int)] = Array((means,1), (under,2), (this,4), (Because,1), (Python,2), (agree,1), (cluster.,1), (its,1), (YARN,,3), (have,2), (pre-built,1), (MRv1,,1), (locally.,1), (locally,2), (changed,1), (several,1), (only,1), (sc.parallelize(1,1), (This,2), (basic,1), (first,1), (requests,1), (documentation,1), (Configuration,1), (MapReduce,2), (without,1), (setting,1), ("yarn-client",1), ([params]`.,1), (any,2), (application,1), (prefer,1), (SparkPi,2), (<http://spark.apache.org/>,1), (version,3), (file,1), (documentation,,1), (test,1), (MASTER,1), (entry,1), (example,3), (are,2), (systems.,1), (params,1), (scala>,1), (<artifactId>hadoop-client</artifactId>,1), (refer,1), (configure,1), (Interactive,2), (artifact,1), (can,7), (file's,1), (build,3), (when,2), (2.0.X,,1), (Apac...
scala> wordCount.saveAsTextFile("/home/scipio/wordCountResult.txt")
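Since reduceByKey already yields (word, count) pairs, further transformations can be chained before collecting. For example, a hedged sketch that keeps only the words appearing more than five times:

// assumes wordCount from above
wordCount.filter(_._2 > 5).collect()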
(5)union
scala> val rdd = sc.parallelize(List(('a', 1), ('a', 2)))
rdd: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[10] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('b', 1), ('b', 2)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[11] at parallelize at <console>:12

scala> rdd union rdd2
res3: org.apache.spark.rdd.RDD[(Char, Int)] = UnionRDD[12] at union at <console>:17

scala> res3.collect
res4: Array[(Char, Int)] = Array((a,1), (a,2), (b,1), (b,2))
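Note that union simply concatenates the two RDDs and keeps duplicates; distinct removes them if needed. A short sketch reusing rdd from above:

// unioning an RDD with itself doubles every element; distinct collapses the duplicates
(rdd union rdd).distinct().collect()    // Array((a,1), (a,2)), order may vary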
(6) join
scala> val rdd1 = sc.parallelize(List(('a', 1), ('a', 2), ('b', 3), ('b', 4)))
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[10] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('a', 5), ('a', 6), ('b', 7), ('b', 8)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[11] at parallelize at <console>:12

scala> rdd1 join rdd2
res1: org.apache.spark.rdd.RDD[(Char, (Int, Int))] = FlatMappedValuesRDD[14] at join at <console>:17

scala> res1.collect
res2: Array[(Char, (Int, Int))] = Array((b,(3,7)), (b,(3,8)), (b,(4,7)), (b,(4,8)), (a,(1,5)), (a,(1,6)), (a,(2,5)), (a,(2,6)))
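join is an inner join: within each key, every left value is paired with every right value. When keys may be missing on one side, Spark also offers leftOuterJoin and rightOuterJoin. A hedged sketch reusing rdd1 (rdd3 is made up for the example):

val rdd3 = sc.parallelize(List(('a', 9)))
rdd1.leftOuterJoin(rdd3).collect()
// keys absent from rdd3 come back with None, e.g. (b,(3,None)); order may vary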
(7)lookup
val rdd1 = sc.parallelize(List(('a', 1), ('a', 2), ('b', 3), ('b', 4)))
rdd1.lookup('a')
res3: Seq[Int] = WrappedArray(1, 2)
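lookup is an action that returns every value stored under the given key; a key with no entries simply yields an empty sequence:

rdd1.lookup('c')    // Seq(), since 'c' has no values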
(8)groupByKey
val wc = sc.textFile("/home/scipio/README.md").flatMap(_.split(' ')).map((_, 1)).groupByKey
wc.collect
14/06/28 12:56:14 INFO SparkContext: Job finished: collect at <console>:15, took 2.933392093 s
res0: Array[(String, Iterable[Int])] = Array((means,ArrayBuffer(1)), (under,ArrayBuffer(1, 1)), (this,ArrayBuffer(1, 1, 1, 1)), (Because,ArrayBuffer(1)), (Python,ArrayBuffer(1, 1)), (agree,ArrayBuffer(1)), (cluster.,ArrayBuffer(1)), (its,ArrayBuffer(1)), (YARN,,ArrayBuffer(1, 1, 1)), (have,ArrayBuffer(1, 1)), (pre-built,ArrayBuffer(1)), (MRv1,,ArrayBuffer(1)), (locally.,ArrayBuffer(1)), (locally,ArrayBuffer(1, 1)), (changed,ArrayBuffer(1)), (sc.parallelize(1,ArrayBuffer(1)), (only,ArrayBuffer(1)), (several,ArrayBuffer(1)), (This,ArrayBuffer(1, 1)), (basic,ArrayBuffer(1)), (first,ArrayBuffer(1)), (documentation,ArrayBuffer(1)), (Configuration,ArrayBuffer(1)), (MapReduce,ArrayBuffer(1, 1)), (requests,ArrayBuffer(1)), (without,ArrayBuffer(1)), ("yarn-client",ArrayBuffer(1)), ([params]`.,Ar...
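groupByKey ships every individual 1 across the network before grouping, so for a plain word count reduceByKey (as in the previous section) is usually the better choice; alternatively the grouped values can be mapped to their size. A hedged sketch of the latter on wc:

// turns each group of 1s into its count
wc.map { case (word, ones) => (word, ones.size) }.first()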
(9)sortByKey
val rdd = sc.textFile("/home/scipio/README.md")
val wordcount = rdd.flatMap(_.split(' ')).map((_, 1)).reduceByKey(_+_)
val wcsort = wordcount.map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))
wcsort.saveAsTextFile("/home/scipio/sort.txt")
For ascending order, use sortByKey(true).
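A minimal sketch of the ascending variant, using take to peek at the result instead of saving it (same swap-sort-swap pattern as above):

val ascending = wordcount.map(x => (x._2, x._1)).sortByKey(true).map(x => (x._2, x._1))
ascending.take(5)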
Reposted from:
http://my.oschina.net/scipio/blog/284957#OSC_h5_11
http://bit1129.iteye.com/blog/2171799
http://bit1129.iteye.com/blog/2171811