Quick Start

This article shows how to write a Spark program that runs in local (standalone) mode, using Scala, Java, or Python.

First, all you need is a successful Spark build on a single machine. To do that:

Go to the Spark root directory and run: $ sbt/sbt package
(Because of the Great Firewall, the build's dependency downloads tend to fail from mainland China unless you can get around it. If you would rather not bother, download the prebuilt package spark-0.7.2-prebuilt-hadoop1.tgz instead.)


Interactive Analysis with the Spark Shell

1. Basics

Concepts:

Spark's interactive shell is an easy way to learn the API, and a powerful tool for interactive data analysis. From the Spark root directory, run: ./spark-shell

Spark's abstraction for a dataset distributed across the cluster is called a Resilient Distributed Dataset (RDD).

There are two ways to create an RDD: 1. from input in a Hadoop-supported filesystem (for example HDFS); 2. by transforming an existing RDD into a new one.
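As a small sketch of these two routes (the path and variable names below are illustrative, not taken from the guide):

// 1. Create an RDD from a file in a Hadoop-supported filesystem (local path or HDFS URL).
val lines = sc.textFile("hdfs:///tmp/example.txt")    // illustrative path
// 2. Derive a new RDD by transforming an existing one (here with map).
val upperCased = lines.map(line => line.toUpperCase)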

Hands-on:

1. Let's create a new RDD from the README file in the Spark directory:

scala> val textFile = sc.textFile("README.md")
textFile: spark.RDD[String] = spark.MappedRDD@2ee9b6e3
scala> textFile.count() // Number of items in this RDD
res0: Long = 74
scala> textFile.first() // First item in this RDD
res1: String = # Spark

2. Next, use the filter transformation to return a new RDD containing a subset of the file's items:
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
2. More RDD Operations
1. RDD actions and transformations can be combined for more complex computations. For example, suppose we want to find the line with the most words:
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Int = 16
2. The arguments to map and reduce are Scala function literals (closures), so we can also call into library functions; for example, using Math.max() makes the code easier to read:

scala> import java.lang.Math
import java.lang.Math

scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))
res5: Int = 16
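Because these arguments are ordinary Scala closures, they can also be factored out into named functions; a minimal sketch (the helper names here are ours, not from the guide):

// Illustrative helper functions; any plain Scala function with a matching type works here.
def wordsInLine(line: String): Int = line.split(" ").size
def larger(a: Int, b: Int): Int = if (a > b) a else b

textFile.map(wordsInLine).reduce(larger)   // same computation as the one-liners above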

3. Spark also makes it easy to implement MapReduce data flows:
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts: spark.RDD[(java.lang.String, Int)] = spark.ShuffledAggregatedRDD@71f027b8
scala> wordCounts.collect()
res6: Array[(java.lang.String, Int)] = Array((need,2), ("",43), (Extra,3), (using,1), (passed,1), (etc.,1), (its,1), (`/usr/local/lib/libmesos.so`,1), (`SCALA_HOME`,1), (option,1), (these,1), (#,1), (`PATH`,,2), (200,1), (To,3),...
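If it helps to see what this flow computes, here is a rough plain-Scala-collections analogue of the same word count (for intuition only; reduceByKey on an RDD additionally shuffles data between machines):

// A local-collections version of the word count above, for intuition only.
val lines = Seq("to be or not to be", "to do")      // stand-in for the file's lines
val counts = lines
  .flatMap(line => line.split(" "))                 // split every line into words
  .map(word => (word, 1))                           // pair each word with a count of 1
  .groupBy(_._1)                                    // group the pairs by word
  .mapValues(pairs => pairs.map(_._2).sum)          // sum the counts for each word
// counts == Map(to -> 3, be -> 2, or -> 1, not -> 1, do -> 1)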
3. Caching

Spark also lets you pull datasets into an in-memory cache, which is useful when data will be accessed repeatedly. As a simple example, let's cache the linesWithSpark RDD produced by the filter above and count it again:

scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: spark.RDD[String] = spark.FilteredRDD@17e51082
scala> linesWithSpark.cache()
res7: spark.RDD[String] = spark.FilteredRDD@17e51082
scala> linesWithSpark.count()
res8: Long = 15

4. A Standalone Scala Job
/*** SimpleJob.scala ***/
import spark.SparkContext
import SparkContext._

object SimpleJob {
  def main(args: Array[String]) {
    val logFile = "/var/log/syslog" // Should be some file on your system
    val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
      List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
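For reference, a sketch of how the SparkContext constructor arguments above are used in this Spark 0.7-era API (the "local[2]" master string, for example, would run the job locally with two worker threads; the annotations are ours):

// new SparkContext(master, jobName, sparkHome, jars) -- illustrative annotation of the call above.
val sc = new SparkContext(
  "local[2]",              // master: "local" / "local[N]" for local mode, or a cluster URL
  "Simple Job",            // a human-readable name for the job, shown in Spark's logs
  "$YOUR_SPARK_HOME",      // path to the Spark installation on this machine
  List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar")) // JARs containing the job's code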
This program simply counts the lines containing "a" and the lines containing "b" in a log file; note that you will need to replace $YOUR_SPARK_HOME with the path to your Spark installation. We also need an sbt build file, simple.sbt, which declares the program's dependency on Spark:

/*** simple.sbt ***/
name := "Simple Project"

version := "1.0"

scalaVersion := "2.9.3"

libraryDependencies += "org.spark-project" %% "spark-core" % "0.7.3"

resolvers ++= Seq(
  "Akka Repository" at "http://repo.akka.io/releases/",
  "Spray Repository" at "http://repo.spray.cc/")
For sbt to work correctly, we need to lay out SimpleJob.scala and simple.sbt according to the standard directory structure. Once that's done, we can create a JAR package containing the program's code, then use sbt's run command to execute the example program:
$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleJob.scala

$ sbt package
$ sbt run
...
Lines with a: 8422, Lines with b: 1836

That completes the example of running the program locally.