The output sinks supported by the Spark Structured Streaming API are: Console, Memory, File, and Foreach. Console was covered in detail in the previous two posts, and the Memory sink is very straightforward to use. This post focuses on the File and Foreach sinks, and then shows how to extend the source code with a new output sink.
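Since the memory sink is only mentioned in passing, here is a minimal sketch of how it is typically used (this assumes the `spark` session and the `words` Dataset from the socket word-count example in section 1; the table name "word_counts" is an arbitrary choice for this sketch):
// Minimal sketch of the memory sink: the running word counts are kept in an
// in-memory table (named via queryName) that can be queried with normal SQL.
val wordCounts = words.groupBy("value").count()

val memQuery = wordCounts.writeStream
  .outputMode("complete")        // the memory sink supports complete mode
  .format("memory")
  .queryName("word_counts")      // name of the in-memory table (assumed here)
  .start()

spark.sql("select * from word_counts").show()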
1. File
Structured Streaming can persist data as files in four formats: json, text, csv, and parquet. Usage is simple: just set checkpointLocation and path. checkpointLocation is the directory where checkpoints are saved, while path is the directory where the actual data is written.
A test example:
// Create DataFrame representing the stream of input lines from connection to host:port
val lines = spark.readStream
  .format("socket")
  .option("host", host)
  .option("port", port)
  .load()

// Split the lines into words
val words = lines.as[String].flatMap(_.split(" "))

// Start running the query that writes the words to JSON files
// (no aggregation here: the file sink only supports append mode, see the note below)
val query = words.writeStream
  .format("json")
  .option("checkpointLocation", "root/jar")
  .option("path", "/root/jar")
  .start()
Note:
The file sink cannot use the "complete" output mode, only "append". Because a plain (unwindowed, unwatermarked) aggregation cannot run in append mode, such an aggregated result cannot be written to files; the sketch below shows the watermark-based exception to this rule.
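A windowed aggregation with a watermark can still be written in append mode, because a window's result becomes final once the watermark passes the end of the window. A hedged sketch (the `events` DataFrame with its `timestamp`/`word` columns, the 10-minute values, and the paths are all assumptions for illustration):
import org.apache.spark.sql.functions.{col, window}

// Assumes a streaming DataFrame `events` with an event-time column `timestamp`
// and a string column `word`. With a watermark, each window's count is emitted
// exactly once, after the watermark passes the window end, so append mode works.
val windowedCounts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window(col("timestamp"), "10 minutes"), col("word"))
  .count()

val fileQuery = windowedCounts.writeStream
  .outputMode("append")
  .format("parquet")
  .option("checkpointLocation", "/root/checkpoint")   // assumed path
  .option("path", "/root/windowed-counts")            // assumed path
  .start()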
2. Foreach
The foreach sink only requires implementing the ForeachWriter abstract class and its three methods: open() is called at the start of each partition of every trigger, process() once per row, and close() when the partition has been processed. A test example:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// scalastyle:off println
package org.apache.spark.examples.sql.streaming
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{ForeachWriter, Row, SparkSession}
import org.apache.spark.sql.types.StructType
/**
* Counts words in UTF8 encoded, '\n' delimited text received from the network.
*
* Usage: StructuredNetworkWordCount <hostname> <port>
* <hostname> and <port> describe the TCP server that Structured Streaming
* would connect to receive data.
*
* To run this on your local machine, you need to first run a Netcat server
* `$ nc -lk 9999`
* and then run the example
* `$ bin/run-example sql.streaming.StructuredNetworkWordCount
* localhost 9999`
*/
object StructuredNetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: StructuredNetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    val host = args(0)
    val port = args(1).toInt

    val spark = SparkSession
      .builder
      .appName("StructuredNetworkWordCount")
      .getOrCreate()

    import spark.implicits._

    // Create DataFrame representing the stream of input lines from connection to host:port
    val lines = spark.readStream
      .format("socket")
      .option("host", host)
      .option("port", port)
      .load()

    // Start running the query that hands every input line to the ForeachWriter
    val query = lines.writeStream
      .outputMode("append")
      .foreach(new ForeachWriter[Row] {
        override def open(partitionId: Long, version: Long): Boolean = {
          println("open")
          true
        }

        override def process(value: Row): Unit = {
          // Each row holds one input line; this demo assumes lines such as "Alice 30".
          // Using the SparkSession/SparkContext here only works when driver and
          // executor share a JVM (e.g. local mode).
          val spark = SparkSession.builder.getOrCreate()
          val seq = value.mkString.split(" ")
          val row = Row.fromSeq(seq)
          val rowRDD: RDD[Row] = SparkContext.getOrCreate().parallelize[Row](Seq(row))
          val userSchema = new StructType().add("name", "string").add("age", "string")
          val peopleDF = spark.createDataFrame(rowRDD, userSchema)
          peopleDF.createOrReplaceTempView("myTable")
          spark.sql("select * from myTable").show()
        }

        override def close(errorOrNull: Throwable): Unit = {
          println("close")
        }
      })
      .start()

    query.awaitTermination()
  }
}
// scalastyle:on println
The program above passes an anonymous subclass of ForeachWriter and implements open(), process(), and close() inline. If you prefer to define a named class explicitly, pay attention to Scala's generics, as shown below:
class myForeachWriter[T <: Row](stream: CatalogTable) extends ForeachWriter[T] {
  override def open(partitionId: Long, version: Long): Boolean = {
    println("open")
    true
  }

  override def process(value: T): Unit = {
    println(value)
  }

  override def close(errorOrNull: Throwable): Unit = {
    println("close")
  }
}
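With the class defined, it is passed to foreach() just like the anonymous version (a minimal usage sketch; `lines` and `stream` are assumed to be the streaming DataFrame and the CatalogTable instance already available in your code):
val query = lines.writeStream
  .outputMode("append")
  .foreach(new myForeachWriter[Row](stream))
  .start()

query.awaitTermination()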
3. Custom sink
If the output sinks provided by the Spark Structured Streaming API still do not meet your needs, there is one more option: modify the Spark source code.
The following illustrates this approach by implementing a custom console-style sink:
3.1 ConsoleSink
Spark has a Sink trait (org.apache.spark.sql.execution.streaming.Sink); implement its addBatch method, whose data parameter holds the micro-batch that was received. The version below simply prints it to the console:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

class ConsoleSink(streamName: String) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    data.show()
  }
}
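The DataFrame handed to addBatch is backed by Spark's internal incremental execution plan, so Spark's own console sink copies the rows out with collect() before displaying them. If a plain data.show() misbehaves, a hedged variant along those lines (the class name RobustConsoleSink is hypothetical, used only for illustration):
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

// Hypothetical variant: copy each micro-batch out of the incremental plan before
// showing it, mirroring what Spark's built-in console sink does internally.
class RobustConsoleSink(streamName: String) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    val spark = data.sparkSession
    val copied = spark.createDataFrame(
      spark.sparkContext.parallelize(data.collect()), data.schema)
    println(s"Batch $batchId for stream $streamName")
    copied.show()
  }
}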
3.2 DataStreamWriter
When start() is called on a query, the Spark framework ends up in DataStreamWriter.start(), which dispatches on the chosen output format. You can therefore add a branch for your own sink directly in that method; here we hand it an instance of the ConsoleSink class created above:
def start(): StreamingQuery = {
  if (source == "memory") {
    ...
  } else if (source == "foreach") {
    ...
  } else if (source == "consoleSink") {
    val streamName: String = extraOptions.get("streamName") match {
      case Some(str) => str
      case None =>
        throw new AnalysisException("streamName option must be specified for consoleSink")
    }
    val sink = new ConsoleSink(streamName)
    df.sparkSession.sessionState.streamingQueryManager.startQuery(
      extraOptions.get("queryName"),
      extraOptions.get("checkpointLocation"),
      df,
      sink,
      outputMode,
      useTempCheckpointLocation = true,
      recoverFromCheckpointLocation = false,
      trigger = trigger)
  } else {
    ...
  }
}
3.3 Structured Streaming
Once the two previous steps are in place, the custom sink is used through the normal Structured Streaming API; the only difference is that the format passed to writeStream is the string "consoleSink", as shown below:
def execute(stream: CatalogTable): Unit = {
  val spark = SparkSession
    .builder
    .appName("StructuredNetworkWordCount")
    .getOrCreate()

  /** 1. Obtain the streaming DataFrame */
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  /** 2. Start the query so the stream begins receiving data from the source */
  val query: StreamingQuery = lines.writeStream
    .outputMode("append")
    .format("consoleSink")
    .option("streamName", "myStream")
    .start()

  query.awaitTermination()
}