Spark Structured Streaming processes data streams with the same DataFrame API as batch Spark, so the same code can serve both historical and real-time data.
Structured Streaming¶
A stream is modeled as an infinite table: newly arriving data simply becomes new rows.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window, sum
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DoubleType

spark = SparkSession.builder.appName("Streaming").getOrCreate()

# Schema of the JSON order events (fields assumed for this example).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("order_time", TimestampType()),
    StructField("amount", DoubleType()),
])

orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # point at your Kafka brokers
    .option("subscribe", "orders")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("d"))
    .select("d.*")
)

revenue = (
    orders.withWatermark("order_time", "10 minutes")  # tolerate up to 10 minutes of late events
    .groupBy(window("order_time", "5 minutes"))       # 5-minute tumbling windows
    .agg(sum("amount").alias("revenue"))
)

(
    revenue.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/cp/revenue")
    .start("/data/revenue")
)
Trigger Modes¶
- Default: the next micro-batch starts as soon as the previous one finishes
- Fixed interval: micro-batches run on a processingTime schedule (see the sketch after this list)
- Once / Available-now: process everything currently available, then stop
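A minimal sketch of setting a trigger on the revenue query from above; the one-minute interval and the reuse of the same checkpoint path are assumptions for this example:

query = (
    revenue.writeStream.format("delta")
    .option("checkpointLocation", "/cp/revenue")
    .trigger(processingTime="1 minute")  # fixed interval: a micro-batch every minute
    # .trigger(availableNow=True)        # available-now: drain pending data, then stop (Spark 3.3+)
    .start("/data/revenue")
)
query.awaitTermination()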
Summary¶
Spark Structured Streaming is a natural fit for teams already running Spark who want to add stream processing without adopting a separate framework.