
Spark Structured Streaming — Unified Batch and Stream Processing

22. 08. 2025 · 1 min read · intermediate

Spark Structured Streaming processes data streams with the same API as batch jobs. One codebase covers both historical and real-time data.

Structured Streaming

Stream as an infinite table — new data are new rows.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window, sum
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("Streaming").getOrCreate()

# Illustrative schema of the JSON order messages (fields assumed for the example)
schema = (
    StructType()
    .add("order_id", StringType())
    .add("amount", DoubleType())
    .add("order_time", TimestampType())
)

# Read the Kafka topic as an unbounded stream of rows
orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
    .option("subscribe", "orders")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("d"))
    .select("d.*")
)

# 5-minute revenue windows; the watermark bounds how late data may arrive
revenue = (
    orders.withWatermark("order_time", "10 minutes")
    .groupBy(window("order_time", "5 minutes"))
    .agg(sum("amount").alias("revenue"))
)

# Write to Delta; the checkpoint makes the query restartable
(
    revenue.writeStream.format("delta")
    .option("checkpointLocation", "/cp/revenue")
    .start("/data/revenue")
)

Trigger Modes

  • Default — a new micro-batch starts as soon as the previous one finishes
  • Fixed interval — processingTime, micro-batches on a fixed schedule
  • Once / Available-now — one-time processing of all currently available data, then the query stops (see the sketch below)
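A minimal sketch of setting these modes via trigger() on the writer. The two calls are alternatives for the same query, not meant to run together; availableNow requires Spark 3.3 or newer, and the default mode simply omits trigger():

# Fixed interval: a new micro-batch starts every minute
(revenue.writeStream.format("delta")
    .trigger(processingTime="1 minute")
    .option("checkpointLocation", "/cp/revenue")
    .start("/data/revenue"))

# Available-now: process everything available right now, then stop (useful for backfills)
(revenue.writeStream.format("delta")
    .trigger(availableNow=True)
    .option("checkpointLocation", "/cp/revenue")
    .start("/data/revenue"))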

Summary

Spark Structured Streaming is ideal for teams that already use Spark for batch processing and want to add stream processing without learning a new API.

spark streaming, apache spark, micro-batch, real-time

CORE SYSTEMS team

We build core systems and AI agents that keep operations running. 15 years of experience with enterprise IT.