Transform, filter, aggregate, and join Kafka streams using SQL, with sub-second results. Automatically scale from 10 events to millions per second. Pay only for what you use.
Serverless stream processing
No clusters to manage. No fixed costs. Effortless autoscaling.
If you can write SQL, you already know how to build streaming pipelines. Easily upgrade batch jobs to real-time.
Read from your existing Kafka or Kinesis streams. Write to one of dozens of sinks.
Three Steps to Streaming
Tell us how to connect and authenticate to your Kafka or Kinesis streams. We support Confluent Cloud, MSK, and self-hosted Kafka, with data in JSON, Protobuf, or Avro.
Construct your pipelines using the SQL you already know, plus streaming window functions and WASM user-defined functions.
Tell us where to send your data. We support streaming sinks and many other data systems, or you can query your results directly via our API.
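As a sketch of what the middle step can look like, here is a hedged example of a windowed SQL pipeline over a Kafka source. The table name, fields, connector options, and exact window-function syntax are illustrative and may differ by Arroyo version:

```sql
-- Illustrative source table mapped to a Kafka topic of JSON ride events
CREATE TABLE rides (
    driver_id TEXT,
    price FLOAT,
    event_time TIMESTAMP
) WITH (
    connector = 'kafka',
    topic = 'rides',
    format = 'json'
);

-- Ride count and average price per driver over 1-minute tumbling windows
SELECT
    driver_id,
    tumble(interval '1 minute') AS window,
    count(*) AS rides,
    avg(price) AS avg_price
FROM rides
GROUP BY driver_id, window;
```

The same query shape works for batch tables, which is what makes upgrading a batch job to a streaming pipeline largely a matter of adding a window.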
What can you build with Arroyo?
Traffic accidents. Supply-demand imbalances. Unsafe situations. If you operate in the real world, you need to react to new information in seconds.
No need to wait for the daily batch job to know how your business is doing. With Arroyo, you can move your analytics to real-time without added complexity or cost.
Generate ML features in real time to respond proactively to a changing world and changing customer behavior.
Transform logs into metrics and find anomalous behavior of your systems. Catch fraudsters before they exploit your business.
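As a hedged sketch, turning a raw log stream into a per-minute error-rate metric is a windowed aggregation like the one below. The `logs` table and its fields are assumptions for illustration, not a fixed schema:

```sql
-- Per-service error counts over 1-minute tumbling windows, assuming a
-- `logs` source table with `service`, `level`, and `event_time` fields
SELECT
    service,
    tumble(interval '1 minute') AS window,
    sum(CASE WHEN level = 'ERROR' THEN 1 ELSE 0 END) AS errors,
    count(*) AS total
FROM logs
GROUP BY service, window;
```

Alerting on the resulting error ratio, or feeding it to a fraud model, is then a matter of wiring the query's output to the sink of your choice.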
Micah was previously tech lead for streaming compute at Splunk and Lyft, where he built real-time data infra powering Lyft's dynamic pricing, ETA, and safety features. He spends his time rock climbing, playing music, and bringing real-time data to companies that can't hire a streaming infra team.
Jackson spent a decade at Quantcast building in-house distributed systems. He relishes designing maximally efficient systems with aggressive performance targets and massive scale. He's thrilled to be helping companies move their data processing into the stream-first future.