As more organizations need insight into streams of data as they are produced, stream processing is quickly becoming standard in modern data architectures. There are many solutions for processing real-time streams, but most of them target developers and assume proficiency in Java, Scala, or another programming language. Relying on developers and data engineers is fine for sophisticated algorithms and complex systems, but the first step in stream processing is often a simple filter, extract, and transform pipeline that prepares raw data for further analysis. Putting control of filtering, extraction, and transformation into the hands of administrators and analysts increases the velocity of analysis and the flexibility of deployed systems. In addition, these preparation phases often need to be embedded in multiple contexts, especially when the same source streams feed both streaming and batch computations. Writing the same data preparation pipeline in four different frameworks is expensive and increases long-term maintenance costs…
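
To make the idea concrete, here is a minimal sketch, in plain Python with hypothetical field names, of what such a filter, extract, and transform stage typically does to a stream of raw JSON events. None of the names below come from any particular framework; the point is that this same logic must otherwise be re-expressed in every streaming or batch system that consumes the stream.

```python
import json

def prepare(raw_events):
    """Hypothetical filter/extract/transform stage over a stream of JSON lines."""
    for line in raw_events:
        try:
            event = json.loads(line)
        except ValueError:
            continue                        # filter: drop malformed records
        if event.get("status") != "ok":     # filter: keep only healthy events
            continue
        yield {                             # extract + transform: keep a few fields, normalize units
            "device_id": event["device"]["id"],
            "timestamp": event["ts"],
            "temp_c": (event["temp_f"] - 32) * 5.0 / 9.0,
        }

# Usage sketch: iterate lazily over an unbounded source (socket, queue consumer, tailed file, ...)
# for record in prepare(source_lines):
#     sink.write(record)
```

The logic itself is trivial; the cost lies in maintaining equivalent versions of it inside each framework's API, which is exactly the duplication the paragraph above describes.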